# general
s
🤔 Do you agree? 🤔 You can't evaluate your platform engineering team based on DORA metrics? Let the debate begin ✍️ (Limitations in Measuring Platform Engineering with DORA Metrics). The only reason I included the link is that its content (the points below are taken from the linked article) reads as a very reflective, critical debate on DORA metrics for platform engineering teams, and similar patterns and suggestions came up in my chats with folks at KubeCon. Looking forward to some insightful pointers from this community. From the article: Platform engineering encompasses a large amount of work that is outside the measurement of DORA metrics. When we talk about evaluating the performance of our platform engineering team, using DORA metrics is not a way to encourage trust within that team. At a glance, DORA metrics:
• Don’t capture everything the team does.
• Are strongly affected by code quality and work done by other teams.
• Will rise and fall stochastically depending on the features currently in development.
g
Agreed, I think DORA are too much of a lagging metric; there is a lot to them that cannot be influenced by your Platform Teams, and a lot that they do not capture. I think surveys to take the temperature are the way to go. What matters is the trend: are things improving or getting worse from the feature team perspective (ease of debugging, speed of CI pipelines, quality of documentation, etc.)?
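A minimal sketch of what that trend-over-absolute-values idea could look like in practice (the topics and quarterly scores below are purely illustrative):

```python
# Hypothetical sketch: track whether quarterly developer-survey scores are
# trending up or down per topic, rather than judging absolute values.
survey_scores = {  # 1-5 scale, oldest quarter first (made-up numbers)
    "ease of debugging": [3.1, 3.4, 3.6],
    "speed of CI pipelines": [2.8, 2.7, 2.5],
    "quality of documentation": [3.9, 4.0, 4.1],
}

for topic, scores in survey_scores.items():
    delta = scores[-1] - scores[0]
    trend = "improving" if delta > 0 else "getting worse" if delta < 0 else "flat"
    print(f"{topic}: {trend} ({delta:+.1f} over {len(scores)} quarters)")
```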
k
As such, DORA metrics measure the whole engineering function of a company (and tbh even the product org). Platform engineering, or traditionally speaking DevOps, is only one of the contributors (super important, but not the only one). The ability of application teams to split the tasks at hand into the smallest deployable chunks, or their effective use of feature flags, are variables just as important as the quality of your Golden Paths. I would not use DORA to measure Platform Teams, but I would use the dynamics of DORA metrics to assess the influence of Platform Teams on the overall engineering function. After all, thanks to the awesome work of the research authors, DORA has something no other engineering efficiency measure has: very good benchmarking data.
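A rough sketch of what looking at DORA "dynamics" rather than absolute values could mean in practice (the data shape, the rollout date, and all the records below are hypothetical, just to illustrate the before/after comparison):

```python
# Hypothetical sketch: compare org-wide DORA numbers before and after a
# platform rollout, instead of scoring the platform team directly.
from datetime import date

# (deploy_date, caused_incident) records for the whole engineering org (made up)
deployments = [
    (date(2024, 1, 10), False),
    (date(2024, 2, 14), True),
    (date(2024, 5, 3), False),
    (date(2024, 6, 21), False),
]

platform_rollout = date(2024, 4, 1)  # assumed date the golden paths went live

def dora_snapshot(records):
    """Deployment count and change failure rate for a set of deploy records."""
    total = len(records)
    failures = sum(1 for _, failed in records if failed)
    cfr = failures / total if total else 0.0
    return total, cfr

before = [r for r in deployments if r[0] < platform_rollout]
after = [r for r in deployments if r[0] >= platform_rollout]

for label, records in (("before", before), ("after", after)):
    total, cfr = dora_snapshot(records)
    print(f"{label}: {total} deployments, change failure rate {cfr:.1%}")
```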
t
If you build your platform as a product, DORA works great for platform teams. In my role as CTO for a company with 8 product and 4 small platform teams, we saw over 200 weekly production deployments plus ~50 internal platform component deployments. DORA metrics (especially deployment frequency) proved to be equally effective for product and platform teams, although it was very rare that platform teams caused incidents. But then again, we had fewer than a handful of small incidents per month (change failure rate was less than 1%). The critique that platform work is immeasurable by DORA, because platform teams have a large amount of work outside DORA, makes me wonder what your product teams are doing. I found the exact opposite: the empowered product teams spent a lot of time doing discovery, engaging with customers, support, sales, etc., to make sure they were building the right thing. Platform teams, on the other hand, sat next to the product teams and spent more time in delivery mode. If you have feature teams where engineers write code 80% of the time, you might get a lot of output but fail to deliver on outcomes.
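As a rough back-of-the-envelope check on those figures (the exact deployment counts, weeks-per-month factor, and incident count below are my approximations of the numbers quoted above, not data from that company):

```python
# Rough sanity check: do ~250 deploys/week and "less than a handful" of
# incidents per month line up with a change failure rate under 1%?
weekly_deploys = 200 + 50                 # product + internal platform deployments
monthly_deploys = weekly_deploys * 4.33   # ~weeks per month
incidents_per_month = 4                   # "less than a handful", assumed

change_failure_rate = incidents_per_month / monthly_deploys
print(f"~{monthly_deploys:.0f} deploys/month, CFR ≈ {change_failure_rate:.2%}")
# ≈ 0.37%, consistent with the "< 1%" change failure rate mentioned above.
```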
s
I'm assuming we all agree the platform team would use DORA metrics to inform their own software delivery improvements and we're talking about using their customers' DORA metrics to see if the platform is having a positive impact on them. I don't think DORA metrics are the only option. If that's what the development teams are focussed on, it helps to know that this is something you can help them achieve. The "K" in the MONK metrics is "key customer metrics". This ought to align with the reasons these teams would "buy in" to the platform. If the problem is developer experience, you'd be better off starting there. The crucial question that should always be at the heart of platform engineering is "what is the customer's problem?" so the metrics ought to help answer this.
I agree with many things in the linked article. But I'd also question some of the statements...
> Tech debt is invisible to DORA metrics.
If this were true (that you could have all the tech debt in the world and it wouldn't impact the DORA metrics), I'm not sure there would be any pressing reason to resolve tech debt. I think tech debt is a problem precisely because it impacts several DORA metrics.