# metrics
c
DORA is good, but is it enough for platform engineering? Let's talk šŸ™‚
r
Would be surprised if you find someone saying Yes šŸ˜‰
But even within a single DORA metric (such as lead time for changes) there are many granularities.
Question: what is platform engineering for you, and which (metrics) problem would you like to solve?
c
Let me think šŸ˜„ DORA works for a project, a team (also a group of projects), or a company: we can record the four key metrics to measure our project, team, or the whole org. But for platform engineering, DORA puts application projects and platform projects on one level, when they actually are not on the same level; good platforms and a good developer experience facilitate application development, right? So I think DORA is not enough for platform engineering. We need to measure the value of platforms, especially how important the platforms are.
o
Balancing productivity, stability, efficiency, and risk, as the article says, is I think at the core of every platform team, and having metrics in place tracking each of these outcomes is needed. DORA is good for certain dimensions of productivity, for example, but doesn't measure Developer Happiness at all, which has repeatedly been shown to be positively correlated with productivity.
f
Lead Time for Changes measures overall efficiency. When broken down into the 4 phases of time, Coding (time to PR) > Pickup (time to first response) > Review (time to merge) > Deploy (time to prod), this is what allows Platform Eng to know what to optimize for (@Ralf Huuck’s image: is it the PR time? build times? scanning? etc.). Tons and tons of research out there already, specifically around PR cycle time; a sketch of the phase breakdown follows the list below.
1. Nudging
In a randomized trial on 147 repositories in use at Microsoft, Nudge was able to reduce pull request resolution time by 60% for 8,500 pull requests, when compared to overdue pull requests for which Nudge did not send a notification. Furthermore, developers receiving Nudge notifications resolved 73% of these notifications as positive. We observed similar results when scaling up the deployment of Nudge to 8,000 repositories at Microsoft, for which Nudge sent 210,000 notifications during a full year.
2. Meta
We’ve found a correlation between slow diff review times (P75) and engineer dissatisfaction. Our tools to surface diffs to the right reviewers at key moments in the code review lifecycle have significantly improved the diff review experience.
3. LinkedIn - Dev Prod Engineering - measures happiness and productivity @Olivier Kouame
Here are some of the metrics we decided to adopt:
• Developer Build Time (P50 and P90) - measures the time, in seconds, developers spend waiting for their builds to finish locally during development.
• Code Reviewer Response Time (P50 and P90) - measures how long it takes, in business hours, for code reviewers to respond to each update of the code review from the author.
• Post-Commit CI Speed (P50 and P90) - measures how long it takes, in minutes, for each commit to get through the continuous integration (CI) pipeline.
• CI Determinism - the opposite of test flakiness: the chance that a test suite’s result will be valid (not a flake).
• Deployment Success Rate - measures how often deployments succeed.
• Net User Satisfaction (NSAT) - measures, on a quarterly basis, how happy developers are overall with our development systems.
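To make the phase breakdown and the P50/P90 reporting above concrete, here is a minimal Python sketch, assuming you can export per-PR timestamps from your Git host's API. All field names (`first_commit_at`, `opened_at`, `first_review_at`, `merged_at`, `deployed_at`) are hypothetical placeholders, not any specific tool's schema.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import quantiles


@dataclass
class PullRequest:
    # Hypothetical timestamps; map these to whatever your Git host exports.
    first_commit_at: datetime   # work started
    opened_at: datetime         # PR created
    first_review_at: datetime   # first reviewer response (pickup)
    merged_at: datetime         # merged to main
    deployed_at: datetime       # running in production


def _hours(a: datetime, b: datetime) -> float:
    return (b - a).total_seconds() / 3600


def phase_hours(pr: PullRequest) -> dict[str, float]:
    """Split lead time for changes into Coding > Pickup > Review > Deploy."""
    return {
        "coding": _hours(pr.first_commit_at, pr.opened_at),
        "pickup": _hours(pr.opened_at, pr.first_review_at),
        "review": _hours(pr.first_review_at, pr.merged_at),
        "deploy": _hours(pr.merged_at, pr.deployed_at),
    }


def p50_p90(values: list[float]) -> tuple[float, float]:
    """P50/P90 cut points, the same percentiles LinkedIn reports."""
    deciles = quantiles(values, n=10, method="inclusive")  # 9 cut points
    return deciles[4], deciles[8]  # 50th and 90th percentiles


def report(prs: list[PullRequest]) -> None:
    """Print P50/P90 per phase so a platform team sees where time goes."""
    for phase in ("coding", "pickup", "review", "deploy"):
        p50, p90 = p50_p90([phase_hours(pr)[phase] for pr in prs])
        print(f"{phase:>6}: P50={p50:.1f}h  P90={p90:.1f}h")
```

CI Determinism from the same pipeline data is then just 1 minus the observed flake rate (the share of runs where a rerun flips a test's result) over the reporting window.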
a
IMO the answer is clearly no. Nicole Forsgren, the creator of DORA, has gone on to publish two major papers on metrics since the book Accelerate. This interview specifically discusses the question of why DORA metrics aren’t ā€œenoughā€. https://newsletter.pragmaticengineer.com/p/developer-productivity-a-new-framework
c
@Nočnica Mellifera FYI about DORA & PE šŸ˜„