# platform-culture
j
@Alison Rosewarne You mentioned that one of the PMs in your org was measuring an “hours saved” metric for your OKRs … can you give any detail about how you approached measuring this?
👀 2
g
We didn't use it as an OKR, but we did track an approximate metric for this in some tools. One example: we built automation to auto-merge PRs when approved, send reminders to reviewers, and send PRs with dependency updates (this was some time before GH implemented many of these). We took conservative estimates of how much time each of those actions takes on average (e.g. "sending a PR with a lib update takes 10 min") and multiplied by the number of executions of the tool. 10 mins may look small, but if you're automating this across the org the numbers add up to quite a lot. And being conservative gives you the luxury of not having to defend the number (in fact, you can easily argue that it actually saves a lot more than 10 mins). We didn't use this so much for OKRs, but it did appear in a few presentations to execs in this form: "Look, it took N engineer-days to build this tool, and now it's saving 10*N engineer-days per month." That helps a lot to justify getting headcount.
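The calculation above is simple enough to sketch. This is a hypothetical back-of-the-envelope version with made-up numbers; the action names and per-action minutes are illustrative, not real data from the thread:

```python
# Conservative minutes saved per automated action, multiplied by how
# many times the tool ran. All numbers here are made up for illustration.
MINUTES_SAVED = {
    "auto_merge_pr": 5,         # merging an approved PR by hand
    "review_reminder": 2,       # chasing a reviewer manually
    "dependency_update_pr": 10, # preparing a lib-update PR by hand
}

def hours_saved(executions: dict[str, int]) -> float:
    """Total engineer-hours saved, given execution counts per action."""
    minutes = sum(MINUTES_SAVED[action] * count
                  for action, count in executions.items())
    return minutes / 60

# e.g. one month of automation runs across the org:
print(hours_saved({"auto_merge_pr": 400,
                   "review_reminder": 900,
                   "dependency_update_pr": 150}))  # 5300 minutes, roughly 88 hours
```

Because each per-action estimate is deliberately conservative, the total is a defensible lower bound rather than a number that needs justifying.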
a
It's an evolving practice, but it involved:
1. Comparing before-platform and after-platform effort for work that's no longer required, e.g. if you use our web micro-frontend approach you save 90 days up front, plus ongoing maintenance costs.
2. Counting the effort to build components as savings every time they're used, e.g. a button in our design system took so many hours to build, so every time you use a button you've saved those hours.
3. Reduction-of-toil time savings: some automation saves ongoing manual effort (especially per AWS account, which really adds up).
Like any metric it will ultimately be gamed, but while it's driving the right behaviours (platform adoption, and platform development for maximum impact) we'll stick with it.
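The second accounting method above (component reuse as savings) can be sketched as a one-liner. This is a hypothetical example with made-up numbers; `reuse_savings_hours` is not a real tool from the thread:

```python
def reuse_savings_hours(build_hours: float, times_used: int) -> float:
    """Hours 'saved' by reusing a shared component instead of rebuilding it.

    Follows the accounting described above: every use counts the original
    build effort as savings. (A more conservative variant would subtract
    the first use, since that one paid for the build.)
    """
    return build_hours * times_used

# A button that took 16 hours to build, used in 30 places across the org:
print(reuse_savings_hours(16, 30))  # 480
```

Even a modest component multiplies quickly at org scale, which is what makes this metric persuasive (and also what makes it easy to game, as noted above).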
j
To what extent did you find yourself using potential time saving to prioritize vs estimated savings to demonstrate benefit after the fact?
I do like the idea of specifically finding an activity that can be measured (like CI). It's a good approach to focus the team and also to show the benefits. Nice and concrete, and serves double duty!
g
This type of very objective metric was quite useful for focusing the team, and relates to the autonomy topic we discussed a few days ago. A typical problem we had was "engineer has an idea about $tool, wants time to build it, will get annoyed if we say no". That kind of conversation can leave engineers feeling frustrated because we're imposing features on them. So we tried to say "OK, bring us the data: show how much time your tool is expected to save per execution, how it's going to get adopted, etc." If you bring a business case and it works out, let's do it.
It's useful not only for prioritisation, but also gets engineers into the product mindset.
j
Yeah, that makes sense. Used carefully it sounds like a useful practice. Thanks for the info!
a
Orgs also need to decide how they want to balance throughput vs latency. There are a lot of tools that reduce engineers' cycle time locally but have no actual impact on the throughput of the organization. Using time-saved metrics works well for macro blocks of work but breaks down at the micro level, because that time isn't reinvested into something else.
a
> To what extent did you find yourself using potential time saving to prioritize vs estimated savings to demonstrate benefit after the fact?
Historically I'd say we were definitely focused on the latter - demonstrating benefit after the fact. Moving forward we want to do both (prioritise according to hours saved as well as report actuals).
> There’s a lot of tools that reduce engineers cycle time locally but have no actual impact on the throughput of the organization.
Could you expand on this a little more? If this point is about system inefficiencies entirely related to tech and tools, then yes, this is a problem to measure, identify, and often solve with something other than more tech and more tools 🙂
r
In our team at Adobe, called Ethos, we measure "zero to hello world" and "zero to production readiness" (cycle time) as key metrics, which we report on a quarterly basis. We have a semi-automated way of capturing these for now: we ask developers who go through the onboarding documentation to track these times in a spreadsheet (not perfect), which we use as a baseline, and we calculate the mean time over the year to see performance.
We also quantified these times as a $ value by multiplying the hours by the average developer salary and a potential savings factor (10-20%), to come up with a reasonable monetary savings number in addition to the time saved.
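That conversion can be sketched as follows. This is a hypothetical version with made-up inputs; the salary, savings fraction, and 2,000 working-hours-per-year assumption are all illustrative, not figures from the thread:

```python
def dollar_savings(hours_tracked: float,
                   avg_salary: float,
                   savings_fraction: float = 0.15,   # assumed 10-20% range
                   work_hours_per_year: int = 2000) -> float:
    """Translate tracked engineer-hours into an approximate $ figure.

    Converts an average annual salary into an hourly rate, then scales
    the tracked hours by an assumed potential-savings fraction.
    """
    hourly_rate = avg_salary / work_hours_per_year
    return hours_tracked * hourly_rate * savings_fraction

# 500 tracked hours, $200k average salary, 15% assumed savings:
print(dollar_savings(500, 200_000, 0.15))  # 7500.0
```

As noted later in the thread, this framing is easy to compute but risky to lead with, since it sets an expectation of recurring cost reduction.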
g
We did this too ^ I'd note something I mentioned in my talk though: I found it dangerous to frame the platform's benefits as cost reduction. So when we used that type of metric we put a spin on it and presented those numbers from a different angle. We did not save 10% of engineering hours. We added 10% engineering capacity.
r
@Galo Navarro Very interesting. Were you able to quantify it in $ amount? Wondering what adds the most impact for leadership
g
You can do hours saved * avg salary
But we did not use that argument much; it's dangerous because it creates the expectation that you will generate those savings consistently, which is hard and will backfire
With leadership the general argument was "teams following our recommended practices and using our tooling perform better than those who don't"
Better being quantified with metrics backed by industry research like those in Accelerate
Which tend to correlate with the subjective perception people have of teams (e.g. teams that seem reliable and effective happen to fare well in those metrics)
r
Got it, you mentioned you did a talk. Did you share some of these best practices there? Would love to watch it!
g
Here is the link

https://www.youtube.com/watch?v=ApEOiNC4GrA

👀 1
r
Thank you!
🙌 1