# platform-blueprints
t
I have a question for @Schuyler Bishop and @Michael Galloway. We’re struggling with Platform PM. As @Alan Barr said in one of the meetups, platforms tend to be super thick, so finding the thinnest possible MVP is always a challenge, as is making sure the overwhelming backlog doesn’t become the roadmap. Any best practices you can share? 🙂
👀 4
m
On the backlog topic specifically:
• Carve out time to understand your customer experience more deeply
  ◦ Do customer interviews
  ◦ Define journey maps
  ◦ Pull data from your pipelines, SCM, and infrastructure to put together a timeline
• Identify where customers are spending the majority of their time, or asking the majority of questions
  ◦ This is partly a bucketing of your backlog, but it should line up with your interviews and data
• Carve out time (1 - 2 days, or more) to step away from the individual problems and go back to first principles
  ◦ What are the engineering teams trying to achieve?
  ◦ What is the ideal experience for them?
  ◦ Dive into specific questions like:
    ▪︎ What information are they the most knowledgeable about?
    ▪︎ What information are you most knowledgeable about?
    ▪︎ What if they didn't need to care or know about the information / layer you manage?
• Identify 1 or 2 teams to get deeper with and start experimenting
  ◦ Go for the highest possible experience first, even if it means a shift in burden. The less the customer needs to know or deal with, and the more pain you hide from them, the more you can be proactive instead of reactive in the work you do.
• PoC solutions with compelling experience improvements that will resonate with other teams
The rest after that is standard iteration and adoption work (not easy either).
💯 1
I think the main takeaway here is that you need to look at ways to understand what customers are trying to accomplish, rather than just addressing the immediate ask. It's a balance of course, sometimes you need to just get rid of the immediate pain.
💯 1
t
Thanks for your quick response! By saying
Identify 1 or 2 teams to get deeper with and start experimenting with
, which teams (for example) would you see taking the lead in such a role?
l
On the thinnest MVP, you may be surprised at how much you can remove from scope while nailing 80% of the value you aim to deliver. As an example, you don’t need to go 100% on an IDP to ship tools/workflow improvements to engineers. In fact, I’d advise most teams who want to ship tools to consider building a simple CLI binary and distributing it everywhere, from dev laptops to production environments. We went pretty far with this before, managing to ship:
• Deployment tooling
• Production tools like grabbing a console, scaling replicas, etc.
• Inspecting your infrastructure estate, like “show me which environments exist – and where – for service X”
in just a single Go binary, deployed with goreleaser to Brew/Debian/Docker. The alternative would’ve been providing this functionality behind a web UI, which would have taken much longer to prove the value. My talk was about how to ship platform changes with an MVP/lean-startup mentality, aiming to reduce time-to-value. It’s definitely possible, you just have to be disciplined with how you cut scope.
👏 1
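A single-binary CLI like the one described can start very small. Here is a minimal, stdlib-only sketch; the subcommand names and the hardcoded environment list are illustrative assumptions, not the actual tool:

```go
// Minimal sketch of a single-binary platform CLI.
// Subcommands (deploy, console, scale, envs) are illustrative assumptions.
package main

import (
	"fmt"
	"os"
)

// envsFor returns the environments that exist for a service. Hardcoded
// here as a stand-in for querying real infrastructure state.
func envsFor(service string) []string {
	return []string{"staging/eu-west-1", "production/eu-west-1"}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: platform <deploy|console|scale|envs> [args]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "envs":
		// "show me which environments exist – and where – for service X"
		if len(os.Args) < 3 {
			fmt.Fprintln(os.Stderr, "usage: platform envs <service>")
			os.Exit(1)
		}
		for _, e := range envsFor(os.Args[2]) {
			fmt.Println(e)
		}
	case "deploy", "console", "scale":
		fmt.Printf("%s: not implemented in this sketch\n", os.Args[1])
	default:
		fmt.Fprintf(os.Stderr, "unknown command %q\n", os.Args[1])
		os.Exit(1)
	}
}
```

Shipping one static binary (e.g. via goreleaser to Brew/Debian/Docker, as mentioned above) keeps dev laptops and production environments on the same tool, which is part of what makes this approach cheap to iterate on.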
m
Great question! The customer teams that you pick should meet the following criteria:
1. They are relevant to and respected by many other teams.
2. They are considered technically strong (i.e. thought leaders).
3. They are open to, and even eager to, experiment with workflow improvements.
You want to make sure that wins defined with those teams will cause other teams to think "well if it works for them, then we should definitely do it too!"
l
^ This is good advice, but word of caution with picking teams who are really enthusiastic about using the new tools: Those teams are likely to be the closest in mindset to your platform team, and will be great beta-testers because they’re bought in by default and willing to put up with sharp edges/research things themselves. If you use their enthusiasm and adoption as a proxy for broader rollout, you might be surprised when other teams (maybe even the majority of the eng org) don’t have the same requirements/see the same value. In short, make sure you don’t oversample on feedback from early adopters, or you may build something that only works for that minority.
💯 1
m
Agree with your point in general @Lawrence Jones. I think the key criterion of "they are relevant to and respected by other teams" is what should help address the "don’t have the same requirements/see the same value" risk. At Netflix the cogent example was the Edge team, which worked on the front line of the streaming service.
💯 1
When a new CI/CD solution (Spinnaker) was being designed, the first group we focused on was those groups in Edge.
We knew we needed to incorporate the canary experience and make it much easier for them.
Success with them translated to the rest of the company because teams felt that Edge was a battle-hardened group with the hardest requirements to meet.
Most other teams didn't need canary support, but the credibility gained by making stuff work well for them (canaries and other things) translated to the rest of the company.
l
Totally agree with the approach, and the qualities of an ideal early-adopter team. We did just this at GoCardless, when we created a new toolchain to deploy apps into Kubernetes, with a strong focus on enabling teams to self-serve. Our early adopters were the banking teams, who have the most critical requirements. They’re often fighting issues in payment pipelines that can be holding up hundreds of millions, and have the greatest scaling/infra challenges in the org. They loved it, which was great! But wider adoption stalled when we realised banking was quite unusual, in that they were already very operationally mature and desperate for the solution we provided. Had we spoken with other teams early in the process, especially those we could guess wouldn’t be the most excited, we could have won them over with a focus on their specific area, or even worked with them in advance to prepare them for the changing responsibilities. All’s well that ends well, but I took away a lesson about platform team blindspots from that experience, and I’d want to balance any enthusiastic early-adopter feedback with harsher critics, given the choice 🙂
m
^ Great context @Lawrence Jones! Thanks for sharing that. 100% agree on not over-indexing on any one team.
❤️ 1
s
Great question and lots of great context. I’ll also say that a lot of platform teams I’ve seen have had exceedingly broad domains, which creates some of these problems in the Conway’s Law sense: your org design tends to dictate the kinds of outcomes you create. Do you have a monolithic team, @Terry Davis? Or is it very, very broad? It’s hard to pick thin vertical stripes for an MVP when your team is responsible for too much.
The teams I led in my last org went through a bounded-context exercise (Domain-Driven Design concepts) where we grouped similar contexts together and then re-formed the org around those contexts.
That was a very revolutionary (vs. evolutionary, much to the CTO’s chagrin) approach, but it led to a smaller area of context for the TPM.
Michael and Lawrence did a great job of bringing up some great points too!
🙏 2
m
Great thread!
🙏 3
t
@Schuyler Bishop our team is broad, that's why it's hard for an MVP.
💯 1
a
Writing a narrative on the story and the strategies made a lot of problems go away. It might be part of the work over time.