# platform-culture
j
Sure, we were pretty successful using GitOps patterns for this. Conventional commits and semantic versioning can be used together to continuously identify major, minor, and breaking changes. It takes some adoption, but it is easy to audit and enforce, and easy to build automation around. For instance, a changelog can be produced automatically from commit messages. And since you can add owner-based approval workflows around any change in git, you can increase the scrutiny based on the type of change before it is made rather than after. This works for application, infrastructure, and multi-module repos.
As for lifecycle: with trunk-based development, anything merged into main is GA, and feature branches should be timeboxed with the goal of being merged. You can then automate the release of all GA code to different environments/stages, i.e. dev, stg, prd. Dev is always kept up to date with main, stg is promoted to after dev testing passes, and prd is promoted to after tests pass in stg. If you need to test a long-lived, potentially breaking feature branch, you can point dev at the branch ref. This pattern is compatible with most CI/CD tools, fully auditable, and fully integrated into developer tooling.
Additionally, big announcements are usually best kept in a version-controlled document system organized by date, then referenced in the messaging system folks use, like Slack. That way they appear in a stream, but folks can still browse the major change history.
Obviously a bit opinionated, but hope it helps!
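The commit-to-version mapping described above can be sketched in a few lines. This is a minimal illustration, assuming the standard `feat`/`fix` prefixes and the `!`/`BREAKING CHANGE` markers from the conventional-commits convention; the function names and bump table are illustrative, not any particular tool's API:

```python
import re

# Map one conventional-commit message to a semver bump level.
BUMP_ORDER = {"major": 3, "minor": 2, "patch": 1, "none": 0}

def classify(message: str) -> str:
    """Return the bump level implied by a single commit message."""
    header = message.splitlines()[0]
    # "feat(api)!: ..." headers or a "BREAKING CHANGE:" footer -> major
    if "BREAKING CHANGE:" in message or re.match(r"^\w+(\([^)]*\))?!:", header):
        return "major"
    if header.startswith("feat"):
        return "minor"
    if header.startswith("fix"):
        return "patch"
    return "none"

def next_version(current: str, messages: list[str]) -> str:
    """Compute the next version from the highest bump across commits."""
    major, minor, patch = (int(p) for p in current.split("."))
    bump = max((classify(m) for m in messages), key=BUMP_ORDER.get, default="none")
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return current
```

Run over everything merged since the last tag, this yields the next release number with no manual bookkeeping.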
d
Thanks for taking the time to reply. The challenge with your approach for me is that we often use a lot of smaller microservices to build our platform, essentially numerous tools. It’s worth playing around with, though. The idea of it being git-backed is very good.
It does help!
j
We were able to use this method on a monorepo containing some number of microservices, in conjunction with one gitops repo per environment that held all the infrastructure-as-code and config state for that env.
d
Oh interesting
j
Conventional commits support a (scope) for modules.
If your microservices aren’t in a monorepo, I would highly recommend a templater that abstracts common repo config into a single versioned location that can be applied to configure and update the microservice repos.
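The (scope) convention also makes it easy to split a monorepo changelog per module, as in the multi-microservice setup discussed above. A minimal sketch, assuming headers follow the conventional-commit format; the function name and the "general" bucket for unscoped commits are illustrative:

```python
import re
from collections import defaultdict

# Conventional-commit header: type(scope)!: description
HEADER = re.compile(r"^(?P<type>\w+)(\((?P<scope>[^)]+)\))?(?P<bang>!)?: (?P<desc>.+)$")

def changelog_by_scope(headers: list[str]) -> dict[str, list[str]]:
    """Group commit descriptions by their (scope) for per-module changelogs."""
    sections = defaultdict(list)
    for line in headers:
        m = HEADER.match(line)
        if not m:
            continue  # skip non-conventional messages
        sections[m.group("scope") or "general"].append(m.group("desc"))
    return dict(sections)
```

Each microservice then gets its own changelog section straight from commit history, with unscoped commits collected under a catch-all heading.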
d
Do you mean to apply to microservices that product teams use?
j
Yes, depending on where your microservices live. If they are in a monorepo, a templater is less necessary. If each microservice is in its own repo, a templater can be used to create, wrangle, and update common configs across those repos: CODEOWNERS files, PR templates, CI/CD workflow files, policy READMEs like what is expected when writing a commit, etc. That way, that stuff is updated once and applied many times to update each repo.
The same pattern can be used for templating out the gitops repos too. A repo for dev that contains its infra, workflow, and config definitions is going to be incredibly similar to a repo for staging, with only small differences like the versions of microservices currently deployed, the size of the instances they run on, the region they are deployed in, etc. A templater can scaffold an entirely new gitops env from defaults and a small number of user inputs. This means you can keep common configs and infra definitions DRY, like your app code.
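The "defaults plus a small number of user inputs" idea needs nothing more than string templates. A minimal sketch; the keys and manifest shape below are illustrative, not any real tool's schema:

```python
from string import Template

# Shared defaults every environment inherits unless overridden.
DEFAULTS = {"region": "us-east-1", "instance_size": "m5.large", "replicas": "2"}

MANIFEST = Template(
    "environment: $env\n"
    "region: $region\n"
    "instance_size: $instance_size\n"
    "replicas: $replicas\n"
)

def render_env(env: str, overrides: dict) -> str:
    """Render one environment's config: defaults, overridden per env."""
    values = {**DEFAULTS, **overrides, "env": env}
    return MANIFEST.substitute(values)
```

A dev repo renders from pure defaults, while prd might override only the region and instance size, which is exactly the "small differences" point above.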
d
What templating tools do you recommend?
j
It depends on your tools and the complexity of the config you are trying to produce. Yeoman is great because it is centered around prompt-based workflows, which makes for a nice developer experience, and it can scaffold/template nearly anything, but it is not easy to get started with and requires JS experience. Cookiecutter is another, Python-based option. Other routes are building your own, using the templating functionality built into configuration-management tools like Ansible, or bundling multiple templaters built for the types of files you are producing.
Yeoman can be used to wrap tools and workflows as well, like connecting to your clusters/envs. Since it has knowledge of the environment you are connecting to, you can boil a 10-step process down into one command.
We did this to make updating our clusters easier: kops workflows, connecting to our clusters, bootstrapping, creating sealed secrets, etc.
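The "boil a 10-step process down into one command" pattern is tool-agnostic. A minimal sketch, with placeholder step descriptions standing in for the real CLI calls; the injectable `runner` makes the sequence easy to dry-run or test:

```python
def connect_env(env: str, runner=print) -> list[str]:
    """Run the bootstrap steps for one environment, in order, via `runner`."""
    steps = [
        f"select kube context for {env}",
        f"authenticate against {env} cluster",
        f"install sealed-secrets controller in {env}",
        f"verify {env} cluster health",
    ]
    for step in steps:
        # Swap in subprocess.run([...], check=True) to execute real
        # commands and fail fast on the first broken step.
        runner(step)
    return steps
```

One entry point, one command to remember, and the ordering of the manual runbook is now encoded instead of documented.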
d
Cool. We use Cookiecutter, and I’ve had experience with Yeoman before.
Are there any operating rhythms you run with your product teams like Town Halls, Surgeries?
We are very advanced on the tooling but it’s the outreach side we should improve on!
j
Ah gotcha: post-mortems and lots of demoing. Demo to each audience level, not just a single one, i.e. high-level and external customers with polish and simplicity, internal devs, and team level. Having champions from each team the platform team delivers to, and having them regularly attend meetings geared towards their area of focus, is good too.
A mistake I’ve made is assuming a demo made a year ago covering all of the platform and pipelines was sufficient; those should be updated and re-demoed twice a year. It helps everyone stay up to date on the vision and progress.
d
Yes, this is something I’ve been saying to managers of my teams - you’ve got to repeat the message over and over and over and…
j
Couldn’t agree more.
j
Might be a different direction, but my developer experience team has been experimenting with hosting changelog-style Confluence (wiki) pages that create a single place for any engineer to find the latest (and upcoming) changes we’re making. For instance, to see when we’re upgrading to Ruby 3.2 in our primary monolith and what changes would be needed from folks working in that service. Happy to share more if that sounds interesting.
r
Hey folks. I’m assuming you’re referring to announcements targeting the platform’s consumers (developers) rather than the platform team itself, Darren. If so, an approach we’re currently following is summarizing the changes the platform team makes in language that’s understandable to most developers. Rather than throwing a changelog at developers that contains highly technical changes like “enabled pod anti-affinity using the AZ topology key”, we would just say “configured high availability measures on Kubernetes”. In essence, this is an extrapolation of the principle of reducing developers’ cognitive load (in this case, by not expecting them to learn the complexity of k8s resources). Folks who are more curious about the technicalities of a change can always check out the changelogs. We also ensure that each announcement we send to devs has an associated “action items”/“impact” section to simplify adoption of what was released.
The downside of this approach is that generating these announcements is a manual process. You need to know your stakeholders (devs) and the language they identify with to tailor the message. It would be much easier to just drop a changelog in Slack on new releases, but I’m pretty sure folks would lose interest over time because the language is just too foreign.
d
Thanks, that’s exactly what I mean, yes. How do you create these curated change logs and how do you ship them?
j
Not sure I agree there should be a second source, though that’s the beauty of process: it should fit your needs. I usually recommend that if your changelogs aren’t easily consumable and understandable by your devs, you ask who they are for, then fix the changelogs and the process for writing them rather than create a second source.