# kubernetes
h
What different patterns do people use to manage kubernetes applications when they are spread across many repos, and what practical experiences do you have with them?

We are currently using kustomize, with manifests in the same repo as the application code, and flux to deploy the manifests. We work on a single branch; different environments (for the kubernetes manifests) are placed in separate folders. We are currently facing two separate problems:

1. The infra team finds it time consuming to update the kustomize setup in each repo, and if a team touches the setup in unintended ways, it becomes even more cumbersome to update.
2. kustomize is very complex for the app teams.

To solve #1 we have been looking into patterns using a single helm chart which each repo references, with a values.yaml defining the application + environment (see the sketch below). This could solve the problem for the infra team, but adds another layer of abstraction for the app teams.

To solve #2 we have been thinking of either teaching the teams kustomize or switching to raw kubernetes manifests. We don’t have any good experiences with raw kubernetes manifests, but they seem easier for our app teams to relate to; since they don’t understand kustomize, they end up just using raw kubernetes manifests duplicated per env folder anyway.
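To sketch the shared-chart idea we’re considering: each app repo would keep only a small values file per environment, roughly like this (the chart keys and names are purely illustrative):

```yaml
# values-prod.yaml -- consumed by a hypothetical org-wide "service" chart
image:
  repository: registry.example.com/orders-service
  tag: "1.4.2"
replicas: 3
ingress:
  host: orders.example.com
resources:
  requests:
    cpu: 100m
    memory: 256Mi
```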
t
My goals are to:

1. Maintain a single source of truth for each environment. I want to deploy an entire environment as an artifact. I don't roll back single services, I roll back environments. This minimizes the danger that I ever deploy an environment state that hasn't been integration tested.
2. Maintain an easy API for devs to use. I like shift-left things as much as the next person... but I find devs to be less than enthusiastic about learning tools like helm, kustomize, kubectl, etc.

The way I've solved this is by having org-wide helm charts that the DevOps team maintains. Each microservice uses that chart. The API to the charts is tailored to maintain a good balance of extensibility/simplicity. The helm charts make certain assumptions, like that we'll be using Istio, that external-dns will provision DNS records, etc. These assumptions simplify the API that devs need to use considerably, and they keep their implementations DRY and easily refactorable.

From there I have an aggregate repo where my CI/CD pipelines put the generated helm values files for every deployment in a directory structure like environment > service (I'm mainly just updating image tags; the values files don't change much). Then I use Argo to deploy everything.

As a side note, when Argo deploys helm templates, it just uses helm as a manifest generator; it doesn't use helm's state tracking etc. It pretty much just runs

```sh
helm template . > manifests.yaml
```

to generate the manifests and ignores the rest. Finally, in the repo I have documentation/scripts and such for manual deployment of an entire environment, bypassing Argo, for break-glass situations.
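To make that concrete, one of the Argo Applications could look roughly like this; the repo URLs, paths, and names are made up, and this sketch assumes Argo CD's multi-source Applications to pull values from the aggregate repo:

```yaml
# Hypothetical Argo CD Application: org-wide chart + per-env values from the aggregate repo
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prod-orders
  namespace: argocd
spec:
  project: default
  sources:
    - repoURL: https://github.com/example-org/org-charts
      targetRevision: main
      path: charts/service                    # the org-wide chart
      helm:
        valueFiles:
          - $values/prod/orders/values.yaml   # environment > service layout
    - repoURL: https://github.com/example-org/deployments
      targetRevision: main
      ref: values                             # exposes the aggregate repo as $values
  destination:
    server: https://kubernetes.default.svc
    namespace: orders
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```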
n
I had a lot of success with using common/shared helm charts with sane defaults. Just keep them well scoped, e.g. per kind of service: it's easier to have sane defaults for all Python apps than for Python, Java, and .NET mixed together, but YMMV.
c
You should take a look at Score. If you want to use a generic helm chart and have a nice developer-centric interface instead of values files, then Score might be your cup of tea. https://score.dev
h
Thanks for the input! We will definitely explore the option of org-wide helm charts. @Clemens Jütte thanks for bringing Score up, this is also something we are looking into. With `score` we don’t need org-wide helm charts, since we can just generate them (or a modified version of them) from the score file, plus the added bonus that developers get a way to generate a docker compose for a local env, including all dependencies, per service.
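For anyone following along: a Score file is a small, platform-neutral workload spec, roughly like the sketch below (the service name, image, and postgres resource are illustrative); tooling like score-compose or score-k8s then translates it into a compose file or k8s manifests:

```yaml
# Hypothetical score.yaml for a single service
apiVersion: score.dev/v1b1
metadata:
  name: orders-service
containers:
  main:
    image: registry.example.com/orders-service:1.4.2
    variables:
      # Placeholders are resolved by the Score tooling per target platform
      DB_URL: "postgres://${resources.db.username}:${resources.db.password}@${resources.db.host}:${resources.db.port}/${resources.db.name}"
resources:
  db:
    type: postgres
```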
a
I've definitely had success with abstracting common k8s objects into helm charts, but I've been bitten hard by the day-2 challenges. A few companies ago I was grateful our microservices only counted in the 70s, as the team who changed the deployment strategy needed to update all those repos. Not saying that's a fault of helm, just that helm isn't a complete strategy in my experience.
t
As someone who has had to manually rewrite helm history due to out of date CRDs and other APIs that were preventing a graceful K8s upgrade, I can confidently say it isn't a perfect product.
s
I'm not sure what the perfect solution would be, but Helm would definitely not be a part of it. Currently investigating a programmatic approach with cdk8s instead of yet another abstraction layer, and yes, I know that sounds xkcd-like...
a
Very interested to hear more about the design @Serge van Ginderachter! I believe that helm is a (reasonable) templater, but we often ask it to do more. Interested to see how you are tackling this with a programmatic solution!