# kubernetes
j
Hi folks, I am working with a product that has ~15 microservices. Most are .NET APIs or worker services. On advice from one of our developers I started to learn Helm as the way to deploy each of the services to the various environments (dev, test, prod, etc.). Since my initial learning experience, which was painful 😅, I have had several rounds of refactoring our charts. I'm still struggling to find the right lens or viewpoint when working with Helm. Some things I have learnt along the way:
• When using individual charts for each microservice, the Helm is super simple and you get the most flexibility if a service is 'a bit different'. The downside is a sprawl of Helm charts and then deployment orchestration, i.e. 15 services = 15 `helm install`s.
• Adding an umbrella chart makes the deployment orchestration a lot easier, but the Helm and particularly the values.yaml start getting complicated.
• Trying to reduce duplication of the Helm using library charts seemed like a good idea, but it got out of hand quickly: the more generic the library gets, the more difficult the Helm becomes.
I'm interested in what others are doing in this space. What have you found successful?
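For reference, an umbrella chart is just a parent chart whose Chart.yaml lists each service chart as a dependency; a minimal sketch (chart names and registry are hypothetical):

```yaml
# Chart.yaml of a hypothetical umbrella chart
apiVersion: v2
name: product
version: 0.1.0
dependencies:
  - name: orders-api
    version: "1.x"
    repository: "oci://registry.example.com/charts"
  - name: billing-worker
    version: "1.x"
    repository: "oci://registry.example.com/charts"
```

Each subchart's settings then live under its own top-level key in the umbrella's values.yaml, which is exactly where the complexity piles up as the service count grows.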
m
One of the options is a custom operator-style setup: you define a Microservice CRD, use something like https://kyverno.io/docs/writing-policies/generate/ to create the underlying resources (these could be raw resources or Flux/Argo objects), and then use resource filters to select different variants/versions (sketched below).
Or try a next-gen config language like https://timoni.sh/ or https://tanka.dev/
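A rough sketch of the CRD-plus-Kyverno idea; the `Microservice` kind and its `spec.image` field are hypothetical:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: microservice-defaults
spec:
  rules:
    - name: generate-deployment
      match:
        any:
          - resources:
              kinds:
                - Microservice            # hypothetical custom resource
      generate:
        apiVersion: apps/v1
        kind: Deployment
        name: "{{request.object.metadata.name}}"
        namespace: "{{request.object.metadata.namespace}}"
        synchronize: true                 # keep the Deployment in sync with the CR
        data:
          spec:
            replicas: 2
            selector:
              matchLabels:
                app: "{{request.object.metadata.name}}"
            template:
              metadata:
                labels:
                  app: "{{request.object.metadata.name}}"
              spec:
                containers:
                  - name: app
                    image: "{{request.object.spec.image}}"
```

The generate rule fires whenever a Microservice object is created, so each service's manifest shrinks to a few lines of custom resource.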
t
I feel like every single one of us was this person at some point. I had a bunch of microservices, each on their own chart, and started looking into umbrella charts. Shit happened along the way that made me give helm some major side eye as a package manager in general. We've switched to ApplicationSets in Argo. You can deploy the set of apps as one object, you can use sync waves and write-backs to automatically progress apps, you don't need to deal with rewriting the helm history when objects it had used in the past become deprecated, and it's waaay more intuitively obvious what fails where when something goes wrong in a bad deploy (instead of Helm's "timeout failed on the condition"... or whatever the bullshit error message is from the default test). One of the reasons Argo is the natural approach is that it takes your helm chart and uses it to do what helm is best at: rendering the manifests via the `helm template` command. Then it pushes the newly rendered manifests to k8s, all the while showing you in the UI the diffs between the new and the live manifests for each k8s object individually (or you can just let it deploy automatically and skip any human compare steps). This makes it really easy to go from helm to Argo.
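To make that concrete, here's a minimal ApplicationSet using a list generator (repo URL, service names, and values file are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: product-services
  namespace: argocd
spec:
  generators:
    - list:
        elements:              # one entry per microservice
          - name: orders-api
          - name: billing-worker
  template:
    metadata:
      name: "{{name}}"
    spec:
      project: default
      source:
        repoURL: https://git.example.com/platform/charts.git
        targetRevision: main
        path: "charts/{{name}}"
        helm:
          valueFiles:
            - values-prod.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: "{{name}}"
      syncPolicy:
        automated:
          prune: true          # drop objects that disappear from the rendered chart
```

One object, 15 Applications; adding a service is a one-line change to the generator (or swap the list generator for a Git generator that discovers chart directories automatically).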
m
Definitely agree with Troy on the Argo part. It helps a lot on the deployment front, and you can make it as granular as you need through ApplicationSets, or keep it even simpler with 1 chart == 1 application. Argo templates out the helm chart, which, in our experience, is the best use of Helm, rather than using Helm itself to deploy your charts. That way, you adhere to having your components as code, with Git being your source of truth/expected state.
r
We see this situation a lot. Argo ApplicationSets work really well there. Another option that works well is to start pushing 'standard helm charts': you define 3 or 4 charts (e.g. Java REST API, Python Django app, etc.) with very clear rules (you can only expose port X, you need to define the container like Y), and you store those in a common location. With this, the app repository is all about code and the Dockerfile (see the sketch below). I've seen this applied very successfully in large organizations. This approach requires a strong mandate from leadership, because you are either going to have to get buy-in from every team or, more likely, start pushing them towards more 'platform' thinking and less 'every service is unique'. I think this has huge value beyond reducing helm chart sprawl, but it's not something that can be implemented by a single person.
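A sketch of what the per-app repo might contain under this model (the shared chart name and registry are hypothetical):

```yaml
# Chart.yaml — the app repo only declares which standard chart it uses
apiVersion: v2
name: orders-api
version: 1.0.0
dependencies:
  - name: dotnet-rest-api                 # one of the 3-4 'standard' charts
    version: "2.x"
    repository: "oci://registry.example.com/platform-charts"
```

```yaml
# values.yaml — only the knobs the standard chart chooses to expose
dotnet-rest-api:
  image:
    repository: registry.example.com/orders-api
    tag: "1.4.2"
  replicas: 3
```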
j
Really great feedback, thank you for all the great responses. I have been looking at Argo, but if I am not mistaken this doesn't replace Helm. Argo is handling the deployment orchestration part, yes? That would still leave me with the issue of figuring out how to reduce Helm duplication across microservices without excessively complicated common/base Helm charts.
c
Another approach would be to get to the right abstraction level for the devs - Score is doing a great job at that. It’s a CNCF-hosted project that aims to eliminate those “lost in translation” problems that arise when you use a full-blown package manager like Helm for software that is not meant to be distributed. You can find it at score.dev
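For a flavour of the abstraction, a minimal Score workload file might look like this (the names and the translation step via a tool such as score-helm are illustrative):

```yaml
# score.yaml — describes the workload, not the Kubernetes objects
apiVersion: score.dev/v1b1
metadata:
  name: orders-api
containers:
  orders-api:
    image: registry.example.com/orders-api:1.4.2
    variables:
      ASPNETCORE_ENVIRONMENT: Production
service:
  ports:
    http:
      port: 80
      targetPort: 8080
```

The dev never touches Deployments or Services; the platform's Score implementation decides how this maps onto each environment.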
m
Helm is a packaging format; Argo/Flux are mostly GitOps tools responsible for the deployment of a package. Note that Argo is not 100% compatible with Helm, as it uses `helm template`, which doesn't support the `lookup` function; that function is really useful for password/cert generation flows. Argo generators could be useful in discovering microservices to deploy and/or targets to deploy them to, but I don't think they are intended for this use case.
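For example, a common chart pattern (hypothetical secret name) that breaks under `helm template`:

```yaml
# templates/secret.yaml — reuse the existing password on upgrade, if present
{{- $existing := lookup "v1" "Secret" .Release.Namespace "app-db" }}
apiVersion: v1
kind: Secret
metadata:
  name: app-db
type: Opaque
data:
  {{- if $existing }}
  password: {{ index $existing.data "password" }}
  {{- else }}
  password: {{ randAlphaNum 24 | b64enc | quote }}
  {{- end }}
```

`helm template` never talks to the cluster, so `lookup` always returns an empty map there; under Argo the `else` branch runs on every render and the password churns on every sync.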
t
I'm not convinced there's a perfect solution between making a resource (like a Helm chart) more complicated so that it can handle more use cases, and repeating config so that it's simpler for any given microservice. There is always a trade-off... it's just a matter of which positives and negatives you're willing to put up with. For me, I've erred on the side of having a single complicated chart with a verbose and well-documented values file. Another strategy is to use a more basic base chart and use kustomize to patch special cases in a given microservice (see the sketch below). I would still argue that if you have to do the same patch more than once, the base helm chart should be upgraded... but if it's a truly unique case, it's fine.
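A sketch of the base-chart-plus-kustomize pattern; the service name and the resource bump are hypothetical:

```yaml
# kustomization.yaml — overlay patching one service's rendered base manifests
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                # manifests rendered from the shared base chart
patches:
  - target:
      kind: Deployment
      name: payments-api
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          limits:
            memory: 1Gi
```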
j
After a lot of back and forth it looks like we may settle on using Kustomize in combination with Octopus Deploy. Octopus handles the deployment orchestration and variable substitution logic in a much simpler way than other tools, IMHO.