# kubernetes
m
Question for the room - who all is using Kustomize as their daily driver to templatize configurations across clusters? Or do you just indirectly consume it via other services like Anthos Config Management? If not, what would you suggest as an alternative?
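For context, the pattern I mean is a shared base plus one overlay per cluster, roughly like this (a minimal sketch - the app and file names are made up):

```yaml
# overlays/prod-us-east/kustomization.yaml -- hypothetical layout
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # shared Deployment/Service/ConfigMap definitions
namespace: my-app-prod      # per-cluster namespace (example value)
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5            # cluster-specific replica count
```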
t
We wrap everything up in Helm
d
Helm. Best on the list.
t
I just wish that Go Templates sucked less
a
Helm in (almost) all cases, unless forced to use an Operator. Argo to manage some applications via GitOps, other tooling to manage artifacts/deployments/promotions more granularly with Helm handling the deployment steps.
t
One of Helm's biggest failures, imo, is the way it abstracts errors and just echoes something like "helm install failed on the condition." Therefore, I think it's really important to have a debugging strategy in place. So I make sure my releases are labeled or annotated appropriately, and I set up my CD pipeline to output a link to DataDog that shows the status of all the deployments in the selected release... That makes it way easier to debug and pushes users to traces and logs as opposed to output logs in the CD pipeline.
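To make that concrete, it's roughly this kind of thing (just a sketch - the custom label key is made up, the built-ins are standard Helm template values):

```yaml
# templates/deployment.yaml snippet -- stamp release info onto every object
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    example.com/release-revision: {{ .Release.Revision | quote }}  # hypothetical key
```

The CD job then just builds the DataDog link from those labels instead of dumping logs into the pipeline output.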
d
We use Carvel to deploy kubernetes packages/apps for everything, and also to wrap some helm charts. We have custom operators based on kubebuilder, and in there we use kustomize, as much as kubebuilder does.
j
Pulumi user here 🙋 it has its own pains but overall it feels much nicer to use an expressive language like Python to declare deployments than yaml.
m
Interesting - I've not really dug into Pulumi aside from knowing it exists, and Carvel I'd never heard of until today (granted, google first sends me to an ice cream shop in the US, so I shouldn't be surprised). Thanks for sharing those!

I agree on how errors are surfaced in Helm - I have a suspicion that that is part of why I'm not seeing it as much as one might expect in very large environments: too much work, or maybe too costly to keep all of that extra tooling in place to be worth it vs alternatives? That said, it's been years since I looked into Helm, so I need to do some re-learning there - I'm not familiar enough to recall how well it lets you control when/how variables are set across regions and clusters.

I have to say the component/overlay concept used by Kustomize is quite flexible, and the fact that it's getting rolled up into other platforms like ACM and OpenShift GitOps is nice - making it a bit more universal.

Edit: I totally forgot Argo also implements Kustomize, AND that technically, you can Kustomize your Helm charts.
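For anyone curious, that last bit looks roughly like this (a sketch - it needs kustomize's Helm support enabled via --enable-helm, and the chart/version here is just an example):

```yaml
# kustomization.yaml -- render a Helm chart, then patch the output with kustomize
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: podinfo
    repo: https://stefanprodan.github.io/podinfo
    version: 6.5.4            # illustrative version
    releaseName: podinfo
    valuesInline:
      replicaCount: 2
patches:
  - target:
      kind: Deployment
      name: podinfo
    patch: |-
      - op: add
        path: /metadata/labels/team
        value: platform
```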
d
Terraform is also a good option.
d
Helm can easily turn into something very hard to maintain, especially with a helm operator where configuration is stored in config maps. That makes debugging hard and testing even harder. To make it worse, I've seen people putting logic into their templates. If I needed to pick between helm and kustomize I'd pick the latter, but of course there are applications for helm and it is a very successful project.

I'm not a fan of terraform managing kubernetes clusters. The reason is that I believe the state of your k8s environment should be stored in your clusters. Terraform's state won't ever reflect the real state of the clusters - it'll just reflect the fact that a change has been applied, but it doesn't know whether it's been reconciled. Never used pulumi, but from what I read some time ago it relies on state too.
a
We've gone with our own solution for complex configuration management and made it open source in the end, as both Helm and Kustomize were not fitting when used on their own. Kluctl acts as the glue for these two and adds powerful configuration management, multi-env and multi-cluster support on top of it.

You can find it at https://kluctl.io
GitHub: https://github.com/kluctl/kluctl
Hands-on introduction with Rawkode:

https://www.youtube.com/watch?v=9LoYLjDjOdg

A few blog posts: https://medium.com/kluctl
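To give a tiny taste, a project is driven by a list of targets (simplified here - field names approximate, see the docs for the exact schema):

```yaml
# .kluctl.yml -- one target per environment/cluster (simplified sketch)
targets:
  - name: test
    context: test-cluster       # kubeconfig context to deploy to
    args:
      environment: test
  - name: prod
    context: prod-cluster
    args:
      environment: prod
```

The deployment itself is a tree of kustomize deployments templated with Jinja2, so the same project can be rendered differently per target.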
t
Helm is very powerful, and like any powerful tool, it can be unwieldy and overly complex for your use case. Another big disadvantage (that I hinted at earlier), imo, is that the Go Template language kinda sucks and support for it in an IDE is really limited. You can write invalid templates all day long and your IDE won't be smart enough to tell you that you suck.

That being said, Helm allows you to write very DRY code and have a single, very flexible, org-wide chart that can be used with any microservice. You also have the ability to wrap charts into one big chart of sub charts. What I like about this approach is that it allows you to write services in a non-monolithic way while ensuring that there has never been an untested configuration on your cluster. One big problem is that integration and e2e tests, etc. test your cluster at a certain configuration. If you deploy to production and find that you need to roll back something, you need to know that the configuration you roll back to was thoroughly vetted at some point... otherwise, you're just trading a known issue for an unknown one. In practice I've seen this often result in a reluctance to roll back and a "forward only" mentality. Therefore, I prefer to treat my deployment artifacts as single vetted objects, not a series of objects. Helm makes that incredibly easy. You can absolutely do the same thing with kustomize... it just gets a bit harder to manage, imo.
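The "one big chart of sub charts" part is just chart dependencies - something like this (names and versions are made up):

```yaml
# Chart.yaml of an umbrella chart -- the chart version is the single vetted artifact
apiVersion: v2
name: my-platform
version: 1.4.0
dependencies:
  - name: api
    version: 5.0.2
    repository: "file://../api"
  - name: frontend
    version: 2.3.1
    repository: "file://../frontend"
  - name: org-service          # the shared org-wide chart, aliased per microservice
    alias: worker
    version: 1.1.0
    repository: "https://charts.example.com"
```

Rolling back then means going back to a previous version of this one chart, not untangling a series of per-service changes.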
d
We use kapp-controller (https://carvel.dev/kapp-controller/), which supports gitops. The advantage of this approach is that it consumes something called imgpkg bundles (https://carvel.dev/imgpkg/), which is basically an OCI image containing all the manifests you want to apply, and it points at the images to be deployed. The imgpkg bundles can be versioned like packages, so you can be fairly sure about what you're deploying (the full CRD spec: https://carvel.dev/kapp-controller/docs/v0.43.2/app-spec/). This approach works really well if you run kapp-controller along with cluster-api and your k8s clusters are also represented by objects in k8s - then it can manage multiple clusters at the same time. This approach works well for us. The App CR can also consume helm charts and constantly reconcile them.
btw, nice job with the kluctl!
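For reference, a trimmed-down App CR looks roughly like this (names and the bundle image are placeholders; the app-spec link above has the full schema):

```yaml
# App CR pulling an imgpkg bundle, rendering it with ytt/kbld and deploying with kapp
apiVersion: kappctrl.k14s.io/v1alpha1
kind: App
metadata:
  name: my-app
  namespace: my-app
spec:
  serviceAccountName: my-app-deployer       # RBAC boundary for what kapp may change
  fetch:
    - imgpkgBundle:
        image: registry.example.com/bundles/my-app:1.2.3   # the versioned bundle
  template:
    - ytt: {}
    - kbld:
        paths:
          - "-"
          - .imgpkg/images.yml              # resolve images to the digests locked in the bundle
  deploy:
    - kapp: {}
```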
a
We use a combination of kustomize and helm.
t
We use Helm charts for standardization and its templating features, and ArgoCD for managing deployments. 1 ArgoCD installation can manage deployments on multiple clusters. Our terraform setup allows for adding new clusters that can either have their own ArgoCD or "become managed" by an existing ArgoCD.
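Roughly what one of those looks like on our side (repo, chart and cluster URLs are placeholders):

```yaml
# Argo CD Application: deploy a Helm chart to a remote (managed) cluster
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/charts.git
    path: charts/my-service
    targetRevision: main
    helm:
      valueFiles:
        - values-prod.yaml        # per-environment values
  destination:
    server: https://prod-cluster.example.com:6443   # a cluster registered with Argo CD
    namespace: my-service
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```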
e
Carvel Ytt is the best
h
This is one of the reasons why we are working on a CLI tool that helps you create and manage cloud resources and kubernetes clusters using helm, Terraform and ArgoCD: https://github.com/polyseam/cndi
Templates in https://github.com/polyseam/cndi/tree/main/src/templates