# platform-toolbox
l
Do you like GitOps? As in flux & argocd.. As a practitioner, you've probably seen that much of it comes down to implementation. How is it working out for you? What is working well, and what is most annoying?
j
i love gitops, been an argocd user for a handful of years, i’m yet to discover a better way to manage and keep inventory of kubernetes assets. the most undersold element of gitops is how immediately impactful your new hires are when they’re already familiar with your gitops engine - and that’s almost never the case with scripted/bespoke cd systems that organizations create that end up sprawling beyond the point of clarity.
l
This is a great point, John. Can’t recall if i read it before. So true!
a
GitOps is declarative: whatever is described as Kubernetes state will be applied, with Helm as the underlying tool. That works well, and it's very neat as long as you are only deploying Kubernetes objects. From a growth perspective, think about:
1. What does the infrastructure control plane look like? Do your developers always create a ticket when they need a new SQS queue?
2. As you expand, adding more developers and regions, are you sure your environments will remain as simple as your branching mechanism?
3. Do you expect your devs to read Helm and Kubernetes errors?
4. Dynamic rollouts, predictable workflows and events. Argo provides solutions to these, but not from Argo CD itself; rather, from purpose-built tools like Argo Events, Workflows and Rollouts. Soon your deployment workflow can go from managing one tool to managing multiple tools.
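For illustration, that declarative core is roughly this kind of object: an Argo CD Application wrapping a Helm chart. This is just a minimal sketch; the repo URL, chart path, names and namespaces are made up.

```yaml
# Minimal sketch of an Argo CD Application that hands a Helm chart to the cluster.
# Repo URL, paths and namespaces below are illustrative, not real.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/gitops.git   # hypothetical repo
    targetRevision: main
    path: charts/payments-api
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert drift back to the declared state
```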
l
Very good points @ADIL RAFIQ.
• Do you use branching in your gitops repo? I never tried that; it felt cumbersome. Does it work for you?
• Regarding the "new SQS queue": do you have Crossplane or something similar in your tooling? I want to go towards Crossplane, but never had the time.
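To make the Crossplane idea concrete, a managed SQS queue ends up as just another object in git. This is only a rough sketch: the apiVersion and fields depend on which AWS provider (and version) is installed, and the names are invented.

```yaml
# Rough sketch of an SQS queue as a Crossplane managed resource.
# apiVersion/kind vary by provider; name and region are illustrative.
apiVersion: sqs.aws.upbound.io/v1beta1
kind: Queue
metadata:
  name: orders-queue
spec:
  forProvider:
    region: eu-west-1
  providerConfigRef:
    name: default   # points at the AWS credentials Crossplane should use
```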
> Argo provides solutions to these, but not from Argo CD itself; rather, from purpose-built tools like Argo Events, Workflows and Rollouts. Soon your deployment workflow can go from managing one tool to managing multiple tools.
💯 this is what I see as well. GitOps is great and all, but the definition is a bit too loose. You need to augment it with different workflows, and the details all come down to your implementation.
a
• At my last employer they had very strict semantic versioning, where master was only used as the upstream single source of truth and release candidates had their own versioned branches. I can't imagine GitOps being implemented there in the true sense. One workaround that was discussed: make a new repo using the app-of-apps pattern, where all of your services are added as Argo Applications. At release time, just create one commit updating the version in each Application to the desired release candidate, i.e. a single update to the apps repo releases all software. There's a sketch of such a child Application below.
• In my time there wasn't; we served tickets as well. But I am actually doing a hobby project comparing Terraform to Kubernetes as an infrastructure control plane. You are welcome to join in and give input.
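A rough sketch of one child Application in that hypothetical app-of-apps repo; names, repo URLs and versions are all invented. The release commit would simply bump targetRevision in each of these files.

```yaml
# Hypothetical child Application inside the app-of-apps repo.
# A release is a single commit that bumps targetRevision here
# (and in the sibling Applications) to the desired release candidate.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/team/checkout-service.git   # hypothetical repo
    targetRevision: v2.7.0-rc.1   # bumped per release candidate
    path: deploy/chart
  destination:
    server: https://kubernetes.default.svc
    namespace: checkout
```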
l
> doing a hobby project
Is it a blog post, or a tool? I'm in the Kubernetes-as-control-plane camp, because I already have the tooling in place for GitOps. But then Flux has a Terraform provider.. so there is crossover between the two worlds. I am happy to team up on a blog post, perhaps.
d
@ADIL RAFIQ Maybe a split between a configuration repo and an application repo would help there? For the configuration repo I do not use versioning; main branch = source of truth for releases. For the application repo there is a mandatory branching and tagging strategy.
In a distributed way: give every team their own deployment repo, where release info is written after CI builds artifacts, and always sync from the main branch. Let teams accept merge requests to it made by the CI pipeline. The CI pipeline could be triggered both on a commit to the application repo and on a tag, and distinguish those two events to know how to set a proper version for the artifacts and into which directory in the deployment repo to place the version info, so a GitOps reconciler can deploy the application in the proper version and environment.
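Purely as an illustration of that flow, a team's deployment repo could look like the sketch below; every name, path and tag here is invented.

```yaml
# Hypothetical layout of a team's deployment repo:
#
#   apps/
#     payment-service/
#       dev/
#         version.yaml    # updated by CI on every commit to the app repo
#       prod/
#         version.yaml    # updated by CI only when a tag is pushed
#
# Example contents of apps/payment-service/prod/version.yaml after a tagged build,
# which the GitOps reconciler picks up from the main branch:
image:
  repository: registry.example.com/payment-service
  tag: "1.4.2"   # derived from the git tag; dev builds could use a commit SHA instead
```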
Responding to the main question: I love GitOps, it allows me to efficiently manage a lot of things at scale, with an overview in a Git repository. Check my automation examples there: with Helm and ArgoCD I provision dozens of CI/CD stacks from a template, manage ResourceQuotas for all namespaces from a centralized place, etc. https://medium.com/faun/ci-cd-from-kitchen-1-the-cd-part-with-argocd-dc95e7c49eb6
What's the most annoying? Testing feature branches, as GitOps means `main branch = single source of truth`. A step towards good testing could be using a local K3s cluster in an automated way, for example, but the worst is when the prod cluster is OpenShift and the tested automation would not run on plain Kubernetes.
l
Really interesting discussion. I'm coming from the other end of the spectrum here ... just getting started moving from pipelines to GitOps, and I chose Flux as a starting point since it seems pretty lightweight and a little less opinionated than Argo. A lot of what I am deploying at the moment is Helm charts, and it would be nice to see what version is deployed with native Helm tooling.

Now ... I am in a situation with my current client where we need to support a number of older "releases" of an entire cluster, from infrastructure through applications; we need to lock things down into a versioned "release". Regulatory stuff. Think SBOM, but for Helm charts and so on. And developing a new version in another folder in main/master seems a little too risky, I think. Making things scale and controlling versions across such a setup is definitely a fun challenge.

I think I have a design brewing where I can version and develop applications and k8s infrastructure in branches in separate repos, and release them to different environments from a separate flux-system repo for each environment (dev, test, prod). However ... it may be a stretch to map it out in a Slack message 😂 And I have to play around with it more to figure out where I am shooting my foot off, and whether I land on the genius or crazy lunatic side of this.
And there is probably a name for the pattern I am thinking of ... I just haven't found it yet. 😂
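To make that slightly less abstract: the per-environment flux-system repo idea could boil down to something like the sketch below, where each environment pins a versioned release branch or tag. All names, URLs and versions here are invented.

```yaml
# Hypothetical Flux objects in the prod environment's flux-system repo,
# pinning apps/infrastructure to a locked-down release tag for that environment.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-apps
  namespace: flux-system
spec:
  interval: 5m
  url: https://git.example.com/platform/apps.git   # hypothetical repo
  ref:
    tag: release-2024.2   # the versioned "release" this environment is locked to
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform-apps
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  sourceRef:
    kind: GitRepository
    name: platform-apps
  path: ./clusters/prod   # environment-specific overlay
```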
d
I don't know if I understand your case properly; I was recently implementing versioning of cluster resources and ended up using this: https://argocd-applicationset.readthedocs.io/en/stable/Generators-SCM-Provider/ The `kind: ApplicationSet` discovers different branches, e.g. release-1.0, release-1.1, etc., in a repository and creates versioned objects on the cluster (using Helm; the branch name is passed as a Helm variable, then stripped down to a version number). I will probably create a blog post about it soon.
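Roughly, that setup looks like the sketch below. The field names follow the SCM Provider generator docs linked above, but the organization, repository, chart path and parameter names are invented, and the exact options depend on the ApplicationSet version in use.

```yaml
# Rough sketch: discover release-* branches and pass the branch name to Helm.
# Org, repo, chart path and parameter names are illustrative only.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: versioned-cluster-resources
  namespace: argocd
spec:
  generators:
    - scmProvider:
        github:
          organization: example-org
          allBranches: true            # scan branches, not just the default one
        filters:
          - repositoryMatch: cluster-resources
            branchMatch: 'release-.*'
  template:
    metadata:
      name: 'cluster-resources-{{branch}}'
    spec:
      project: default
      source:
        repoURL: '{{url}}'
        targetRevision: '{{branch}}'
        path: chart
        helm:
          parameters:
            - name: releaseBranch
              value: '{{branch}}'      # the chart strips this down to a version number
      destination:
        server: https://kubernetes.default.svc
        namespace: cluster-resources
```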
Personally I don't think ArgoCD is opinionated, but FluxCD is. I was trying to start using Flux but ended up digging through GitHub issues where Flux maintainers were saying "you cannot do this, we don't want to allow you to do it". It was about using Kustomize plugins. It's totally opinionated. ArgoCD has thousands of options and it's up to you how you use it. The thing I wanted to implement with FluxCD was: https://medium.com/@keska.damian/deploying-non-deployable-things-on-argocd-with-kustomize-handling-edge-cases-aa51d24b3e4d
https://github.com/fluxcd/kustomize-controller/issues/323 I think it is down to FluxCD's bad design. ArgoCD renders manifests without cluster-admin access in a separate Pod, and then those manifests are applied in another Pod that has cluster-admin access. So the Kustomize plugins do not have direct access to the cluster 🙂