# general
d
Hi, for folks using ArgoCD, what strategies would you recommend to automate and manage deployment configurations across k8s environments? We are deploying one ArgoCD instance per cluster and leverage kustomize. On top of that we have a dedicated “apps” application that can deploy all apps into ArgoCD for a given cluster. Was curious if anyone is a proponent of their ApplicationSet or App of Apps pattern or another approach I may have missed in their docs?
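For anyone comparing the two patterns, here is a minimal sketch of the ApplicationSet route: a cluster generator that stamps out one Application per cluster registered in Argo CD, pointing each at a kustomize overlay named after the cluster (the repo URL and overlay paths are made up):

```yaml
# Hypothetical sketch: one Application per registered cluster, each using
# a kustomize overlay matching the cluster name. Repo URL/paths invented.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
  namespace: argocd
spec:
  generators:
    - clusters: {}        # yields {{name}}/{{server}} per registered cluster
  template:
    metadata:
      name: 'guestbook-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/deploy-configs.git
        targetRevision: HEAD
        path: 'apps/guestbook/overlays/{{name}}'
      destination:
        server: '{{server}}'
        namespace: guestbook
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```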
Just downloaded GitOps with ArgoCD (Manning Report) which I didn’t realize was free from codefresh 🫶
a
I’m similarly curious as we’re just going through a similar exercise. Every example of multi-cluster I’ve come across seems less usable/intuitive from a Developer Experience perspective? (would love to be proven wrong!)
a
I used Argo at two previous jobs. In both cases I followed a very similar approach to @dat’s. Specifically, we deployed an Argo per cluster (for us this was per env, like production, staging, dev) and each cluster was managed by a cascading app of apps. This seemed to work well for us, but tbf it was not a huge scale (~100 engineers, ~20 biz apps and another ~20 ops apps)
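Roughly, the root of a cascading app of apps is just an Application whose source path contains more Application manifests, something like this (repo and paths are illustrative):

```yaml
# Rough sketch of a per-cluster "app of apps" root: the source path holds
# the child Application YAMLs, so onboarding an app is one file in git.
# Repo and paths are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/argocd-apps.git
    targetRevision: HEAD
    path: clusters/production/apps   # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc  # children land in this same Argo
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```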
I am curious @Andrew Kirkpatrick what you mean by this?
Every example of multi-cluster I’ve come across seems less useable/intuitive from a Developer Experience perspective
Do you mean using Argo is less intuitive or do you mean not using Argo is less intuitive?
a
Oh sorry, I meant using a single Argo instance to manage Apps across multiple clusters (as opposed to using an instance per cluster). Visually/logically it seems a little more straightforward to have an instance per cluster?
More on-topic was also curious how folks were organising their App of Apps groupings. Like using a single parent App for their entire platform, or grouping them by domain boundary/responsibility, or by resource type, etc.
s
@Andrew Fong
a
Ah that makes sense. Yea, we used subdomains (argo.dev.domain.com vs argo.staging.domain.com) and that worked well. Though I do think if things are namespaced well, the filtering can work fine on a single host too. Our issue was with managing keys, because Argo needs to register remote clusters and managing those permissions can be a pain.
g
Very nice topics. My experience is with 1 ArgoCD instance managing multiple clusters (staging and multiple prods) at many workplaces. It works really well with the app of apps pattern, where microservices have their parent app and infra services have another one. Another aspect of GitOps is organizing the k8s configurations for services in GH. I worked with 2 approaches:
• each service has a config repo with its k8s config. We used this repo to release from by merging a PR.
• one monster repo for every k8s config.
Both can work depending on your CI/CD release process, but I would recommend the repo per service because releases can be done from it via PRs.
a
Ah I am absolutely a fan of per-app config, and actually a big advocate for keeping it in the same repo as the software code. We used mutating git tags (I know, pick your battles) to manage deployments only after CI/CD was successful.
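The moving-tag trick boils down to pointing targetRevision at a tag that CI fast-forwards only after a green build, roughly like this (repo and tag name hypothetical):

```yaml
# Hypothetical fragment: Argo CD tracks a mutable tag that CI re-points
# only after a successful pipeline, so a sync only ever picks up commits
# that CI has blessed.
spec:
  source:
    repoURL: https://github.com/example-org/checkout-service.git
    targetRevision: deploy/prod   # tag force-moved by CI, not a branch
    path: deploy/overlays/prod
```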
h
Well, if I can add my 10¢… I have worked with both setups (a single ArgoCD managing multiple apps, and one ArgoCD per env), supporting teams of different sizes, and my takeaways are:
• Managing a single ArgoCD vs multiple does not make much difference as long as you adopt recommended practices, such as EVERYTHING in code (including the app of apps)… no exception, plus central authentication, proper RBAC, alerting, etc;
• From a Developer perspective, based on my experience, they do not care much whether the apps are being managed by a single or multiple Argos. As long as the authentication is unified and the interface is the same, it should be fine. Also, once they receive the notification, opening the app is just a matter of opening the link that comes along with the notification on Slack/Teams/etc, which takes them to the right cluster;
• App-of-Apps is very handy indeed and has helped to abstract a lot of complexity. Onboarding a new app is just a matter of updating a single yaml file and ArgoCD does the rest. The effort is the same no matter if we are talking about one or multiple clusters;
• When it comes to application management, most platforms I’ve implemented make use of central (umbrella) Helm charts that ArgoCD pulls and combines with the values.yaml stored along with the application code. This approach allows the charts to be maintained/evolved by the Platform Team while the Product Teams have the autonomy to adjust settings to their needs via values.yaml. The key to success here is making sure we have accurate documentation that evolves side-by-side with the central charts. Also, regular demo sessions presenting new features and use cases turned out to be a game changer for most teams I have led (RTFM doesn’t work for most cases).
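A sketch of that umbrella-chart setup, assuming Argo CD 2.6+ multiple sources: the chart comes from the platform repo, the values.yaml from the product team’s app repo (all names and URLs hypothetical):

```yaml
# Hypothetical sketch: central Helm chart from the Platform Team's repo,
# values.yaml from the app's own repo, stitched together via a "ref" source.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout-service
  namespace: argocd
spec:
  project: default
  sources:
    - repoURL: https://github.com/example-org/platform-charts.git
      targetRevision: 1.4.0
      path: charts/web-service            # central (umbrella) chart
      helm:
        valueFiles:
          - $appRepo/deploy/values.yaml   # owned by the product team
    - repoURL: https://github.com/example-org/checkout-service.git
      targetRevision: main
      ref: appRepo                        # exposes this repo as $appRepo above
  destination:
    server: https://kubernetes.default.svc
    namespace: checkout
```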