# terraform
Does anyone have experience managing k8s resources in TF (HCL) instead of YAML? I've read it's not a good idea when CRDs are involved. I mainly want to explore whether I can use one tool for managing both infra and k8s resources, so either everything in TF or everything in Crossplane.
I did this at my last job for our disaster recovery. The orchestration cluster was completely in TF, to the point where we had Atlantis running, and that could then manage the rest of the Terraform for additional resources and bring up Argo, which orchestrated the YAMLs. My experience was that YAML in TF was verbose and a bit hard to manage, but doable. The bigger concern was that we didn't invest in auto-reconciliation or drift detection for our Terraform, which meant the lifecycle felt really wrong for Kubernetes objects. There was often drift, which led to ignoring a lot of k8s fields in the TF. I'm sure we could have improved things, which may have changed my opinion, but given the small footprint we just sort of accepted it, so I'm left with a feeling of not wanting to scale that experience.
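For context on the "ignoring a lot of k8s fields" workaround: the `kubernetes_manifest` resource in the official Kubernetes provider has a `computed_fields` argument for exactly this. A hypothetical sketch (the manifest path and field list are made up, not from the thread):

```hcl
# Sketch: a k8s object managed from TF, with drift-prone fields excluded.
resource "kubernetes_manifest" "app" {
  manifest = yamldecode(file("${path.module}/app.yaml"))

  # Fields that controllers mutate at runtime (e.g. an HPA scaling
  # replicas) would otherwise show up as permanent drift on every plan.
  computed_fields = [
    "metadata.annotations",
    "spec.replicas",
  ]
}
```

This quiets the diffs, but it also means TF is no longer fully authoritative for those fields, which is the lifecycle awkwardness described above.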
I see your goal is to use a single tool. Is that because you need to manage the lifecycle of both cloud resources (which you currently have TF for) and k8s resources (which you currently have YAML for) in the same way, to lower the number of tools your team needs to learn? Or more to allow an atomic commit that changes both types? Or something completely different?
I wouldn't recommend this; I'd use TF to set up the EKS cluster and something like Flux or Argo to perform CD via gitops, and then manage your k8s YAML in a git repo that gets deployed by Flux/Argo.
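The split being recommended can be sketched in a few lines of TF. This is a hypothetical bootstrap, not anyone's actual config: the module source and Helm chart are real, the names and values are assumptions.

```hcl
# TF owns the cluster...
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = "main"
  cluster_version = "1.29"
  vpc_id          = var.vpc_id
  subnet_ids      = var.private_subnet_ids
}

# ...and does a one-time bootstrap of Argo CD. Application manifests
# then live in git and are reconciled by Argo, not held in TF state.
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}
```

The boundary is the point: TF's plan/apply lifecycle stops at the cluster plus one bootstrap release, and Argo's continuous reconciliation takes over from there.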
If you want to manage both with one tool, Crossplane would be worth looking into, but as Alexandre mentioned, most use TF for managing the cluster and then Argo to manage apps.
I'd recommend avoiding the "one ring to rule them all" approach. You'd have far fewer headaches using TF for infrastructure and something like Flux or Argo as mentioned above. We tried doing it all in TF, and it's not worth it. It can be done, but you'll end up in terraform state hell where things don't agree, etc. 0/10, do not recommend 😂
Thanks for the responses everyone. We currently have Terraform + Argo CD and will probably stick with it. @Abby Bangser: the atomic commit (one PR where we can see all changes), and also avoiding copy/pasting outputs from Terraform into k8s resources (for example, creating an RDS instance in TF, then copying the connection string and pasting it into a k8s secret).
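For what it's worth, the copy/paste step disappears when both resources live in the same TF config, since the secret can interpolate the RDS attributes directly. A minimal sketch with made-up names (this is the pattern, not the poster's config):

```hcl
resource "aws_db_instance" "app" {
  identifier          = "app-db"
  engine              = "postgres"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "app"
  password            = var.db_password
  skip_final_snapshot = true
}

resource "kubernetes_secret" "db" {
  metadata {
    name      = "app-db"
    namespace = "default"
  }
  data = {
    # Built from the RDS resource's endpoint attribute; no manual step,
    # and it stays correct if the instance is ever replaced.
    DATABASE_URL = "postgres://app:${var.db_password}@${aws_db_instance.app.endpoint}/app"
  }
}
```

The trade-off is the one raised earlier in the thread: the connection string now lives in TF state, so state storage needs to be treated as sensitive.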
Yeah, makes sense. Looking to basically package up the whole thing. For whatever it's worth, I had the same issues and hopes, and that's part of what I'm working on now. It's FOSS so this is not a sales pitch 😅 but happy to see if it can help, since we work with/package up both Terraform and Argo CD natively!
@Phillip Meng I feel like I push Env0 a lot in here, but you might want to check it out. One of the cool things you can do is define your outputs in TF, then make authorized API calls to Env0 to get those outputs. That means there's no need for a middleman between the TF runs and whatever needs to consume them.

I use TF for some really base-level provisioning that will likely not change much: external-dns, Jetstack, Datadog, KEDA, CUDA drivers, Istio, Env0 agents, and Harness delegates... basically stuff that services need to hook into, and/or things needed to get CI/CD running. One of the big (theoretical... this hasn't happened to me, yet) advantages is that if shit hits the fan, I can get a base-level stack deployed from my local machine that unblocks all the other CI/CD systems I need to run.

Whatever choices you make, just keep disaster recovery in mind... you don't want to get into a state where you need an agent running on a cluster to execute infra/k8s configuration changes when you might not have any clusters.