# kubernetes
j
Hi all, after about 18 months down the Kubernetes path our team has decided to move away from K8s to Azure Container Apps. I don't disagree with the decision, as our team is small and inexperienced with K8s. However, my gut feeling is that we will eventually find there are things Container Apps doesn't provide us that K8s does. I'm looking for advice from anyone who has made the switch, either from Container Apps to K8s or back the other way. What was your experience?
j
ACI/ACA def is less powerful than k8s, but I've found that most teams don't need that kind of power. k8s is also a ton of maintenance. I've found that the "tipping point" is when your company is big enough to justify putting multiple people on k8s maintenance full time.
It's important to realize that for the functionality you're giving up, you're gaining a lot of engineering time by not dealing with k8s, which is time you can use to solve other engineering problems. One piece of advice: define now whatever point you think you'll want to switch back, because otherwise you'll fall into a cycle of having to meet about "is it time to move back to k8s" every time you miss a k8s feature.
j
For 2-3 clusters, how many FTEs do you think would be needed? Keep in mind we are using AKS, so we don't need to do all of the work ourselves.
j
I was supporting three environments, each with one set of hot-hot k8s clusters, using AKS. We spent 3 FTEs' worth of time keeping those happy.
j
It's just me atm, and I'm not full time dedicated to it. No wonder it's been tricky..
s
One place I worked at had 2 dedicated engineers supporting a single K8s cluster for 1 product and nothing else. Not even helping with the containers. I don't know why it took so many people for so little. But I suspect the issue is that there's a huge amount of startup effort & research needed for K8s, especially to do it right. Which makes it tough on small installations. As they said above, you might be better off without all of that flexibility. But expect complaints about missing "critical" functionality. One thing I've done in similar but different situations is to ask them to bear with it for 3 months and then come back. Usually, they find out it wasn't really so needed. In the other cases, we worked together on their real problem and found solutions (we did run software before K8s, after all).
t
I don't think I fully agree with that sentiment. If you are running a self-managed bare metal k8s cluster, then yes, loads of maintenance (speaking from personal experience). However, with the current level of managed k8s services (especially with the new AWS EKS Auto Mode), I personally consider k8s to be almost a nothing-burger. A single experienced k8s operator can maintain clusters like cattle without breaking a sweat. That being said, these people do not come cheap, and if you do not have the expertise or are using more hands-on flavours of k8s, then yes, it may not be worth the operational cost. However, past experience has shown that every company eventually hits a point of container management where k8s ultimately wins and becomes a need. Until you hit that point, build without lock-in so that a migration (when it happens) will be easy.
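To make "build without lock-in" concrete, the cheapest insurance is keeping your container images platform-neutral: no cloud-specific base images or agents baked in, and all configuration pulled from environment variables so ACA, k8s, or a local `docker run` can inject it. A minimal sketch (the app name and port are made up for illustration):

```dockerfile
# Portable image: standard base, no cloud-specific agents or sidecars baked in.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies in a separate layer for better build caching.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Read all config (ports, connection strings) from env vars so the
# platform -- ACA, k8s, or local docker -- injects them at runtime.
ENV PORT=8080
EXPOSE 8080
CMD ["python", "app.py"]
```

An image like this moves between ACA and a k8s Deployment unchanged; only the surrounding deployment manifests differ.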
j
Cloud providers should not be considered vendor lock-in, but business partners. Switching cloud providers is a much bigger deal than just "oh, I need to change my deploy scripts now."
t
Lock-in was not referencing the cloud providers themselves, rather their specific container flavours (or other lock-in features)
s
I mean, it’s difficult to scope unless you describe your workloads. How large is your team? What do your deployment models look like? How complex are the workloads you support?
c
Sorry for coming in so late on this one - I’ll drop my 5 cents anyway 🙂 I’m with ThameezBo on this one. If you’re running AKS Automatic, the k8s maintenance itself is really not that much. The problem comes when teams that have little to no k8s knowledge need to drop their deliverables onto a cluster and attach them to the surroundings. Surroundings here being sometimes easy, like DNS, and sometimes rather hard, like a CDN with all kinds of cache integration with their app. Who helps them run the additional components they need? Who makes the configuration of stuff that is “just a checkbox” in ACI / ACA work together between external services and stuff on the k8s cluster? If you do not have a platform with a sensible abstraction for that in place, then running with a PaaS or the integrated offering of a cloud provider makes a lot of sense if you don’t want to compensate with people. Those would most probably be the people “managing the k8s cluster” in that scenario.
r
Under 10 or so services, k8s is probably too much of a hassle. Long term, you will want to move back to k8s for the superior scaling options. Those app services run on instances and are generally more costly than k8s worker nodes, with the trade-off of being easier to manage. Small teams don't need k8s IMO. Like others are saying, it requires a full-time platform engineer, and I would say that goes for managed k8s too. People underestimate how much of the tooling around k8s needs to be developed, and developers also need guidance to use those or existing tools.
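For a sense of what those "superior scaling options" look like in practice: on k8s you can scale a Deployment on arbitrary event sources via KEDA (which, notably, is also the engine behind ACA's scale rules, so the concepts transfer). A hedged sketch, assuming KEDA is installed and the resource names (`worker`, `jobs`, `servicebus-auth`) are hypothetical:

```yaml
# KEDA ScaledObject: scale the "worker" Deployment on Service Bus queue depth.
# On k8s you control every knob here; ACA exposes a subset as scale rules.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker            # the Deployment to scale
  minReplicaCount: 0        # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: jobs
        messageCount: "5"   # target messages per replica
      authenticationRef:
        name: servicebus-auth   # TriggerAuthentication with the connection secret
```

Because ACA's scaling is KEDA-based too, a team that outgrows ACA's built-in rules is mostly graduating to the full version of the same mechanism, not learning something new from scratch.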
s
Sorry for coming late to this, but a good middle ground is to deploy ACA on an AKS cluster instead of as a managed instance. This means you get access to the k8s cluster that all the ACA objects are deployed to, making it much easier to migrate away when you outgrow ACA and k8s becomes relevant again. Basically, use ACA as an abstracted API and not as a PaaS.
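For anyone wanting to try this: it's the "Container Apps on Arc-enabled Kubernetes" setup, where an ACA environment is hosted on a cluster you own. A rough sketch, assuming the connected environment already exists; all resource names are made up, and the exact commands/flags come from the preview and may have changed, so check the current Azure docs:

```shell
# Deploy a container app into a connected environment backed by your own AKS cluster
# (requires the "containerapp" az CLI extension; environment setup not shown).
az containerapp create \
  --name web \
  --resource-group my-rg \
  --environment my-connected-env \
  --image myregistry.azurecr.io/web:1.0

# Because the environment runs on your cluster, the same workload is visible
# as plain Kubernetes objects -- your escape hatch if you later outgrow ACA.
kubectl get deployments,pods --all-namespaces
```

The upside is exactly what the comment describes: the ACA abstraction sits on top of k8s objects you can inspect and eventually take over, rather than on an opaque managed instance.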