# platform-blueprints
https://devpress.csdn.net/cicd/62ec33a489d9027116a10669.html Like always, it depends 😂. Another thing to consider is that the more nodes you have, the more pressure you put on the control plane. That might not be as much of an issue on EKS since I think the control plane scales, but I could be wrong; my clusters are fairly small.
cc @Pierre Mavro 👀
Thank you for the link @Hugo Pinheiro - and yes, you’re right, it always depends. I’m curious to see if anyone around here has already pushed the limits of EKS on the number of worker nodes.
I guess it would also depend on the workload. If I had a dev cluster that I was going to set up vcluster or devpod on, I would probably go with a bigger cluster 😁
Have you read the docs? https://aws.github.io/aws-eks-best-practices/scalability/docs/ There are no absolute numbers, but they recommend you start planning seriously if you’re scaling beyond 300 nodes or 5000 pods in a single cluster. The Kubernetes Cluster Autoscaler has been tested to scale up to 1000 nodes: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/scalability_tests.md
As I understand the docs, scaling past 1000 nodes or 50,000 pods is considered extra large scaling; it is possible, and such deployments do exist. Personally I’ve never run anything over about 70-80 nodes. That was during a full-tilt system load test, and we found we were running into bottlenecks not related to EKS or k8s directly. For example, CoreDNS needed tuning due to the amount of querying going on, but that was more about the abnormal load each pod was generating than the number of nodes or pods…
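To make the 300 node / 5000 pod guidance concrete, here’s a quick sketch of checking where a cluster sits relative to those numbers. It assumes the official `kubernetes` Python client and a working kubeconfig; just an illustration, not something from the EKS docs themselves:

```python
# Count nodes and pods and compare against the planning thresholds from the
# EKS scalability best-practices docs (~300 nodes / ~5000 pods).
# Assumes: `pip install kubernetes` and a kubeconfig pointing at the cluster.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

node_count = len(v1.list_node().items)
pod_count = len(v1.list_pod_for_all_namespaces().items)

print(f"nodes: {node_count}, pods: {pod_count}")
if node_count > 300 or pod_count > 5000:
    print("Beyond the point where the EKS docs suggest planning scalability seriously.")
```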
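On the CoreDNS tuning point: that can mean cache settings, ndots, or simply more replicas; the usual tool for the replica side is cluster-proportional-autoscaler. A minimal sketch of the same idea, assuming the `kubernetes` Python client and a made-up ratio of one replica per 16 nodes (minimum 2):

```python
# Illustrative only: scale CoreDNS replicas with node count, roughly what
# cluster-proportional-autoscaler automates. The 1-per-16-nodes ratio and the
# minimum of 2 replicas are assumptions, not values from the thread.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
apps = client.AppsV1Api()

node_count = len(core.list_node().items)
desired = max(2, -(-node_count // 16))  # ceil(node_count / 16), floor of 2

apps.patch_namespaced_deployment(
    name="coredns",
    namespace="kube-system",
    body={"spec": {"replicas": desired}},
)
print(f"{node_count} nodes -> {desired} CoreDNS replicas")
```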
Thanks @Heiðar Eldberg Eiríksson 👍