# general
e
Hey everyone! šŸ‘‹ Just finished a fun project migrating an 8-container Docker Compose stack to a full K8s deployment with Terraform IaC.

Tech stack:
• Kubernetes manifests (Deployments, Services, ConfigMaps)
• Terraform for cluster provisioning & resource management
• Multi-container microservices architecture
• K9s for cluster monitoring/management (screenshot attached)

The complexity jump from Docker Compose to K8s orchestration was real, especially handling service discovery, persistent volumes, and resource constraints. But the scalability and declarative management you gain are worth it. Terraform integration made the whole infrastructure reproducible and version-controlled, which was satisfying to architect.

Also, I'm actively looking for a Platform Engineering role! If anyone knows of opportunities or wants to chat about K8s etc., I'd love to connect.

Full code/configs on GitHub if anyone wants to dive deeper: https://www.github.com/pyvel26/Kubernetes-Project
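To give a flavor of what that migration involves, a single Compose service roughly becomes a Deployment plus a Service: the Deployment runs the pods, and the Service gives them a stable DNS name for discovery. A minimal sketch (the names, image, and ports here are illustrative, not taken from the linked repo):

```yaml
# One Compose service translated into K8s primitives.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                        # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:1.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:               # the "resource constraints" part of the jump
            requests: { cpu: 100m, memory: 128Mi }
            limits:   { cpu: 500m, memory: 256Mi }
---
apiVersion: v1
kind: Service
metadata:
  name: api                        # other pods reach this at dns name "api"
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```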
šŸ‘ 2
Jordan
Feedback:
• Kafka doesn't need ZooKeeper anymore.
• Generally you'd use a cloud managed service, not a PVC, for these services, since PVC-backed state significantly limits scalability (the reason you'd use k8s in the first place).
• There are operators for both Kafka and Postgres; you'd generally never just write your own Deployment/Service for these.
• Flux/ArgoCD are seen as better resource deployment patterns for k8s than Terraform.
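The GitOps point can be made concrete with a minimal Argo CD Application manifest, which tells Argo CD to continuously reconcile a git path into the cluster. A sketch only; the repo URL, path, and names are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kafka-stack                # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/Kubernetes-Project  # placeholder repo
    targetRevision: main
    path: deploy                   # hypothetical manifest directory
  destination:
    server: https://kubernetes.default.svc
    namespace: kafka
  syncPolicy:
    automated:
      prune: true                  # delete resources removed from git
      selfHeal: true               # revert manual drift in the cluster
```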
e
@Jordan Thanks for the feedback! You're absolutely right on each count:

KRaft vs ZooKeeper: I actually started with KRaft but ran into service naming conflicts and storage formatting complexity. I switched to ZooKeeper to focus on learning Kubernetes fundamentals rather than fighting KRaft's operational quirks. I definitely plan to revisit KRaft once I'm more comfortable with K8s.

Cloud managed services: Completely agree for production workloads. Using PVCs taught me about storage persistence issues and reclaim policies the hard way, which I wouldn't have learned with managed services. But you're right that it defeats the scalability benefits of K8s.

Operators: This is where I see the biggest gap in my approach. Strimzi and CloudNativePG would have saved me hours of debugging pg_hba.conf and storage issues. My reasoning was to learn the underlying concepts first, then move to operators, but I can see how that's backwards from a production perspective.

GitOps: I used Terraform because that's what I knew, but I keep hearing about ArgoCD for application lifecycle management.

This project was definitely learning-focused. Appreciate the reality check on current platform engineering practices. Any specific operators or GitOps tools you'd recommend starting with?
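The reclaim-policy lesson above can be sketched as a StorageClass/PVC pair: whether data survives PVC deletion is decided by the StorageClass's `reclaimPolicy`, and the default for dynamically provisioned volumes is `Delete`. Names, sizes, and the provisioner are illustrative:

```yaml
# StorageClass whose volumes are kept (not deleted) when the PVC goes away.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd               # illustrative name
provisioner: kubernetes.io/no-provisioner  # placeholder; a cloud cluster would use its CSI driver
reclaimPolicy: Retain              # default for dynamic provisioning is Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kafka-data                 # illustrative name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: retained-ssd
  resources:
    requests:
      storage: 10Gi
```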
Jordan
There's a project, idpbuilder, that helps set up ArgoCD projects and a Gitea server for you as an artifact registry as well. Regarding operators, the Strimzi or Bitnami Kafka ones are the most common I've seen used, though Confluent has one too. I'm not sure if AutoMQ, Pulsar, Buf or Redpanda have one yet; the Kafka ecosystem is fracturing in the cloud native space. I don't touch Postgres much, but there are multiple operators. Like I said, serverless offerings are often used in enterprise, so maybe the Amazon ACK RDS or Aurora operators, if not Crossplane.
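For a sense of what Strimzi replaces, the hand-written Kafka Deployments/Services collapse into one custom resource that the operator expands into StatefulSets, Services, and broker config. A sketch of a ZooKeeper-based cluster; field names follow the `v1beta2` API and may differ across Strimzi versions, and the name and sizes are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                 # illustrative name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 10Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 5Gi
  entityOperator:                  # manages topics/users as k8s resources
    topicOperator: {}
    userOperator: {}
```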
šŸ‘ 1
Andrew Kirkpatrick
For PostgreSQL I'd definitely recommend https://cloudnative-pg.io; beyond that you'd potentially look into psql-compatible databases (such as https://docs.yugabyte.com/preview/deploy/kubernetes/single-zone/oss/yugabyte-operator and https://github.com/cockroachdb/cockroach-operator). Although I do think there's value in learning the fundamentals of how you might deploy a database to k8s before the magic of an operator. Agree that typically you wouldn't use Terraform for this kind of workload; I'd recommend taking a peek at https://codefresh.io/learn/argo-cd for learning more about Argo CD itself.
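CloudNativePG follows the same pattern: a single `Cluster` resource from which the operator manages replication, failover, and the underlying PVCs. A minimal sketch, with illustrative name and size:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-cluster                 # illustrative name
spec:
  instances: 3                     # one primary plus two replicas
  storage:
    size: 10Gi
```

Compare that to hand-rolling a StatefulSet, Services, and pg_hba.conf debugging for the same result.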
šŸ‘ 3
DanK
Was the goal to run already cloud-native services like PostgreSQL (RDS) directly on K8s instead of using the cloud service? There might be legitimate reasons to do that, but IMO if your K8s is in a public cloud already, it's hard to beat the value you get from a managed database service when you consider the scope of requirements, especially as you grow. I appreciate the desire/goal to be completely platform agnostic too, but I've rarely seen it shake out that way in growing businesses with significant database demands. (And in this scenario PVCs would be essential constructs for data persistence, I would think...) That said, we recently shifted away from SQS/Rabbit to a K8s operator because of shortcomings... so it's a nice option to have!
šŸ‘ 1
e
@DanK The goal was to learn Docker at no cost with services I was familiar with as a data engineer. After learning Docker I took an interest in Kubernetes, so I just used the containers I had from the first project. Now I have a good idea of how a realistic platform engineering infrastructure should look. The feedback will help me on my next project.
šŸ’Æ 1
šŸ‘ 1
@Andrew Kirkpatrick I appreciate the feedback. I'm going to take a look at the resources you've provided.