# general
k
Hey folks, just wondering how people manage their application configuration here. We're deploying to EKS, but don't want to run k8s locally, and want to manage configuration all in one place. There's just a knowledge gap here for me.
m
Speaking from previous experience, we used to keep the configuration side-by-side with the service, as a ConfigMap in the deployment manifest in each service's repo. We were using Kustomize, so there was a single base manifest with default configuration parameters and a per-environment (overlay) manifest that provided environment-specific parameters such as database endpoints. The result was a file structure that looked something like this:
```
/service-x
  /deployment
    /base
      kustomization.yaml
      deployment.yaml
      service.yaml
      config.yaml
    /overlays
      /development
        kustomization.yaml
        config.yaml
      /staging
        kustomization.yaml
        config.yaml
      /production
        kustomization.yaml
        config.yaml
```
Then, to generate the complete configuration, we had an ArgoCD Application that would run `kustomize build` on the overlay corresponding to the target service and environment being deployed. This covered configuration only; secrets were managed in HashiCorp Vault via the Vault Agent injector.
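For concreteness, here's a minimal sketch of what the base and one overlay could look like; the ConfigMap name, keys, and values are hypothetical, not from our actual setup:

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
  - config.yaml

# overlays/production/kustomization.yaml
resources:
  - ../../base
patches:
  - path: config.yaml

# overlays/production/config.yaml -- overrides only environment-specific keys;
# the name must match the ConfigMap declared in base/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-x-config
data:
  DATABASE_HOST: prod-db.internal.example.com
```

The ArgoCD Application then just points at the overlay path and ArgoCD runs `kustomize build` for you when it detects a kustomization.yaml there; roughly like this, with the repo URL and names as placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: service-x-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/service-x   # placeholder repo URL
    targetRevision: main
    path: deployment/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: service-x
```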
If you want the configuration for all services to be in a single place, you could follow a similar strategy but introduce an intermediate step via a GitHub Action (or some other CI tool) to run `kustomize build` yourself and commit the built Kubernetes manifests to a single GitHub repository containing the manifests for all of your services. That is, put the Kustomize "templates" into each service's repository:
```
# repo service-x
/service-x
  /deployment
    /base/...
    /overlays
      /development/...
      /staging/...
      /production/...

# repo service-y
/service-y
  /deployment
    /base/...
    /overlays
      /development/...
      /staging/...
      /production/...
```
Then have the CI run `kustomize build` on them and dump the complete manifests into a central repository for all services (see the workflow sketch after the layout below):
```
# repo service-deployments
/service-deployments
  /service-x
    /development/manifest.yaml
    /staging/manifest.yaml
    /production/manifest.yaml
  /service-y
    /development/manifest.yaml
    /staging/manifest.yaml
    /production/manifest.yaml
```
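A hedged sketch of what that CI step might look like as a GitHub Actions workflow living in the service-x repo; the central repo name, token secret, and bot identity are all assumptions:

```yaml
# .github/workflows/render-manifests.yaml in repo service-x (hypothetical)
name: Render manifests
on:
  push:
    branches: [main]
jobs:
  render:
    runs-on: ubuntu-latest   # kustomize is preinstalled on GitHub-hosted Ubuntu runners
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@v4
        with:
          repository: your-org/service-deployments   # central repo name is a placeholder
          token: ${{ secrets.DEPLOYMENTS_TOKEN }}    # token with write access (assumption)
          path: service-deployments
      - name: Render each environment
        run: |
          for env in development staging production; do
            mkdir -p service-deployments/service-x/$env
            kustomize build service-x/deployment/overlays/$env \
              > service-deployments/service-x/$env/manifest.yaml
          done
      - name: Commit rendered manifests
        run: |
          cd service-deployments
          git config user.name "render-bot"            # bot identity is a placeholder
          git config user.email "render-bot@example.com"
          git add .
          git diff --cached --quiet || git commit -m "Render service-x manifests"
          git push
```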
This is only one way to do it, though, and I'm curious to hear how others have done it.
a
We use chamber, which pulls secrets and other values out of AWS Parameter Store and exposes them as environment variables at runtime.
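For anyone unfamiliar: `chamber exec <service> -- <command>` populates the environment from the named Parameter Store prefix before starting the process. A minimal sketch of wiring that into a container spec; the image, binary path, and service name are hypothetical, chamber must be baked into the image, and the pod needs AWS credentials (e.g. via IRSA):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-x
spec:
  replicas: 1
  selector:
    matchLabels: { app: service-x }
  template:
    metadata:
      labels: { app: service-x }
    spec:
      serviceAccountName: service-x   # assumed to map to an IAM role via IRSA
      containers:
        - name: service-x
          image: your-org/service-x:latest   # hypothetical image with chamber installed
          # chamber reads every parameter under /service-x in Parameter Store
          # and execs the real entrypoint with those values in its environment
          command: ["chamber", "exec", "service-x", "--", "/app/server"]
```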
k
+1 to @Mike Lee
In addition to what Mike mentioned, an alternative AWS-native solution could look something like this:
• Use AWS Systems Manager Parameter Store, a managed service, as a centralized configuration store; it works with EKS.
• Then set up a way for your workloads to retrieve the configuration data from Parameter Store and make it available to the applications running on EKS.
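One concrete option for that retrieval step (my suggestion, not the only way) is the Secrets Store CSI driver with the AWS provider, which can mount SSM parameters into pods; the parameter path here is hypothetical:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: service-x-params
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "/service-x/production/database-host"   # hypothetical parameter
        objectType: "ssmparameter"
```

Pods then reference this class through a `csi` volume (`driver: secrets-store.csi.k8s.io`) and the values show up as files at the mount path.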
k
These are interesting, thanks. I've been poking around with sops for managing secrets and how it might work with fluxcd; it looks really slick. It also seems like it may be possible to use sops to decrypt values locally and place them into the environment. Does anyone have experience using sops like this? To clarify what I meant by managing configuration in one place: per service, it's managed in one place, not in a global config store for all services. In case there was confusion. 🙏
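For reference, the two patterns I mean would look roughly like this; the key ARN and names are placeholders I made up. sops's `exec-env` subcommand decrypts a file and runs a command with the decrypted values in its environment, and Flux's Kustomization resource can decrypt sops-encrypted manifests at reconcile time:

```yaml
# .sops.yaml -- creation rule telling sops which key encrypts which files
creation_rules:
  - path_regex: .*\.enc\.yaml$
    kms: arn:aws:kms:us-east-1:111122223333:key/EXAMPLE   # placeholder key ARN

# Locally, decrypt straight into one process's environment, never touching disk:
#   sops exec-env secrets.enc.yaml './server'

# Flux Kustomization with sops decryption enabled
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: service-x
  namespace: flux-system
spec:
  interval: 10m
  path: ./deployment/overlays/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: service-x
  decryption:
    provider: sops
    secretRef:
      name: sops-keys   # secret holding the age/GPG private key (assumption)
```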
d
We ran into the same question in our practice and store all of our app configs in a monorepo. We open-sourced our approach; you can find more details in this project: https://github.com/KusionStack/kusion