Hi @Sibu Thomas Mathew,
When running HashiCorp Vault as a Secrets Manager on an on-premises Kubernetes cluster, several considerations come into play around authentication methods, secrets storage engines, user access to the UI, and how users and teams manage secrets.
1. Authentication Methods for Workloads:
◦ Kubernetes Service Account: Workloads running in pods can authenticate with Vault using the Kubernetes auth method. When a pod is launched, it automatically receives a service account token (a JWT) that it presents to Vault; Vault validates the token against the Kubernetes API and authorizes access to secrets based on the role bound to that service account.
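As a rough illustration, here is a minimal sketch of that flow using the Python hvac client; the Vault role name "my-app", the KV path, and the VAULT_ADDR fallback are assumptions, not part of your setup:

```python
# Minimal sketch: a pod exchanging its service account JWT for a Vault token
# via the Kubernetes auth method. Role name and paths are illustrative.
import os
import hvac

# Service account token Kubernetes mounts into every pod by default
JWT_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

client = hvac.Client(url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"))

with open(JWT_PATH) as f:
    jwt = f.read()

# Exchange the JWT for a Vault token bound to the "my-app" role (assumed to exist)
client.auth.kubernetes.login(role="my-app", jwt=jwt)
assert client.is_authenticated()

# Read a secret from a KV v2 engine mounted at secret/
resp = client.secrets.kv.v2.read_secret_version(mount_point="secret", path="my-app/config")
print(resp["data"]["data"])
```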
2. Secrets Storage Engine:
◦ The choice of secrets storage engine depends on your specific requirements and use cases. Some of the commonly used secrets engines include:
▪︎ Key/Value (kv): A flexible and commonly used secrets engine for storing arbitrary key-value pairs.
▪︎ Database Secrets Engine: For managing dynamic database credentials.
▪︎ AWS, Azure, or GCP Secrets Engines: For managing cloud-related secrets.
▪︎ Cubbyhole Secrets Engine: Offers a private key-value store for each token.
▪︎ Transit Secrets Engine: For encryption and decryption of data.
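To make two of these concrete, here is a small hvac sketch that writes and reads a KV v2 secret and round-trips data through Transit; the mount point secret/, the path, the Transit key name "app-key", and the placeholder token are assumed examples:

```python
# Minimal sketch of two common engines via hvac. Assumes a KV v2 engine at
# "secret/" and a Transit key "app-key" -- adjust names to your environment.
import base64
import hvac

client = hvac.Client(url="https://vault.example.internal:8200", token="<your-token>")

# Key/Value (v2): store and retrieve arbitrary key-value pairs
client.secrets.kv.v2.create_or_update_secret(
    mount_point="secret",
    path="teams/payments/db",
    secret={"username": "svc_payments", "password": "s3cr3t"},
)
read = client.secrets.kv.v2.read_secret_version(mount_point="secret", path="teams/payments/db")
print(read["data"]["data"]["username"])

# Transit: encrypt/decrypt without Vault ever storing the data itself
ciphertext = client.secrets.transit.encrypt_data(
    name="app-key",
    plaintext=base64.b64encode(b"card-number").decode(),
)["data"]["ciphertext"]
plaintext_b64 = client.secrets.transit.decrypt_data(name="app-key", ciphertext=ciphertext)["data"]["plaintext"]
print(base64.b64decode(plaintext_b64))
```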
3. Access to the UI:
◦ In a production environment, it's common to restrict access to the Vault UI for security reasons: the UI is powerful but can expose sensitive information and actions. Everything the UI does is also available through Vault's HTTP API and CLI, so you can limit or block UI access with firewall rules or Kubernetes network policies.
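For example, the same information the UI shows can be fetched over the HTTP API with plain requests, so teams can keep working while the UI stays locked down; the address and token handling below are illustrative assumptions:

```python
# Minimal sketch: talking to Vault's HTTP API directly instead of the UI.
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200")
TOKEN = os.environ["VAULT_TOKEN"]

# Unauthenticated health check (also handy for load balancer probes)
print(requests.get(f"{VAULT_ADDR}/v1/sys/health").json())

# Authenticated read of a KV v2 secret (engine assumed mounted at secret/)
resp = requests.get(
    f"{VAULT_ADDR}/v1/secret/data/my-app/config",
    headers={"X-Vault-Token": TOKEN},
)
resp.raise_for_status()
print(resp.json()["data"]["data"])
```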
4. User/Team Management of Secrets:
◦ Users and teams can interact with Vault programmatically using Vault's CLI or API, which lets them manage their secrets securely without direct access to the Vault UI. Access control and authorization are set up through policies that define which paths and operations (capabilities such as create, read, update, delete, list) are allowed within Vault.
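As a sketch of that policy-driven model, the snippet below creates a policy scoping a team to its own KV v2 subtree and attaches it to a userpass account; the policy name, paths, and user are hypothetical, and the userpass auth method is assumed to already be enabled (any auth method such as OIDC, LDAP, or Kubernetes can carry the same policy):

```python
# Minimal sketch: restrict a team to its own KV v2 subtree via a policy,
# then attach that policy to a userpass account. All names are examples.
import hvac

client = hvac.Client(url="https://vault.example.internal:8200", token="<admin-token>")

# Policy: the payments team may manage only secret/teams/payments/*
payments_policy = """
path "secret/data/teams/payments/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "secret/metadata/teams/payments/*" {
  capabilities = ["list", "delete"]
}
"""
client.sys.create_or_update_policy(name="payments-team", policy=payments_policy)

# Attach the policy to a user on the userpass auth method (assumed enabled)
client.auth.userpass.create_or_update_user(
    username="alice",
    password="change-me",
    policies=["payments-team"],
)
```

Tokens issued to that user (or to an equivalent LDAP/OIDC identity) will then only be able to read and write secrets under the team's own path, regardless of whether they use the CLI or the API.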