# kubernetes
s
What are people's experiences with running SQL Server (& Windows Server) as containers in Kubernetes, especially for mission-critical tasks with remote D.R.? I've encountered some folks attempting to move from VMware to Kubernetes, and I'm concerned that they've mistaken ephemeral containers for lift-and-shift replacements for VMs. The MS documentation I've looked at ranges from "As a general rule, database applications are not great candidates for containerization.", to "can't join AD (but there's a workaround)", to a specific tutorial for bringing up a deployment that must only ever have one replica, followed by several manual commands run against the new pod (apparently after each spin-up/recovery). It looks theoretically possible, but turning this into mission-critical high availability seems really hard. This just feels like using the Kubernetes hammer to pound in a screw.
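For context, the single-replica pattern in that tutorial looks roughly like this (a minimal sketch, assuming the Linux mssql container image; the names, secret, and storage size are illustrative, not from the tutorial itself):

```yaml
# Illustrative single-replica SQL Server StatefulSet.
# Secret name, labels, and storage request are assumptions for the sketch.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mssql
spec:
  serviceName: mssql
  replicas: 1            # must stay 1: SQL Server manages its own state on the one volume
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2022-latest
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql-secret    # assumed pre-created secret
              key: SA_PASSWORD
        volumeMounts:
        - name: mssql-data
          mountPath: /var/opt/mssql  # all databases and config live on this one PVC
  volumeClaimTemplates:
  - metadata:
      name: mssql-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 8Gi
```

Note that nothing in the manifest covers AG replication, D.R., or the post-start commands the tutorial runs by hand; that gap is exactly my concern.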
c
Hmmm… is there a specific reason to not use the containerized Linux version of SQL Server?
s
It's not clear which version of SQL Server they're using at the moment. I was a bit unclear: they're using SQL Server containers and Windows Server containers (for .NET apps). These are apparently replacing systems on VMware (where I would guess it's SQL Server on Windows).
But fundamentally, isn't SQL Server, as a long-lived system (typically years) that requires significant post-launch configuration, holds critical state (probably RPO = 0), and has its own HA & replication software, a poor conceptual match for ephemeral Kubernetes containers/pods?
n
I would be extremely hesitant (i.e. would not) to run SQL Server for mission-critical workloads inside k8s. I barely trust k8s-native DB workloads.
All of the mechanisms for normal operations are harder in k8s.
a
I've encountered (and continue to encounter) lots of people who make the same mistake regarding container = VM. I'd also be hesitant to run a database using containers unless either:
• It was designed with that architecture in mind (e.g. CockroachDB, Vitess, etc.)
• Significant effort has gone into polyfilling the gaps required to make it work well with containerisation (e.g. CloudNativePG for PostgreSQL)
So if you're looking to run SQL Server, I think it depends on how many container analogues there are for what you'd normally do maintenance-wise on VMs.
s
If you want to move fully to Kubernetes instead of VMware, that is where KubeVirt is your friend. Don't run SQL Server as pods; run it as VMs using KubeVirt if you want a less horrible experience. If your k8s knowledge and troubleshooting skills are good, KubeVirt can be a great solution. There are also commercial offerings of KubeVirt that make it suitable for enterprise customers where relevant.
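A rough sketch of what that looks like (assuming KubeVirt is already installed and the VM's disk image has already been imported into a PVC, e.g. via CDI; the names and resource sizes here are illustrative):

```yaml
# Illustrative KubeVirt VirtualMachine wrapping a SQL Server VM.
# PVC name, CPU/memory sizes, and metadata are assumptions for the sketch.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: sqlserver-vm
spec:
  running: true              # start the VM as soon as the object is created
  template:
    spec:
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 16Gi
        devices:
          disks:
          - name: osdisk
            disk:
              bus: virtio
      volumes:
      - name: osdisk
        persistentVolumeClaim:
          claimName: sqlserver-os-pvc   # assumed PVC holding the imported VM disk
```

The point is that the VM keeps its "pet" semantics (a stable disk, in-guest HA tooling, live migration) while scheduling and networking land on k8s.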
s
Thank you. That's pretty much in line with what I expected.
r
Conflating VMware with k8s is strange. k8s still needs a hypervisor to run its nodes on and integrate with to scale out nodes.
I am assuming they are staying self-hosted?
They will first need to decide whether they are going to replace VMware with OpenStack, Proxmox, or some other hypervisor (those would be lift-and-shift-able replacements for VMware).
s
You can also deploy k8s on bare metal and, for VM use cases, utilise KubeVirt; it is not a necessity to run k8s on VMs. With that said, I do recommend VMs for most use cases. However, tools like Canonical MAAS and its Cluster API provider make k8s on bare metal a less painful operation than it used to be.
s
Thank you everyone.