Welcome to AKS, I guess...
In theory, I'll create a new node pool, commit those changes, spin up the nodes, then delete the old node pool in the next PR.
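For illustration, here's roughly what that first step looks like via the az CLI (the actual change lives in IaC; the pool, cluster, and resource group names below are placeholders):

```bash
# Sketch: add the replacement node pool alongside the legacy one.
# "newpool", "my-aks", and "my-rg" are hypothetical names.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name newpool \
  --node-count 3 \
  --node-vm-size Standard_D4s_v5
```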
However, IN PRACTICE I often find that the AKS API does a woefully inadequate job of managing changes like these and gets hung up trying to migrate workloads... so before I delete the legacy node pool, I taint all of its nodes and start killing off pods that I know are replicated on my good nodes. Workloads like Redis and Postgres get special attention (the actual data is stored in persistent volumes, so I won't suffer a catastrophic loss if all my pods are gone at once, but a lot of things rely on those stateful DBs). Basically, I'm prepping the cluster to give the AKS API as easy a job as possible.
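A minimal sketch of that prep, assuming the legacy pool is named "legacypool" (AKS labels each node with its pool name under the `agentpool` label; the pod and namespace names are hypothetical):

```bash
# Taint every node in the legacy pool so nothing new lands there.
for node in $(kubectl get nodes -l agentpool=legacypool \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl taint nodes "$node" decommissioning=true:NoSchedule
done

# Then pick off pods that have healthy replicas on the new pool.
kubectl get pods --all-namespaces -o wide | grep legacypool
kubectl delete pod my-replicated-pod -n my-namespace  # hypothetical names
```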
Once the legacy node pool is prepped, I commit the node pool's deletion to IaC and watch the cluster to make sure it doesn't get hung on any nodes.
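Watching that deletion is mostly staring at two views, one from the Kubernetes side and one from the Azure side (names are again placeholders):

```bash
# Kubernetes side: watch the legacy nodes disappear.
kubectl get nodes -w

# Azure side: check the pool's state while it's being torn down.
az aks nodepool show \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name legacypool \
  --query provisioningState -o tsv
```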
For node upgrades, I commit the version change to IaC and run it, then immediately taint all the nodes and "help" the AKS process along when it gets stuck on a node.
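The "help" usually amounts to manually draining whatever node the upgrade is wedged on; a sketch, with a made-up node name:

```bash
# Manually drain the stuck node so AKS can finish recycling it.
kubectl drain aks-nodepool1-12345678-vmss000003 \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --grace-period=60
```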
Regardless of how I try to upgrade AKS or change the nodes, I find it's an awfully manual process. The Azure portal has the same issues, and if your upgrade or change times out, it's kind of a pain in the ass to fix. I just find it much easier to shepherd k8s through the process.
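When an operation does time out and leaves the cluster stuck, the recovery I've had luck with is aborting the hung operation and re-running the upgrade (names and the target version are placeholders, and `az aks operation-abort` needs a reasonably recent CLI):

```bash
# Check whether the cluster is wedged in a failed/updating state.
az aks show --resource-group my-rg --name my-aks \
  --query provisioningState -o tsv

# Abort the hung operation, then retry the upgrade.
az aks operation-abort --resource-group my-rg --name my-aks
az aks upgrade --resource-group my-rg --name my-aks \
  --kubernetes-version 1.29.7  # placeholder target version
```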