My company has been trying out LLMs in the platform engineering space, and our current takeaway is that they have some utility but are not yet production ready. For example, you can use GPT-4 to generate a Terraform configuration that deploys a containerized application on AWS, and you will readily get a good starter template, but you will quickly discover that pieces are missing or that deprecated arguments and functions are being used. If you are comfortable writing a Terraform configuration on your own, GPT-4 will absolutely save you time pulling together a reasonable 'boilerplate' template, but you will still need to tune it to do what you actually want. There is potential here, though I am not sure the "chat" model is really the best interface for platform engineering tasks.
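To make that concrete, here is the flavor of starter template you typically get back for the "containerized app on AWS" ask: a minimal ECS/Fargate sketch. All names, the region, and the image are placeholders, and this is an illustration of the pattern, not a copy of any model's output.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # LLM output often pins an older major version
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

resource "aws_ecs_cluster" "app" {
  name = "example-app" # hypothetical name
}

resource "aws_ecs_task_definition" "app" {
  family                   = "example-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  container_definitions = jsonencode([{
    name      = "app"
    image     = "nginx:latest" # placeholder image
    essential = true
    portMappings = [{ containerPort = 80 }]
  }])
}
```

This is a fine skeleton, but it is exactly the kind of output that needs tuning before it applies cleanly: in our experience the generated version tends to omit the task execution IAM role, the `aws_ecs_service` and its VPC/subnet networking, and a load balancer, and sometimes uses provider arguments that have since been renamed or deprecated.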