# terraform
t
I wrote a quick readme about a Terraform engineering pattern I've been using for quite some time. Being a software engineer turned DevOps engineer, I always found the organization of Terraform repos troubling. Terraform deployments tend towards monoliths because of Terraform's information-sharing patterns. You can grab the remote state of another environment and use its outputs, but that requires knowledge of the parent. To this end, I use modules that do remote state data lookups on an environment so its outputs can be consumed by other environments. I call them interfaces... because, functionally, they kinda are. This approach uses just Terraform, it doesn't rely on any other tools or plugins, and it breaks up dependencies in Terraform rather well, imo. https://github.com/ImIOImI/Terraform-Interfaces/blob/main/README.md
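The shape of it, roughly (a minimal sketch with made-up names, not the exact code from the repo):
```hcl
# interfaces/network/main.tf -- the "interface" module for the network environment
data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate" # placeholder backend details
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "network.tfstate"
  }
}

# Re-export only the contract; consumers never see the backend config.
output "vnet_id" {
  value = data.terraform_remote_state.network.outputs.vnet_id
}

output "subnet_ids" {
  value = data.terraform_remote_state.network.outputs.subnet_ids
}
```
A consuming environment just instantiates the module and gets autocompletion on the outputs:
```hcl
module "network" {
  source = "../interfaces/network"
}

# e.g. subnet_id = module.network.subnet_ids["app"]
```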
a
I love this! I was trying to solve this exact problem and you've enunciated it perfectly, kudos
k
That's an awesome pattern. We use a lot of terragrunt to handle the small-stack and output-to-input problems, but that has its challenges as well with the additional terragrunt configuration needed. What would be neat is to use terragrunt/terramate to handle the dynamic backend config while using this pattern to handle the stack dependencies. 🙂
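Something like this in the root terragrunt.hcl, maybe (rough sketch, backend values are placeholders):
```hcl
# root terragrunt.hcl: terragrunt owns the backend config and writes
# backend.tf into each stack, so the plain-Terraform interface modules
# stay untouched
remote_state {
  backend = "azurerm"

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "${path_relative_to_include()}/terraform.tfstate"
  }
}
```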
t
Well... I kinda have a GitHub Action project I'm working on that automates the sync between the TF template and the interface. Keeping the interface synced with the Terraform code is, admittedly, kinda annoying, so I figured out how to not do it myself, lol. It's been working for me in Azure, but it feels really brittle to me right now, and I haven't added any AWS or GCP support because I currently work in Azure. Please hold and I'll get it online.
k
I can see that being a problem; it might be easier to create a CLI tool that just generates the interface from module code. As I say that, I wonder if terramate could do this?
t
I mean, the GitHub Action just calls a Python script... you could run that via the CLI too
or have whatever other CI/CD tool you use call it
a
To what extent does using something like Pulumi allow you to have those interfaces without trying to force interfaces into HCL?
t
I really haven't worked much in Pulumi... but I'm interested to know the answer to this question as well. I'm also really interested in mocking providers for testing purposes, so I was planning to look into that as well.
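For the mocking side, Terraform's native test framework (1.7+) has mock_provider blocks; something like this is probably where I'd start (untested sketch):
```hcl
# tests/interface.tftest.hcl -- assumes Terraform >= 1.7
mock_provider "azurerm" {}

run "apply_with_mocked_provider" {
  # applying against the mock generates placeholder values
  # instead of touching real infrastructure
  command = apply
}
```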
Alright, here is the interface builder stuff. It's very alpha, use with caution. I'll be refactoring it into a proper GitHub Action and adding tests soon. https://github.com/ImIOImI/github-workflow-build-terraform-interface
k
The way I like to handle integration between Terraform projects also draws from software engineering: dependency injection. I haven't fully grokked your solution, but it looks like a similar direction.

For dependency injection, each project declares the resources it provides as outputs and those it consumes as input parameters. The script that provisions a project stores its outputs in a registry and reads its inputs from it. I think that's similar to what you're doing, but there's no use of state files, so you don't need a separate TF project to act as an interface.

This approach gives you a few things. By not using TF state it's tool-independent: different teams can build infrastructure using different tools, and you can integrate across them as long as they all store their provided and consumed resources in the registry. And because the storage and retrieval of integration points is handled outside the TF projects, you can swap out different provider implementations. For example, for production you might use hardened networking structures, but for faster testing of infra you can use a simpler networking infra package. Fakes and that sort of thing.
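In Terraform terms, the per-project contract is just variables and outputs (sketch; the registry mechanics live in the wrapper script and are elided here):
```hcl
# variables.tf -- consumed resources, injected by the provisioning wrapper
variable "vnet_id" {
  description = "Read from the registry and passed in via -var"
  type        = string
}

locals {
  # stand-in for real resources so the sketch stays self-contained
  app_subnet_id = "${var.vnet_id}/subnets/app"
}

# outputs.tf -- provided resources, pushed back to the registry after apply
output "app_subnet_id" {
  value = local.app_subnet_id
}

# The wrapper does roughly:
#   1. registry -> terraform apply -var="vnet_id=..."
#   2. terraform output -json -> registry
```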
t
That makes total sense if you need tool independence. My current stack is 100% Terraform, so this would be overkill for me. If I had that need, I would probably use a TACOS to expose the outputs of an environment via an API so that other tools can consume them. I currently use Env0, and they have one such example of this kind of implementation for Terraform that's rather slick... but it could be easily adapted to any other language. It's just a bash script that makes authorized curl requests to their platform and gets the response.

However, I really enjoy having all the outputs from other templates autocomplete in my IDE, and using boring, standard tooling helps me accomplish this. (There are other benefits, like being able to change my outputs and launder them to fit the old interface... which is cool... but if I'm being honest, autocompletion is the goal here.)

BUT, it sounds like there's a hybrid approach that's possible (forgive me if I misunderstand what you wrote and this is what you do already). If you built interface modules similar to mine, but swapped the remote state data source for the external data source, which works with any script that outputs JSON, you could turn those values into outputs that other projects consume as well. This pattern would be amazing for consuming outputs from Bicep, AWS CDK, or Pulumi, for example.
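Something like this, maybe (sketch; the script name and output keys are made up):
```hcl
# interfaces/network-external/main.tf -- same interface idea, but sourced
# from any tool that can print JSON (uses the hashicorp/external provider)
data "external" "network" {
  # the program must print a flat JSON object of string values to stdout
  program = ["bash", "${path.module}/fetch-network-outputs.sh"]
}

output "vnet_id" {
  value = data.external.network.result["vnet_id"]
}
```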
k
I do think it's useful even with a single tool if you have a fairly large system with multiple teams working on infrastructure. It's useful to keep boundaries clean and decoupled.
t
I agree. It's just a different approach for the same end.