# terraform
t
There's not a great way as far as I can tell... but you can mv individual resources' states to a local statefile and push that up later like:
terraform state mv -state-out=../terraform.tfstate aws_instance.example_new aws_instance.example_new
https://developer.hashicorp.com/terraform/tutorials/state/state-cli
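Once everything you want has been moved into that local file, a rough follow-up (assuming the new configuration lives in a sibling directory with its own backend already configured; the path is illustrative) would be to push it up from there:

```bash
# The resources moved above now live in ../terraform.tfstate, so initialize
# the new configuration and push that file into its backend.
cd ../new-stack        # hypothetical path for the new configuration
terraform init
terraform state push ../terraform.tfstate   # may need -force if lineages differ
```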
t
Or use the `moved` keyword in your TF files (prevents CLI state changes): https://developer.hashicorp.com/terraform/tutorials/configuration-language/move-config#move-your-resources-with-the-moved-configuration-block
You might still generate all the temporary blocks using a shell script, by `grep`ping the `from` values from the original plan and manipulating them to create the `to` values, perhaps in a few steps, and with that have the script “render” all the `moved` blocks. When creating the plan you'll see that it plans to move the resources, and after applying that once, the `moved` blocks are obsolete (but won't break the next plan).
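A rough sketch of what that render script could look like (illustrative only; here I'm grepping `terraform state list` rather than the plan output, and the resource pattern and target module name are made up):

```bash
#!/usr/bin/env bash
# Emit a moved block for every matching resource address, e.g. pulling
# top-level aws_instance resources into a hypothetical module.instances.
set -euo pipefail

terraform state list | grep '^aws_instance\.' | while read -r from; do
  to="module.instances.${from}"
  printf 'moved {\n  from = %s\n  to   = %s\n}\n\n' "${from}" "${to}" >> moved.tf
done
```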
n
`moved` won't work between state files though - just location inside one state file
t
@Thomas Kraus moved blocks are great for refactoring and changing the names of things that exist in the same statefile, but can you do that to move the state from one statefile to another?
n
Oren, sounds like a fun project! I think the answer is that there's no Terraform-native way to do this, as in they don't provide an out-of-the-box tool. Some good approaches here though 🙂
t
I agree, if you write a tool to do it, please share with the class.
t
Ah indeed, `moved` is just within a single state, must have overlooked this part from the question 🙈
t
I was hoping you knew something awesome that I didn't about them 😢
n
I'm curious what version of Terraform you are on though? The `moved` blocks were added in 1.1, although as noted that's probably not going to help you. Not sure which version the `state mv -state-out` option was added in; it looks like it may be pretty old, as it's no longer "recommended" (and only works with local backends): https://developer.hashicorp.com/terraform/language/settings/backends/local#command-line-arguments
I had a coworker cook up some ugly scripts to do essentially this exact thing (and we were on TF 0.11/0.12 at the time). We had created hundreds of google storage buckets in a single state file, along with IAM policies, service accounts, a bunch of crap (the oldest being bare resources, the newer ones contained in a module). We made some tactical errors in the early going with storage ACLs and IAM policies "fighting" over the bucket permissions, resulting in permadiffs and in the worst case, broken permissions after an apply when you changed something completely unrelated 😞 . Total mess and huge blast radius. The mitigation was to move each bucket to its own state. I don't recall how he did it, and I no longer work there so can't even give you the broad strokes without divulging any IP 😞 . But we used a Consul k/v state backend so he may have been pulling the state JSON and twiddling with it directly, not sure.
t
I think it would be easy enough to grab the output from the `terraform state list` command, then grep for something like `module_name.*`, then get a list of all the relevant objects and their states, then translate them into a new state file (or use the `state mv -state-out` command if it's available).
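Something like this untested sketch (it assumes a local backend, since `-state-out` only works with those, and `module.module_name` / `../new-stack/terraform.tfstate` are placeholders):

```bash
#!/usr/bin/env bash
# Move every resource under module.module_name into a separate state file.
set -euo pipefail

terraform state list | grep '^module\.module_name\.' | while read -r addr; do
  terraform state mv \
    -state-out=../new-stack/terraform.tfstate \
    "${addr}" "${addr}"
done
```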
Honestly, when I hear about what's recommended by Hashicorp and what's not, I kind of just stop listening... They've done a tremendous job of writing a language that's impossible to implement at any kind of scale without some kind of anti-pattern so I take their recommendations with a grain of salt. I'd be willing to bet that the OP is in this mess in the first place because TF naturally tends to be a monolith.
o
`moved` blocks work when moving resources inside the same state as far as I know. Not sure I understand how you want to use them in my use case.
My custom solution is indeed grepping the output of `terraform state list`. I was wondering if there wasn't a better tool, since this is presumably a common problem. Thank you all for your input.
t
I feel like because it's so hard to break up an existing terraform template (especially with a state attached), it's not a common use case. This is even more true when you think about how to implement any kind of DRY solution for getting the outputs of your original template to feed into the new one.
n
Developing some automation/CI to make it easy to create new state files in one repo, OR even better some automation or platform to make it easy to generate a brand spanking new repository hooked up to a CI/pipeline - this can head off the "monolith" problem, but you have to start early or you have problems like this one. Wayfair started with a monorepo wherein any directory you created became a new state file. Later we built a platform on top of Terraform Enterprise so you could get your workspace and repo self service. If you're curious I presented on this at HashiConf a few months ago 😄 https://www.hashicorp.com/resources/transforming-access-to-cloud-infrastructure-at-wayfair-with-terraform-enterprise
t
The problem isn't creating new statefiles or backends... that's easy. The problem is sharing outputs from one environment to many others and uncoupling them all while trying to keep it DRY. Then orchestrating it all so it doesn't act like a fragmented monolith.
For example, I can have a template that runs many workspaces and environments that codes for a VPC. Other templates/workspaces/environments may depend on this. However, running a change to the VPC template doesn't necessitate a change to the dependencies and we shouldn't have to run through the entire dependency graph. Figuring out ways to read changes from a producing environment and evaluate its impact on the dependent environment before running it is WAAAAY harder than it should be.
n
Have you seen the Stacks feature? Still in private preview, but aims to solve a lot of those problems
Of course, gotta be on TFC or (eventually) TFE
To be clear I haven't been hands-on with Stacks, we had just gotten in to the private preview when my employment ended 😐
t
I have and I expect it to foster a new wave of competing and hopefully open source alternatives.
n
My hot take is that opentofu won't go anywhere, but I'm a HC fanboi. Also I've been ruined by the value-add of TFE, building something myself or even rolling out Atlantis would seem like a chore when I can pay HC for the Cadillac version. Of course, you have to have budget for that and make the case it's more efficient than paying me to do it 😄
The cross-state dependency thing is an interesting one. The way we implemented decoupled TF workspaces, they were each their own hermetic thing. Communication with other components was over a service mesh using some GCP shared DNS zones, and then we implemented something insane with Istio/Envoy (I didn't do the implementation of this part). But anyway, each TF org/workspace didn't really have any need to interact with the other orgs/workspaces. All separate VPC networks, GCP projects, etc.
t
I'll say this, I'm on the OpenTofu slack and they're actively working on stale issues that I've been tracking for YEARS, with expected release dates within the year. They've got 5x the engineers working on it that HC has on Terraform. The amount of community engagement and work they're putting in eclipses HC's efforts right now.
n
That is cool! It will be interesting to see what happens.
t
To solve cross-state dependencies I use what I've been referring to as interfaces (for lack of a better term). I create a module that does a remote state lookup on another environment's state file, then makes that environment's outputs "public" by having its own outputs. From there it can be included anywhere and provide the values from the producing environment without having to run the resources. However, it's definitely an anti-pattern. Remote state lookups are a terrible security hazard and that function should not exist in TF, imo.
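For a rough idea of the shape of it (all names and the S3 backend here are illustrative, not what I actually run):

```hcl
# "Interface" module: read another environment's state and re-expose only the
# outputs that consumers are allowed to see.
variable "state_bucket" { type = string }
variable "region"       { type = string }

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = var.state_bucket
    key    = "network/terraform.tfstate"
    region = var.region
  }
}

output "vpc_id" {
  value = data.terraform_remote_state.network.outputs.vpc_id
}

output "private_subnet_ids" {
  value = data.terraform_remote_state.network.outputs.private_subnet_ids
}
```
Consumers just call this module and read its outputs, so the producing environment's backend details live in exactly one place.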
Two features that are being worked on now that I'm especially excited about are: 1. dynamic providers (being able to declare the providers in a for loop, then refer to them and pass them dynamically to modules) 2. the ability to define module versions outside the module source definition
n
I like what you've done there with the interfaces. I think in my past we avoided that need by just having lots of mini-monolith state files in our huge monolith repo lolsob
> the ability to define module versions outside the module source definition
Do you mean like as a local or variable?
t
Yes
n
How about that really fun interaction with feeding outputs into a for_each, when it just crashes and burns even when it is possible to determine the total number of things before the plan executes? Hate that one, and mostly hate it when other people come to me with it and I have to explain it to them lolsob. I understand sometimes it is impossible, but there are certain cases where, if you follow the DAG from the root to that for_each, everything is known.
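The minimal (hypothetical) shape of it, for anyone following along: `module.network` and the route table are assumed to exist elsewhere, the subnet count is fixed in the root module, but the IDs are computed, so the plan still refuses the for_each:

```hcl
# Fails at plan time with "Invalid for_each argument" because the subnet IDs
# can't be determined until apply, even though the number of subnets is known.
resource "aws_route_table_association" "private" {
  for_each       = toset(module.network.private_subnet_ids)
  subnet_id      = each.value
  route_table_id = aws_route_table.private.id
}
```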
t
Here's that PR, and note the last comment:
> Something I hadn't fully considered is how module calls will interact with for-each. This will need to be explored. I believe that each.keys will be known, but values may be undefined. Same situation with the count meta argument.
Also:
> I think in my past we avoided that need by just having lots of mini-monolith state files in our huge monolith repo lolsob
Seriously, I will give no shade here. Managing TF at scale is terrible and you have to do some shitty things to get it done. It's just a matter of which shitty thing you can live with.
n
This has been a really great conversation. My entire Terraform experience from learning the tool on 0.11 to becoming pretty good at it and moving up to 1.x was with one single company, and we definitely had our own problems and anti-patterns. So I'm enjoying being a part of communities such as this one and hearing about the different types of problems other companies are having, and of course more interestingly how they are solving them.
Now time for an afternoon full of interviews
t
This is my 3rd company where I've used TF, and I feel like I've learned a LOT from the experience of jumping around to companies with remarkably different sizes and problems.