# terraform
r
Is Terraform or OpenTOFU the better / safer choice to go with if starting from scratch?
t
It depends... I'm going with OpenTofu, they've got more staff working on changes than Terraform, and some of their new features are 🔥. If I wanted to use Terraform Cloud instead of Env0/Scalr/Spacelift then I would have chosen differently.
c
Hey Troy! Anything special in mind when it comes to new features?
t
This is the most anticipated feature that I've seen come out, and it has been on my wishlist for years for Terraform https://github.com/opentofu/opentofu/issues/1042
The first solution is intentionally narrow in scope, but the potential use cases are huge. To do things like pinning a provider across many stacks I've been using Terramate to autogenerate all my Tofu code. Being able to define it as a variable will really simplify things.
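Roughly, the idea is being able to write something like this (an illustrative sketch, not the exact final syntax, with a hypothetical module and version):

```hcl
# Illustrative sketch: pin a version once via a variable,
# instead of autogenerating the pin into every stack
variable "vpc_module_version" {
  type    = string
  default = "5.8.1" # hypothetical version
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = var.vpc_module_version
}
```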
t
You can target version 1.5.x (prior to the fork) and stay in a holding pattern until a more definitive answer comes. With IBM buying hashi, we're hoping for a merging of the codebases (🤞), but it's just a hopeful wish right now.
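In practice, holding there is just a version constraint in your root modules, e.g.:

```hcl
terraform {
  # Pin to the last pre-fork release line (1.5.x) while deciding
  required_version = "~> 1.5.0"
}
```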
l
Any thoughts @Soren Martius?
t
There are migration guides for the Terraform 1.6 versions... but I did the same. I held at 1.5.7 until I started seeing the differences in the way that Tofu contributors interact with the community and started tackling issues that have been long ignored by Hashi... For Terraform, we resort to Slack channels that are just side quests of other interests (I'm in the K8s Slack's terraform provider channel as well), whereas OpenTofu has its own Slack where we can talk to contributors all the time (I'm already having a discussion with Janos about this issue I created yesterday), and their standups are open and I attend fairly regularly... so it's less of a black box when it comes to roadmaps and transparency.
Also, I'm on Soren's Terramate beta cloud product and I'm happy to say it works just fine with OpenTofu
c
That really is some exciting news, Troy! I sadly lack the time to follow this one closely. I'll probably have to watch it more carefully. Hashi -> IBM -> ?! is another interesting variable in the mix. The next months will tell, I guess.
t
Yeah, the OpenTofu contributors have signaled that they are happy to merge the code bases as long as the end result is something that will be owned by the CNCF. IMO, that would be what's best for everyone. I'm hoping IBM takes them up on their offer.
c
That would be quite wild. IBM just paid quite a premium to secure Hashi, and then spins TF out into the CNCF? They will most probably want to keep stewardship of the result, so I guess this is not going to happen.
t
I don't know... Terraform Cloud and HashiCorp-hosted Vault are potentially great money makers. Also, they own Ansible and it's still very open source.
c
That was exactly my point - they swallowed Red Hat and didn't re-invent the world, but kept their portfolio intact so that the whole customer base isn't shaken up and forced to adapt. Broadcom is showcasing what the opposite direction looks like. IBM's playbook for Hashi is probably largely the same, while architects in the background work on better integrations for the now diversified overall portfolio.
s
Hey folks
First of all, I'd like to emphasize that the HC acquisition by IBM has not been finalized yet!
To answer the question of whether it is safe to use OpenTofu. TL;DR: "yes, but it depends on your business case." Long version: if there's no urgent reason for you to switch (in the sense of features that exist only in OpenTofu, or you're impacted by the license change), you probably don't need to switch / make a fast move. Both projects are already diverging, and both the OpenTofu and HashiCorp teams have accelerated adding new features.
> Yeah, the OpenTofu contributors have signaled that they are happy to merge the code bases as long as the end result is something that will be owned by the CNCF. IMO, that would be what's best for everyone.
That'd be my preferred scenario too. Merging both projects under the umbrella of the Linux Foundation to guarantee a decentralized, independent and proven governance model.
r
Wasn't there some FUD about Hashicorp threatening to sue OpenTofu over the forking? Is that no longer the case?
s
They sent a cease and desist letter to the OpenTofu team a couple of months back, threatening legal action over copyright-theft allegations that turned out to be wrong - so this made HC look bad.
Other than that, I'm currently not aware of any talks or legal issues between the two.
t
@Roger Foss you can look at the code in question yourself, if you're curious https://opentofu.org/blog/our-response-to-hashicorps-cease-and-desist/
IMO, there were some comically bad takes on the letter early on that started getting traction on social media, and that muddied the waters considerably.
s
correct!
@Troy Knapp re your issue in https://github.com/opentofu/opentofu/issues/1760 Currently, you can work around this by using a `null_resource` to delay the execution of data sources to the apply phase.
data "aws_subnet" "subnet" {
  filter {
    name = "tag:Name"
    values = [
      "us-east-1a",
    ]
  }

  # Depending on a resource defers this data source read
  # from the plan phase to the apply phase.
  depends_on = [
    null_resource.initial_deployment_trigger,
  ]
}

# Empty resource that exists only to anchor the depends_on above.
resource "null_resource" "initial_deployment_trigger" {}
t
Ok, so that's brilliant, but also I kinda hate it!
I have an ulterior motive for this PR (besides the fact that it's annoyed me for years). The real reason I created this issue is that I've got a Go script I'm testing out that:
• allows you to annotate an output like this:
# Output the path to the local file
# @public
output "local_file_path" {
  description = "Filename of the local file"
  value       = local_file.my_local_file.filename
}
• it pulls the provider schema
• looks up the corresponding data source
• finds the required attributes for said source
• pulls the attributes from the state
• creates a new module with data sources, providers, and outputs that can be consumed by other stacks

The problem is that when you include a module like the one above, it'll only work if the apply in the parent has already happened. I want to be able to validate and not throw an error. So the problem here isn't delaying it until the apply phase (which your solution definitely fixes) but not throwing an error when a parent stack's resource isn't created yet and I want to generate a plan or something.

I ran into the issue I described in the PR a lot when I was really fresh in IaC and trying to migrate from clickops. Things in the cloud were applied inconsistently in different environments. I tried to use as much of the same code everywhere as possible, but I was constantly trying to import resource after resource. So having some way to normalize everything and get a good starting point would have been really helpful.
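For concreteness, the companion module the script generates looks roughly like this (a simplified sketch; the names and the hardcoded attribute are illustrative):

```hcl
# Simplified sketch of the module generated from the @public output above:
# a data source standing in for the parent's resource, plus the same output
data "local_file" "my_local_file" {
  # required attribute, pulled from the parent stack's state
  filename = "example.txt"
}

output "local_file_path" {
  description = "Filename of the local file"
  value       = data.local_file.my_local_file.filename
}
```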
s
This makes a lot of sense and pretty much matches what we will be adding to Terramate in one of the upcoming releases. You will be able to choose either data sources or remote state lookups 🙂
t
Well... you inspired me to write it, as opposed to looking at parent stack's state directly like I was doing previously. I'd hope it was compatible with what you're doing, lol.