
Allow provider declarations in submodules #1896

Open
cveld opened this issue Aug 7, 2024 · 4 comments
Labels
blocked Issues which are blocked by inbound dependencies enhancement New feature or request needs-community-input pending-decision This issue has not been accepted for implementation nor rejected. It's still open to discussion.

Comments

@cveld

cveld commented Aug 7, 2024

OpenTofu Version

1.8.0

Use Cases

We are using OpenTofu to manage small pieces of our Kubernetes configuration. We leverage the azurerm provider to provide connection details, and the Kubernetes providers to configure Kubernetes. We put as much of the desired Kubernetes configuration as possible into a module. Unfortunately, a module containing a provider declaration is currently considered legacy, which blocks calling it with count or for_each. It would be great if provider declarations inside submodules were promoted to modern usage. That way we could put all required logic inside a Kubernetes configuration module and would only need to pass the connection details into the module.

Attempted Solutions

```hcl
provider "kubectl" {
  alias                  = "sharedprd"
  host                   = module.aks.aks.kube_admin_config[0].host
  client_certificate     = base64decode(module.aks.aks.kube_admin_config[0].client_certificate)
  client_key             = base64decode(module.aks.aks.kube_admin_config[0].client_key)
  cluster_ca_certificate = base64decode(module.aks.aks.kube_admin_config[0].cluster_ca_certificate)
  load_config_file       = false
}

# Inside the module:
resource "kubectl_manifest" "some_resource" {
  provider  = kubectl.myalias
  yaml_body = templatefile("${path.module}/manifests/some-template.yaml", {
  })
}
```

But this is very inflexible, as it does not allow looping across Kubernetes cluster resources.
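For illustration, this is roughly the kind of call the request would enable (module names, variables, and paths here are hypothetical, not from the original report). OpenTofu currently rejects a call like this because the child module declares its own provider configuration:

```hcl
# Hypothetical sketch: var.aks_clusters and the module path are assumptions.
# OpenTofu currently rejects for_each on a module that contains a
# provider block; this shows the desired usage, not working syntax.
module "kubernetes_config" {
  source   = "./modules/kubernetes-config"
  for_each = var.aks_clusters

  # Only the connection details would need to cross the module boundary;
  # the provider declaration would live inside the module itself.
  kube_admin_config = module.aks[each.key].aks.kube_admin_config[0]
}
```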

Proposal

One option could be to promote provider aliases to real sub-resources in the tree.

I guess we could live with an architecture where a single provider server is running (even though it is not declared in the root), but with the option to configure aliases at any place in the tree, e.g. an arbitrary subscription ID for azurerm or an arbitrary Kubernetes config for the Kubernetes providers.

References

No response

@cveld cveld added enhancement New feature or request pending-decision This issue has not been accepted for implementation nor rejected. It's still open to discussion. labels Aug 7, 2024
@Evi1Pumpkin
Contributor

Hello and thank you for this issue! The core team regularly reviews new issues and discusses them, but this can take a little time. Please bear with us while we get to your issue. If you're interested, the contribution guide has a section about the decision-making process.

@cam72cam
Member

cam72cam commented Aug 8, 2024

Providers in OpenTofu are globally scoped. In for_each and count, the provider configuration is still tied to that global scope and not to the module instance. We investigated this sort of scenario and decided against it due to intense complexity: https://github.com/opentofu/opentofu/blob/main/rfc/20240513-static-evaluation/module-expansion.md

We however did find a subset of the problem which we believe is easier to solve. Please take a look at #300, which we formalized in an RFC.

This is not exactly what you are looking for, but could potentially be a reasonable solution to your problem. Could you take a look at the linked issue/RFC and let us know what you think?
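For reference, the pattern that works today with globally scoped providers is to configure aliased providers in the root module and hand them to child modules explicitly via the `providers` meta-argument. A minimal sketch (module name and path are illustrative; the child module would declare the alias via `configuration_aliases` in its `required_providers` block):

```hcl
# Root module: the provider configuration lives here, in global scope.
provider "kubectl" {
  alias = "sharedprd"
  # ...connection details as in the original example...
}

# The child module receives the configured provider explicitly,
# rather than declaring its own provider block.
module "kubernetes_config" {
  source = "./modules/kubernetes-config"

  providers = {
    kubectl = kubectl.sharedprd
  }
}
```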

@apparentlymart
Contributor

apparentlymart commented Oct 2, 2024

I've just left a long comment at #300 (comment) about a design hazard with that feature, and I think a variation of that hazard applies to this feature request too.

The variation for this one would be if a shared module contains both a provider "aws" block and a resource "aws_s3_object" "example" block, in this case neither using for_each, and the module were called from the root module like this:

module "has_own_aws_provider_and_resource" {
  # ...
}

Removing that module block effectively removes both the provider "aws" block and the resource "aws_s3_object" "example" block from the configuration at the same time. OpenTofu would notice that the S3 object needs to be deleted to achieve convergence, but now has no provider "aws" block to use to configure that provider.
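The shared module in that scenario might look like the following minimal sketch (region, bucket, and key values are invented for illustration). The hazard is that the provider configuration and the resource it manages are removed together when the module call is deleted:

```hcl
# Inside the shared module: a provider configuration and a resource that
# depends on it live side by side. Deleting the calling module block
# removes both at once, leaving no configured provider to perform the
# destroy. All values here are illustrative.
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_object" "example" {
  bucket  = "example-bucket"
  key     = "example.txt"
  content = "hello"
}
```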

The root problem both of these ideas have in common is that in today's OpenTofu a provider configuration must outlive all of the resource instances it manages by at least one plan/apply round. Unless that problem is somehow addressed, any feature that effectively forces removing both a resource block and its associated provider block from the configuration at the same time will cause OpenTofu to fail when planning the destroy action.

Hopefully there is a solution to that root problem that would then make both of these feature requests considerably more feasible.

(This is, for what it's worth, also why currently OpenTofu will not allow using for_each in a call to a module that contains a provider block. That's an existing example of something that would cause this trap, if it were allowed. OpenTofu blocks it at creation time to avoid causing someone to create something that they'd later be unable to destroy.)

@apparentlymart
Contributor

Hi again!

As I alluded to before, this has some overlap with the problem of dynamic provider expansion: it's largely the same thing, just with the expansion happening at the module call level instead of the provider configuration level, giving instances with addresses like module.foo["a"].provider.bar instead of module.foo.provider.bar["a"].

Therefore this is likely to have a lot of technical work in common with #2155 and so for the moment we're going to consider this issue blocked on that one, just because the provider expansion work already has some design work done and so that one is likely to drive the common work that would later enable this one.

Despite it being marked as blocked we're still interested in community upvotes on the original issue comment to gauge interest in this particular use-case, separately from dynamic provider expansion.

Thanks!

@apparentlymart apparentlymart added the blocked Issues which are blocked by inbound dependencies label Nov 26, 2024