Empty list in cluster_compute_config.nodepools forces cluster replacement #3321


Closed
1 task done
ChrisEke opened this issue Mar 12, 2025 · 3 comments

Comments

@ChrisEke

Description

Setting cluster_compute_config.node_pools = [] on an existing EKS Auto mode cluster that was created with the built-in node pools (cluster_compute_config.node_pools = ["general-purpose", "system"]) forces replacement of the whole cluster.

From what I can tell, the replacement is caused by the module changing compute_config.node_role_arn to null.

The use case here is that some EKS Auto mode clusters can start out with the built-in node pools but might later opt to disable them and use only custom node pools via the k8s API. The AWS documentation describes how to do this with the CLI, and I would hope to achieve the same using Terraform: https://docs.aws.amazon.com/eks/latest/userguide/set-builtin-node-pools.html
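
For reference, the CLI path from the linked docs amounts to updating the cluster's compute config with an empty node pool list. A hedged sketch of that call (cluster name and role ARN are placeholders; verify the flag shape against a current AWS CLI):

```shell
# Sketch only: disable the built-in node pools while keeping Auto Mode on.
# "my-cluster" and the role ARN are placeholders; whether nodeRoleArn must
# be repeated here should be confirmed against the linked AWS docs.
aws eks update-cluster-config \
  --name my-cluster \
  --compute-config '{"enabled": true, "nodePools": [], "nodeRoleArn": "arn:aws:iam::123456789123:role/my-cluster-eks-auto"}'
```

The expectation is that node_pools = [] in Terraform would map to the same in-place update rather than a destroy/create.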

  • ✋ I have searched the open/closed issues and my issue is not listed.

Versions

  • Module version [Required]:
    20.34.0
  • Terraform version:
    Terraform v1.11.1
    on darwin_arm64
  • Provider version(s):
    registry.terraform.io/hashicorp/aws v5.84.0

Reproduction Code [Required]

Steps to reproduce the behavior:

  1. Have a previously created cluster with:
    cluster_compute_config = {
      enabled       = true
      node_pools    = ["general-purpose", "system"]
    }
  2. Remove the default node_pools:
    cluster_compute_config = {
      enabled       = true
      node_pools    = []
    }

Expected behavior

The built-in node pools should be disabled in the cluster while compute_config.node_role_arn is unchanged.

Actual behavior

terraform plan
# module.eks.aws_eks_cluster.this[0] must be replaced
+/- resource "aws_eks_cluster" "this" {
  ...
      ~ compute_config {
          - node_pools    = [
              - "general-purpose",
              - "system",
            ] -> null
          - node_role_arn = "arn:aws:iam::123456789123:role/my-cluster-eks-auto-20250217081502451000000001" -> null # forces replacement
            # (1 unchanged attribute hidden)
        }
  ...
}
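
From the plan output, the module appears to compute compute_config.node_role_arn from whether any built-in node pools are requested, so an empty list nulls out the role and the provider flags that attribute change as forcing replacement. A hypothetical sketch of such logic (not the module's actual source):

```hcl
# Hypothetical: when node_pools is empty, no node role is passed through,
# and the resulting node_role_arn change from a non-null value to null is
# what forces cluster replacement.
node_role_arn = length(try(var.cluster_compute_config.node_pools, [])) > 0 ? aws_iam_role.eks_auto[0].arn : null
```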
@bryantbiggs
Member

duplicate #3273

@ChrisEke
Author

A workaround to at least disable the "general-purpose" node pool is to remove it as an element while "system" remains in the array. This removes "general-purpose" from the EKS cluster on the next terraform apply, without forcing replacement of the cluster.

  cluster_compute_config = {
    enabled    = true
    node_pools = ["system"]
  }


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 12, 2025