✋ I have searched the open/closed issues and my issue is not listed.
A workaround to at least disable the "general-purpose" node pool is to remove that element from the list while "system" remains. This successfully removes "general-purpose" from the EKS cluster on the next terraform apply, without forcing replacement of the EKS cluster.
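For illustration, the workaround amounts to the following module input (a minimal sketch, assuming the usual enabled/node_pools shape of cluster_compute_config and leaving every other input unchanged):

```hcl
cluster_compute_config = {
  enabled    = true
  node_pools = ["system"] # "general-purpose" removed; keeping "system" avoids the forced replacement
}
```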
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
Setting cluster_compute_config.node_pools = [] on an existing EKS Auto Mode cluster that was created with the built-in node pools (cluster_compute_config.node_pools = ["general-purpose", "system"]) forces replacement of the whole cluster.

From what I can tell, the replacement is caused by the module changing compute_config.node_role_arn to null.

The use case here is that some EKS Auto Mode clusters start out with the built-in node pools but may later opt to disable them and use only custom node pools via the Kubernetes API. The AWS documentation describes how to do this with the CLI, and I would hope to achieve the same using Terraform: https://docs.aws.amazon.com/eks/latest/userguide/set-builtin-node-pools.html
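To make the change concrete, this is a minimal sketch of the input in question (the enabled flag is assumed; every other module input stays the same):

```hcl
# Before: cluster created with the built-in node pools
cluster_compute_config = {
  enabled    = true
  node_pools = ["general-purpose", "system"]
}

# After: intended only to disable the built-in node pools, but the plan
# also changes compute_config.node_role_arn to null and forces replacement
cluster_compute_config = {
  enabled    = true
  node_pools = []
}
```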
Versions

Module version [Required]: 20.34.0

Terraform version:
Terraform v1.11.1
on darwin_arm64

Provider version(s):
registry.terraform.io/hashicorp/aws v5.84.0
Reproduction Code [Required]
Steps to reproduce the behavior:

1. Create an EKS Auto Mode cluster with cluster_compute_config.node_pools = ["general-purpose", "system"].
2. Change node_pools to [] and run terraform plan (see the sketch below).
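A minimal, illustrative configuration for these steps; the module source and version match the report, while the cluster name, Kubernetes version, and VPC inputs are placeholders rather than values from the original report:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.34.0"

  cluster_name    = "auto-mode-test" # placeholder
  cluster_version = "1.31"           # placeholder

  vpc_id     = var.vpc_id     # placeholder
  subnet_ids = var.subnet_ids # placeholder

  # Step 1: create the cluster with the built-in node pools enabled
  cluster_compute_config = {
    enabled    = true
    node_pools = ["general-purpose", "system"]
  }

  # Step 2: change node_pools above to [] and run `terraform plan`
}
```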
Expected behavior
The built-in node pools should be disabled in the cluster while compute_config.node_role_arn is unchanged.

Actual behavior

Terraform plans to replace the entire cluster because compute_config.node_role_arn is changed to null.