Commit 11147e9

Node groups submodule (#650)
* WIP Move node_groups to a submodule
* Split the old node_groups file up
* Start moving locals
* Simplify IAM creation logic
* depends_on from the TF docs
* Wire in the variables
* Call module from parent
* Allow to customize the role name. As per workers
* aws_auth ConfigMap for node_groups
* Get the managed_node_groups example to plan
* Get the basic example to plan too
* create_eks = false works. "The true and false result expressions must have consistent types. The given expressions are object and object, respectively." Well, that's useful. But apparently set(string) and set() are ok. So everything else is more complicated. Thanks.
* Update Changelog
* Update README
* Wire in node_groups_defaults
* Remove node_groups from workers_defaults_defaults
* Synchronize random and node_group defaults
* Error: "name_prefix" cannot be longer than 32
* Update READMEs again
* Fix double destroy. Was producing index errors when running destroy on an empty state.
* Remove duplicate iam_role in node_group. I think this logic works. Needs some testing with an externally created role.
* Fix index fail if node group manually deleted
* Keep aws_auth template in top module. Downside: count causes issues as usual: can't use distinct() in the child module, so there's a template render for every node_group even if only one role is really in use. Hopefully just output noise instead of a technical issue.
* Hack to have node_groups depend on aws_auth etc. The AWS Node Groups create or edit the aws-auth ConfigMap so that nodes can join the cluster. This breaks the kubernetes resource, which cannot do a force create. Remove the race condition with an explicit depend. Can't pull the IAM role out of the node_group any more.
* Pull variables via the random_pet to cut logic. No point having the same logic in two different places.
* Pass all ForceNew variables through the pet
* Do a deep merge of NG labels and tags
* Update README again
* Additional managed node outputs #644. Add change from @TBeijen from PR #644.
* Remove unused local
* Use more for_each
* Remove the change when create_eks = false
* Make documentation less confusing
* node_group version user configurable
* Pass through raw output from aws_eks_node_groups
* Merge workers defaults in the locals. This simplifies the random_pet and aws_eks_node_group logic, which was causing much consternation on the PR.
* Fix typo

Co-authored-by: Max Williams <[email protected]>
1 parent d79c8ab commit 11147e9

15 files changed: +254 additions, −150 deletions

CHANGELOG.md

Lines changed: 2 additions & 0 deletions

@@ -26,6 +26,8 @@ project adheres to [Semantic Versioning](http://semver.org/).
 - Adding node group iam role arns to outputs. (by @mukgupta)
 - Added the OIDC Provider ARN to outputs. (by @eytanhanig)
 - **Breaking:** Change logic of security group whitelisting. Will always whitelist worker security group on control plane security group either provide one or create new one. See Important notes below for upgrade notes (by @ryanooi)
+- Move `eks_node_group` resources to a submodule (by @dpiddockcmp)
+- Add complex output `node_groups` (by @TBeijen)

 #### Important notes

README.md

Lines changed: 3 additions & 2 deletions

@@ -181,7 +181,8 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | map\_accounts | Additional AWS account numbers to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | list(string) | `[]` | no |
 | map\_roles | Additional IAM roles to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
 | map\_users | Additional IAM users to add to the aws-auth configmap. See examples/basic/variables.tf for example format. | object | `[]` | no |
-| node\_groups | A list of maps defining node group configurations to be defined using AWS EKS Managed Node Groups. See workers_group_defaults for valid keys. | any | `[]` | no |
+| node\_groups | Map of maps of node groups to create. See `node_groups` module's documentation for more details | any | `{}` | no |
+| node\_groups\_defaults | Map of values to be applied to all node groups. See `node_groups` module's documentation for more details | any | `{}` | no |
 | permissions\_boundary | If provided, all IAM roles will be created with this permissions boundary attached. | string | `"null"` | no |
 | subnets | A list of subnets to place the EKS cluster and workers within. | list(string) | n/a | yes |
 | tags | A map of tags to add to all resources. | map(string) | `{}` | no |
@@ -218,7 +219,7 @@ MIT Licensed. See [LICENSE](https://github.com/terraform-aws-modules/terraform-a
 | config\_map\_aws\_auth | A kubernetes configuration to authenticate to this EKS cluster. |
 | kubeconfig | kubectl config file contents for this EKS cluster. |
 | kubeconfig\_filename | The filename of the generated kubectl config. |
-| node\_groups\_iam\_role\_arns | IAM role ARNs for EKS node groups |
+| node\_groups | Outputs from EKS node groups. Map of maps, keyed by var.node_groups keys |
 | oidc\_provider\_arn | The ARN of the OIDC Provider if `enable_irsa = true`. |
 | worker\_autoscaling\_policy\_arn | ARN of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |
 | worker\_autoscaling\_policy\_name | Name of the worker autoscaling IAM policy if `manage_worker_autoscaling_policy = true` |

aws_auth.tf

Lines changed: 2 additions & 5 deletions

@@ -43,13 +43,10 @@ data "template_file" "worker_role_arns" {
 }

 data "template_file" "node_group_arns" {
-  count    = var.create_eks ? local.worker_group_managed_node_group_count : 0
+  count    = var.create_eks ? length(module.node_groups.aws_auth_roles) : 0
   template = file("${path.module}/templates/worker-role.tpl")

-  vars = {
-    worker_role_arn = lookup(var.node_groups[count.index], "iam_role_arn", aws_iam_role.node_groups[0].arn)
-    platform        = "linux" # Hardcoded because the EKS API currently only supports linux for managed node groups
-  }
+  vars = module.node_groups.aws_auth_roles[count.index]
 }

 resource "kubernetes_config_map" "aws_auth" {

examples/managed_node_groups/main.tf

Lines changed: 14 additions & 12 deletions

@@ -92,27 +92,29 @@ module "eks" {

   vpc_id = module.vpc.vpc_id

-  node_groups = [
-    {
-      name = "example"
+  node_groups_defaults = {
+    ami_type  = "AL2_x86_64"
+    disk_size = 50
+  }

-      node_group_desired_capacity = 1
-      node_group_max_capacity     = 10
-      node_group_min_capacity     = 1
+  node_groups = {
+    example = {
+      desired_capacity = 1
+      max_capacity     = 10
+      min_capacity     = 1

       instance_type = "m5.large"
-      node_group_k8s_labels = {
+      k8s_labels = {
         Environment = "test"
         GithubRepo  = "terraform-aws-eks"
         GithubOrg   = "terraform-aws-modules"
       }
-      node_group_additional_tags = {
-        Environment = "test"
-        GithubRepo  = "terraform-aws-eks"
-        GithubOrg   = "terraform-aws-modules"
+      additional_tags = {
+        ExtraTag = "example"
       }
     }
-  ]
+    defaults = {}
+  }

   map_roles = var.map_roles
   map_users = var.map_users

examples/managed_node_groups/outputs.tf

Lines changed: 4 additions & 0 deletions

@@ -23,3 +23,7 @@ output "region" {
   value = var.region
 }

+output "node_groups" {
+  description = "Outputs from node groups"
+  value       = module.eks.node_groups
+}

local.tf

Lines changed: 2 additions & 15 deletions

@@ -16,9 +16,8 @@ locals {
   default_iam_role_id = concat(aws_iam_role.workers.*.id, [""])[0]
   kubeconfig_name     = var.kubeconfig_name == "" ? "eks_${var.cluster_name}" : var.kubeconfig_name

-  worker_group_count                    = length(var.worker_groups)
-  worker_group_launch_template_count    = length(var.worker_groups_launch_template)
-  worker_group_managed_node_group_count = length(var.node_groups)
+  worker_group_count                 = length(var.worker_groups)
+  worker_group_launch_template_count = length(var.worker_groups_launch_template)

   default_ami_id_linux   = data.aws_ami.eks_worker.id
   default_ami_id_windows = data.aws_ami.eks_worker_windows.id
@@ -80,15 +79,6 @@ locals {
     spot_allocation_strategy = "lowest-price" # Valid options are 'lowest-price' and 'capacity-optimized'. If 'lowest-price', the Auto Scaling group launches instances using the Spot pools with the lowest price, and evenly allocates your instances across the number of Spot pools. If 'capacity-optimized', the Auto Scaling group launches instances using Spot pools that are optimally chosen based on the available Spot capacity.
     spot_instance_pools      = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
     spot_max_price           = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
-    ami_type                    = "AL2_x86_64" # AMI Type to use for the Managed Node Groups. Can be either: AL2_x86_64 or AL2_x86_64_GPU
-    ami_release_version         = ""           # AMI Release Version of the Managed Node Groups
-    source_security_group_id    = []           # Source Security Group IDs to allow SSH Access to the Nodes. NOTE: IF LEFT BLANK, AND A KEY IS SPECIFIED, THE SSH PORT WILL BE OPENNED TO THE WORLD
-    node_group_k8s_labels       = {}           # Kubernetes Labels to apply to the nodes within the Managed Node Group
-    node_group_desired_capacity = 1            # Desired capacity of the Node Group
-    node_group_min_capacity     = 1            # Min capacity of the Node Group (Minimum value allowed is 1)
-    node_group_max_capacity     = 3            # Max capacity of the Node Group
-    node_group_iam_role_arn     = ""           # IAM role to use for Managed Node Groups instead of default one created by the automation
-    node_group_additional_tags  = {}           # Additional tags to be applied to the Node Groups
   }

   workers_group_defaults = merge(
@@ -133,7 +123,4 @@ locals {
     "t2.small",
     "t2.xlarge"
   ]
-
-  node_groups = { for node_group in var.node_groups : node_group["name"] => node_group }
-
 }

modules/node_groups/README.md

Lines changed: 55 additions & 0 deletions (new file)

# eks `node_groups` submodule

Helper submodule to create and manage resources related to `eks_node_groups`.

## Assumptions

* Designed for use by the parent module and not directly by end users

## Node Groups' IAM Role

The role ARN specified in `var.default_iam_role_arn` is used by default. In a simple configuration this will be the worker role created by the parent module.

`iam_role_arn` must be specified in either `var.node_groups_defaults` or `var.node_groups` if the default parent IAM role is not being created for whatever reason, for example if `manage_worker_iam_resources` is set to false in the parent.

## `node_groups` and `node_groups_defaults` keys

`node_groups_defaults` is a map that can take the keys below. Its values are used wherever a key is not specified in an individual node group.

`node_groups` is a map of maps. The first-level key is used as the unique value for `for_each` resources and in the `aws_eks_node_group` name. The inner maps can take the keys below.

| Name | Description | Type | If unset |
|------|-------------|:----:|:-----:|
| additional\_tags | Additional tags to apply to node group | map(string) | Only `var.tags` applied |
| ami\_release\_version | AMI version of workers | string | Provider default behavior |
| ami\_type | AMI Type. See Terraform or AWS docs | string | Provider default behavior |
| desired\_capacity | Desired number of workers | number | `var.workers_group_defaults[asg_desired_capacity]` |
| disk\_size | Workers' disk size | number | Provider default behavior |
| iam\_role\_arn | IAM role ARN for workers | string | `var.default_iam_role_arn` |
| instance\_type | Workers' instance type | string | `var.workers_group_defaults[instance_type]` |
| k8s\_labels | Kubernetes labels | map(string) | No labels applied |
| key\_name | Key name for workers. Set to empty string to disable remote access | string | `var.workers_group_defaults[key_name]` |
| max\_capacity | Max number of workers | number | `var.workers_group_defaults[asg_max_size]` |
| min\_capacity | Min number of workers | number | `var.workers_group_defaults[asg_min_size]` |
| source\_security\_group\_ids | Source security groups for remote access to workers | list(string) | If key\_name is specified: **remote access is opened to the world** |
| subnets | Subnets to contain workers | list(string) | `var.workers_group_defaults[subnets]` |
| version | Kubernetes version | string | Provider default behavior |

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| cluster\_name | Name of parent cluster | string | n/a | yes |
| create\_eks | Controls if EKS resources should be created (it affects almost all resources) | bool | `"true"` | no |
| default\_iam\_role\_arn | ARN of the default IAM worker role to use if one is not specified in `var.node_groups` or `var.node_groups_defaults` | string | n/a | yes |
| node\_groups | Map of maps of `eks_node_groups` to create. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | `{}` | no |
| node\_groups\_defaults | Map of values to be applied to all node groups. See "`node_groups` and `node_groups_defaults` keys" section in README.md for more details | any | n/a | yes |
| tags | A map of tags to add to all resources | map(string) | n/a | yes |
| workers\_group\_defaults | Workers group defaults from parent | any | n/a | yes |

## Outputs

| Name | Description |
|------|-------------|
| aws\_auth\_roles | Roles for use in aws-auth ConfigMap |
| node\_groups | Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values |

<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
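A minimal sketch of the IAM note above, editor-added for illustration (the role ARN and module version are hypothetical, not from this commit): when the parent is told not to create worker IAM resources, a pre-existing role must be passed explicitly, and `node_groups_defaults` lets it be set once for all groups.

```hcl
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "test-eks-cluster"
  subnets      = module.vpc.private_subnets
  vpc_id       = module.vpc.vpc_id

  # Parent creates no default worker role in this configuration...
  manage_worker_iam_resources = false

  # ...so iam_role_arn is required here (hypothetical pre-created role).
  node_groups_defaults = {
    iam_role_arn = "arn:aws:iam::123456789012:role/my-eks-node-role"
  }

  node_groups = {
    example = {
      desired_capacity = 1
      max_capacity     = 3
      min_capacity     = 1
    }
  }
}
```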

modules/node_groups/locals.tf

Lines changed: 16 additions & 0 deletions (new file)

locals {
  # Merge defaults and per-group values to make code cleaner
  node_groups_expanded = { for k, v in var.node_groups : k => merge(
    {
      desired_capacity = var.workers_group_defaults["asg_desired_capacity"]
      iam_role_arn     = var.default_iam_role_arn
      instance_type    = var.workers_group_defaults["instance_type"]
      key_name         = var.workers_group_defaults["key_name"]
      max_capacity     = var.workers_group_defaults["asg_max_size"]
      min_capacity     = var.workers_group_defaults["asg_min_size"]
      subnets          = var.workers_group_defaults["subnets"]
    },
    var.node_groups_defaults,
    v,
  ) if var.create_eks }
}
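The three-argument `merge()` above gives a clear precedence order: per-group values override `node_groups_defaults`, which override the keys seeded from `workers_group_defaults` (Terraform's `merge()` is last-argument-wins). A self-contained illustration with made-up literal values, not taken from the module:

```hcl
locals {
  # Stand-ins for the real inputs (hypothetical values):
  workers_group_defaults = { instance_type = "m4.large", asg_max_size = 3 }
  node_groups_defaults   = { instance_type = "t3.medium" }
  node_group             = { max_capacity = 10 }

  # Mirrors the shape of node_groups_expanded for one group:
  effective = merge(
    {
      instance_type = local.workers_group_defaults["instance_type"]
      max_capacity  = local.workers_group_defaults["asg_max_size"]
    },
    local.node_groups_defaults,
    local.node_group,
  )
  # effective == { instance_type = "t3.medium", max_capacity = 10 }
}
```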

modules/node_groups/node_groups.tf

Lines changed: 49 additions & 0 deletions (new file)

resource "aws_eks_node_group" "workers" {
  for_each = local.node_groups_expanded

  node_group_name = join("-", [var.cluster_name, each.key, random_pet.node_groups[each.key].id])

  cluster_name  = var.cluster_name
  node_role_arn = each.value["iam_role_arn"]
  subnet_ids    = each.value["subnets"]

  scaling_config {
    desired_size = each.value["desired_capacity"]
    max_size     = each.value["max_capacity"]
    min_size     = each.value["min_capacity"]
  }

  ami_type        = lookup(each.value, "ami_type", null)
  disk_size       = lookup(each.value, "disk_size", null)
  instance_types  = [each.value["instance_type"]]
  release_version = lookup(each.value, "ami_release_version", null)

  dynamic "remote_access" {
    for_each = each.value["key_name"] != "" ? [{
      ec2_ssh_key               = each.value["key_name"]
      source_security_group_ids = lookup(each.value, "source_security_group_ids", [])
    }] : []

    content {
      ec2_ssh_key               = remote_access.value["ec2_ssh_key"]
      source_security_group_ids = remote_access.value["source_security_group_ids"]
    }
  }

  version = lookup(each.value, "version", null)

  labels = merge(
    lookup(var.node_groups_defaults, "k8s_labels", {}),
    lookup(var.node_groups[each.key], "k8s_labels", {})
  )

  tags = merge(
    var.tags,
    lookup(var.node_groups_defaults, "additional_tags", {}),
    lookup(var.node_groups[each.key], "additional_tags", {}),
  )

  lifecycle {
    create_before_destroy = true
  }
}
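The `dynamic "remote_access"` block renders exactly zero or one `remote_access` blocks depending on whether `key_name` is empty. An editor-added sketch of the two input shapes that drive it (the key pair name and security group ID are hypothetical):

```hcl
node_groups = {
  ssh_enabled = {
    key_name = "my-ec2-keypair" # hypothetical EC2 key pair name
    # If this list is omitted while key_name is set, remote access is
    # opened to the world (see the submodule README's warning).
    source_security_group_ids = ["sg-0123456789abcdef0"]
  }
  no_ssh = {
    key_name = "" # empty string: no remote_access block is rendered
  }
}
```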

modules/node_groups/outputs.tf

Lines changed: 14 additions & 0 deletions (new file)

output "node_groups" {
  description = "Outputs from EKS node groups. Map of maps, keyed by `var.node_groups` keys. See `aws_eks_node_group` Terraform documentation for values"
  value       = aws_eks_node_group.workers
}

output "aws_auth_roles" {
  description = "Roles for use in aws-auth ConfigMap"
  value = [
    for k, v in local.node_groups_expanded : {
      worker_role_arn = lookup(v, "iam_role_arn", var.default_iam_role_arn)
      platform        = "linux"
    }
  ]
}
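Because the `node_groups` output passes through the raw `aws_eks_node_group` resources keyed by `var.node_groups` keys, callers can index any attribute the resource exports, such as `arn` or `status`. A sketch assuming a group keyed `example` exists in the parent's `var.node_groups`:

```hcl
# Editor-added illustration; "example" is the node group key from var.node_groups.
output "example_node_group_arn" {
  value = module.eks.node_groups["example"].arn
}

output "example_node_group_status" {
  value = module.eks.node_groups["example"].status
}
```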
