# Terraform AWS EKS Cluster Module

## Overview

This Terraform module provisions an Amazon EKS (Elastic Kubernetes Service) cluster with comprehensive configuration options, including VPC integration, IAM roles, security groups, OIDC provider support, and ConfigMap authentication management for worker nodes.

## Features

- Complete EKS cluster provisioning
- Configurable Kubernetes version
- VPC and subnet integration
- Security group management with flexible ingress rules
- IAM roles and policies for cluster operation
- OIDC Identity Provider for IAM roles for service accounts (IRSA)
- ConfigMap-based authentication for worker nodes
- Support for private and public API endpoints
- Control plane logging configuration
- CloudPosse naming conventions
- Conditional module enablement

## Resources Created

### EKS

- AWS EKS Cluster
- EKS Cluster IAM Role
- EKS Cluster Security Group

### IAM

- IAM Role for EKS cluster
- IAM Policy Attachments:
  - AmazonEKSClusterPolicy
  - AmazonEKSServicePolicy
- IAM OIDC Identity Provider (optional)

### Security

- Security Group for cluster
- Security Group Rules (egress, worker ingress, custom ingress)

### Authentication

- ConfigMap for AWS auth (optional)
- Kubeconfig file generation (optional)

## Usage

### Basic Example

```hcl
module "eks_cluster" {
  source = "git@github.com:webuildyourcloud/terraform-aws-eks-cluster.git?ref=tags/0.0.3"

  # Naming
  namespace = "myorg"
  stage     = "prod"
  name      = "app"
  region    = "us-east-1"

  # Network Configuration
  vpc_id     = "vpc-12345678"
  subnet_ids = ["subnet-12345678", "subnet-87654321", "subnet-11111111"]

  # Kubernetes Configuration
  kubernetes_version = "1.21"

  # Worker Nodes
  workers_role_arns          = [module.eks_node_group.eks_node_group_role_arn]
  workers_security_group_ids = [module.eks_node_group.security_group_id]

  # Kubeconfig
  kubeconfig_path = "./kubeconfig"

  # Tags
  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}
```

### Advanced Example with OIDC and Private Access

```hcl
module "eks_cluster" {
  source = "git@github.com:webuildyourcloud/terraform-aws-eks-cluster.git?ref=tags/0.0.3"

  namespace = "myorg"
  stage     = "prod"
  name      = "secure-app"
  region    = "us-east-1"

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnet_ids

  # Kubernetes Version
  kubernetes_version = "1.24"

  # API Endpoint Access
  endpoint_private_access = true
  endpoint_public_access  = false

  # Worker Configuration
  workers_role_arns = [
    module.eks_node_group.eks_node_group_role_arn,
    module.eks_fargate.role_arn
  ]
  workers_security_group_ids = [module.vpc.default_security_group_id]

  # Security
  allowed_security_groups = ["sg-12345678"]
  allowed_cidr_blocks     = ["10.0.0.0/8"]

  # OIDC for IRSA
  oidc_provider_enabled = true

  # Control Plane Logging
  enabled_cluster_log_types = [
    "api",
    "audit",
    "authenticator",
    "controllerManager",
    "scheduler"
  ]

  # Additional IAM Mappings
  map_additional_iam_roles = [
    {
      rolearn  = "arn:aws:iam::123456789012:role/DevOpsRole"
      username = "devops"
      groups   = ["system:masters"]
    }
  ]

  map_additional_iam_users = [
    {
      userarn  = "arn:aws:iam::123456789012:user/admin"
      username = "admin"
      groups   = ["system:masters"]
    }
  ]

  kubeconfig_path = "./kubeconfig"

  tags = {
    Environment = "production"
    Security    = "high"
  }
}
```

## Variables

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| region | AWS Region | `string` | n/a | yes |
| namespace | Namespace (e.g., 'eg' or 'cp') | `string` | `""` | no |
| stage | Stage (e.g., 'prod', 'staging', 'dev') | `string` | `""` | no |
| name | Solution name (e.g., 'app' or 'cluster') | `string` | n/a | yes |
| delimiter | Delimiter between namespace, stage, name | `string` | `"-"` | no |
| attributes | Additional attributes | `list(string)` | `[]` | no |
| tags | Additional tags | `map(string)` | `{}` | no |
| enabled | Enable/disable module resources | `bool` | `true` | no |
| vpc_id | VPC ID for the EKS cluster | `string` | n/a | yes |
| subnet_ids | List of subnet IDs to launch the cluster in | `list(string)` | n/a | yes |
| associate_public_ip_address | Associate public IP with instances in VPC | `bool` | `true` | no |
| allowed_security_groups | Security Group IDs allowed to connect to EKS cluster | `list(string)` | `[]` | no |
| allowed_cidr_blocks | CIDR blocks allowed to connect to EKS cluster | `list(string)` | `[]` | no |
| workers_role_arns | List of Role ARNs of worker nodes | `list(string)` | n/a | yes |
| workers_security_group_ids | Security Group IDs of worker nodes | `list(string)` | n/a | yes |
| kubernetes_version | Desired Kubernetes master version | `string` | `"1.14"` | no |
| oidc_provider_enabled | Create IAM OIDC identity provider | `bool` | `false` | no |
| endpoint_private_access | Enable private API server endpoint | `bool` | `false` | no |
| endpoint_public_access | Enable public API server endpoint | `bool` | `true` | no |
| enabled_cluster_log_types | List of control plane logging types to enable | `list(string)` | `[]` | no |
| apply_config_map_aws_auth | Apply ConfigMap to allow worker nodes to join | `bool` | `true` | no |
| map_additional_aws_accounts | Additional AWS account numbers for config-map-aws-auth | `list(string)` | `[]` | no |
| map_additional_iam_roles | Additional IAM roles for config-map-aws-auth | `list(object)` | `[]` | no |
| map_additional_iam_users | Additional IAM users for config-map-aws-auth | `list(object)` | `[]` | no |
| kubeconfig_path | Path to kubeconfig file | `string` | `"./files/config"` | no |
| local_exec_interpreter | Shell to use for local exec | `string` | `"/bin/bash"` | no |
| install_aws_cli | Install AWS CLI if not present | `bool` | `false` | no |
| install_kubectl | Install kubectl if not present | `bool` | `false` | no |
| kubectl_version | kubectl version to install | `string` | `""` | no |
| aws_eks_update_kubeconfig_additional_arguments | Additional arguments for aws eks update-kubeconfig | `string` | `""` | no |
| aws_cli_assume_role_arn | IAM Role ARN for AWS CLI to assume | `string` | `""` | no |
## Outputs

| Name | Description |
|------|-------------|
| security_group_id | ID of the EKS cluster Security Group |
| security_group_arn | ARN of the EKS cluster Security Group |
| security_group_name | Name of the EKS cluster Security Group |
| eks_cluster_id | Name of the cluster |
| eks_cluster_arn | Amazon Resource Name (ARN) of the cluster |
| eks_cluster_endpoint | Endpoint for Kubernetes API server |
| eks_cluster_version | Kubernetes server version of the cluster |
| eks_cluster_identity_oidc_issuer | OIDC Identity issuer for the cluster |
| eks_cluster_identity_oidc_issuer_arn | OIDC Identity issuer ARN for IRSA |
| eks_cluster_certificate_authority_data | Kubernetes cluster certificate authority data |
| eks_cluster_auth_token | Cluster authentication token |
| workers_security_group_ids | Security group of worker nodes |

## Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.13 |
| aws | ~> 3.27 |
| template | ~> 2.2 |
| null | ~> 3.0 |
| local | ~> 2.0 |

## Dependencies

- [cloudposse/terraform-null-label](https://github.com/cloudposse/terraform-null-label) - Resource naming

## IAM Roles for Service Accounts (IRSA)

When `oidc_provider_enabled = true`, the module creates an OIDC identity provider that enables IRSA.
This allows Kubernetes service accounts to assume IAM roles:

```hcl
# Example: Create an IAM role for a service account.
# Note: this document defines only the trust policy; attach the actual
# S3 (or other) permissions to the role separately.
data "aws_iam_policy_document" "s3_access" {
  statement {
    actions = [
      "sts:AssumeRoleWithWebIdentity"
    ]
    effect = "Allow"

    principals {
      identifiers = [module.eks_cluster.eks_cluster_identity_oidc_issuer_arn]
      type        = "Federated"
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks_cluster.eks_cluster_identity_oidc_issuer, "https://", "")}:sub"
      values   = ["system:serviceaccount:default:s3-reader"]
    }
  }
}

resource "aws_iam_role" "s3_reader" {
  name               = "eks-s3-reader"
  assume_role_policy = data.aws_iam_policy_document.s3_access.json
}
```

## Control Plane Logging

Enable comprehensive control plane logging for audit and debugging:

```hcl
enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
```

Log types:

- **api**: API server logs
- **audit**: Audit logs
- **authenticator**: Authenticator logs
- **controllerManager**: Controller manager logs
- **scheduler**: Scheduler logs

## Important Notes

1. **Cluster Creation Time**: EKS cluster creation takes 10-15 minutes
2. **Subnets**: Use private subnets for worker nodes; public subnets for public load balancers
3. **Kubernetes Version**: Specify a version or use the latest available
4. **OIDC Provider**: Enable for using IAM roles for service accounts
5. **Worker Authentication**: Workers need proper IAM roles and security group configuration
6. **API Endpoints**:
   - Public access: Accessible from the internet (with optional CIDR restrictions)
   - Private access: Only accessible from within the VPC
7. **ConfigMap Auth**: The module can automatically configure the aws-auth ConfigMap for worker node authentication

## Best Practices

1. **Private API Endpoint**: Use the private endpoint for production clusters
2. **Enable Logging**: Enable all control plane log types for production
3. **Use OIDC**: Enable the OIDC provider for better security with IRSA
4. **Network Security**: Restrict `allowed_cidr_blocks` and `allowed_security_groups`
5. **Version Pinning**: Pin the Kubernetes version for consistency
6. **Multiple Subnets**: Use subnets across multiple AZs for high availability
7. **IAM Permissions**: Follow the principle of least privilege for worker roles
8. **Tagging**: Use consistent tagging for cost allocation and management

## Troubleshooting

### Cluster creation fails

- Verify IAM permissions for creating EKS clusters
- Check that subnet tags include `kubernetes.io/cluster/<cluster-name> = "shared"`
- Ensure subnets have sufficient IP addresses

### Workers cannot join cluster

- Verify `workers_role_arns` are correctly specified
- Check that security group rules allow communication between the control plane and workers
- Ensure the aws-auth ConfigMap is properly configured

### Cannot access cluster API

- Verify endpoint access settings
- Check security group and CIDR restrictions
- Ensure the kubeconfig is correctly generated

## License

This module is provided as-is for use within your organization.
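## Appendix: Subnet Tagging Example

The subnet tagging requirement called out under Troubleshooting can be expressed directly in Terraform. This is a minimal sketch, not part of the module: the `aws_subnet` resource, CIDR, and availability zone are hypothetical, and the cluster name assumes the `namespace`/`stage`/`name` inputs from the Basic Example joined with the default `-` delimiter.

```hcl
# Hypothetical private subnet tagged for EKS cluster discovery.
# Replace "myorg-prod-app" with the actual cluster name produced by
# your namespace/stage/name inputs.
resource "aws_subnet" "private" {
  vpc_id            = var.vpc_id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    # Lets EKS and Kubernetes discover this subnet for the cluster.
    "kubernetes.io/cluster/myorg-prod-app" = "shared"
    # Allows Kubernetes to place internal load balancers in this subnet.
    "kubernetes.io/role/internal-elb" = "1"
  }
}
```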