# Enabling IPv6

## Overview

CAPA enables you to create IPv6 Kubernetes clusters on Amazon Web Services (AWS).

Only single-stack IPv6 clusters are supported. However, CAPA uses dual-stack infrastructure (e.g. a dual-stack VPC) to support IPv6; in fact, this is the only mode of operation at the time of writing.

> **IMPORTANT NOTE**: Dual-stack clusters are not yet supported.

## Prerequisites

The instance types for control plane and worker machines must be [Nitro-based](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) in order to support IPv6. To see a list of Nitro instance types in your region, run the following command:

```bash
aws ec2 describe-instance-types \
  --filters Name=hypervisor,Values=nitro \
  --query="InstanceTypes[*].InstanceType"
```

## Creating IPv6 EKS-managed Clusters

To quickly deploy an IPv6 EKS cluster, use the [IPv6 EKS cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml).

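As a sketch, assuming `clusterctl` is installed and the usual CAPA environment variables (such as `AWS_REGION` and `AWS_SSH_KEY_NAME`) are exported, a manifest can be rendered from that template and applied to the management cluster. The cluster name, Kubernetes version, and worker count below are only illustrative:

```bash
# Render an IPv6 EKS cluster manifest from the template and apply it.
clusterctl generate cluster ipv6-eks-cluster \
  --from https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-eks-ipv6.yaml \
  --kubernetes-version v1.28.0 \
  --worker-machine-count 2 > ipv6-eks-cluster.yaml

kubectl apply -f ipv6-eks-cluster.yaml
```
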
<aside class="note warning">

<h1>Warning</h1>

You can't define custom Pod CIDRs on EKS with IPv6. EKS automatically assigns an address range from the unique local
address (ULA) range `fc00::/7`.

</aside>

**Note**: All addons **must** be enabled. A working cluster configuration looks like this:

```yaml
kind: AWSManagedControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  network:
    vpc:
      ipv6: {}
  region: "${AWS_REGION}"
  sshKeyName: "${AWS_SSH_KEY_NAME}"
  version: "${KUBERNETES_VERSION}"
  addons:
    - name: "vpc-cni"
      version: "v1.11.0-eksbuild.1"
      # This is important; without it, updates to the addon's environment properties will not be applied.
      conflictResolution: "overwrite"
    - name: "coredns"
      version: "v1.8.7-eksbuild.1"
    - name: "kube-proxy"
      version: "v1.22.6-eksbuild.1"
```
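Once the control plane has been reconciled, you can confirm that the addons are active. The commands below are only a sketch: the EKS cluster name is generated by CAPA and may differ from `${CLUSTER_NAME}`, so look it up first (the `<your-eks-cluster-name>` value is a placeholder):

```bash
# List EKS clusters in the region to find the generated cluster name.
aws eks list-clusters --query "clusters"

# Confirm the vpc-cni addon reached the ACTIVE state.
aws eks describe-addon \
  --cluster-name <your-eks-cluster-name> \
  --addon-name vpc-cni \
  --query "addon.status"
```
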
## Creating IPv6 Self-managed Clusters

To quickly deploy an IPv6 self-managed cluster, use the [IPv6 cluster template](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/templates/cluster-template-ipv6.yaml).

When creating a self-managed cluster, you can define the Pod and Service CIDRs yourself. For example, you can use the ULA IPv6 range `fd01::/48` for pod networking and `fd02::/112` for service networking.

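For illustration, such ranges are set on the `Cluster` object's `clusterNetwork` block; a minimal sketch (the cluster name is a placeholder) could look like this:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["fd01::/48"]    # ULA range for pod networking
    services:
      cidrBlocks: ["fd02::/112"]   # ULA range for service networking
```
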
<aside class="note warning">

<h1>Warning</h1>

**Action required**: Since the `coredns` pods run on the single-stack IPv6 pod network, they will fail to resolve non-cluster DNS queries
via the IPv4 upstream nameserver in `/etc/resolv.conf`.

There are two workaround options:
- Edit the `coredns` Deployment and add `hostNetwork: true`, so it can use the host's routes to reach the IPv4 network:
  ```bash
  kubectl -n kube-system patch deploy/coredns \
    --type=merge -p '{"spec": {"template": {"spec":{"hostNetwork": true}}}}'
  ```
- Edit the `coredns` ConfigMap to use the Route 53 Resolver nameserver `fd00:ec2::253`, by changing the `forward . /etc/resolv.conf` line to `forward . fd00:ec2::253 /etc/resolv.conf`:
  ```bash
  kubectl -n kube-system edit cm/coredns
  ```
</aside>

### CNI IPv6 support

By default, no CNI plugin is installed when provisioning a self-managed cluster. You need to install your own CNI solution that supports IPv6, for example, Calico with VXLAN.

You can find the guides for enabling [IPv6](https://docs.tigera.io/calico/latest/networking/ipam/ipv6#ipv6) and [VXLAN](https://docs.tigera.io/calico/latest/networking/configuring/vxlan-ipip) support for Calico in their official documentation, or you can use the customized IPv6 Calico manifest provided [here](https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml).

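As a sketch, assuming the workload cluster's kubeconfig has been retrieved with `clusterctl get kubeconfig`, the customized manifest can be applied directly:

```bash
# Fetch the workload cluster kubeconfig, then install the IPv6-enabled Calico manifest.
clusterctl get kubeconfig "${CLUSTER_NAME}" > "${CLUSTER_NAME}.kubeconfig"
kubectl --kubeconfig "./${CLUSTER_NAME}.kubeconfig" apply -f \
  https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-aws/refs/heads/main/test/e2e/data/cni/calico_ipv6.yaml
```
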
## IPv6 CIDR Allocations

### AWS-assigned IPv6 VPC CIDR

To request that AWS automatically assign an IPv6 CIDR from an AWS-defined address pool, use the following setting:

```yaml
spec:
  network:
    vpc:
      ipv6: {}
```

### BYOIPv6 VPC CIDR

To define your own IPv6 address pool and CIDR, set the following values:

```yaml
spec:
  network:
    vpc:
      ipv6:
        poolId: pool-id
        cidrBlock: "2009:1234:ff00::/56"
```

The address pool and its IPv6 CIDRs must already be provisioned before they can be referenced here.

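To check which BYOIPv6 pools are already provisioned in the current region, you can, for example, list them via the EC2 API:

```bash
# List the provisioned BYOIPv6 address pools and their CIDRs in this region.
aws ec2 describe-ipv6-pools
```
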
### BYO IPv6 VPC

If you have an existing IPv6-enabled (i.e. dual-stack) VPC that you would like to use, define it in the `AWSCluster` spec:

```yaml
spec:
  network:
    vpc:
      id: vpc-1234567890abcdefg
      cidrBlock: 10.0.0.0/16
      ipv6:
        cidrBlock: "2001:1234:ff00::/56"
        egressOnlyInternetGatewayId: eigw-1234567890abcdefg
```

This has to be defined explicitly, because inferring it would break in the following two scenarios:
- During an upgrade from CAPA 1.5 to >=2.0, where the VPC is already IPv6-enabled but CAPA has only just become aware of IPv6.
- During a migration of the VPC from IPv4-only to dual stack (CAPA would see that IPv6 is enabled and start enforcing it, even though that was not the user's intention).

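Before pointing CAPA at an existing VPC, you can verify that it actually has an IPv6 CIDR associated; for example (the VPC ID below is a placeholder):

```bash
# Show the IPv6 CIDR block associations of an existing VPC.
aws ec2 describe-vpcs \
  --vpc-ids vpc-1234567890abcdefg \
  --query "Vpcs[*].Ipv6CidrBlockAssociationSet"
```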