@@ -7,6 +7,8 @@ categories: [Kubernetes, Amazon EKS, Cilium]
tags: [Amazon EKS, k8s, kubernetes, karpenter, eksctl, cert-manager, external-dns, podinfo, cilium, prometheus, sso, oauth2-proxy, metrics-server]
image:
  path: https://raw.githubusercontent.com/cncf/artwork/ac38e11ed57f017a06c9dcb19013bcaed92115a9/projects/cilium/icon/color/cilium_icon-color.svg
+shell: /usr/local/bin/bash -eu
+cwd: /tmp
---

I'm going to describe how to install [Amazon EKS](https://aws.amazon.com/eks/)
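
The added `shell` front matter points the document's code runner at `bash -eu`, so the first failing command or reference to an unset variable aborts the whole run. A minimal sketch of that behavior (the variable name is made up for illustration):

```sh
# -e: exit on the first failing command; -u: treat unset variables as errors
bash -eu -c 'echo "${SOME_UNSET_VAR}"' || echo "run aborted as expected"
```
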
@@ -54,7 +56,7 @@ The Cilium installation should meet these requirements:
If you would like to follow this document and its tasks, you will need to set up
a few environment variables:

-```bash
+```bash { interactive=false }
# AWS Region
export AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION:-us-east-1}"
# Hostname / FQDN definitions
@@ -75,7 +77,7 @@ mkdir -pv "${TMP_DIR}/${CLUSTER_FQDN}"
You will need to configure [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
and other secrets/variables.

-```shell
+```sh { excludeFromRunAll=true }
# AWS Credentials
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
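
Once real credentials are exported, a quick sanity check confirms they resolve to the expected account (a verification step added here, not from the post):

```sh
# print the AWS account ID the exported credentials belong to
aws sts get-caller-identity --query Account --output text
```
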
@@ -117,7 +119,7 @@ Install necessary tools:

Create a DNS zone for the EKS clusters:

-```shell
+```sh { excludeFromRunAll=true }
export CLOUDFLARE_EMAIL="[email protected]"
export CLOUDFLARE_API_KEY="1xxxxxxxxx0"
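
The hunk shows only the credential exports; the zone creation itself sits outside the diff context. For completeness, a hosted zone for `${BASE_DOMAIN}` would be created along these lines (a sketch, not necessarily the post's exact command):

```sh
# create the Route 53 hosted zone; the caller reference just needs to be unique
aws route53 create-hosted-zone --name "${BASE_DOMAIN}" --caller-reference "$(date +%s)"
```
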
@@ -131,7 +133,7 @@ Use your domain registrar to change the nameservers for your zone (for example
`mylabs.dev`) to use the Amazon Route 53 nameservers. Here is how you
can find the Route 53 nameservers:

-```shell
+```sh { excludeFromRunAll=true }
NEW_ZONE_ID=$(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${BASE_DOMAIN}.\`].Id" --output text)
NEW_ZONE_NS=$(aws route53 get-hosted-zone --output json --id "${NEW_ZONE_ID}" --query "DelegationSet.NameServers")
NEW_ZONE_NS1=$(echo "${NEW_ZONE_NS}" | jq -r ".[0]")
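
The hunk ends here, but `NEW_ZONE_NS2` is referenced by the ansible commands further down; it is evidently derived the same way from the second array entry:

```sh
# second delegated nameserver, used for the NS record below
NEW_ZONE_NS2=$(echo "${NEW_ZONE_NS}" | jq -r ".[1]")
```
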
@@ -142,7 +144,7 @@ Create the NS record in `k8s.mylabs.dev` (`BASE_DOMAIN`) for
proper zone delegation. This step depends on your domain registrar - I'm using
CloudFlare and Ansible to automate it:

-```shell
+```sh { excludeFromRunAll=true }
ansible -m cloudflare_dns -c local -i "localhost," localhost -a "zone=mylabs.dev record=${BASE_DOMAIN} type=NS value=${NEW_ZONE_NS1} solo=true proxied=no account_email=${CLOUDFLARE_EMAIL} account_api_token=${CLOUDFLARE_API_KEY}"
ansible -m cloudflare_dns -c local -i "localhost," localhost -a "zone=mylabs.dev record=${BASE_DOMAIN} type=NS value=${NEW_ZONE_NS2} solo=false proxied=no account_email=${CLOUDFLARE_EMAIL} account_api_token=${CLOUDFLARE_API_KEY}"
```
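
Once the NS records are in place, it is worth confirming that public resolvers hand back the Route 53 nameservers before moving on (a verification step added here, not from the post):

```sh
# should print the awsdns-* nameservers assigned to the delegated zone
dig +short NS "${BASE_DOMAIN}"
```
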
@@ -405,6 +407,11 @@ managedNodeGroups:
    disablePodIMDS: true
    volumeEncrypted: true
    volumeKmsKeyID: ${AWS_KMS_KEY_ID}
+    # iam:
+    #   attachPolicyARNs:
+    #     - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
+    #     - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
+    #     - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
    taints:
      - key: "node.cilium.io/agent-not-ready"
        value: "true"
@@ -1652,7 +1659,7 @@ Cluster health: 2/2 reachable (2023-08-18T17:53:44Z)

Handy details about Cilium networking can be found by listing the `ciliumnodes` CRD:

-```shell
+```sh { excludeFromRunAll=true }
kubectl describe ciliumnodes.cilium.io
```

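For a quick per-node summary instead of the full dump, plain `kubectl get` works on the same CRD (assuming Cilium's CRDs are installed):

```sh
# compact, one-line-per-node view of the CiliumNode objects
kubectl get ciliumnodes.cilium.io -o wide
```
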
@@ -2112,15 +2119,15 @@ Events: <none>

Remove the EKS cluster and the components created with it:

-```sh
+```sh { category=destroy }
if eksctl get cluster --name="${CLUSTER_NAME}"; then
  eksctl delete cluster --name="${CLUSTER_NAME}" --force
fi
```

Remove the Route 53 DNS records from the DNS zone:

-```sh
+```sh { category=destroy }
2124
2131
CLUSTER_FQDN_ZONE_ID=$( aws route53 list-hosted-zones --query " HostedZones[?Name==\` ${CLUSTER_FQDN} .\` ].Id" --output text)
2125
2132
if [[ -n " ${CLUSTER_FQDN_ZONE_ID} " ]]; then
2126
2133
aws route53 list-resource-record-sets --hosted-zone-id " ${CLUSTER_FQDN_ZONE_ID} " | jq -c ' .ResourceRecordSets[] | select (.Type != "SOA" and .Type != "NS")' |
2135
2142
2136
2143
Remove orphaned EC2 instances created by Karpenter:

-```sh
+```sh { category=destroy }
for EC2 in $(aws ec2 describe-instances --filters "Name=tag:kubernetes.io/cluster/${CLUSTER_NAME},Values=owned" Name=instance-state-name,Values=running --query "Reservations[].Instances[].InstanceId" --output text); do
  echo "Removing EC2: ${EC2}"
  aws ec2 terminate-instances --instance-ids "${EC2}"
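
`terminate-instances` only starts the shutdown; the CloudFormation deletion below can race with instances that are still terminating. An explicit wait inside the loop helps (an optional addition, not in the post):

```sh
# block until this instance is fully terminated
aws ec2 wait instance-terminated --instance-ids "${EC2}"
```
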
@@ -2144,20 +2151,20 @@ done

Remove the CloudFormation stack:

-```sh
+```sh { category=destroy }
aws cloudformation delete-stack --stack-name "${CLUSTER_NAME}-route53-kms"
```

Wait for all CloudFormation stacks to be deleted:

-```sh
+```sh { category=destroy }
aws cloudformation wait stack-delete-complete --stack-name "${CLUSTER_NAME}-route53-kms"
aws cloudformation wait stack-delete-complete --stack-name "eksctl-${CLUSTER_NAME}-cluster"
```

Remove Volumes and Snapshots related to the cluster (just in case):

-```sh
+```sh { category=destroy }
for VOLUME in $(aws ec2 describe-volumes --filter "Name=tag:eks:cluster-name,Values=${CLUSTER_NAME}" --query 'Volumes[].VolumeId' --output text); do
  echo "*** Removing Volume: ${VOLUME}"
  aws ec2 delete-volume --volume-id "${VOLUME}"
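
The prose above mentions snapshots, but the snapshot half of the cleanup falls outside this hunk. It presumably mirrors the volume loop; a hypothetical sketch, assuming snapshots carry the same `eks:cluster-name` tag:

```sh
# delete snapshots tagged for the cluster, mirroring the volume loop above
for SNAPSHOT in $(aws ec2 describe-snapshots --filters "Name=tag:eks:cluster-name,Values=${CLUSTER_NAME}" --query 'Snapshots[].SnapshotId' --output text); do
  echo "*** Removing Snapshot: ${SNAPSHOT}"
  aws ec2 delete-snapshot --snapshot-id "${SNAPSHOT}"
done
```
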
@@ -2166,7 +2173,7 @@ done

Remove the `${TMP_DIR}/${CLUSTER_FQDN}` directory:

-```sh
+```sh { category=destroy }
[[ -d "${TMP_DIR}/${CLUSTER_FQDN}" ]] && rm -rf "${TMP_DIR}/${CLUSTER_FQDN}" && [[ -d "${TMP_DIR}" ]] && rmdir "${TMP_DIR}" || true
```