
Commit b85ae04

feat(posts): add runme
1 parent aa2fa11 commit b85ae04

1 file changed (+20 -13 lines)

_posts/2023/2023-08-03-cilium-amazon-eks.md (+20 -13)
@@ -7,6 +7,8 @@ categories: [Kubernetes, Amazon EKS, Cilium]
 tags: [Amazon EKS, k8s, kubernetes, karpenter, eksctl, cert-manager, external-dns, podinfo, cilium, prometheus, sso, oauth2-proxy, metrics-server]
 image:
   path: https://raw.githubusercontent.com/cncf/artwork/ac38e11ed57f017a06c9dcb19013bcaed92115a9/projects/cilium/icon/color/cilium_icon-color.svg
+shell: /usr/local/bin/bash -eu
+cwd: /tmp
 ---
 
 I'm going to describe how to install [Amazon EKS](https://aws.amazon.com/eks/)
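
The two added front-matter keys are document-level [runme](https://runme.dev/) settings: `shell` runs every cell under `/usr/local/bin/bash -eu`, so a failing command or an unset variable aborts the cell early, and `cwd` starts every cell in `/tmp`. A minimal sketch of listing what runme picks up, assuming a recent runme CLI (the `ls` subcommand and `--filename` flag are my assumptions, not part of this commit):

```sh
# Hypothetical check: list the cells runme discovers in the post.
# The front matter's shell and cwd apply to every cell unless a cell
# overrides them with its own annotations.
runme ls --filename _posts/2023/2023-08-03-cilium-amazon-eks.md
```
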
@@ -54,7 +56,7 @@ The Cilium installation should meet these requirements:
 If you would like to follow this document and its tasks, you will need to set up
 a few environment variables like:
 
-```bash
+```bash { interactive=false }
 # AWS Region
 export AWS_DEFAULT_REGION="${AWS_DEFAULT_REGION:-us-east-1}"
 # Hostname / FQDN definitions
@@ -75,7 +77,7 @@ mkdir -pv "${TMP_DIR}/${CLUSTER_FQDN}"
 You will need to configure [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
 and other secrets/variables.
 
-```shell
+```sh { excludeFromRunAll=true }
 # AWS Credentials
 export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxx"
 export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
@@ -117,7 +119,7 @@ Install necessary tools:
 
 Create DNS zone for EKS clusters:
 
-```shell
+```sh { excludeFromRunAll=true }
 export CLOUDFLARE_EMAIL="[email protected]"
 export CLOUDFLARE_API_KEY="1xxxxxxxxx0"
 
@@ -131,7 +133,7 @@ Use your domain registrar to change the nameservers for your zone (for example
 `mylabs.dev`) to use the Amazon Route 53 nameservers. Here is how you
 can find out the Route 53 nameservers:
 
-```shell
+```sh { excludeFromRunAll=true }
 NEW_ZONE_ID=$(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${BASE_DOMAIN}.\`].Id" --output text)
 NEW_ZONE_NS=$(aws route53 get-hosted-zone --output json --id "${NEW_ZONE_ID}" --query "DelegationSet.NameServers")
 NEW_ZONE_NS1=$(echo "${NEW_ZONE_NS}" | jq -r ".[0]")
@@ -142,7 +144,7 @@ Create the NS record in `k8s.mylabs.dev` (`BASE_DOMAIN`) for
 proper zone delegation. This step depends on your domain registrar - I'm using
 CloudFlare and Ansible to automate it:
 
-```shell
+```sh { excludeFromRunAll=true }
 ansible -m cloudflare_dns -c local -i "localhost," localhost -a "zone=mylabs.dev record=${BASE_DOMAIN} type=NS value=${NEW_ZONE_NS1} solo=true proxied=no account_email=${CLOUDFLARE_EMAIL} account_api_token=${CLOUDFLARE_API_KEY}"
 ansible -m cloudflare_dns -c local -i "localhost," localhost -a "zone=mylabs.dev record=${BASE_DOMAIN} type=NS value=${NEW_ZONE_NS2} solo=false proxied=no account_email=${CLOUDFLARE_EMAIL} account_api_token=${CLOUDFLARE_API_KEY}"
 ```
@@ -405,6 +407,11 @@ managedNodeGroups:
     disablePodIMDS: true
     volumeEncrypted: true
     volumeKmsKeyID: ${AWS_KMS_KEY_ID}
+    # iam:
+    #   attachPolicyARNs:
+    #     - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
+    #     - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
+    #     - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
     taints:
       - key: "node.cilium.io/agent-not-ready"
         value: "true"
@@ -1652,7 +1659,7 @@ Cluster health: 2/2 reachable (2023-08-18T17:53:44Z)
 
 Handy details about Cilium networking can be found by listing the `ciliumnodes` CRD:
 
-```shell
+```sh { excludeFromRunAll=true }
 kubectl describe ciliumnodes.cilium.io
 ```
 
@@ -2112,15 +2119,15 @@ Events: <none>
 
 Remove EKS cluster and created components:
 
-```sh
+```sh { category=destroy }
 if eksctl get cluster --name="${CLUSTER_NAME}"; then
   eksctl delete cluster --name="${CLUSTER_NAME}" --force
 fi
 ```
 
 Remove Route 53 DNS records from DNS Zone:
 
-```sh
+```sh { category=destroy }
 CLUSTER_FQDN_ZONE_ID=$(aws route53 list-hosted-zones --query "HostedZones[?Name==\`${CLUSTER_FQDN}.\`].Id" --output text)
 if [[ -n "${CLUSTER_FQDN_ZONE_ID}" ]]; then
   aws route53 list-resource-record-sets --hosted-zone-id "${CLUSTER_FQDN_ZONE_ID}" | jq -c '.ResourceRecordSets[] | select (.Type != "SOA" and .Type != "NS")' |
@@ -2135,7 +2142,7 @@ fi
 
 Remove orphan EC2s created by Karpenter:
 
-```sh
+```sh { category=destroy }
 for EC2 in $(aws ec2 describe-instances --filters "Name=tag:kubernetes.io/cluster/${CLUSTER_NAME},Values=owned" Name=instance-state-name,Values=running --query "Reservations[].Instances[].InstanceId" --output text) ; do
   echo "Removing EC2: ${EC2}"
   aws ec2 terminate-instances --instance-ids "${EC2}"
@@ -2144,20 +2151,20 @@ done
 
 Remove CloudFormation stack:
 
-```sh
+```sh { category=destroy }
 aws cloudformation delete-stack --stack-name "${CLUSTER_NAME}-route53-kms"
 ```
 
 Wait for all CloudFormation stacks to be deleted:
 
-```sh
+```sh { category=destroy }
 aws cloudformation wait stack-delete-complete --stack-name "${CLUSTER_NAME}-route53-kms"
 aws cloudformation wait stack-delete-complete --stack-name "eksctl-${CLUSTER_NAME}-cluster"
 ```
 
 Remove Volumes and Snapshots related to the cluster (just in case):
 
-```sh
+```sh { category=destroy }
 for VOLUME in $(aws ec2 describe-volumes --filter "Name=tag:eks:cluster-name,Values=${CLUSTER_NAME}" --query 'Volumes[].VolumeId' --output text) ; do
   echo "*** Removing Volume: ${VOLUME}"
   aws ec2 delete-volume --volume-id "${VOLUME}"
@@ -2166,7 +2173,7 @@ done
 
 Remove `${TMP_DIR}/${CLUSTER_FQDN}` directory:
 
-```sh
+```sh { category=destroy }
 [[ -d "${TMP_DIR}/${CLUSTER_FQDN}" ]] && rm -rf "${TMP_DIR}/${CLUSTER_FQDN}" && [[ -d "${TMP_DIR}" ]] && rmdir "${TMP_DIR}" || true
 ```
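
Taken together, the added annotations split the post's cells into groups: `{ interactive=false }` marks the environment-variable cell as non-interactive, `{ excludeFromRunAll=true }` keeps one-off cells (credentials, DNS delegation, read-only inspection) out of a bulk run, and `{ category=destroy }` tags every teardown cell so the cluster can be removed in one pass. A hedged usage sketch, assuming a runme CLI whose `run` subcommand supports `--all` and `--category` (flag names are my assumption; newer releases may refer to categories as tags):

```sh
# Run every cell except those annotated excludeFromRunAll=true
# (assumed flags; check `runme run --help` for your version).
runme run --all --filename _posts/2023/2023-08-03-cilium-amazon-eks.md

# Tear everything down later by running only the destroy-tagged cells.
runme run --all --category destroy --filename _posts/2023/2023-08-03-cilium-amazon-eks.md
```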
