Commit c8e16df: Update docs
1 parent 6d8d097

docs/operations/karpenter.md (1 file changed, 87 additions, 38 deletions)

[Karpenter](https://karpenter.sh) is a Kubernetes-native capacity manager that directly provisions Nodes and underlying instances based on Pod requirements. On AWS, kOps supports managing an InstanceGroup with either Karpenter or an AWS Auto Scaling Group (ASG).

## Prerequisites

Managed Karpenter requires kOps 1.34+ and that [IAM Roles for Service Accounts (IRSA)](/cluster_spec#service-account-issuer-discovery-and-aws-iam-roles-for-service-accounts-irsa) be enabled for the cluster.
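
One way to check whether IRSA is already configured is to look for the `serviceAccountIssuerDiscovery` block in the cluster spec; on new clusters it is set up by the `--discovery-store` flag shown below. A minimal sketch, assuming the `${NAME}` and state store used in the examples in this document:

```sh
# Look for the IRSA configuration in the cluster spec; an empty result
# means IRSA still needs to be enabled.
kops get cluster ${NAME} --state=s3://my-state-store -o yaml \
  | grep -A3 serviceAccountIssuerDiscovery
```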
## Installing

### New clusters

```sh
export NAME="my-cluster.example.com"
export REGION="us-east-1"
export ZONE="us-east-1a"

kops create cluster --name ${NAME} \
    --state=s3://my-state-store \
    --discovery-store=s3://my-discovery-store \
    --cloud=aws \
    --networking=cilium \
    --zones=${ZONE} \
    --instance-manager=karpenter \
    --yes
```
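
Once the command returns, you can wait for the cluster and its addons to become healthy with `kops validate cluster`; the timeout below is only an example:

```sh
# Block until the control plane and all system Pods report healthy.
kops validate cluster --name ${NAME} --state=s3://my-state-store --wait 15m
```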
### Existing clusters

The Karpenter addon must be enabled in the cluster spec:

```yaml
spec:
  karpenter:
    enabled: true
```

To create a Karpenter InstanceGroup, set the following in its InstanceGroup spec:

```yaml
spec:
  manager: Karpenter
```
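
On an existing cluster, these edits can be applied with the kOps CLI. A minimal sketch, assuming the state store from the example above and an InstanceGroup named `nodes`:

```sh
# Add the karpenter block shown above to the cluster spec.
kops edit cluster ${NAME} --state=s3://my-state-store

# Set manager: Karpenter in the InstanceGroup spec.
kops edit ig nodes --name ${NAME} --state=s3://my-state-store

# Apply the changes; a rolling update may be needed to replace existing Nodes.
kops update cluster ${NAME} --state=s3://my-state-store --yes
kops rolling-update cluster ${NAME} --state=s3://my-state-store --yes
```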

### EC2NodeClass and NodePool

```sh
export USER_DATA=$(aws s3 cp s3://my-state-store/${NAME}/igconfig/node/nodes/nodeupscript.sh -)

cat <<EOF | kubectl apply -f -
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: Custom
  amiSelectorTerms:
    - ssmParameter: /aws/service/canonical/ubuntu/server/24.04/stable/current/amd64/hvm/ebs-gp3/ami-id
  associatePublicIPAddress: true
  tags:
    KubernetesCluster: ${NAME}
    kops.k8s.io/instancegroup: nodes
    k8s.io/role/node: "1"
  subnetSelectorTerms:
    - tags:
        KubernetesCluster: ${NAME}
  securityGroupSelectorTerms:
    - tags:
        KubernetesCluster: ${NAME}
        Name: nodes.${NAME}
  instanceProfile: nodes.${NAME}
  userData: |
$(echo "$USER_DATA" | sed 's/^/    /')
EOF

cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64", "arm64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand", "spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 24h
  limits:
    cpu: 4
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
EOF
```
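
Once these objects exist, Karpenter provisions capacity in response to unschedulable Pods. A few commands that may help verify the setup; the resource names are the upstream Karpenter v1 CRDs, and the label value assumes the `default` NodePool above:

```sh
# Confirm the objects were accepted by the API server.
kubectl get ec2nodeclasses,nodepools

# NodeClaims are created by Karpenter for every instance it launches.
kubectl get nodeclaims

# Nodes launched by Karpenter carry the nodepool label.
kubectl get nodes -l karpenter.sh/nodepool=default
```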

## Karpenter-managed InstanceGroups

A Karpenter-managed InstanceGroup controls the bootstrap script; kOps ensures that the correct AWS security groups, subnets, and permissions are in place. The `EC2NodeClass` and `NodePool` objects themselves must be created by the operator, as shown above.

## Known limitations

* **Upgrading is not supported** from the previous version of managed Karpenter.
* Control plane nodes must be provisioned with an ASG, not Karpenter.
* All `EC2NodeClass` objects must have `spec.amiFamily` set to `Custom`.
* The `spec.instanceStorePolicy` field is not supported in `EC2NodeClass`.
* The `spec.kubelet`, `spec.taints`, and `spec.labels` fields are not supported in `EC2NodeClass`; configure them in the `Cluster` or `InstanceGroup` spec instead (see the sketch below).
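
A minimal sketch of setting labels and taints through the InstanceGroup rather than the `EC2NodeClass`; the label and taint values are hypothetical examples:

```sh
# Edit the Karpenter-managed InstanceGroup and add, for example:
#   spec:
#     nodeLabels:
#       workload: batch
#     taints:
#       - dedicated=batch:NoSchedule
kops edit ig nodes --name ${NAME} --state=s3://my-state-store

# Apply the change so newly provisioned Nodes pick up the labels and taints.
kops update cluster ${NAME} --state=s3://my-state-store --yes
```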
