- Follow the README.md to set up an x86_64-based demo environment on AKS.
- To prevent our changes from being rolled back, disable the built-in AKS azurefile and azuredisk drivers:

```bash
az aks update -g ${AZURE_RESOURCE_GROUP} --name ${CLUSTER_NAME} --disable-file-driver --disable-disk-driver
```
- Create the PeerpodVolume CRD object:

```bash
kubectl apply -f src/csi-wrapper/crd/peerpodvolume.yaml
```

The output looks like:

```console
customresourcedefinition.apiextensions.k8s.io/peerpodvolumes.confidentialcontainers.org created
```
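Optionally, confirm that the CRD is registered before continuing (a quick check using the CRD name from the output above):

```bash
# should list the CRD if registration succeeded
kubectl get crd peerpodvolumes.confidentialcontainers.org
```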
Follow the steps below if you have made changes to the CSI wrapper code and want to deploy those changes.
- Build the csi-wrapper images:

```bash
pushd src/csi-wrapper/
make csi-controller-wrapper-docker
make csi-node-wrapper-docker
make csi-podvm-wrapper-docker
popd
```
- Export your custom registry:

```bash
export REGISTRY="my-registry" # e.g. "quay.io/my-registry"
```
- Tag and push the images:

```bash
docker tag csi-controller-wrapper:local ${REGISTRY}/csi-controller-wrapper:latest
docker tag csi-node-wrapper:local ${REGISTRY}/csi-node-wrapper:latest
docker tag csi-podvm-wrapper:local ${REGISTRY}/csi-podvm-wrapper:latest
docker push ${REGISTRY}/csi-controller-wrapper:latest
docker push ${REGISTRY}/csi-node-wrapper:latest
docker push ${REGISTRY}/csi-podvm-wrapper:latest
```
- Change the image references in the CSI wrapper k8s resources:

```bash
sed -i "s#quay.io/confidential-containers#${REGISTRY}#g" src/csi-wrapper/examples/azure/disk/*.yaml
sed -i "s#quay.io/confidential-containers#${REGISTRY}#g" src/csi-wrapper/examples/azure/file/*.yaml
```
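To verify the substitution took effect, you can search the example manifests for your registry; the matches should be the image references that sed rewrote:

```bash
# lists every occurrence of your registry in the example manifests
grep -rn "${REGISTRY}" src/csi-wrapper/examples/azure/disk/ src/csi-wrapper/examples/azure/file/
```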
Prerequisite: Assign the `Storage Account Contributor` role to the AKS agent pool application so it can create storage accounts:

```bash
OBJECT_ID="$(az ad sp list --display-name "${CLUSTER_NAME}-agentpool" --query '[].id' --output tsv)"
az role assignment create \
  --role "Storage Account Contributor" \
  --assignee-object-id ${OBJECT_ID} \
  --assignee-principal-type ServicePrincipal \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_RESOURCE_GROUP}-aks"
```
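To confirm the assignment landed, you can list the roles granted to that object ID at the same scope; the output should include `Storage Account Contributor`:

```bash
# prints the role names assigned to the agent pool at this scope
az role assignment list \
  --assignee ${OBJECT_ID} \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_RESOURCE_GROUP}-aks" \
  --query '[].roleDefinitionName' --output tsv
```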
Note: All of the following steps can be performed from anywhere with cluster access.
- Clone the azurefile-csi-driver source:

```bash
git clone --depth 1 --branch v1.28.0 https://github.com/kubernetes-sigs/azurefile-csi-driver
pushd azurefile-csi-driver
```
- Enable `attachRequired` in the CSI driver:

```bash
sed -i 's/attachRequired: false/attachRequired: true/g' deploy/csi-azurefile-driver.yaml
```
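A quick grep confirms the change before installing; it should now print `attachRequired: true`:

```bash
# verify the sed edit was applied
grep attachRequired deploy/csi-azurefile-driver.yaml
```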
- Run the script:

```bash
bash ./deploy/install-driver.sh master local
popd
```
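Optionally, verify that the driver pods come up (a minimal check; the driver is deployed into `kube-system`, as the patch step below expects):

```bash
# controller and node pods should be Running
kubectl get pods -n kube-system | grep csi-azurefile
```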
- Configure RBAC so that the wrapper has access to the required operations:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/file/azure-files-csi-wrapper-runner.yaml
kubectl apply -f src/csi-wrapper/examples/azure/file/azure-files-csi-wrapper-podvm.yaml
```
- Patch the csi-azurefile-driver:

```bash
kubectl patch deploy csi-azurefile-controller -n kube-system --patch-file src/csi-wrapper/examples/azure/file/patch-controller.yaml
kubectl -n kube-system delete replicaset -l app=csi-azurefile-controller
kubectl patch ds csi-azurefile-node -n kube-system --patch-file src/csi-wrapper/examples/azure/file/patch-node.yaml
```
- Create a peer-pod enabled StorageClass:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/file/azure-file-StorageClass-for-peerpod.yaml
```
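You can confirm the StorageClass exists; its name, `azure-file-storage`, matches the STORAGECLASS column in the PVC output below:

```bash
# should show the peer-pod StorageClass and its provisioner
kubectl get sc azure-file-storage
```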
- Create a PVC that uses the `azurefile-csi-driver`:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/file/my-pvc.yaml
```
- Wait for the PVC status to become `Bound`:

```console
$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
pvc-azurefile   Bound    pvc-3edc7a93-4531-4034-8818-1b1608907494   1Gi        RWO            azure-file-storage   3m11s
```
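If you'd rather block until the volume is provisioned, `kubectl wait` can poll the phase for you (a sketch; adjust the timeout as needed):

```bash
# returns once the PVC reports phase=Bound, or errors out after 5 minutes
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/pvc-azurefile --timeout=5m
```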
- Create the nginx peer-pod demo with the `podvm-wrapper` and `azurefile-csi-driver` containers:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/file/nginx-kata-with-my-pvc-and-csi-wrapper.yaml
```
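Peer-pod VMs can take a few minutes to boot, so you may want to wait for the pod to report Ready before exec'ing in (pod name as used in the next step):

```bash
# blocks until the peer pod is Ready, or errors out after 10 minutes
kubectl wait --for=condition=Ready pod/nginx-pv --timeout=10m
```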
- Exec into the container and check the mount:

```console
$ kubectl exec nginx-pv -c nginx -i -t -- sh
# mount | grep mount-path
//fffffffffffffffffffffff.file.core.windows.net/pvc-ff587660-73ed-4bd0-8850-285be480f490 on /mount-path type cifs (rw,relatime,vers=3.1.1,cache=strict,username=fffffffffffffffffffffff,uid=0,noforceuid,gid=0,noforcegid,addr=x.x.x.x,file_mode=0777,dir_mode=0777,soft,persistenthandles,nounix,serverino,mapposix,mfsymlinks,rsize=1048576,wsize=1048576,bsize=1048576,echo_interval=60,actimeo=30,closetimeo=1)
```
Note: We can see there's a CIFS mount at `/mount-path`, as expected.
Prerequisite: The service principal of the cluster requires the `Contributor` role on the CAA resource group so it can manage disks. Additionally, the CSI driver needs to be configured to create disks in the CAA resource group instead of the AKS resource group.
- Start by assigning the `Contributor` role to the AKS agent pool:

```bash
OBJECT_ID="$(az ad sp list --display-name "${CLUSTER_NAME}-agentpool" --query '[].id' --output tsv)"
az role assignment create \
  --role "Contributor" \
  --assignee-object-id ${OBJECT_ID} \
  --assignee-principal-type ServicePrincipal \
  --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_RESOURCE_GROUP}"
```
- Get the client ID of the user-assigned identity assigned to the cluster:

```bash
USER_ASSIGNED_CLIENT_ID=$(az aks show \
  --resource-group $AZURE_RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --query identityProfile.kubeletidentity.clientId \
  -o tsv)
```
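It's worth confirming the variable is non-empty before building the cloud config below:

```bash
# should print a client ID (a GUID), not an empty line
echo "${USER_ASSIGNED_CLIENT_ID}"
```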
- Update `cloud-config.yaml` with the cloud config for the user-assigned identity:

```bash
cloud_config=$(cat <<EOF
{
  "cloud": "AzurePublicCloud",
  "tenantId": "$(az account show --query tenantId -o tsv)",
  "subscriptionId": "${AZURE_SUBSCRIPTION_ID}",
  "resourceGroup": "${AZURE_RESOURCE_GROUP}",
  "location": "${AZURE_REGION}",
  "vmType": "vmss",
  "useManagedIdentityExtension": true,
  "userAssignedIdentityID": "${USER_ASSIGNED_CLIENT_ID}",
  "useInstanceMetadata": true,
  "aadClientID": "msi",
  "aadClientSecret": "msi"
}
EOF
)
cloud_config_base64=$(echo "${cloud_config}" | base64 -w0)
sed -i "s|@@CLOUD_CONFIG_BASE64@@|$cloud_config_base64|g" src/csi-wrapper/examples/azure/disk/cloud-config.yaml
```
- Create the cloud config secret:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/disk/cloud-config.yaml
```
- Clone the azuredisk-csi-driver source:

```bash
git clone --depth 1 --branch v1.31.0 https://github.com/kubernetes-sigs/azuredisk-csi-driver
```
- Run the script:

```bash
pushd azuredisk-csi-driver
bash ./deploy/install-driver.sh master local
popd
```
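As with the azurefile driver, you can verify that the driver pods come up:

```bash
# controller and node pods should be Running
kubectl get pods -n kube-system | grep csi-azuredisk
```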
- Configure RBAC so that the wrapper has access to the required operations:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/disk/azure-disk-csi-wrapper-runner.yaml
kubectl apply -f src/csi-wrapper/examples/azure/disk/azure-disk-csi-wrapper-podvm.yaml
```
- Patch the csi-azuredisk-driver:

```bash
kubectl patch deploy csi-azuredisk-controller -n kube-system --patch-file src/csi-wrapper/examples/azure/disk/patch-controller.yaml
kubectl -n kube-system delete replicaset -l app=csi-azuredisk-controller
kubectl patch ds csi-azuredisk-node -n kube-system --patch-file src/csi-wrapper/examples/azure/disk/patch-node.yaml
```
- Create a peer-pod enabled StorageClass:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/disk/azure-disk-storageclass-for-peerpod.yaml
```
- Create a PVC that uses the `azuredisk-csi-driver`:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/disk/dynamic-pvc.yaml
```
- Wait for the PVC status to become `Bound`:

```console
$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
pvc-azuredisk   Bound    pvc-3edc7a93-4531-4034-8818-1b1608907494   10Gi       RWO            azure-disk-storage   3m11s
```
- Create a disk in Azure:

```bash
az disk create --resource-group "${AZURE_RESOURCE_GROUP}-aks" --name static-pvc --size-gb 10 --sku Standard_LRS
azure_disk_id=$(az disk show --resource-group "${AZURE_RESOURCE_GROUP}-aks" --name static-pvc --query id --output tsv)
```
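The variable should now hold the full resource ID of the disk; echo it to make sure the lookup succeeded:

```bash
# should print a resource ID ending in .../disks/static-pvc
echo "${azure_disk_id}"
```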
- Update the PVC yaml file to use the newly created disk as its backend:

```bash
sed -i "s|@@AZURE_DISK_ID@@|$azure_disk_id|g" src/csi-wrapper/examples/azure/disk/static-pvc.yaml
```
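You can confirm the placeholder was replaced; the command should print the line containing the disk's resource ID:

```bash
# no output means the sed substitution did not match
grep "${azure_disk_id}" src/csi-wrapper/examples/azure/disk/static-pvc.yaml
```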
- Create a PVC that uses the `azuredisk-csi-driver` and the statically provisioned disk:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/disk/static-pvc.yaml
```
- Wait for the PVC status to become `Bound`:

```console
$ kubectl get pvc
NAME            STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS         VOLUMEATTRIBUTESCLASS   AGE
pvc-azuredisk   Bound    pv-azuredisk   10Gi       RWO            azure-disk-storage   <unset>                 36s
```
- Create the nginx peer-pod demo with the `podvm-wrapper` and `azuredisk-csi-driver` containers:

```bash
kubectl apply -f src/csi-wrapper/examples/azure/disk/nginx-kata-with-my-pvc-and-csi-wrapper.yaml
```
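Again, the peer-pod VM may take a few minutes to boot; you can wait for the pod to become Ready first:

```bash
# blocks until the peer pod is Ready, or errors out after 10 minutes
kubectl wait --for=condition=Ready pod/nginx-pv-disk --timeout=10m
```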
- Exec into the container and check the mount:

```console
$ kubectl exec nginx-pv-disk -c nginx -i -t -- sh
# mount | grep mount-path
/dev/sdb on /mount-path type ext4 (rw,relatime)
```
Note: We can see there's an ext4 mount at `/mount-path`, as expected.
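As a final check, you can write through the mount and read the data back (a minimal sketch using the pod, container, and mount path from the steps above; the file name `test.txt` is arbitrary):

```bash
# writes a file onto the attached disk and reads it back; should print "hello"
kubectl exec nginx-pv-disk -c nginx -- sh -c 'echo hello > /mount-path/test.txt && cat /mount-path/test.txt'
```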