What happened:
I have encountered an issue where the AppArmor configuration specified in a Pod specification is not applied. This appears to be caused by the Kubernetes library versions currently used in the HAMi project, which do not support defining AppArmor profiles in the securityContext of Pods or containers.
Starting with Kubernetes v1.30, AppArmor profiles can be specified directly in the securityContext field of Pods and containers, which simplifies configuration. In earlier versions, AppArmor settings had to be applied through annotations, an approach that is now deprecated.
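For illustration, a minimal Pod manifest using the v1.30+ field could look like the following sketch (the pod name, container name, and localhost profile name are placeholders, not values from this project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo          # placeholder name
spec:
  securityContext:
    # Pod-level profile, applied to all containers (requires Kubernetes v1.30+)
    appArmorProfile:
      type: RuntimeDefault
  containers:
  - name: demo                 # placeholder name
    image: busybox:1.36
    command: ["sleep", "3600"]
    securityContext:
      # Container-level profile overrides the pod-level setting
      appArmorProfile:
        type: Localhost
        localhostProfile: my-deny-write-profile   # placeholder profile loaded on the node
```

Before v1.30, the equivalent container-level setting had to be expressed with the now-deprecated `container.apparmor.security.beta.kubernetes.io/<container_name>` annotation instead of a securityContext field.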
What you expected to happen:
To resolve this issue and enable proper AppArmor configuration, I recommend upgrading the following dependencies. The upgrade makes the appArmorProfile field of securityContext available, so that AppArmor settings are correctly applied to Pods and containers.
k8s.io/api v0.29.3 -> v0.31.1+
k8s.io/apimachinery v0.29.3 -> v0.31.1+
k8s.io/client-go v0.29.3 -> v0.31.1+
k8s.io/kube-scheduler v0.28.3 -> v0.31.1+
k8s.io/kubelet v0.29.3 -> v0.31.1+
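The corresponding go.mod change might look roughly like this (a sketch, not the project's actual go.mod; v0.31.1 is used here as the target, and any v0.31.x release carrying the appArmorProfile API should work):

```go
// go.mod (sketch): bump the Kubernetes libraries to a v0.31.x release
require (
	k8s.io/api            v0.31.1
	k8s.io/apimachinery   v0.31.1
	k8s.io/client-go      v0.31.1
	k8s.io/kube-scheduler v0.31.1
	k8s.io/kubelet        v0.31.1
)
```

After editing, running `go mod tidy` would update the transitive dependencies to match.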
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
docker version
uname -a