The Intel GPU plugin facilitates Kubernetes workload offloading by providing access to Intel discrete (Xe) and integrated GPU hardware device files.
Use cases include, but are not limited to:
- Media transcode
- Media analytics
- Cloud gaming
- High performance computing
- AI training and inference
For example, containers with the Intel media driver (and components built on it) can offload video transcoding operations, and containers with the Intel OpenCL / oneAPI Level Zero backend libraries can offload compute operations to the GPU.
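As a sketch of how such a workload consumes the device, a pod can request the `gpu.intel.com/i915` resource advertised by the plugin; the image name below is a hypothetical placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: transcode
    # Hypothetical image containing the Intel media driver stack.
    image: example.com/media-transcode:latest
    resources:
      limits:
        gpu.intel.com/i915: 1  # ask the GPU plugin for one device
```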
Flag | Argument | Default | Meaning
:--- | :------- | :------ | :------
`-enable-monitoring` | - | disabled | Enable the `i915_monitoring` resource that provides access to all Intel GPU devices on the node
`-resource-manager` | - | disabled | Enable fractional resource management; see also its dependencies below
`-shared-dev-num` | int | 1 | Number of containers that can share the same GPU device
The plugin also accepts a number of other arguments (common to all plugins) related to logging. Please use the `-h` option to see the complete list of logging-related options.
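These flags are set as container arguments in the plugin's DaemonSet spec. A minimal sketch of the relevant fragment, with illustrative flag values:

```yaml
# Fragment of the plugin DaemonSet pod template; values are illustrative.
containers:
- name: intel-gpu-plugin
  image: intel/intel-gpu-plugin:devel
  args:
  - "-shared-dev-num=2"   # let two containers share each GPU device
  - "-enable-monitoring"  # expose the i915_monitoring resource
```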
The following sections detail how to obtain, build, deploy and test the GPU device plugin.
Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
Pre-built images of this component are available on Docker Hub. These images are built and uploaded automatically from the latest main branch of this repository.
Release-tagged images of the components are also available on Docker Hub, tagged with their release version numbers in the format `x.y.z`, corresponding to the branches and releases in this repository. Thus the easiest way to deploy the plugin in your cluster is to run this command:

```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref=<RELEASE_VERSION>
daemonset.apps/intel-gpu-plugin created
```

Where `<RELEASE_VERSION>` needs to be substituted with the desired release version, e.g. `v0.18.0`.
Alternatively, if your cluster runs Node Feature Discovery, you can deploy the device plugin only on nodes with an Intel GPU. The `nfd_labeled_nodes` kustomization adds a nodeSelector to the DaemonSet:
```bash
$ kubectl apply -k https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/nfd_labeled_nodes?ref=<RELEASE_VERSION>
daemonset.apps/intel-gpu-plugin created
```
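For reference, the effect of the overlay is roughly the nodeSelector below; the exact label name depends on your NFD setup, so treat it as an assumption and verify with `kubectl get nodes --show-labels`:

```yaml
# Assumed NFD-produced label; confirm the name on your cluster.
spec:
  template:
    spec:
      nodeSelector:
        intel.feature.node.kubernetes.io/gpu: "true"
```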
Nothing else is needed. But if you want to deploy a customized version of the plugin, read on.
```bash
$ export INTEL_DEVICE_PLUGINS_SRC=/path/to/intel-device-plugins-for-kubernetes
$ git clone https://github.com/intel/intel-device-plugins-for-kubernetes ${INTEL_DEVICE_PLUGINS_SRC}
```
To deploy the GPU plugin as a DaemonSet, you first need to build a container image for the plugin and ensure that it is visible to your nodes.
The following will use `docker` to build a local container image called `intel/intel-gpu-plugin` with the tag `devel`. The image build tool can be changed from the default `docker` by setting the `BUILDER` argument to the `Makefile`.
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make intel-gpu-plugin
...
Successfully tagged intel/intel-gpu-plugin:devel
```
You can then use the example DaemonSet YAML file provided to deploy the plugin. The default kustomization deploys the YAML as is:
```bash
$ kubectl apply -k deployments/gpu_plugin
daemonset.apps/intel-gpu-plugin created
```
Alternatively, if your cluster runs Node Feature Discovery, you can deploy the device plugin only on nodes with an Intel GPU. The `nfd_labeled_nodes` kustomization adds a nodeSelector to the DaemonSet:
```bash
$ kubectl apply -k deployments/gpu_plugin/overlays/nfd_labeled_nodes
daemonset.apps/intel-gpu-plugin created
```
With the experimental fractional resource feature you can use additional Kubernetes extended resources, such as GPU memory, which can then be consumed by deployments. Pods will then only be scheduled onto nodes where there is a sufficient amount of the extended resources for their containers.

Enabling the fractional resource feature isn't quite as simple as enabling the related command line flag. The DaemonSet needs additional RBAC permissions and access to the kubelet podresources gRPC service, plus there are other dependencies to take care of, which are explained below. For the RBAC permissions, gRPC service access and the flag enabling, it is recommended to use kustomization by running:
```bash
$ kubectl apply -k deployments/gpu_plugin/overlays/fractional_resources
serviceaccount/resource-reader-sa created
clusterrole.rbac.authorization.k8s.io/resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/resource-reader-rb created
daemonset.apps/intel-gpu-plugin created
```
Usage of these fractional GPU resources requires that the cluster has node extended resources with the name prefix `gpu.intel.com/`. Those can be created with NFD by running the hook installed by the plugin initcontainer. When fractional resources are enabled, the plugin lets a scheduler extender make card selection decisions based on resource availability and the amount of extended resources requested in the pod spec.

The scheduler extender then needs to annotate the pod objects with unique increasing numeric timestamps in the `gas-ts` annotation and container card selections in the `gas-container-cards` annotation. The latter uses `|` as the container separator and `,` as the card separator. For example, for a pod with two containers where both containers get two cards: `gas-container-cards:card0,card1|card2,card3`. Enabling the fractional-resource support in the plugin without running such an annotation-adding scheduler extender in the cluster will only slow down GPU deployments, so do not enable this feature unnecessarily.
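Putting the annotation format together, a pod processed by such a scheduler extender would carry metadata along these lines (the timestamp value and card names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-gpu-pod
  annotations:
    gas-ts: "1616142395"  # unique increasing numeric timestamp (illustrative)
    # '|' separates containers, ',' separates cards within a container.
    gas-container-cards: "card0,card1|card2,card3"
```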
Note: It is also possible to run the GPU device plugin using a non-root user. To do this, the nodes' DAC rules must be configured to allow device plugin socket creation and kubelet registration. Furthermore, the deployment's `securityContext` must be configured with an appropriate `runAsUser`/`runAsGroup`.
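A minimal sketch of such a `securityContext`, assuming placeholder UID/GID values that must match the DAC rules configured on the nodes:

```yaml
# Placeholder IDs; pick values matching your nodes' device and socket permissions.
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
```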
For development purposes, it is sometimes convenient to deploy the plugin 'by hand' on a node. In this case, you do not need to build the complete container image, and can build just the plugin.
First we build the plugin:
```bash
$ cd ${INTEL_DEVICE_PLUGINS_SRC}
$ make gpu_plugin
```
Now we can run the plugin directly on the node:
```bash
$ sudo -E ${INTEL_DEVICE_PLUGINS_SRC}/cmd/gpu_plugin/gpu_plugin
device-plugin start server at: /var/lib/kubelet/device-plugins/gpu.intel.com-i915.sock
device-plugin registered
```
You can verify the plugin has been registered with the expected nodes by searching for the relevant resource allocation status on the nodes:
```bash
$ kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\n'}{' i915: '}{.status.allocatable.gpu\.intel\.com/i915}{'\n'}"
master
 i915: 1
```
We can test that the plugin is working by deploying the provided example OpenCL image with FFT offload enabled.
- Build a Docker image with an example program offloading FFT computations to GPU:

  ```bash
  $ cd ${INTEL_DEVICE_PLUGINS_SRC}/demo
  $ ./build-image.sh ubuntu-demo-opencl
  ...
  Successfully tagged ubuntu-demo-opencl:devel
  ```
- Create a job running unit tests off the local Docker image:

  ```bash
  $ kubectl apply -f ${INTEL_DEVICE_PLUGINS_SRC}/demo/intelgpu-job.yaml
  job.batch/intelgpu-demo-job created
  ```
- Review the job's logs:

  ```bash
  $ kubectl get pods | fgrep intelgpu
  # substitute the 'xxxxx' below for the pod name listed in the above
  $ kubectl logs intelgpu-demo-job-xxxxx
  + WORK_DIR=/root/6-1/fft
  + cd /root/6-1/fft
  + ./fft
  + uprightdiff --format json output.pgm /expected.pgm diff.pgm
  + cat diff.json
  + jq .modifiedArea
  + MODIFIED_AREA=0
  + [ 0 -gt 10 ]
  + echo Success
  Success
  ```
If the pod did not successfully launch, possibly because it could not obtain the GPU resource, it will be stuck in the `Pending` status:

```bash
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
intelgpu-demo-job-xxxxx   0/1     Pending   0          8s
```
This can be verified by checking the Events of the pod:
```bash
$ kubectl describe pod intelgpu-demo-job-xxxxx
...
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 Insufficient gpu.intel.com/i915.
```