For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an {product-title} cluster.
When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel.
This feature keeps the module up to date on each node by:
- Adding a systemd service to each node that starts at boot time to detect whether a new kernel has been installed.
- Rebuilding the module and installing it into the kernel if a new kernel is detected.
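Conceptually, the boot-time check resembles the following sketch. This is illustrative only, not the actual kmods-via-containers implementation; the state-file path is hypothetical:

  # Illustrative sketch only -- not the actual kmods-via-containers code.
  KMOD_NAME=simple-kmod
  RUNNING_KERNEL=$(uname -r)                    # kernel booted right now
  STATE_FILE=/var/lib/${KMOD_NAME}.built-for    # hypothetical state file

  if [ "$(cat "${STATE_FILE}" 2>/dev/null)" != "${RUNNING_KERNEL}" ]; then
      # A new kernel is running: rebuild the module for it, then record it
      kmods-via-containers build "${KMOD_NAME}" "${RUNNING_KERNEL}"
      echo "${RUNNING_KERNEL}" > "${STATE_FILE}"
  fi
  modprobe "${KMOD_NAME}"                       # load the (re)built module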
For information on the software needed for this procedure, see the kmods-via-containers GitHub site.
A few important issues to keep in mind:
- This procedure is Technology Preview.
- Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure.
- Third-party kernel modules you might add through these procedures are not supported by Red Hat.
- In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription.
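For example, each node could be pointed at such a repository with a standard yum .repo file. The baseurl below is a placeholder for your own repository:

  # /etc/yum.repos.d/rhel-8-kernel.repo -- baseurl is a placeholder
  [rhel-8-baseos]
  name=RHEL 8 BaseOS (kernel and related packages)
  baseurl=http://yum.example.com/rhel8/BaseOS/
  enabled=1
  gpgcheck=1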
Before deploying kernel modules to your {product-title} cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmods-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following:
- Get a RHEL 8 system, then register and subscribe it:

  # subscription-manager register
  Username: yourname
  Password: ***************
  # subscription-manager attach --auto
- Install software needed to build the software and container:

  # yum install podman make git -y
- Clone the kmods-via-containers repository:

  $ mkdir kmods; cd kmods
  $ git clone https://github.com/kmods-via-containers/kmods-via-containers
- Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-containers systemd service and loads it:

  $ cd kmods-via-containers/
  $ sudo make install
  $ sudo systemctl daemon-reload
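As an optional sanity check, you can confirm that the template unit was registered; the [email protected] template should appear in the list:

  $ sudo systemctl list-unit-files | grep kmods-via-containers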
- Get the kernel module source code. The source code might be for a third-party module that you do not control but that is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example, which can be cloned to your system as follows:

  $ cd ..
  $ git clone https://github.com/kmods-via-containers/kvc-simple-kmod
- Edit the configuration file, simple-kmod.conf in this example, and change the name of the Dockerfile to Dockerfile.rhel so the file appears as shown here:

  $ cd kvc-simple-kmod
  $ cat simple-kmod.conf
  KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git"
  KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel
  KMOD_SOFTWARE_VERSION=dd1a7d4
  KMOD_NAMES="simple-kmod simple-procfs-kmod"
- Create an instance of [email protected] for your kernel module, simple-kmod in this example, and enable it:

  $ sudo make install
  $ sudo kmods-via-containers build simple-kmod $(uname -r)
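The build produces a local container image whose tag encodes the kernel version (the same image name appears in the spkut output later in this procedure). To verify that it exists:

  $ sudo podman images | grep simple-kmod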
- Enable and start the systemd service, then check the status:

  $ sudo systemctl enable [email protected]
  $ sudo systemctl start [email protected]
  $ sudo systemctl status [email protected]
  ● [email protected] - Kmods Via Containers - simple-kmod
     Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled)
     Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago...
- To confirm that the kernel modules are loaded, use the lsmod command to list the modules:

  $ lsmod | grep simple_
  simple_procfs_kmod 16384  0
  simple_kmod 16384  0
- The simple-kmod example has a few other ways to test that it is working. Look for a "Hello world" message in the kernel ring buffer with dmesg:

  $ dmesg | grep 'Hello world'
  [ 6420.761332] Hello world from simple_kmod.
Check the value of simple-procfs-kmod in /proc:

  $ sudo cat /proc/simple-procfs-kmod
  simple-procfs-kmod number = 0
Run the spkut command to get more information from the module:

  $ sudo spkut 44
  KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64
  Running userspace wrapper using the kernel module container...
  + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44
  simple-procfs-kmod number = 0
  simple-procfs-kmod number = 44
Going forward, when the system boots, this service checks whether a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it simply loads it.
Depending on whether or not you must have the kernel module in place when the {product-title} cluster first boots, you can set up the kernel modules to be deployed in one of two ways:
- Provision kernel modules at cluster install time (day-1): You can create the content as a MachineConfig and provide it to openshift-install by including it with a set of manifest files.
- Provision kernel modules via the Machine Config Operator (day-2): If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO).
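For the day-1 approach, the MachineConfig you generate later in this procedure must be placed with the installer manifests before the cluster is created. A minimal sketch, assuming an install directory named mycluster:

  $ openshift-install create manifests --dir=mycluster
  $ cp 99_simple-kmod.yaml mycluster/openshift/
  $ openshift-install create cluster --dir=mycluster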
In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content.
- Provide RHEL entitlements to each node.
- Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory, and copy them to the same location as the other files you provide when you build your Ignition config.
- Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. This must include new kernel packages as they are needed to match newly installed kernels.
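The following Dockerfile fragment illustrates that third approach. It is a sketch only: the base image, repo file name, and package list are assumptions, not the actual kvc-simple-kmod Dockerfile:

  # Illustrative fragment; registry path and repo file are placeholders
  FROM registry.access.redhat.com/ubi8/ubi
  ARG KVER
  # Point the build container at a repository that carries kernel packages
  COPY my-kernel-repo.repo /etc/yum.repos.d/
  # Install the headers and toolchain matching the target kernel
  RUN yum -y install kernel-devel-${KVER} gcc make elfutils-libelf-devel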
By packaging kernel module software with a MachineConfig, you can deliver that software to worker or master nodes at installation time or via the Machine Config Operator.
First create a base Ignition config that you would like to use. At installation time, the Ignition config will contain the SSH public key to add to the authorized_keys file for the core user on the cluster. To add the MachineConfig later via the MCO instead, the SSH public key is not required. For both types of deployments, the example simple-kmod service creates a systemd unit file, require-kvc-simple-kmod.service, which requires [email protected].
Note: The systemd unit is a workaround for an upstream bug and makes sure that the [email protected] service gets started.
- Get a RHEL 8 system, then register and subscribe it:

  # subscription-manager register
  Username: yourname
  Password: ***************
  # subscription-manager attach --auto
- Install the software needed for the build:

  # yum install podman make git -y
- Create an Ignition config file that creates a systemd unit file:

  $ mkdir kmods; cd kmods
  $ cat <<EOF > ./baseconfig.ign
  {
    "ignition": { "version": "2.2.0" },
    "passwd": {
      "users": [
        {
          "name": "core",
          "groups": ["sudo"],
          "sshAuthorizedKeys": [
            "ssh-rsa AAAA"
          ]
        }
      ]
    },
    "systemd": {
      "units": [{
        "name": "require-kvc-simple-kmod.service",
        "enabled": true,
        "contents": "[Unit]\[email protected]\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n\n[Install]\nWantedBy=multi-user.target"
      }]
    }
  }
  EOF
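For readability, the escaped contents string above corresponds to this unit file:

  [Unit]
  [email protected]

  [Service]
  Type=oneshot
  ExecStart=/usr/bin/true

  [Install]
  WantedBy=multi-user.target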
Note: You must add your public SSH key to the baseconfig.ign file to use the file during openshift-install. The public SSH key is not needed if you create the MachineConfig via the MCO.
- Create a base MCO YAML snippet that looks like the following:

  $ cat <<EOF > mc-base.yaml
  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfig
  metadata:
    labels:
      machineconfiguration.openshift.io/role: worker
    name: 10-kvc-simple-kmod
  spec:
    config:
  EOF
Note: The mc-base.yaml is set to deploy the kernel module on worker nodes. To deploy on master nodes, change the role from worker to master. To do both, you could repeat the whole procedure using different file names for the two types of deployments.
- Get the kmods-via-containers software:

  $ git clone https://github.com/kmods-via-containers/kmods-via-containers

- Get your module software. In this example, kvc-simple-kmod is used:

  $ git clone https://github.com/kmods-via-containers/kvc-simple-kmod
Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier:
$ FAKEROOT=$(mktemp -d) $ cd kmods-via-containers $ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/ $ cd ../kvc-simple-kmod $ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
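Before going further, you can inspect what will be delivered; the exact paths depend on the Makefiles in the two repositories:

  $ find ${FAKEROOT} -type f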
- Get a tool called filetranspiler and dependent software:

  $ cd ..
  $ sudo yum install -y python3
  $ git clone https://github.com/ashcrow/filetranspiler.git
- Generate a final MachineConfig YAML (99_simple-kmod.yaml) that includes the base Ignition config, the base MachineConfig, and the fakeroot directory with the files you would like to deliver. The sed command indents the Ignition output so it nests under the config: field:

  $ ./filetranspiler/filetranspile -i ./baseconfig.ign \
      -f ${FAKEROOT} --format=yaml --dereference-symlinks \
      | sed 's/^/    /' | (cat mc-base.yaml -) > 99_simple-kmod.yaml
- If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows:

  $ oc create -f 99_simple-kmod.yaml
  Your nodes will start the [email protected] service and the kernel modules will be loaded.
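If you applied the file through the MCO, you can watch the configuration roll out to the worker pool with standard commands:

  $ oc get machineconfig | grep simple-kmod
  $ oc get machineconfigpool worker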
To confirm that the kernel modules are loaded, you can log in to a node (using
oc debug node/<openshift-node>
, thenchroot /host
). To list the modules, use thelsmod
command:$ lsmod | grep simple_ simple_procfs_kmod 16384 0 simple_kmod 16384 0
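From the same debug shell, you can also check the service that built and loaded the modules:

  $ systemctl status [email protected]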