To successfully build the Intel® Tiber™ Broadcast Suite, you need to follow a series of steps involving BIOS configuration, driver installation, host machine setup, and package installation. Depending on your preference, you can install the suite as a Docker application (the recommended method) or directly on a bare metal machine.
- Build guide
- Table of contents
  1. Prerequisites
  2. Install Intel® Tiber™ Broadcast Suite
  3. (Optional) Install Media Proxy
  4. Preparation to Run Intel® Tiber™ Broadcast Suite
  5. Running the Image
The following steps must be performed before running Intel® Tiber™ Broadcast Suite on a host with the Ubuntu operating system installed.
Note: It is recommended to properly set up BIOS settings before proceeding. Depending on the manufacturer, labels may vary. Please consult an instruction manual or ask a platform vendor for detailed steps.
The following technologies must be enabled for Media Transport Library (MTL) to function properly:
- Intel® Virtualization for Directed I/O (VT-d)
- Single-root input/output virtualization (SR-IOV)
- For 200 GbE throughput on the Intel® Ethernet Network Adapter E810-2CQDA2 card, PCIe lane bifurcation is required.
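Once the platform boots, a quick OS-level sanity check can hint at whether these features are active (a hedged sketch; exact kernel messages and capability strings vary by platform and kernel version):

```bash
# VT-d / IOMMU: DMAR messages usually appear when VT-d is enabled
sudo dmesg | grep -i -e dmar -e iommu
# SR-IOV: the NIC should advertise the SR-IOV capability
sudo lspci -vvv | grep -i "single root i/o virtualization"
```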
Note: This step is optional if you want to install Intel® Tiber™ Broadcast Suite locally.
To install the Docker environment, please refer to the official Docker Engine on Ubuntu installation manual's Install using the apt repository section.
Note: Do not skip the `docker-buildx-plugin` installation, otherwise the `build.sh` script may not run properly.
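For convenience, a condensed sketch of that apt-repository flow (defer to the official manual if it differs; package names and key paths may change over time):

```bash
# Add Docker's official GPG key and apt repository, then install the engine
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# docker-buildx-plugin is required by build.sh (see the note above)
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```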
Depending on the network environment, you may need to set up a proxy. In that case, please refer to the Configure the Docker client section of the Configure Docker to use a proxy server guide.
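A minimal sketch of that client-side configuration, assuming a hypothetical proxy at proxy.example.com:3128:

```bash
# Write Docker client proxy settings; replace the proxy values with your own
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.0/8"
    }
  }
}
EOF
```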
To install the Flex GPU driver, follow the 1.4.3. Ubuntu Install Steps part of the Installation guide for Intel® Data Center GPUs.
Note: If prompted with `Unable to locate package`, please ensure the repository key `intel-graphics.key` is properly dearmored and installed as `/usr/share/keyrings/intel-graphics.gpg`.
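A hedged sketch of re-installing the key (the key URL follows Intel's installation guide at the time of writing and may change):

```bash
# Download the ASCII-armored key and install it in binary (dearmored) form
wget -qO- https://repositories.intel.com/gpu/intel-graphics.key | \
  sudo gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
```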
Use the `vainfo` command to check the GPU installation:

```bash
sudo vainfo
```
If you are using an Nvidia GPU, please follow the steps below:

```bash
sudo apt install --install-suggests nvidia-driver-550-server
sudo apt install nvidia-utils-550-server
```

In case of any issues, please follow the Nvidia GPU driver install steps.
Note: The supported Nvidia driver version compatible with the packages inside the Docker container is:

- Driver Version: 550.90.07
- CUDA Version: 12.4
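To confirm the installed driver matches the supported version, `nvidia-smi` prints both the driver and CUDA versions in its header:

```bash
nvidia-smi
```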
- If you didn't do it already, download the project from the GitHub repo:

  ```bash
  git clone --recurse-submodules https://github.com/OpenVisualCloud/Intel-Tiber-Broadcast-Suite
  cd Intel-Tiber-Broadcast-Suite
  ```
- Install the patched ice driver for Intel® E810 Series Ethernet Adapter NICs.

  - Download the ice driver:

    ```bash
    mkdir -p ${HOME}/ice_patched
    . versions.env && wget -qO- $LINK_ICE_DRIVER | tar -xz -C ${HOME}/ice_patched
    ```
  - Patch the ice driver:

    ```bash
    # Ensure the target directory exists
    mkdir -p ${HOME}/Media-Transport-Library
    # Download Media Transport Library:
    . versions.env && curl -Lf https://github.com/OpenVisualCloud/Media-Transport-Library/archive/refs/tags/${MTL_VER}.tar.gz | tar -zx --strip-components=1 -C ${HOME}/Media-Transport-Library
    . versions.env && git -C ${HOME}/ice_patched/ice-* apply ~/Media-Transport-Library/patches/ice_drv/${ICE_VER}/*.patch
    cd ${HOME}/ice_patched/ice-*
    ```
  - Install the ice driver:

    ```bash
    cd src
    make
    sudo make install
    # sudo rmmod irdma 2>/dev/null
    sudo rmmod ice
    sudo modprobe ice
    cd -
    ```
  - Check if the driver is installed properly, and if so, clean up:

    ```bash
    # should give you output
    sudo dmesg | grep "Intel(R) Ethernet Connection E800 Series Linux Driver - version Kahawai"
    rm -rf ${HOME}/ice_patched ${HOME}/Media-Transport-Library
    ```
  - Update the firmware:

    ```bash
    . versions.env && wget ${LINK_ICE_FIRMWARE}
    unzip Release_*.zip
    cd NVMUpdatePackage/E810
    tar xvf E810_NVMUpdatePackage_v*_Linux.tar.gz
    cd E810/Linux_x64/
    sudo ./nvmupdate64e
    ```
  - Verify the installation:

    ```bash
    # replace with your device
    ethtool -i ens801f0
    ```

    The result should look like:

    ```text
    driver: ice
    version: Kahawai_1.14.9_20240613
    firmware-version: 4.60 0x8001e8dc 1.3682.0
    ```

Note: If you encounter any problems, please refer to the E810 driver guide.
- If you have already enabled IOMMU, you can skip this step. To check whether IOMMU is enabled, verify that there are IOMMU groups listed under the `/sys/kernel/iommu_groups/` directory. If no groups are found, IOMMU is not enabled.

  ```bash
  ls -l /sys/kernel/iommu_groups/
  ```
The steps to enable IOMMU in your BIOS/UEFI may vary depending on the manufacturer and model of your motherboard. Here are general steps that should guide you:
- Restart your computer. During the boot process, you'll need to press a specific key to enter the BIOS/UEFI setup. This key varies depending on your system's manufacturer; it's often one of the function keys (like F2, F10, F12), the ESC key, or the DEL key.

- Navigate to the advanced settings. Once you're in the BIOS/UEFI setup menu, look for a section with a name like "Advanced", "Advanced Options", or "Advanced Settings".

- Look for the IOMMU setting. Within the advanced settings, look for an option related to IOMMU. It might be listed under CPU Configuration or Chipset Configuration, depending on your system. For Intel systems, it's typically labeled "VT-d" (Virtualization Technology for Directed I/O). Once you've located the appropriate option, change the setting to "Enabled".

- Save your changes and exit. There will typically be an option to "Save & Exit" or "Save Changes and Reset". Select this to save your changes and restart the computer.
After enabling IOMMU in the BIOS, you need to enable it in your operating system as well.
Edit the `GRUB_CMDLINE_LINUX_DEFAULT` entry in the `/etc/default/grub` file and append the parameters below to it if they are not already there:

```bash
sudo vim /etc/default/grub
```

```text
intel_iommu=on iommu=pt
```

then:

```bash
sudo update-grub
sudo reboot
```
On distributions that use grubby (such as CentOS Stream or RHEL), the same kernel arguments can be added with:

```bash
sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
sudo reboot
```
For non-Intel devices, contact the vendor for how to enable IOMMU.
After rebooting, verify that IOMMU is enabled:

```bash
ls -l /sys/kernel/iommu_groups/
```

If no IOMMU groups are found under the `/sys/kernel/iommu_groups/` directory, it is likely that the previous two steps were not completed as expected. You can use the following two commands to identify which part was missed:

```bash
# Check if "intel_iommu=on iommu=pt" is included
cat /proc/cmdline
# Check if CPU flags have vmx feature
lscpu | grep vmx
```
Skip this step on Ubuntu, since the default RLIMIT_MEMLOCK is already set to unlimited.

Some operating systems, including CentOS Stream and RHEL 9, have a small RLIMIT_MEMLOCK limit (the amount of pinned pages a process is allowed to have), which will cause DMA remapping to fail at runtime. Edit `/etc/security/limits.conf` and append the two lines below at the end of the file, replacing `<USER>` with the username currently logged in:

```text
<USER> hard memlock unlimited
<USER> soft memlock unlimited
```

Reboot the system to let the settings take effect.
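After logging back in, you can confirm the new limit took effect; `ulimit -l` reports the maximum locked memory for the current shell and should print `unlimited`:

```bash
ulimit -l
```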
The Precision Time Protocol (PTP) facilitates global timing accuracy in the microsecond range for all essences. Typically, a PTP grandmaster is deployed within the network, and clients synchronize with it using tools like ptp4l. This library includes its own PTP implementation, and a sample application offers the option to enable it. Please refer to section Built-in PTP for instructions on how to enable it.
By default, the built-in PTP feature is disabled, and the PTP clock relies on the system time source of the user application (clock_gettime). However, if the built-in PTP is enabled, the internal NIC time will be selected as the PTP source.
First, run ptp4l to sync the PHC time with the grandmaster, adjusting the interface to match your setup:

```bash
sudo ptp4l -i ens801f2 -m -s -H
```

Then run phc2sys to sync the PHC time to the system time. Please make sure the NTP service is disabled, as it conflicts with phc2sys:

```bash
sudo phc2sys -s ens801f2 -m -w
```
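On systemd-based systems, one way to disable NTP time synchronization before starting phc2sys (a hedged sketch; your distribution may manage NTP differently):

```bash
# Stop systemd's NTP synchronization, then confirm it shows as inactive
sudo timedatectl set-ntp false
timedatectl
```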
This project includes built-in support for the Precision Time Protocol (PTP), based on the hardware timesync feature of the Network Interface Card (NIC). This combination allows for achieving a PTP time clock source with an accuracy of approximately 30 ns.
To enable this feature in the RxTxApp sample application, use the `--ptp` argument. The control for the built-in PTP feature is the `MTL_FLAG_PTP_ENABLE` flag in the `mtl_init_params` structure.
Note: Currently, the VF (Virtual Function) does not support the hardware timesync feature. Therefore, for VF deployment, the timestamp of the transmitted (TX) and received (RX) packets is read from the CPU TSC (TimeStamp Counter) instead. In this case, it is not possible to obtain a stable delta in the PTP adjustment, and the maximum accuracy achieved will be up to 1us.
Note: This method is recommended over Option #2: layers are built in parallel, and cross-compatibility is possible.
Access the project directory:

```bash
cd Intel-Tiber-Broadcast-Suite
```

Install dependencies:

```bash
sudo apt-get update
sudo apt-get install meson python3-pyelftools libnuma-dev
```
Run the `build.sh` script.

Note: For the `build.sh` script to run without errors, `docker-buildx-plugin` must be installed. The error thrown without the plugin does not point to that fact, but instead suggests that the flags are incorrect. See section 1.2.1. Install Docker build environment for installation details.

```bash
./build.sh
```
You can install the Intel® Tiber™ Broadcast Suite locally on bare metal. This installation allows you to skip installing Docker altogether.
```bash
./build.sh -l
```
Visit the Intel® Tiber™ Broadcast Suite image page on Docker Hub at https://hub.docker.com/r/intel/intel-tiber-broadcast-suite/ to select the most appropriate version.
Pull the Intel® Tiber™ Broadcast Suite image from Docker Hub:

```bash
docker pull intel/intel-tiber-broadcast-suite:latest
```
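Later examples in this guide reference the image as `video_production_image`; if you use the pulled image instead of building it, one option is to add that tag locally (a hedged convenience, not a required step):

```bash
# Local alias so commands that reference video_production_image work with the pulled image
docker tag intel/intel-tiber-broadcast-suite:latest video_production_image
```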
Note: The method below does not require buildx, but it lacks cross-compatibility and may prolong the build process.
- Download, patch, build, and install DPDK from source code.

  - Download and extract DPDK and MTL:

    ```bash
    # Ensure the target directories exist (assumption: they are not created elsewhere)
    mkdir -p ${HOME}/Media-Transport-Library dpdk
    . versions.env && curl -Lf https://github.com/OpenVisualCloud/Media-Transport-Library/archive/refs/tags/${MTL_VER}.tar.gz | tar -zx --strip-components=1 -C ${HOME}/Media-Transport-Library
    . versions.env && curl -Lf https://github.com/DPDK/dpdk/archive/refs/tags/v${DPDK_VER}.tar.gz | tar -zx --strip-components=1 -C dpdk
    ```
  - Apply patches from the Media Transport Library:

    ```bash
    # Apply patches:
    . versions.env && cd dpdk && git apply ${HOME}/Media-Transport-Library/patches/dpdk/$DPDK_VER/*.patch
    ```
  - Build and install DPDK:

    ```bash
    # Prepare the build directory:
    meson build
    # Build DPDK:
    ninja -C build
    # Install DPDK:
    sudo ninja -C build install
    ```
  - Clean up:

    ```bash
    cd ..
    rm -drf dpdk
    ```
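  - Optionally verify the installation (a hedged check; it assumes DPDK's `libdpdk.pc` landed where pkg-config can find it):

    ```bash
    # Refresh the linker cache and query the installed DPDK version
    sudo ldconfig
    pkg-config --modversion libdpdk
    ```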
- Build the image using the Dockerfile:

  ```bash
  docker build $(cat versions.env | xargs -I {} echo --build-arg {}) -t video_production_image -f Dockerfile .
  ```
- The number of cores used by make during the build can be changed with the flag `--build-arg nproc={number of proc}`:

  ```bash
  docker build $(cat versions.env | xargs -I {} echo --build-arg {}) --build-arg nproc=1 -t video_production_image -f Dockerfile .
  ```
- Build the MTL Manager docker image:

  ```bash
  cd ${HOME}/Media-Transport-Library/manager
  docker build --build-arg VERSION=1.0.0.TIBER -t mtl-manager:latest .
  cd -
  ```
To use Media Communications Mesh as a transport layer, make sure that Media Proxy is available on the host.
To install Media Proxy, please follow the steps below.
Note: This step is required, e.g., for the Media Proxy pipeline.
For a dockerized solution, please follow instructions on this page.
- Clone the Media Communications Mesh repository:

  ```bash
  git clone https://github.com/OpenVisualCloud/Media-Communications-Mesh.git
  cd Media-Communications-Mesh
  ```
- Install dependencies:

  - gRPC: refer to the gRPC documentation for installation instructions.
  - Install required packages:

    - Ubuntu/Debian:

      ```bash
      sudo apt-get update
      sudo apt-get install libbsd-dev cmake make rdma-core libibverbs-dev librdmacm-dev dracut
      ```

    - CentOS Stream:

      ```bash
      sudo yum install -y libbsd-devel cmake make rdma-core libibverbs-devel librdmacm-devel dracut
      ```

  - Install the irdma driver and libfabric:

    ```bash
    ./scripts/setup_rdma_env.sh install
    ```

  - Reboot.

  [!TIP] More information about libfabric installation can be found in Building and installing libfabric from source.
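  - Optionally, after rebooting, confirm that libfabric can see an RDMA provider (a hedged check; `fi_info` ships with libfabric, and the `verbs` provider name is an assumption for irdma-based setups):

    ```bash
    fi_info -p verbs
    ```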
- Build the Media Proxy binary:

  ```bash
  ./build.sh
  ```
Note: `first_run.sh` needs to be run after every reset of the machine.

From the root of the Intel® Tiber™ Broadcast Suite repository, execute the `first_run.sh` script, which sets up the hugepages and the locks for MTL, creates the E810 NIC's virtual functions, and runs the MtlManager docker container:

```bash
sudo -E ./first_run.sh | tee virtual_functions.txt
```
Note: Please ensure the command is executed with the `-E` switch so that all the necessary environment variables are passed through. Lack of the switch may cause the script to fail silently.
When running the Intel® Tiber™ Broadcast Suite locally, please execute `first_run.sh` with the `-l` argument:

```bash
sudo -E ./first_run.sh -l | tee virtual_functions.txt
```

This script will start the Mtl Manager locally. To avoid issues with core assignment in Docker, ensure that the Mtl Manager is running. The Mtl Manager is typically run within a Docker container, but the `-l` argument allows it to be executed directly from the terminal.
Note: Ensure that `MtlManager` is running when using the Intel® Tiber™ Broadcast Suite locally. You can check this by running `pgrep -l "MtlManager"`. If it is not running, start it with the command `sudo MtlManager`.
Note: To avoid unnecessary reruns, preserve the command's output as a file to record which interface was bound to which virtual functions.
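For example, to review the saved mapping later:

```bash
# Shows which interfaces were bound to which virtual functions
cat virtual_functions.txt
```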
Check that the Docker image responds:

```bash
docker run --rm -it --user=root --privileged video_production_image --help
```

For a local (bare metal) installation, check the build directly:

```bash
ffmpeg --help
```
Go to the Running Intel® Tiber™ Broadcast Suite Pipelines instruction for more details on how to run the image.