
Build guide

To successfully build the Intel® Tiber™ Broadcast Suite, you need to follow a series of steps involving BIOS configuration, driver installation, host machine setup, and package installation. Depending on your preference, you can install the suite as a Docker application (the recommended method) or directly on a bare metal machine.


1. Prerequisites

Steps to perform before running Intel® Tiber™ Broadcast Suite on a host with the Ubuntu operating system installed.

1.1. BIOS Settings

Note: It is recommended to properly set up BIOS settings before proceeding. Depending on the manufacturer, labels may vary. Please consult an instruction manual or ask a platform vendor for detailed steps.

The following technologies must be enabled for the Media Transport Library (MTL) to function properly:

  • Intel® Virtualization Technology for Directed I/O (VT-d)
  • Intel® Virtualization Technology (VT-x)

1.2. Install Docker

Note: This step is optional; you can skip it if you plan to install Intel® Tiber™ Broadcast Suite locally on bare metal.

1.2.1. Install Docker Build Environment

To install the Docker environment, please refer to the official Docker Engine on Ubuntu installation manual's Install using the apt repository section.

Note: Do not skip the docker-buildx-plugin installation; otherwise, the build.sh script may not run properly.

1.2.2. Setup Docker Proxy

Depending on your network environment, you may need to set up a proxy. In that case, please refer to the Configure the Docker client section of the Configure Docker to use a proxy server guide.
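For reference, the Docker client reads proxy settings from ~/.docker/config.json. Below is a minimal sketch assuming a proxy at proxy.example.com:3128 (a placeholder address; replace it with your proxy):

# caution: this overwrites any existing ~/.docker/config.json
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF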

1.3. Install GPU Driver

1.3.1. Intel Flex GPU Driver

To install the Flex GPU driver, follow the 1.4.3. Ubuntu Install Steps part of the Installation guide for Intel® Data Center GPUs.

Note: If prompted with Unable to locate package, please ensure the repository key intel-graphics.key is properly dearmored and installed as /usr/share/keyrings/intel-graphics.gpg.

Use the vainfo command to check the GPU installation:

sudo vainfo
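If vainfo reports no devices, a quick generic check is to verify that the driver exposed any render nodes:

ls -l /dev/dri/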

1.3.2. Nvidia GPU Driver

In case of using an Nvidia GPU, please follow the steps below:

sudo apt install --install-suggests nvidia-driver-550-server
sudo apt install nvidia-utils-550-server

In case of any issues, please follow the Nvidia GPU driver install steps.

Note: The supported Nvidia driver version compatible with the packages inside the Docker container is:

  • Driver Version: 550.90.07
  • CUDA Version: 12.4
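To confirm that the host driver matches these versions, you can query it with nvidia-smi (shipped with the driver utilities); on a matching setup the header reports the driver and CUDA versions listed above:

nvidia-smi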

1.4. Install and Configure Host's NIC Drivers and Related Software

  1. If you haven't done so already, download the project from the GitHub repository.

    git clone --recurse-submodules https://github.com/OpenVisualCloud/Intel-Tiber-Broadcast-Suite
    cd Intel-Tiber-Broadcast-Suite
  2. Install the patched ice driver for Intel® E810 Series Ethernet Adapter NICs.

    1. Download the ice driver.

      mkdir -p ${HOME}/ice_patched
      . versions.env && wget -qO- $LINK_ICE_DRIVER | tar -xz -C ${HOME}/ice_patched
    2. Patch the ice driver.

      # Ensure the target directory exists:
      mkdir -p ${HOME}/Media-Transport-Library
      # Download the Media Transport Library:
      . versions.env && curl -Lf https://github.com/OpenVisualCloud/Media-Transport-Library/archive/refs/tags/${MTL_VER}.tar.gz | tar -zx --strip-components=1 -C ${HOME}/Media-Transport-Library
      # Apply the MTL patches to the ice driver sources:
      . versions.env && git -C ${HOME}/ice_patched/ice-* apply ${HOME}/Media-Transport-Library/patches/ice_drv/${ICE_VER}/*.patch

      cd ${HOME}/ice_patched/ice-*
    3. Install the ice driver.

      cd src
      make
      sudo make install
      # sudo rmmod irdma 2>/dev/null
      sudo rmmod ice
      sudo modprobe ice
      cd -
    4. Check if the driver is installed properly and, if so, clean up.

      # should produce output if the driver is installed correctly
      sudo dmesg | grep "Intel(R) Ethernet Connection E800 Series Linux Driver - version Kahawai"
      rm -rf ${HOME}/ice_patched ${HOME}/Media-Transport-Library
    5. Update firmware.

      . versions.env && wget ${LINK_ICE_FIRMWARE}
      unzip Release_*.zip
      cd NVMUpdatePackage/E810
      tar xvf E810_NVMUpdatePackage_v*_Linux.tar.gz
      cd E810/Linux_x64/
      sudo ./nvmupdate64e
    6. Verify installation.

      # replace with your device
      ethtool -i ens801f0

      Result should look like:

      driver: ice
      version: Kahawai_1.14.9_20240613
      firmware-version: 4.60 0x8001e8dc 1.3682.0
      

    Note: if you encounter any problems, please refer to the E810 driver guide.

1.5. Configure VFIO (IOMMU) required by PMD-based DPDK

If you have already enabled IOMMU, you can skip this step. To check whether IOMMU is enabled, verify that IOMMU groups are listed under the /sys/kernel/iommu_groups/ directory. If no groups are found, IOMMU is not enabled.

ls -l /sys/kernel/iommu_groups/

Enable IOMMU (VT-d and VT-x) in BIOS

The steps to enable IOMMU in your BIOS/UEFI may vary depending on the manufacturer and model of your motherboard. Here are general steps that should guide you:

  1. Restart your computer. During the boot process, you'll need to press a specific key to enter the BIOS/UEFI setup. This key varies depending on your system's manufacturer. It's often one of the function keys (like F2, F10, F12), the ESC key, or the DEL key.

  2. Navigate to the advanced settings. Once you're in the BIOS/UEFI setup menu, look for a section with a name like "Advanced", "Advanced Options", or "Advanced Settings".

  3. Look for IOMMU setting. Within the advanced settings, look for an option related to IOMMU. It might be listed under CPU Configuration or Chipset Configuration, depending on your system. For Intel systems, it's typically labeled as "VT-d" (Virtualization Technology for Directed I/O). Once you've located the appropriate option, change the setting to "Enabled".

  4. Save your changes and exit. There will typically be an option to "Save & Exit" or "Save Changes and Reset". Select this to save your changes and restart the computer.

Enable IOMMU in Kernel

After enabling IOMMU in the BIOS, you need to enable it in your operating system as well.

Ubuntu/Debian

Edit the GRUB_CMDLINE_LINUX_DEFAULT entry in the /etc/default/grub file, appending the parameters below if they are not already present.

sudo vim /etc/default/grub
# append to the existing GRUB_CMDLINE_LINUX_DEFAULT value:
intel_iommu=on iommu=pt
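Alternatively, a non-interactive sketch that appends the parameters; it assumes the default double-quoted GRUB_CMDLINE_LINUX_DEFAULT="..." format and that the parameters are not already present (running it twice would duplicate them):

sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&intel_iommu=on iommu=pt /' /etc/default/grub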

then:

sudo update-grub
sudo reboot

CentOS/RHEL9

sudo grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"
sudo reboot

For non-Intel devices, contact the vendor for how to enable IOMMU.

Double Check iommu_groups Creation by Kernel After Reboot

ls -l /sys/kernel/iommu_groups/

If no IOMMU groups are found under the /sys/kernel/iommu_groups/ directory, it is likely that the previous two steps were not completed as expected. You can use the following two commands to identify which part was missed:

# Check if "intel_iommu=on iommu=pt" is included
cat /proc/cmdline
# Check if CPU flags have vmx feature
lscpu | grep vmx

Unlock RLIMIT_MEMLOCK for non-root Run

Skip this step for Ubuntu since the default RLIMIT_MEMLOCK is set to unlimited already.

Some operating systems, including CentOS Stream and RHEL 9, set a small RLIMIT_MEMLOCK limit (the amount of pinned pages a process is allowed to have), which will cause DMA remapping to fail at runtime. Please edit /etc/security/limits.conf and append the two lines below at the end of the file, replacing <USER> with the username currently logged in.

<USER>    hard   memlock           unlimited
<USER>    soft   memlock           unlimited

Reboot the system to let the settings take effect.
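After rebooting and logging back in, you can verify that the new limit is in effect:

# expect "unlimited"
ulimit -l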

1.6. (Optional) Configure PTP

The Precision Time Protocol (PTP) facilitates global timing accuracy in the microsecond range for all essences. Typically, a PTP grandmaster is deployed within the network, and clients synchronize with it using tools like ptp4l. This library includes its own PTP implementation, and a sample application offers the option to enable it. Please refer to section Built-in PTP for instructions on how to enable it.

By default, the built-in PTP feature is disabled, and the PTP clock relies on the system time source of the user application (clock_gettime). However, if the built-in PTP is enabled, the internal NIC time will be selected as the PTP source.

Linux ptp4l Setup to Sync System Time with Grandmaster

First, run ptp4l to sync the PHC time with the grandmaster; customize the interface to match your setup.

sudo ptp4l -i ens801f2 -m -s -H

Then run phc2sys to sync the PHC time to the system time. Please make sure the NTP service is disabled, as it conflicts with phc2sys; one way to disable it is shown after the command below.

sudo phc2sys -s ens801f2 -m -w
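On systemd-based distributions, the NTP-based synchronization can be disabled as follows (a generic sketch; setups using chrony or ntpd need those services stopped instead):

sudo timedatectl set-ntp false
# verify: the "NTP service" line should report "inactive"
timedatectl status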

Built-in PTP

This project includes built-in support for the Precision Time Protocol (PTP), which builds on the hardware timesync feature of the Network Interface Card (NIC). This combination achieves a PTP time clock source with an accuracy of approximately 30 ns.

To enable this feature in the RxTxApp sample application, use the --ptp argument. The control for the built-in PTP feature is the MTL_FLAG_PTP_ENABLE flag in the mtl_init_params structure.

Note: Currently, the VF (Virtual Function) does not support the hardware timesync feature. Therefore, for VF deployment, the timestamp of the transmitted (TX) and received (RX) packets is read from the CPU TSC (TimeStamp Counter) instead. In this case, it is not possible to obtain a stable delta in the PTP adjustment, and the maximum accuracy achieved will be up to 1us.

2. Install Intel® Tiber™ Broadcast Suite

Option #1: Build Docker Image from Dockerfile Using build.sh Script

Note: This method is recommended over the manual Docker build (Option #4): layers are built in parallel, and cross-platform builds are possible.

Access the project directory.

cd Intel-Tiber-Broadcast-Suite

Install Dependencies.

sudo apt-get update
sudo apt-get install meson python3-pyelftools libnuma-dev

Run build.sh script.

Note: For the build.sh script to run without errors, docker-buildx-plugin must be installed. Without the plugin, the error thrown does not point to the missing plugin; it instead reports that the flags are incorrect. See section 1.2.1. Install Docker Build Environment for installation details.

./build.sh

Option #2: Local Installation from Debian Packages

You can install the Intel® Tiber™ Broadcast Suite locally on bare metal. This installation allows you to skip installing Docker altogether.

./build.sh -l

Option #3: Install Docker Image from Docker Hub

Visit the Intel® Tiber™ Broadcast Suite image page on Docker Hub (https://hub.docker.com/r/intel/intel-tiber-broadcast-suite/) to select the most appropriate version.

Pull the Intel® Tiber™ Broadcast Suite image from Docker Hub:

docker pull intel/intel-tiber-broadcast-suite:latest
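Note: later commands in this guide (e.g. section 4.2. Test Docker Installation) reference an image named video_production_image. When using the image pulled from Docker Hub, either substitute the image name in those commands or add a local tag, for example:

docker tag intel/intel-tiber-broadcast-suite:latest video_production_image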

Option #4: Build Docker Image from Dockerfile Manually

Note: The method below does not require buildx, but it lacks cross-platform support and may prolong the build process.

  1. Download, Patch, Build, and Install DPDK from source code.

    1. Download and Extract DPDK and MTL:

      # Ensure the target directories exist:
      mkdir -p ${HOME}/Media-Transport-Library dpdk
      . versions.env && curl -Lf https://github.com/OpenVisualCloud/Media-Transport-Library/archive/refs/tags/${MTL_VER}.tar.gz | tar -zx --strip-components=1 -C ${HOME}/Media-Transport-Library
      . versions.env && curl -Lf https://github.com/DPDK/dpdk/archive/refs/tags/v${DPDK_VER}.tar.gz | tar -zx --strip-components=1 -C dpdk
    2. Apply Patches from Media Transport Library:

      # Apply patches:
      . versions.env && cd dpdk && git apply ${HOME}/Media-Transport-Library/patches/dpdk/$DPDK_VER/*.patch
    3. Build and Install DPDK:

      # Prepare the build directory:
      meson build
      
      # Build DPDK:
      ninja -C build
      
      # Install DPDK:
      sudo ninja -C build install
    4. Clean up:

      cd ..
      rm -rf dpdk
  2. Build image using Dockerfile:

    docker build $(cat versions.env | xargs -I {} echo --build-arg {}) -t video_production_image -f Dockerfile .
  3. The number of cores used by make during the build can be changed with the --build-arg nproc={number of proc} flag:

    docker build $(cat versions.env | xargs -I {} echo --build-arg {}) --build-arg nproc=1 -t video_production_image -f Dockerfile .
  4. Build the MTL Manager docker:

    cd ${HOME}/Media-Transport-Library/manager
    docker build --build-arg VERSION=1.0.0.TIBER -t mtl-manager:latest .
    cd -
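As a quick sanity check, you can verify that both images were created:

docker images | grep -E "video_production_image|mtl-manager"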

3. (Optional) Install Media Proxy

To use Media Communications Mesh as a transport layer, make sure that Media Proxy is available on the host.

To install Media Proxy, please follow the steps below.

Note: This step is required, for example, for the Media Proxy pipeline.

Option #1: (Recommended) Dockerized installation

For a dockerized solution, please follow the instructions on this page.

Option #2: Local installation

  1. Clone the Media Communications Mesh repository

    git clone https://github.com/OpenVisualCloud/Media-Communications-Mesh.git
    cd Media-Communications-Mesh
  2. Install Dependencies

    • gRPC: Refer to the gRPC documentation for installation instructions.
    • Install required packages:
      • Ubuntu/Debian
        sudo apt-get update
        sudo apt-get install libbsd-dev cmake make rdma-core libibverbs-dev librdmacm-dev dracut
      • CentOS Stream
        sudo yum install -y libbsd-devel cmake make rdma-core libibverbs-devel librdmacm-devel dracut
    • Install the irdma driver and libfabric:
      ./scripts/setup_rdma_env.sh install
    • Reboot.

    Tip: More information about libfabric installation can be found in Building and installing libfabric from source.

  3. Build the Media Proxy binary

    ./build.sh
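If the build succeeds, you can smoke-test the result; this is a sketch that assumes the build installs a media_proxy binary on the PATH (the binary name and flag are assumptions, not confirmed by this guide):

media_proxy -h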

4. Preparation to Run Intel® Tiber™ Broadcast Suite

4.1. First Run Script

Note: first_run.sh needs to be run after every reboot of the machine.

From the root of the Intel® Tiber™ Broadcast Suite repository, execute the first_run.sh script, which sets up hugepages, locks for MTL, and the E810 NICs' virtual functions, and runs the MtlManager Docker container:

sudo -E ./first_run.sh | tee virtual_functions.txt

Note: Please ensure the command is executed with the -E switch to preserve all the necessary environment variables. Without the switch, the script may fail silently.

When running the Intel® Tiber™ Broadcast Suite locally, please execute first_run.sh with the -l argument.

sudo -E ./first_run.sh -l | tee virtual_functions.txt

This script will start MtlManager locally. To avoid issues with core assignment in Docker, ensure that MtlManager is running. MtlManager is typically run within a Docker container, but the -l argument allows it to be executed directly from the terminal.

Note: Ensure that MtlManager is running when using the Intel® Tiber™ Broadcast Suite locally. You can check this by running pgrep -l "MtlManager". If it is not running, start it with the command sudo MtlManager.

Note: To avoid unnecessary reruns, preserve the command's output in a file to record which interface was bound to which virtual functions; a way to list the created VFs is shown below.
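A generic check that lists the VF devices on the PCI bus:

lspci | grep "Virtual Function"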

4.2. Test Docker Installation

docker run --rm -it --user=root --privileged video_production_image --help

4.3. Test Local Installation

ffmpeg --help
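Since the local installation is exercised through ffmpeg, you can additionally inspect the version banner and configuration flags to confirm which build the shell picks up (a generic check):

ffmpeg -version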

5. Running the Image

Go to the Running Intel® Tiber™ Broadcast Suite Pipelines instruction for more details on how to run the image.