diff --git a/docs/source/installation/build-from-source-linux.md b/docs/source/installation/build-from-source-linux.md
index 19dab71c769..7b94aa88119 100644
--- a/docs/source/installation/build-from-source-linux.md
+++ b/docs/source/installation/build-from-source-linux.md
@@ -147,6 +147,11 @@ check .
 
 ## Build TensorRT LLM
 
+```{tip}
+:name: build-from-source-tip-cuda-version
+TensorRT LLM 1.1 supports both CUDA 12.9 and CUDA 13.0, but some dependency changes are required. The `requirements.txt` file contains the dependencies needed for CUDA 13.0. If you are using CUDA 12.9, uncomment the lines ending with `# ` and comment out the lines that follow them.
+```
+
 ### Option 1: Full Build with C++ Compilation
 
 The following command compiles the C++ code and packages the compiled libraries along with the Python files into a wheel. When developing C++ code, you need this full build command to apply your code changes.
diff --git a/docs/source/installation/linux.md b/docs/source/installation/linux.md
index 02a0cf7817d..68db9403d3d 100644
--- a/docs/source/installation/linux.md
+++ b/docs/source/installation/linux.md
@@ -12,6 +12,11 @@
 Install CUDA Toolkit following the [CUDA Installation Guide for Linux](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/) and make sure the `CUDA_HOME` environment variable is properly set.
 
+   ```{tip}
+   :name: installation-linux-tip-cuda-version
+   TensorRT LLM 1.1 supports both CUDA 12.9 and CUDA 13.0. The wheel package release supports only CUDA 12.9; CUDA 13.0 is supported only through the NGC container release.
+   ```
+
 ```bash
 # Optional step: Only required for NVIDIA Blackwell GPUs and SBSA platform
 pip3 install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128