.. _installation:

============
Installation
============

vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.

Requirements
============

* OS: Linux
* Python: 3.8 -- 3.12
* GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.; see the quick check below)

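If you are unsure of your GPU's compute capability, one quick way to check it (assuming a CUDA-enabled PyTorch is already installed in your environment) is:

.. code-block:: console

    $ python -c "import torch; print(torch.cuda.get_device_capability())"  # e.g. (7, 0) for V100, (8, 0) for A100
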

Install released versions
=========================

You can install vLLM using pip:

@@ -46,8 +47,11 @@ You can install vLLM using pip:

Therefore, it is recommended to install vLLM in a **fresh** conda environment. If you have a different CUDA version or want to use an existing PyTorch installation, you need to build vLLM from source. See below for instructions.
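
For example, you can create a fresh environment and install the released vLLM wheel into it as follows (the environment name and Python version are only examples):

.. code-block:: console

    $ conda create -n vllm python=3.10 -y
    $ conda activate vllm
    $ pip install vllm
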

.. _install-the-latest-code:

Install the latest code
=======================

LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on an x86 platform with CUDA 12 for every commit since v0.5.3. You can download and install the latest one with the following command:

@@ -75,113 +79,122 @@ These docker images are used for CI and testing only, and they are not intended

Latest code can contain bugs and may not be stable. Please use it with caution.

.. _build_from_source:

Build from source
=================

Python-only build (without compilation)
----------------------------------------

If you only need to change Python code, you can simply build vLLM without compilation.

The first step is to install the latest vLLM wheel:

.. code-block:: console

    $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl

You can find more information about vLLM's wheels `above <#install-the-latest-code>`_.

After verifying that the installation is successful, you can use `the following script <https://github.com/vllm-project/vllm/blob/main/python_only_dev.py>`_:

.. code-block:: console

    $ git clone https://github.com/vllm-project/vllm.git
    $ cd vllm
    $ python python_only_dev.py

The script will:

* Find the installed vLLM package in the current environment.
* Copy built files to the current directory.
* Rename the installed vLLM package.
* Symbolically link the current directory to the installed vLLM package.

Now, you can edit the Python code in the current directory, and the changes will be reflected when you run vLLM.
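
As a quick check (only a suggestion, not part of the official workflow), you can verify that Python now imports vLLM from your checkout:

.. code-block:: console

    $ python -c "import vllm; print(vllm.__file__)"  # should print a path inside your cloned vllm directory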

Full build (with compilation)
------------------------------

If you want to modify C++ or CUDA code, you'll need to build vLLM from source. This can take several minutes:

.. code-block:: console

    $ git clone https://github.com/vllm-project/vllm.git
    $ cd vllm
    $ pip install -e .
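
Once the build finishes, a quick, optional check (not required) is to confirm that the editable install is importable:

.. code-block:: console

    $ python -c "import vllm; print(vllm.__version__)"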

.. tip::

    Building from source requires a lot of compilation. If you are building from source repeatedly, it's more efficient to cache the compilation results.
    For example, you can install `ccache <https://github.com/ccache/ccache>`_ using ``conda install ccache`` or ``apt install ccache``.
    As long as the ``which ccache`` command can find the ``ccache`` binary, it will be used automatically by the build system. After the first build, subsequent builds will be much faster.
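
    For example (assuming ``ccache`` is installed as above), you can confirm that the cache is being used by checking its statistics after a build:

    .. code-block:: console

        $ ccache -s  # cache hits should increase on subsequent builds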

Use an existing PyTorch installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are scenarios where the PyTorch dependency cannot be easily installed via pip, e.g.:

* Building vLLM with PyTorch nightly or a custom PyTorch build.
* Building vLLM on aarch64 with CUDA (GH200), where the PyTorch wheels are not available on PyPI. Currently, only the PyTorch nightly has wheels for aarch64 with CUDA. You can run ``pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124`` to `install PyTorch nightly <https://pytorch.org/get-started/locally/>`_, and then build vLLM on top of it.

To build vLLM using an existing PyTorch installation:

.. code-block:: console

    $ git clone https://github.com/vllm-project/vllm.git
    $ cd vllm
    $ python use_existing_torch.py
    $ pip install -r requirements-build.txt
    $ pip install -e . --no-build-isolation

Troubleshooting
~~~~~~~~~~~~~~~

To avoid overloading your system, you can limit the number of compilation jobs that run simultaneously via the environment variable ``MAX_JOBS``. For example:

.. code-block:: console

    $ export MAX_JOBS=6
    $ pip install -e .

This is especially useful when you are building on less powerful machines. For example, when you use WSL, it only `assigns 50% of the total memory by default <https://learn.microsoft.com/en-us/windows/wsl/wsl-config#main-wsl-settings>`_, so using ``export MAX_JOBS=1`` can avoid compiling multiple files simultaneously and running out of memory. A side effect is a much slower build process.
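
If you are unsure what value to pick, a rough rule of thumb (not an official recommendation) is to check how many CPU cores and how much free memory your machine has, and choose a job count it can sustain:

.. code-block:: console

    $ nproc       # number of available CPU cores
    $ free -g     # available memory; compilation jobs can be memory-hungry
    $ export MAX_JOBS=4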

Additionally, if you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.

.. code-block:: console

    $ # Use `--ipc=host` to make sure the shared memory is large enough.
    $ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3

If you don't want to use Docker, it is recommended to have a full installation of the CUDA Toolkit. You can download and install it from `the official website <https://developer.nvidia.com/cuda-toolkit-archive>`_. After installation, set the environment variable ``CUDA_HOME`` to the installation path of the CUDA Toolkit, and make sure that the ``nvcc`` compiler is in your ``PATH``, e.g.:

.. code-block:: console

    $ export CUDA_HOME=/usr/local/cuda
    $ export PATH="${CUDA_HOME}/bin:$PATH"

Here is a sanity check to verify that the CUDA Toolkit is correctly installed:

.. code-block:: console

    $ nvcc --version  # verify that nvcc is in your PATH
    $ ${CUDA_HOME}/bin/nvcc --version  # verify that nvcc is in your CUDA_HOME

Unsupported OS build
--------------------

vLLM can fully run only on Linux, but for development purposes you can still build it on other systems (for example, macOS), allowing for imports and a more convenient development environment. The binaries will not be compiled and won't work on non-Linux systems.

Simply set the ``VLLM_TARGET_DEVICE`` environment variable to ``empty`` before installing:

.. code-block:: console

    $ export VLLM_TARGET_DEVICE=empty
    $ pip install -e .