
Commit de24046

[Doc] Improve contributing and installation documentation (vllm-project#9132)
Signed-off-by: Rafael Vasquez <[email protected]>
1 parent 1874c6a commit de24046

3 files changed: +94 -88 lines changed


CONTRIBUTING.md (+15 -21)

@@ -1,30 +1,23 @@
 # Contributing to vLLM
 
-Thank you for your interest in contributing to vLLM!
-Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large.
-There are several ways you can contribute to the project:
+Thank you for your interest in contributing to vLLM! Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large. There are several ways you can contribute to the project:
 
 - Identify and report any issues or bugs.
-- Request or add a new model.
+- Request or add support for a new model.
 - Suggest or implement new features.
+- Improve documentation or contribute a how-to guide.
 
-However, remember that contributions aren't just about code.
-We believe in the power of community support; thus, answering queries, assisting others, and enhancing the documentation are highly regarded and beneficial contributions.
+We also believe in the power of community support; thus, answering queries, offering PR reviews, and assisting others are also highly regarded and beneficial contributions.
 
-Finally, one of the most impactful ways to support us is by raising awareness about vLLM.
-Talk about it in your blog posts, highlighting how it's driving your incredible projects.
-Express your support on Twitter if vLLM aids you, or simply offer your appreciation by starring our repository.
+Finally, one of the most impactful ways to support us is by raising awareness about vLLM. Talk about it in your blog posts and highlight how it's driving your incredible projects. Express your support on social media if you're using vLLM, or simply offer your appreciation by starring our repository!
 
 
-## Setup for development
+## Developing
 
-### Build from source
+Depending on the kind of development you'd like to do (e.g. Python, CUDA), you can choose to build vLLM with or without compilation. Check out the [building from source](https://docs.vllm.ai/en/latest/getting_started/installation.html#build-from-source) documentation for details.
 
-```bash
-pip install -e . # This may take several minutes.
-```
 
-### Testing
+## Testing
 
 ```bash
 pip install -r requirements-dev.txt
@@ -36,15 +29,16 @@ mypy
 # Unit tests
 pytest tests/
 ```
-**Note:** Currently, the repository does not pass the mypy tests.
+**Note:** Currently, the repository does not pass the ``mypy`` tests.
 
+## Contribution Guidelines
 
-## Contributing Guidelines
+### Issues
 
-### Issue Reporting
+If you encounter a bug or have a feature request, please [search existing issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue) first to see if it has already been reported. If not, please [file a new issue](https://github.com/vllm-project/vllm/issues/new/choose), providing as much relevant information as possible.
 
-If you encounter a bug or have a feature request, please check our issues page first to see if someone else has already reported it.
-If not, please file a new issue, providing as much relevant information as possible.
+> [!IMPORTANT]
+> If you discover a security vulnerability, please follow the instructions [here](/SECURITY.md#reporting-a-vulnerability).
 
 ### Pull Requests & Code Reviews
 
@@ -53,4 +47,4 @@ Please check the PR checklist in the [PR template](.github/PULL_REQUEST_TEMPLATE
 ### Thank You
 
 Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM.
-Your contributions make vLLM a great tool for everyone!
+All of your contributions help make vLLM a great tool and community for everyone!
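As a quick reference for the Testing section in this diff, a local check run can be put together roughly as follows. This is a sketch based only on the commands visible above; the `-k`/`-x` pytest flags and the example keyword are illustrative, not a project convention.

```bash
# Install the development dependencies referenced in the diff
pip install -r requirements-dev.txt

# Static type checking (per the note above, mypy is not expected to pass cleanly yet)
mypy

# Run the unit tests; optionally narrow the run to a keyword and stop on first failure
pytest tests/
pytest tests/ -k "sampler" -x
```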

SECURITY.md (+4 -5)

@@ -2,11 +2,10 @@
 
 ## Reporting a Vulnerability
 
-If you believe you have found a security vulnerability in vLLM, we encourage you to let us know right away.
-We will investigate all legitimate reports and do our best to quickly fix the problem.
+If you believe you have found a security vulnerability in vLLM, we encourage you to let us know right away. We will investigate all legitimate reports and do our best to quickly fix the problem.
 
-Please report security issues using https://github.com/vllm-project/vllm/security/advisories/new
+Please report security issues privately using [the vulnerability submission form](https://github.com/vllm-project/vllm/security/advisories/new).
 
 ---
-Please see PyTorch Security for more information how to securely interact with models: https://github.com/pytorch/pytorch/blob/main/SECURITY.md
-This document mostly references the recommendation from PyTorch, thank you!
+
+Please see [PyTorch's Security Policy](https://github.com/pytorch/pytorch/blob/main/SECURITY.md) for more information and recommendations on how to securely interact with models.
docs/source/getting_started/installation.rst (+75 -62)

@@ -1,19 +1,20 @@
 .. _installation:
 
+============
 Installation
 ============
 
 vLLM is a Python library that also contains pre-compiled C++ and CUDA (12.1) binaries.
 
 Requirements
-------------
+===========================
 
 * OS: Linux
 * Python: 3.8 -- 3.12
 * GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)
 
 Install released versions
---------------------------
+===========================
 
 You can install vLLM using pip:
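For the "Install released versions" section touched above, the typical flow is a fresh environment plus a plain pip install. The exact command block sits in an unchanged part of the document, so the lines below are only a sketch; the environment name and Python version are illustrative.

```bash
# Create and activate a fresh conda environment, as the documentation recommends
conda create -n vllm-env python=3.10 -y
conda activate vllm-env

# Install the released vLLM package (ships pre-compiled CUDA 12.1 binaries)
pip install vllm
```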

@@ -46,8 +47,11 @@ You can install vLLM using pip:
 
 Therefore, it is recommended to install vLLM with a **fresh new** conda environment. If either you have a different CUDA version or you want to use an existing PyTorch installation, you need to build vLLM from source. See below for instructions.
 
+
+.. _install-the-latest-code:
+
 Install the latest code
-----------------------------
+=========================
 
 LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that are not released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on x86 platform with cuda 12 for every commit since v0.5.3. You can download and install the latest one with the following command:
 
@@ -75,113 +79,122 @@ These docker images are used for CI and testing only, and they are not intended
 
 Latest code can contain bugs and may not be stable. Please use it with caution.
 
-Build from source (without compilation)
----------------------------------------
+.. _build_from_source:
+
+Build from source
+==================
+
+Python-only build (without compilation)
+----------------------------------------
 
-If you want to develop vLLM, and you only need to change the Python code, you can build vLLM without compilation.
+If you only need to change Python code, you can simply build vLLM without compilation.
 
-The first step is to follow the previous instructions to install the latest vLLM wheel:
+The first step is to install the latest vLLM wheel:
 
 .. code-block:: console
 
-    $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
+    pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
+
+You can find more information about vLLM's wheels `above <#install-the-latest-code>`_.
 
-After verifying that the installation is successful, we have a script for you to copy and link directories, so that you can edit the Python code directly:
+After verifying that the installation is successful, you can use `the following script <https://github.com/vllm-project/vllm/blob/main/python_only_dev.py>`_:
 
 .. code-block:: console
 
     $ git clone https://github.com/vllm-project/vllm.git
     $ cd vllm
     $ python python_only_dev.py
 
-It will:
+The script will:
 
-- Find the installed vLLM in the current environment.
-- Copy built files to the current directory.
-- Rename the installed vLLM
-- Symbolically link the current directory to the installed vLLM.
+* Find the installed vLLM package in the current environment.
+* Copy built files to the current directory.
+* Rename the installed vLLM package.
+* Symbolically link the current directory to the installed vLLM package.
 
-This way, you can edit the Python code in the current directory, and the changes will be reflected in the installed vLLM.
+Now, you can edit the Python code in the current directory, and the changes will be reflected when you run vLLM.
 
-.. _build_from_source:
 
-Build from source (with compilation)
-------------------------------------
+Full build (with compilation)
+---------------------------------
 
-If you need to touch the C++ or CUDA code, you need to build vLLM from source:
+If you want to modify C++ or CUDA code, you'll need to build vLLM from source. This can take several minutes:
 
 .. code-block:: console
 
     $ git clone https://github.com/vllm-project/vllm.git
     $ cd vllm
-    $ pip install -e . # This can take a long time
+    $ pip install -e .
 
-.. note::
+.. tip::
 
-    This will uninstall existing PyTorch, and install the version required by vLLM. If you want to use an existing PyTorch installation, there need to be some changes:
+    Building from source requires a lot of compilation. If you are building from source repeatedly, it's more efficient to cache the compilation results.
+    For example, you can install `ccache <https://github.com/ccache/ccache>`_ using ``conda install ccache`` or ``apt install ccache`` .
+    As long as ``which ccache`` command can find the ``ccache`` binary, it will be used automatically by the build system. After the first build, subsequent builds will be much faster.
 
-    .. code-block:: console
 
-        $ git clone https://github.com/vllm-project/vllm.git
-        $ cd vllm
-        $ python use_existing_torch.py
-        $ pip install -r requirements-build.txt
-        $ pip install -e . --no-build-isolation
+Use an existing PyTorch installation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+There are scenarios where the PyTorch dependency cannot be easily installed via pip, e.g.:
 
-    The differences are:
+* Building vLLM with PyTorch nightly or a custom PyTorch build.
+* Building vLLM with aarch64 and CUDA (GH200), where the PyTorch wheels are not available on PyPI. Currently, only the PyTorch nightly has wheels for aarch64 with CUDA. You can run ``pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124`` to `install PyTorch nightly <https://pytorch.org/get-started/locally/>`_, and then build vLLM on top of it.
 
-    - ``python use_existing_torch.py``: This script will remove all the PyTorch versions in the requirements files, so that the existing PyTorch installation will be used.
-    - ``pip install -r requirements-build.txt``: You need to manually install the requirements for building vLLM.
-    - ``pip install -e . --no-build-isolation``: You need to disable build isolation, so that the build system can use the existing PyTorch installation.
+To build vLLM using an existing PyTorch installation:
 
-    This is especially useful when the PyTorch dependency cannot be easily installed via pip, e.g.:
+.. code-block:: console
+
+    $ git clone https://github.com/vllm-project/vllm.git
+    $ cd vllm
+    $ python use_existing_torch.py
+    $ pip install -r requirements-build.txt
+    $ pip install -e . --no-build-isolation
 
-    - build vLLM with PyTorch nightly or a custom PyTorch build.
-    - build vLLM with aarch64 and cuda (GH200), where the PyTorch wheels are not available on PyPI. Currently, only PyTorch nightly has wheels for aarch64 with CUDA. You can run ``pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124`` to install PyTorch nightly, and then build vLLM on top of it.
 
-.. note::
+Troubleshooting
+~~~~~~~~~~~~~~~~~
 
-    vLLM can fully run only on Linux, but you can still build it on other systems (for example, macOS). This build is only for development purposes, allowing for imports and a more convenient dev environment. The binaries will not be compiled and not work on non-Linux systems. You can create such a build with the following commands:
+To avoid your system being overloaded, you can limit the number of compilation jobs
+to be run simultaneously, via the environment variable ``MAX_JOBS``. For example:
 
-    .. code-block:: console
+.. code-block:: console
 
-        $ export VLLM_TARGET_DEVICE=empty
-        $ pip install -e .
+    $ export MAX_JOBS=6
+    $ pip install -e .
 
+This is especially useful when you are building on less powerful machines. For example, when you use WSL it only `assigns 50% of the total memory by default <https://learn.microsoft.com/en-us/windows/wsl/wsl-config#main-wsl-settings>`_, so using ``export MAX_JOBS=1`` can avoid compiling multiple files simultaneously and running out of memory.
+A side effect is a much slower build process.
 
-.. tip::
+Additionally, if you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.
 
-    Building from source requires quite a lot compilation. If you are building from source for multiple times, it is beneficial to cache the compilation results. For example, you can install `ccache <https://github.com/ccache/ccache>`_ via either ``conda install ccache`` or ``apt install ccache`` . As long as ``which ccache`` command can find the ``ccache`` binary, it will be used automatically by the build system. After the first build, the subsequent builds will be much faster.
+.. code-block:: console
 
-.. tip::
-    To avoid your system being overloaded, you can limit the number of compilation jobs
-    to be run simultaneously, via the environment variable ``MAX_JOBS``. For example:
+    $ # Use `--ipc=host` to make sure the shared memory is large enough.
+    $ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3
 
-    .. code-block:: console
+If you don't want to use docker, it is recommended to have a full installation of CUDA Toolkit. You can download and install it from `the official website <https://developer.nvidia.com/cuda-toolkit-archive>`_. After installation, set the environment variable ``CUDA_HOME`` to the installation path of CUDA Toolkit, and make sure that the ``nvcc`` compiler is in your ``PATH``, e.g.:
 
-        $ export MAX_JOBS=6
-        $ pip install -e .
+.. code-block:: console
 
-    This is especially useful when you are building on less powerful machines. For example, when you use WSL, it only `gives you half of the memory by default <https://learn.microsoft.com/en-us/windows/wsl/wsl-config>`_, and you'd better use ``export MAX_JOBS=1`` to avoid compiling multiple files simultaneously and running out of memory. The side effect is that the build process will be much slower. If you only touch the Python code, slow compilation is okay, as you are building in an editable mode: you can just change the code and run the Python script without any re-compilation or re-installation.
+    $ export CUDA_HOME=/usr/local/cuda
+    $ export PATH="${CUDA_HOME}/bin:$PATH"
 
-.. tip::
-    If you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.
+Here is a sanity check to verify that the CUDA Toolkit is correctly installed:
 
-    .. code-block:: console
+.. code-block:: console
 
-        $ # Use `--ipc=host` to make sure the shared memory is large enough.
-        $ docker run --gpus all -it --rm --ipc=host nvcr.io/nvidia/pytorch:23.10-py3
+    $ nvcc --version # verify that nvcc is in your PATH
+    $ ${CUDA_HOME}/bin/nvcc --version # verify that nvcc is in your CUDA_HOME
 
-    If you don't want to use docker, it is recommended to have a full installation of CUDA Toolkit. You can download and install it from `the official website <https://developer.nvidia.com/cuda-toolkit-archive>`_. After installation, set the environment variable ``CUDA_HOME`` to the installation path of CUDA Toolkit, and make sure that the ``nvcc`` compiler is in your ``PATH``, e.g.:
 
-    .. code-block:: console
+Unsupported OS build
+----------------------
 
-        $ export CUDA_HOME=/usr/local/cuda
-        $ export PATH="${CUDA_HOME}/bin:$PATH"
+vLLM can fully run only on Linux but for development purposes, you can still build it on other systems (for example, macOS), allowing for imports and a more convenient development environment. The binaries will not be compiled and won't work on non-Linux systems.
 
-    Here is a sanity check to verify that the CUDA Toolkit is correctly installed:
+Simply disable the ``VLLM_TARGET_DEVICE`` environment variable before installing:
 
-    .. code-block:: console
+.. code-block:: console
 
-        $ nvcc --version # verify that nvcc is in your PATH
-        $ ${CUDA_HOME}/bin/nvcc --version # verify that nvcc is in your CUDA_HOME
+    $ export VLLM_TARGET_DEVICE=empty
+    $ pip install -e .
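To tie together the Python-only build steps documented in the diff above, the full sequence plus a small sanity check might look like this. The final import check is only a suggested way to confirm the symlinked working tree is picked up; it is not part of the documented script.

```bash
# Install the latest nightly wheel, then link the working tree (commands from the diff)
pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
git clone https://github.com/vllm-project/vllm.git
cd vllm
python python_only_dev.py

# Sanity check: the imported package should now resolve to this working tree,
# so local Python edits take effect without reinstalling
python -c "import vllm; print(vllm.__file__)"
```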
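For the full build, the ccache and `MAX_JOBS` tips from the diff can be combined roughly as below. `ccache -s` prints ccache's cache statistics and is a standard ccache flag; the job count of 6 is illustrative.

```bash
# Install ccache and confirm the build system will find it on PATH
conda install ccache   # or: apt install ccache
which ccache

# Limit parallel compilation jobs on memory-constrained machines, then build
export MAX_JOBS=6
pip install -e .

# After the first build, check cache statistics to confirm ccache is being used
ccache -s
```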