
Commit dfc7740

[issue templates] add some issue templates (vllm-project#3412)
1 parent c17ca8e commit dfc7740

11 files changed, +1005 -0 lines changed
@@ -0,0 +1,22 @@
name: 📚 Documentation
description: Report an issue related to https://docs.vllm.ai/
title: "[Doc]: "
labels: ["doc"]

body:
- type: textarea
  attributes:
    label: 📚 The doc issue
    description: >
      A clear and concise description of what content in https://docs.vllm.ai/ is an issue.
  validations:
    required: true
- type: textarea
  attributes:
    label: Suggest a potential alternative/fix
    description: >
      Tell us how we could improve the documentation in this regard.
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,39 @@
name: 🛠️ Installation
description: Report an issue here when you hit errors during installation.
title: "[Installation]: "
labels: ["installation"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Your current environment
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: true
- type: textarea
  attributes:
    label: How you are installing vllm
    description: |
      Paste the full command you are trying to execute.
    value: |
      ```sh
      pip install -vvv vllm
      ```
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!

.github/ISSUE_TEMPLATE/300-usage.yml

+37
@@ -0,0 +1,37 @@
name: 💻 Usage
description: Raise an issue here if you don't know how to use vllm.
title: "[Usage]: "
labels: ["usage"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Your current environment
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: true
- type: textarea
  attributes:
    label: How would you like to use vllm
    description: |
      A detailed description of how you want to use vllm.
    value: |
      I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
+81
@@ -0,0 +1,81 @@
name: 🐛 Bug report
description: Raise an issue here if you find a bug.
title: "[Bug]: "
labels: ["bug"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Your current environment
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: true
- type: textarea
  attributes:
    label: 🐛 Describe the bug
    description: |
      Please provide a clear and concise description of what the bug is.

      If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for the snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports, etc. For example:

      ```python
      from vllm import LLM, SamplingParams

      prompts = [
          "Hello, my name is",
          "The president of the United States is",
          "The capital of France is",
          "The future of AI is",
      ]
      sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

      llm = LLM(model="facebook/opt-125m")

      outputs = llm.generate(prompts, sampling_params)

      # Print the outputs.
      for output in outputs:
          prompt = output.prompt
          generated_text = output.outputs[0].text
          print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
      ```

      If the code is too long (hopefully, it isn't), feel free to put it in a public gist and link it in the issue: https://gist.github.com.

      Please also paste or describe the results you observe instead of the expected results. If you observe an error, please paste the error message including the **full** traceback of the exception. It may be relevant to wrap error messages in ```` ```triple quotes blocks``` ````.
    placeholder: |
      A clear and concise description of what the bug is.

      ```python
      # Sample code to reproduce the problem
      ```

      ```
      The error message you got, with the full traceback.
      ```
  validations:
    required: true
- type: markdown
  attributes:
    value: >
      ⚠️ Please separate bugs of `transformers` implementation or usage from bugs of `vllm`. If you think anything is wrong with the models' output:

      - Try the counterpart of `transformers` first. If the error appears, please go to [their issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc).

      - If the error only appears in vllm, please provide the detailed script of how you run `transformers` and `vllm`, also highlight the difference and what you expect.

      Thanks for contributing 🎉!
@@ -0,0 +1,31 @@
name: 🚀 Feature request
description: Submit a proposal/request for a new vllm feature
title: "[Feature]: "
labels: ["feature"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: 🚀 The feature, motivation and pitch
    description: >
      A clear and concise description of the feature proposal. Please outline the motivation for the proposal. Is your feature request related to a specific problem? e.g., *"I'm working on X and would like Y to be possible"*. If this is related to another GitHub issue, please link it here too.
  validations:
    required: true
- type: textarea
  attributes:
    label: Alternatives
    description: >
      A description of any alternative solutions or features you've considered, if any.
- type: textarea
  attributes:
    label: Additional context
    description: >
      Add any other context or screenshots about the feature request.
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
+33
@@ -0,0 +1,33 @@
name: 🤗 Support request for a new model from huggingface
description: Submit a proposal/request for a new model from huggingface
title: "[New Model]: "
labels: ["new model"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).

      #### We also highly recommend you read https://docs.vllm.ai/en/latest/models/adding_model.html first to understand how to add a new model.
- type: textarea
  attributes:
    label: The model to consider.
    description: >
      A huggingface url, pointing to the model, e.g. https://huggingface.co/openai-community/gpt2 .
  validations:
    required: true
- type: textarea
  attributes:
    label: The closest model vllm already supports.
    description: >
      Here is the list of models already supported by vllm: https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models . Which model is the most similar to the model you want to add support for?
- type: textarea
  attributes:
    label: What's your difficulty in supporting the model you want?
    description: >
      For example, any new operators or new architecture?
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,51 @@
name: ⚡ Discussion on the performance of vllm
description: Submit a proposal/discussion about the performance of vllm
title: "[Performance]: "
labels: ["performance"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Proposal to improve performance
    description: >
      How do you plan to improve vllm's performance?
  validations:
    required: false
- type: textarea
  attributes:
    label: Report of performance regression
    description: >
      Please provide a detailed description of the performance comparison to confirm the regression. You may want to run the benchmark script at https://github.com/vllm-project/vllm/tree/main/benchmarks .
  validations:
    required: false
- type: textarea
  attributes:
    label: Misc discussion on performance
    description: >
      Anything about the performance.
  validations:
    required: false
- type: textarea
  attributes:
    label: Your current environment (if you think it is necessary)
    description: |
      Please run the following and paste the output below.
      ```sh
      wget https://raw.githubusercontent.com/vllm-project/vllm/main/collect_env.py
      # For security purposes, please feel free to check the contents of collect_env.py before running it.
      python collect_env.py
      ```
    value: |
      ```text
      The output of `python collect_env.py`
      ```
  validations:
    required: false
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!
@@ -0,0 +1,21 @@
name: 🎲 Misc/random discussions that do not fit into the above categories.
description: Submit a discussion as you like. Note that developers are heavily overloaded and we mainly rely on community users to answer these issues.
title: "[Misc]: "
labels: ["misc"]

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting an issue, please make sure the issue hasn't been already addressed by searching through [the existing and past issues](https://github.com/vllm-project/vllm/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: Anything you want to discuss about vllm.
    description: >
      Anything you want to discuss about vllm.
  validations:
    required: true
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!

.github/ISSUE_TEMPLATE/config.yml

+1
@@ -0,0 +1 @@
blank_issues_enabled: false
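
All of the issue forms above share the same GitHub issue-form schema (top-level name/description/title/labels plus a body made of markdown and textarea blocks), and config.yml disables blank issues so reporters are routed through one of the forms. A quick local sanity check of the templates can be sketched as below; this assumes a local checkout with the files under .github/ISSUE_TEMPLATE/ and PyYAML installed, and it is only a rough lint, not GitHub's own validator:

```python
# Minimal sanity check for the issue-form YAML files added in this commit.
# Assumes a local checkout with templates under .github/ISSUE_TEMPLATE/ and
# PyYAML installed (pip install pyyaml); not an official GitHub validator.
from pathlib import Path

import yaml

TEMPLATE_DIR = Path(".github/ISSUE_TEMPLATE")

for path in sorted(TEMPLATE_DIR.glob("*.yml")):
    data = yaml.safe_load(path.read_text())
    if path.name == "config.yml":
        # config.yml only configures the issue chooser, e.g. blank_issues_enabled.
        print(f"{path.name}: blank_issues_enabled={data.get('blank_issues_enabled')}")
        continue
    # Every form template above defines these top-level keys.
    for key in ("name", "description", "title", "labels", "body"):
        assert key in data, f"{path.name} is missing '{key}'"
    # body is a list of blocks such as {'type': 'markdown' | 'textarea', 'attributes': ...}.
    assert all("type" in block for block in data["body"]), f"{path.name} has a block without 'type'"
    print(f"{path.name}: OK ({len(data['body'])} body blocks)")
```

Run from the repository root, this prints one line per template, which is usually enough to catch indentation or key-name mistakes before pushing.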

.yapfignore

+1
@@ -0,0 +1 @@
collect_env.py
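
Listing collect_env.py in .yapfignore presumably keeps yapf from reformatting the environment-collection script, which the templates above instruct users to download and run verbatim from the main branch.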
