TensorRT-LLM optimizes the performance of a range of well-known models on NVIDIA GPUs. The following sections list the supported GPU architectures as well as important features implemented in TensorRT-LLM.
## Models (PyTorch Backend)

| Architecture | Model | HuggingFace Example | Modality |
|--------------|-------|---------------------|----------|
| `BertForSequenceClassification` | BERT-based | `textattack/bert-base-uncased-yelp-polarity` | L |
| `DeciLMForCausalLM` | Nemotron | `nvidia/Llama-3_1-Nemotron-51B-Instruct` | L |
| `DeepseekV3ForCausalLM` | DeepSeek-V3 | `deepseek-ai/DeepSeek-V3` | L |
| `LlavaLlamaModel` | VILA | `Efficient-Large-Model/NVILA-8B` | L + V |
| `LlavaNextForConditionalGeneration` | LLaVA-NeXT | `llava-hf/llava-v1.6-mistral-7b-hf` | L + V |
| `LlamaForCausalLM` | Llama 3.1, Llama 3, Llama 2, LLaMA | `meta-llama/Meta-Llama-3.1-70B` | L |
| `Llama4ForConditionalGeneration` | Llama 4 | `meta-llama/Llama-4-Scout-17B-16E-Instruct` | L |
| `MistralForCausalLM` | Mistral | `mistralai/Mistral-7B-v0.1` | L |
| `MixtralForCausalLM` | Mixtral | `mistralai/Mixtral-8x7B-v0.1` | L |
| `MllamaForConditionalGeneration` | Llama 3.2 | `meta-llama/Llama-3.2-11B-Vision` | L |
| `NemotronForCausalLM` | Nemotron-3, Nemotron-4, Minitron | `nvidia/Minitron-8B-Base` | L |
| `NemotronNASForCausalLM` | NemotronNAS | `nvidia/Llama-3_3-Nemotron-Super-49B-v1` | L |
| `Qwen2ForCausalLM` | QwQ, Qwen2 | `Qwen/Qwen2-7B-Instruct` | L |
| `Qwen2ForProcessRewardModel` | Qwen2-based | `Qwen/Qwen2.5-Math-PRM-7B` | L |
| `Qwen2ForRewardModel` | Qwen2-based | `Qwen/Qwen2.5-Math-RM-72B` | L |
| `Qwen2VLForConditionalGeneration` | Qwen2-VL | `Qwen/Qwen2-VL-7B-Instruct` | L + V |
| `Qwen2_5_VLForConditionalGeneration` | Qwen2.5-VL | `Qwen/Qwen2.5-VL-7B-Instruct` | L + V |

Note:
- L: Language only
- L + V: Language and Vision multimodal support
- Llama 3.2 accepts vision input, but our support is currently limited to text only.

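As a quick orientation, any language-only model from the table above can be run through TensorRT-LLM's high-level `LLM` API. The sketch below assumes `tensorrt_llm` is installed with the PyTorch backend and a supported NVIDIA GPU is available; the model name is taken from the table, and the prompt and sampling settings are illustrative only.

```python
# Minimal sketch of serving a table-listed model via the LLM API.
# Assumes: tensorrt_llm installed, a supported NVIDIA GPU, and network
# access to download the HuggingFace checkpoint.
from tensorrt_llm import LLM, SamplingParams


def main():
    # Any L-modality architecture from the support table works here.
    llm = LLM(model="Qwen/Qwen2-7B-Instruct")

    # Illustrative sampling settings; tune for your use case.
    params = SamplingParams(max_tokens=32, temperature=0.8)

    for output in llm.generate(["What does TensorRT-LLM optimize?"], params):
        print(output.outputs[0].text)


if __name__ == "__main__":
    main()
```

Multimodal (L + V) models take image inputs in addition to text and require the corresponding vision preprocessing; see the model-specific examples for details.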
## Models (TensorRT Backend)

### LLM Models
