docs/source/quick-start-guide.md (2 additions, 2 deletions)
@@ -15,7 +15,7 @@ Here is a simple example to show how to use the LLM API with TinyLlama.
```

You can also directly load TensorRT Model Optimizer's [quantized checkpoints on Hugging Face](https://huggingface.co/collections/nvidia/model-optimizer-66aa84f7966b3150262481a4) in the LLM constructor.
-To learn more about the LLM API, check out the [](llm-api/index) and [](llm-api-examples/index).
+To learn more about the LLM API, check out the [](llm-api/index) and [](examples/llm_api_examples).

(deploy-with-trtllm-serve)=
## Deploy with trtllm-serve
@@ -151,7 +151,7 @@ In this Quick Start Guide, you:
For more examples, refer to:

-- [examples/](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples) for showcases of how to run a quick benchmark on latest LLMs.
+- [examples](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples) for showcases of how to run a quick benchmark on latest LLMs.
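For context, the context lines in the first hunk refer to the LLM API pattern the quick start guide builds on: constructing an `LLM` from a Hugging Face model id (TinyLlama in the guide's example) and, alternatively, passing a TensorRT Model Optimizer quantized checkpoint to the same constructor. Below is a minimal sketch of that usage, assuming the `tensorrt_llm.LLM` / `SamplingParams` interface shown in the guide; the FP8 checkpoint name in the comment is an illustrative placeholder for an entry in the linked Model Optimizer collection, not something named in this diff.

```python
# Minimal sketch of the LLM API usage the edited guide section describes.
from tensorrt_llm import LLM, SamplingParams

# Plain Hugging Face model, as in the guide's TinyLlama example.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Alternatively, pass a TensorRT Model Optimizer quantized checkpoint
# directly to the LLM constructor (hypothetical checkpoint name):
# llm = LLM(model="nvidia/Llama-3.1-8B-Instruct-FP8")

prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# generate() returns one result per prompt; print the first completion of each.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```

The diff itself only retargets the two cross-reference links; the API usage sketched above is unchanged by it.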