diff --git a/docs/assets/images/guides/python/environment_overview.png b/docs/assets/images/guides/python/environment_overview.png
index 0b0e4ad60..56d51c148 100644
Binary files a/docs/assets/images/guides/python/environment_overview.png and b/docs/assets/images/guides/python/environment_overview.png differ
diff --git a/docs/user_guides/projects/python/python_env_overview.md b/docs/user_guides/projects/python/python_env_overview.md
index 1f337a91e..63a0d5c9b 100644
--- a/docs/user_guides/projects/python/python_env_overview.md
+++ b/docs/user_guides/projects/python/python_env_overview.md
@@ -53,6 +53,7 @@ The `MODEL INFERENCE` environments can be used in a deployment using a custom pr
 * `tensorflow-inference-pipeline` to load and serve TensorFlow models
 * `torch-inference-pipeline` to load and serve PyTorch models
 * `pandas-inference-pipeline` to load and serve XGBoost, Catboost and Sklearn models
+* `vllm-inference-pipeline` to load and serve LLMs with the vLLM inference engine
 * `minimal-inference-pipeline` to install your own custom framework, contains a minimal set of dependencies
 
 ## Next steps