diff --git a/docs/assets/images/guides/mlops/serving/deployment_simple_form_vllm_conf_file.png b/docs/assets/images/guides/mlops/serving/deployment_simple_form_vllm_conf_file.png new file mode 100644 index 000000000..a9a04b41c Binary files /dev/null and b/docs/assets/images/guides/mlops/serving/deployment_simple_form_vllm_conf_file.png differ diff --git a/docs/user_guides/mlops/registry/frameworks/llm.md b/docs/user_guides/mlops/registry/frameworks/llm.md index 986dee6bf..ee9e29a05 100644 --- a/docs/user_guides/mlops/registry/frameworks/llm.md +++ b/docs/user_guides/mlops/registry/frameworks/llm.md @@ -1,4 +1,8 @@ -# How To Export an LLM Model +--- +description: Documentation on how to export a Large Language Model (LLM) to the model registry +--- + +# How To Export a Large Language Model (LLM) ## Introduction diff --git a/docs/user_guides/mlops/registry/frameworks/python.md b/docs/user_guides/mlops/registry/frameworks/python.md index af4689c60..8a7544aa9 100644 --- a/docs/user_guides/mlops/registry/frameworks/python.md +++ b/docs/user_guides/mlops/registry/frameworks/python.md @@ -1,3 +1,7 @@ +--- +description: Documentation on how to export a Python model to the model registry +--- + # How To Export a Python Model ## Introduction diff --git a/docs/user_guides/mlops/registry/frameworks/skl.md b/docs/user_guides/mlops/registry/frameworks/skl.md index e364101a9..81d8254bf 100644 --- a/docs/user_guides/mlops/registry/frameworks/skl.md +++ b/docs/user_guides/mlops/registry/frameworks/skl.md @@ -1,3 +1,7 @@ +--- +description: Documentation on how to export a Scikit-learn model to the model registry +--- + # How To Export a Scikit-learn Model ## Introduction diff --git a/docs/user_guides/mlops/registry/frameworks/tch.md b/docs/user_guides/mlops/registry/frameworks/tch.md new file mode 100644 index 000000000..040cf61be --- /dev/null +++ b/docs/user_guides/mlops/registry/frameworks/tch.md @@ -0,0 +1,79 @@ +--- +description: Documentation on how to export a Pytorch model to the model registry +--- + +# How To Export a Torch Model + +## Introduction + +In this guide you will learn how to export a Torch model and register it in the Model Registry. + + +## Code + +### Step 1: Connect to Hopsworks + +=== "Python" + ```python + import hopsworks + + project = hopsworks.login() + + # get Hopsworks Model Registry handle + mr = project.get_model_registry() + ``` + +### Step 2: Train + +Define your Torch model and run the training loop. + +=== "Python" + ```python + # Define the model architecture + class Net(nn.Module): + def __init__(self): + super().__init__() + self.conv1 = nn.Conv2d(3, 6, 5) + ... + + def forward(self, x): + x = self.pool(F.relu(self.conv1(x))) + ... + return x + + # Instantiate the model + net = Net() + + # Run the training loop + for epoch in range(n): + ... + ``` + +### Step 3: Export to local path + +Export the Torch model to a directory on the local filesystem. + +=== "Python" + ```python + model_dir = "./model" + + torch.save(net.state_dict(), model_dir) + ``` + +### Step 4: Register model in registry + +Use the `ModelRegistry.torch.create_model(..)` function to register a model as a Torch model. Define a name, and attach optional metrics for your model, then invoke the `save()` function with the parameter being the path to the local directory where the model was exported to. 
+ +=== "Python" + ```python + # Model evaluation metrics + metrics = {'accuracy': 0.92} + + tch_model = mr.torch.create_model("tch_model", metrics=metrics) + + tch_model.save(model_dir) + ``` + +## Going Further + +You can attach an [Input Example](../input_example.md) and a [Model Schema](../model_schema.md) to your model to document the shape and type of the data the model was trained on. diff --git a/docs/user_guides/mlops/registry/frameworks/tf.md b/docs/user_guides/mlops/registry/frameworks/tf.md index d74630cbe..64a5e7225 100644 --- a/docs/user_guides/mlops/registry/frameworks/tf.md +++ b/docs/user_guides/mlops/registry/frameworks/tf.md @@ -1,3 +1,7 @@ +--- +description: Documentation on how to export a Tensorflow model to the model registry +--- + # How To Export a TensorFlow Model ## Introduction diff --git a/docs/user_guides/mlops/registry/index.md b/docs/user_guides/mlops/registry/index.md index 0fe2f4088..ddc2adab6 100644 --- a/docs/user_guides/mlops/registry/index.md +++ b/docs/user_guides/mlops/registry/index.md @@ -11,11 +11,13 @@ Follow these framework-specific guides to export a Model to the Model Registry. * [TensorFlow](frameworks/tf.md) +* [Torch](frameworks/tch.md) + * [Scikit-learn](frameworks/skl.md) * [LLM](frameworks/llm.md) -* [Other frameworks](frameworks/python.md) +* [Other Python frameworks](frameworks/python.md) ## Model Schema diff --git a/docs/user_guides/mlops/serving/deployment.md b/docs/user_guides/mlops/serving/deployment.md index cab182975..59e4f8447 100644 --- a/docs/user_guides/mlops/serving/deployment.md +++ b/docs/user_guides/mlops/serving/deployment.md @@ -1,11 +1,15 @@ -# How To Create A Deployment +--- +description: Documentation on how to deployment Machine Learning (ML) models and Large Language Models (LLMs) +--- + +# How To Create A Model Deployment ## Introduction In this guide, you will learn how to create a new deployment for a trained model. !!! warning - This guide assumes that a model has already been trained and saved into the Model Registry. To learn how to create a model in the Model Registry, see [Model Registry Guide](../registry/frameworks/tf.md) + This guide assumes that a model has already been trained and saved into the Model Registry. To learn how to create a model in the Model Registry, see [Model Registry Guide](../registry/index.md#exporting-a-model) Deployments are used to unify the different components involved in making one or more trained models online and accessible to compute predictions on demand. For each deployment, there are four concepts to consider: @@ -41,8 +45,8 @@ After selecting the model, the rest of fields are filled automatically. We pick !!! notice "Deployment name validation rules" A valid deployment name can only contain characters a-z, A-Z and 0-9. -!!! info "Predictor script for Python models and LLMs" - For Python models and LLMs, you must select a custom [predictor script](#predictor) that loads and runs the trained model by clicking on `From project` or `Upload new file`, to choose an existing script in the project file system or upload a new script, respectively. +!!! info "Predictor script for Python models" + For Python models, you must select a custom [predictor script](#predictor) that loads and runs the trained model by clicking on `From project` or `Upload new file`, to choose an existing script in the project file system or upload a new script, respectively. If you prefer, change the name of the deployment, model version or [artifact version](#model-artifact). 
Then, click on `Create new deployment` to create the deployment for your model. @@ -76,10 +80,10 @@ You will be redirected to a full-page deployment creation form where you can see !!! info "Deployment advanced options" 1. [Predictor](#predictor) 2. [Transformer](#transformer) - 3. [Inference logger](#inference-logger) - 4. [Inference batcher](#inference-batcher) - 5. [Resources](#resources) - 6. [API protocol](#api-protocol) + 3. [Inference logger](predictor.md#inference-logger) + 4. [Inference batcher](predictor.md#inference-batcher) + 5. [Resources](predictor.md#resources) + 6. [API protocol](predictor.md#api-protocol) Once you are done with the changes, click on `Create new deployment` at the bottom of the page to create the deployment for your model. @@ -174,7 +178,12 @@ Inside a model deployment, the local path to the model files is stored in the `M ## Artifact Files -Artifact files are files involved in the correct startup and running of the model deployment. The most important files are the **predictor** and **transformer scripts**. The former is used to load and run the model for making predictions. The latter is typically used to transform model inputs at inference time. +Artifact files are files involved in the correct startup and running of the model deployment. The most important files are the **predictor** and **transformer scripts**. The former is used to load and run the model for making predictions. The latter is typically used to apply transformations on the model inputs at inference time before making predictions. Predictor and transformer scripts run on separate components and, therefore, scale independently of each other. + +!!! tip + Whenever you provide a predictor script, you can include the transformations of model inputs in the same script as far as they don't need to be scaled independently from the model inference process. + +Additionally, artifact files can also contain a **server configuration file** that helps detach configuration used within the model deployment from the model server or the implementation of the predictor and transformer scripts. Inside a model deployment, the local path to the configuration file is stored in the `CONFIG_FILE_PATH` environment variable (see [environment variables](../serving/predictor.md#environment-variables)). Every model deployment runs a specific version of the artifact files, commonly referred to as artifact version. ==One or more model deployments can use the same artifact version== (i.e., same predictor and transformer scripts). Artifact versions are unique for the same model version. @@ -189,7 +198,7 @@ Inside a model deployment, the local path to the artifact files is stored in the All files under `/Models` are managed by Hopsworks. Changes to artifact files cannot be reverted and can have an impact on existing model deployments. !!! tip "Additional files" - Currently, the artifact files only include predictor and transformer scripts. Support for additional files (e.g., configuration files or other resources) is coming soon. + Currently, the artifact files can only include predictor and transformer scripts, and a configuration file. Support for additional files (e.g., other resources) is coming soon. 
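+
+As an illustration, the following sketch shows how a predictor script can resolve these local paths through the environment variables mentioned above. It is not one of the official predictor templates, and the `model.pkl` file name and the use of `joblib` are assumptions made for the example:
+
+=== "Python"
+    ```python
+    import os
+
+    import joblib
+
+    class Predict(object):
+
+        def __init__(self):
+            # Local paths injected by Hopsworks into the deployment container
+            model_dir = os.environ["MODEL_FILES_PATH"]        # model files from the Model Registry
+            artifact_dir = os.environ["ARTIFACT_FILES_PATH"]  # predictor/transformer scripts and other artifact files (shown for completeness)
+
+            # Assumption: the model was exported as a single joblib file named model.pkl
+            self.model = joblib.load(os.path.join(model_dir, "model.pkl"))
+
+        def predict(self, inputs):
+            """ Serve predictions using the trained model """
+            return self.model.predict(inputs).tolist()
+    ```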
## Predictor diff --git a/docs/user_guides/mlops/serving/predictor.md b/docs/user_guides/mlops/serving/predictor.md index 2ad777ec5..79008919c 100644 --- a/docs/user_guides/mlops/serving/predictor.md +++ b/docs/user_guides/mlops/serving/predictor.md @@ -1,3 +1,7 @@ +--- +description: Documentation on how to configure a predictor for a model deployment +--- + # How To Configure A Predictor ## Introduction @@ -13,12 +17,13 @@ Predictors are the main component of deployments. They are responsible for runni 1. [Model server](#model-server) 2. [Serving tool](#serving-tool) 3. [User-provided script](#user-provided-script) - 4. [Python environments](#python-environments) - 5. [Transformer](#transformer) - 6. [Inference Logger](#inference-logger) - 7. [Inference Batcher](#inference-batcher) - 8. [Resources](#resources) - 9. [API protocol](#api-protocol) + 4. [Server configuration file](#server-configuration-file) + 5. [Python environments](#python-environments) + 6. [Transformer](#transformer) + 7. [Inference Logger](#inference-logger) + 8. [Inference Batcher](#inference-batcher) + 9. [Resources](#resources) + 10. [API protocol](#api-protocol) ## GUI @@ -85,7 +90,22 @@ To create your own it is recommended to [clone](../../projects/python/python_env

-### Step 5 (Optional): Enable KServe + +### Step 5 (Optional): Select a configuration file + +!!! note + Only available for LLM deployments. + +You can select a configuration file to be added to the [artifact files](deployment.md#artifact-files). If a predictor script is provided, this configuration file will be available inside the model deployment at the local path stored in the `CONFIG_FILE_PATH` environment variable. If a predictor script is **not** provided, this configuration file will be directly passed to the vLLM server. You can find all configuration parameters supported by the vLLM server in the [vLLM documentation](https://docs.vllm.ai/en/v0.7.1/serving/openai_compatible_server.html). + +

+<p align="center">
+  <figure>
+    <img src="../../../../assets/images/guides/mlops/serving/deployment_simple_form_vllm_conf_file.png" alt="Server configuration file in the simplified deployment form">
+    <figcaption>Select a configuration file in the simplified deployment form</figcaption>
+  </figure>
+</p>
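+
+If a predictor script is provided, it can read the selected configuration file at runtime. The following is only a sketch: it assumes a YAML configuration file and that PyYAML is available in the deployment environment, and the parameters shown are merely examples of vLLM engine arguments:
+
+=== "Python"
+    ```python
+    import os
+
+    import yaml
+
+    # Local path to the configuration file selected in the deployment form
+    config_path = os.environ["CONFIG_FILE_PATH"]
+
+    with open(config_path, "r") as f:
+        config = yaml.safe_load(f)  # e.g. {"max_model_len": 2048, "dtype": "float16"}
+    ```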

+ +### Step 6 (Optional): Enable KServe Other configuration such as the serving tool, is part of the advanced options of a deployment. To navigate to the advanced creation form, click on `Advanced options`. @@ -105,7 +125,7 @@ Here, you change the [serving tool](#serving-tool) for your deployment by enabli

-### Step 6 (Optional): Other advanced options +### Step 7 (Optional): Other advanced options Additionally, you can adjust the default values of the rest of components: @@ -143,50 +163,71 @@ Once you are done with the changes, click on `Create new deployment` at the bott def __init__(self): """ Initialization code goes here""" - pass + # Model files can be found at os.environ["MODEL_FILES_PATH"] + # self.model = ... # load your model def predict(self, inputs): """ Serve predictions using the trained model""" - pass + # Use the model to make predictions + # return self.model.predict(inputs) ``` -=== "Generate (vLLM deployments only)" +=== "Predictor (vLLM deployments only)" ``` python - from typing import Iterable, AsyncIterator, Union - - from vllm import LLM - + import os + from vllm import __version__, AsyncEngineArgs, AsyncLLMEngine + from typing import Iterable, AsyncIterator, Union, Optional from kserve.protocol.rest.openai import ( CompletionRequest, ChatPrompt, ChatCompletionRequestMessage, ) from kserve.protocol.rest.openai.types import Completion + from kserve.protocol.rest.openai.types.openapi import ChatCompletionTool + class Predictor(): def __init__(self): """ Initialization code goes here""" - # initialize vLLM backend - self.llm = LLM(os.environ["MODEL_FILES_PATH]) - - # initialize tokenizer if needed - # self.tokenizer = ... - - def apply_chat_template( - self, - messages: Iterable[ChatCompletionRequestMessage,], - ) -> ChatPrompt: - pass - - async def create_completion( - self, request: CompletionRequest - ) -> Union[Completion, AsyncIterator[Completion]]: - """Generate responses using the LLM""" - - # Completion: used for returning a single answer (batch) - # AsyncIterator[Completion]: used for returning a stream of answers - - pass + + # (optional) if any, access the configuration file via os.environ["CONFIG_FILE_PATH"] + config = ... + + print("Starting vLLM backend...") + engine_args = AsyncEngineArgs( + model=os.environ["MODEL_FILES_PATH"], + **config + ) + + # "self.vllm_engine" is required as the local variable with the vllm engine handler + self.vllm_engine = AsyncLLMEngine.from_engine_args(engine_args) + + # + # NOTE: Default implementations of the apply_chat_template and create_completion methods are already provided. + # If needed, you can override these methods as shown below + # + + #def apply_chat_template( + # self, + # messages: Iterable[ChatCompletionRequestMessage], + # chat_template: Optional[str] = None, + # tools: Optional[list[ChatCompletionTool]] = None, + #) -> ChatPrompt: + # """Converts a prompt or list of messages into a single templated prompt string""" + + # prompt = ... # apply chat template on the message to build the prompt + # return ChatPrompt(prompt=prompt) + + #async def create_completion( + # self, request: CompletionRequest + #) -> Union[Completion, AsyncIterator[Completion]]: + # """Generate responses using the vLLM engine""" + # + # generators = self.vllm_engine.generate(...) + # + # # Completion: used for returning a single answer (batch) + # # AsyncIterator[Completion]: used for returning a stream of answers + # return ... ``` !!! info "Jupyter magic" @@ -242,7 +283,7 @@ Hopsworks Model Serving supports deploying models with a Flask server for python | Flask | ✅ | python-based (scikit-learn, xgboost, pytorch...) 
| | TensorFlow Serving | ✅ | keras, tensorflow | | TorchServe | ❌ | pytorch | - | vLLM | ✅ | vLLM-supported models (see [list](https://docs.vllm.ai/en/latest/models/supported_models.html)) | + | vLLM | ✅ | vLLM-supported models (see [list](https://docs.vllm.ai/en/v0.7.1/models/supported_models.html)) | ## Serving tool @@ -279,7 +320,17 @@ The predictor script needs to implement a given template depending on the model | | TensorFlow Serving | ❌ | | KServe | Fast API | ✅ (only required for artifacts with multiple models) | | | TensorFlow Serving | ❌ | - | | vLLM | ✅ (required) | + | | vLLM | ✅ (optional) | + +### Server configuration file + +Depending on the model server, a **server configuration file** can be selected to help detach configuration used within the model deployment from the model server or the implementation of the predictor and transformer scripts. In other words, by modifying the configuration file of an existing model deployment you can adjust its settings without making changes to the predictor or transformer scripts. Inside a model deployment, the local path to the configuration file is stored in the `CONFIG_FILE_PATH` environment variable (see [environment variables](#environment-variables)). + +!!! warning "Configuration file format" + The configuration file can be of any format, except in vLLM deployments **without a predictor script** for which a YAML file is ==required==. + +!!! note "Passing arguments to vLLM via configuration file" + For vLLM deployments **without a predictor script**, the server configuration file is ==required== and it is used to configure the vLLM server. For example, you can use this configuration file to specify the chat template or LoRA modules to be loaded by the vLLM server. See all available parameters in the [official documentation](https://docs.vllm.ai/en/v0.7.1/serving/openai_compatible_server.html#command-line-arguments-for-the-server). ### Environment variables @@ -291,6 +342,7 @@ A number of different environment variables is available in the predictor to eas | ------------------- | -------------------------------------------------------------------- | | MODEL_FILES_PATH | Local path to the model files | | ARTIFACT_FILES_PATH | Local path to the artifact files | + | CONFIG_FILE_PATH | Local path to the configuration file | | DEPLOYMENT_NAME | Name of the current deployment | | MODEL_NAME | Name of the model being served by the current deployment | | MODEL_VERSION | Version of the model being served by the current deployment | @@ -302,13 +354,13 @@ Depending on the model server and serving tool used in the model deployment, you ??? 
info "Show supported Python environments" - | Serving tool | Model server | Editable | Predictor | Transformer | - | ------------ | ------------------ | -------- | ----------------------------------- | ------------------------------ | - | Kubernetes | Flask server | ❌ | `pandas-inference-pipeline` only | ❌ | - | | TensorFlow Serving | ❌ | (official) tensorflow serving image | ❌ | - | KServe | Fast API | ✅ | any `inference-pipeline` image | any `inference-pipeline` image | - | | TensorFlow Serving | ✅ | (official) tensorflow serving image | any `inference-pipeline` image | - | | vLLM | ✅ | `vllm-inference-pipeline` only | any `inference-pipeline` image | + | Serving tool | Model server | Editable | Predictor | Transformer | + | ------------ | ------------------ | -------- | ------------------------------------------ | ------------------------------ | + | Kubernetes | Flask server | ❌ | `pandas-inference-pipeline` only | ❌ | + | | TensorFlow Serving | ❌ | (official) tensorflow serving image | ❌ | + | KServe | Fast API | ✅ | any `inference-pipeline` image | any `inference-pipeline` image | + | | TensorFlow Serving | ✅ | (official) tensorflow serving image | any `inference-pipeline` image | + | | vLLM | ✅ | `vllm-inference-pipeline` or `vllm-openai` | any `inference-pipeline` image | !!! note The selected Python environment is used for both predictor and transformer. Support for selecting a different Python environment for the predictor and transformer is coming soon. diff --git a/docs/user_guides/mlops/serving/resources.md b/docs/user_guides/mlops/serving/resources.md index d79bb7edb..27a22ba02 100644 --- a/docs/user_guides/mlops/serving/resources.md +++ b/docs/user_guides/mlops/serving/resources.md @@ -1,4 +1,8 @@ -# How To Allocate Resources For A Deployment +--- +description: Documentation on how to allocate resources to a model deployment +--- + +# How To Allocate Resources To A Model Deployment ## Introduction diff --git a/docs/user_guides/mlops/serving/transformer.md b/docs/user_guides/mlops/serving/transformer.md index bd851c304..6d3466932 100644 --- a/docs/user_guides/mlops/serving/transformer.md +++ b/docs/user_guides/mlops/serving/transformer.md @@ -1,3 +1,7 @@ +--- +description: Documentation on how to configure a KServe transformer for a model deployment +--- + # How To Configure A Transformer ## Introduction diff --git a/docs/user_guides/mlops/serving/troubleshooting.md b/docs/user_guides/mlops/serving/troubleshooting.md index 78521e79b..df18c156c 100644 --- a/docs/user_guides/mlops/serving/troubleshooting.md +++ b/docs/user_guides/mlops/serving/troubleshooting.md @@ -1,4 +1,8 @@ -# How To Troubleshoot A Deployment +--- +description: Documentation on how to troubleshoot a model deployment +--- + +# How To Troubleshoot A Model Deployment ## Introduction diff --git a/mkdocs.yml b/mkdocs.yml index 0de1d2bc0..1d26480ad 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -182,6 +182,7 @@ nav: - user_guides/mlops/registry/index.md - Frameworks: - TensorFlow: user_guides/mlops/registry/frameworks/tf.md + - Torch: user_guides/mlops/registry/frameworks/tch.md - Scikit-learn: user_guides/mlops/registry/frameworks/skl.md - LLM: user_guides/mlops/registry/frameworks/llm.md - Python: user_guides/mlops/registry/frameworks/python.md