From e2fb95e789fbef5c330e92b92a9ca0855564c1c0 Mon Sep 17 00:00:00 2001 From: Naomi Pentrel <5212232+npentrel@users.noreply.github.com> Date: Thu, 9 Oct 2025 18:20:12 +0200 Subject: [PATCH 1/2] DOCS-3586: Update deploy.md --- docs/data-ai/ai/deploy.md | 75 +++++++++++++++++++++------------------ 1 file changed, 41 insertions(+), 34 deletions(-) diff --git a/docs/data-ai/ai/deploy.md b/docs/data-ai/ai/deploy.md index fd510afb86..d4748415a8 100644 --- a/docs/data-ai/ai/deploy.md +++ b/docs/data-ai/ai/deploy.md @@ -19,75 +19,82 @@ aliases: - /services/ml/ml-models/ - /registry/ml-models/ - /manage/ml/ - - /how-tos/deploy-ml/ - /how-tos/train-deploy-ml/ - /ml/deploy/ - /ml/ - /services/ml/deploy/ - /how-tos/deploy-ml/ - /manage/data/deploy-model/ +date: "2025-10-09" --- -After training or uploading a machine learning model, use a machine learning (ML) model service to deploy the ML model to your machine. +Use a machine learning (ML) model service to deploy an ML model to your machine. -## Deploy your ML model on an ML model service +## What is an ML model service? -1. Navigate to the **CONFIGURE** tab of one of your machines. -2. Add an ML model service that supports the ML model you want to use. - - For example, use the `ML model / TFLite CPU` service for TFlite ML models that you trained with Viam's built-in training. -3. Click **Select model** and select a model from your organization or the registry. -4. Save your config. -5. Use the **Test** panel to test your model. - -{{}} +An ML model service is a Viam service that runs machine learning models on your machine. The service works with models trained on Viam or elsewhere, and supports various frameworks including TensorFlow Lite, ONNX, TensorFlow, and PyTorch. -{{% expand "Want more information about model framework and hardware support for each ML model service? Click here." 
%}} +## Supported frameworks and hardware Viam currently supports the following frameworks: | Model Framework | ML Model Service | Hardware Support | Description | | --------------- | --------------- | ------------------- | ----------- | -| [TensorFlow Lite](https://www.tensorflow.org/lite) | [`tflite_cpu`](https://app.viam.com/module/viam/tflite_cpu) | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the [model requirements.](https://app.viam.com/module/viam/tflite_cpu) | -| [ONNX](https://onnx.ai/) | [`onnx-cpu`](https://app.viam.com/module/viam/onnx-cpu), [`triton`](https://app.viam.com/module/viam/mlmodelservice-triton-jetpack) | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware inference but runs on a wide variety of machines. | -| [TensorFlow](https://www.tensorflow.org/) | [`tensorflow-cpu`](https://app.viam.com/module/viam/tensorflow-cpu), [`triton`](https://app.viam.com/module/viam/mlmodelservice-triton-jetpack) | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework that is made for more production-ready systems. | -| [PyTorch](https://pytorch.org/) | [`torch-cpu`](https://app.viam.com/module/viam/torch-cpu), [`triton`](https://app.viam.com/module/viam/mlmodelservice-triton-jetpack) | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework that was built primarily for research. Because of this, it is much faster to do iterative development with (model doesn’t have to be predefined) but it is not as “production ready” as TensorFlow. It is the most common framework for OSS models because it is the go-to framework for ML researchers. 
| +| [TensorFlow Lite](https://www.tensorflow.org/lite) | [`tflite_cpu`](https://app.viam.com/module/viam/tflite_cpu) | linux/amd64, linux/arm64, darwin/arm64, darwin/amd64 | Quantized version of TensorFlow that has reduced compatibility for models but supports more hardware. Uploaded models must adhere to the [model requirements](https://app.viam.com/module/viam/tflite_cpu). | +| [ONNX](https://onnx.ai/) | [`onnx-cpu`](https://app.viam.com/module/viam/onnx-cpu), [`triton`](https://app.viam.com/module/viam/mlmodelservice-triton-jetpack) | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | Universal format that is not optimized for hardware-specific inference but runs on a wide variety of machines. | +| [TensorFlow](https://www.tensorflow.org/) | [`tensorflow-cpu`](https://app.viam.com/module/viam/tensorflow-cpu), [`triton`](https://app.viam.com/module/viam/mlmodelservice-triton-jetpack) | Nvidia GPU, linux/amd64, linux/arm64, darwin/arm64 | A full framework designed for more production-ready systems. | +| [PyTorch](https://pytorch.org/) | [`torch-cpu`](https://app.viam.com/module/viam/torch-cpu), [`triton`](https://app.viam.com/module/viam/mlmodelservice-triton-jetpack) | Nvidia GPU, linux/arm64, darwin/arm64 | A full framework that was built primarily for research. Because of this, it is much faster to do iterative development with (the model doesn't have to be predefined) but it is not as "production ready" as TensorFlow. It is the most common framework for open-source models because it is the go-to framework for ML researchers. | {{< alert title="Note" color="note" >}} -For some models of the ML model service, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/) for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU. 
+For some ML model services, like the [Triton ML model service](https://github.com/viamrobotics/viam-mlmodelservice-triton/) for Jetson boards, you can configure the service to use either the available CPU or a dedicated GPU. {{< /alert >}} -{{< /expand>}} +## Deploy your ML model + +1. Navigate to the **CONFIGURE** tab of one of your machines. +2. Add an ML model service that supports the ML model you want to use. + - For example, use the `ML model / TFLite CPU` service for TFLite ML models that you trained with Viam's built-in training. +3. Click **Select model** and select a model from your organization or the registry. +4. Save your config. +5. Use the **Test** panel to test your model. + +### Available ML model services + +{{}} -### Models available to deploy on the ML Model service +## Available machine learning models -You can also use these publicly available machine learning models with an ML model service: +You can use these publicly available machine learning models: {{}} +### Model sources + +The service works with models from various sources: + +- You can [train TensorFlow or TensorFlow Lite](/data-ai/train/train-tf-tflite/) or [other model frameworks](/data-ai/train/train/) on data from your machines. +- You can use [ML models](https://app.viam.com/registry?type=ML+Model) from the [registry](https://app.viam.com/registry). +- You can upload externally trained models from a model file on the [**MODELS** tab](https://app.viam.com/models). +- You can use models trained outside the Viam platform whose files are on your machine. + See the documentation for the ML model service you're using (pick one that supports your model framework) for instructions on this.
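The deploy steps above edit your machine's JSON configuration. As a rough sketch, the resulting service entry might look something like the following — the service name, registry model triplet, and attribute names shown here are illustrative assumptions, so check the README of the ML model service module you pick for its exact fields:

```json
{
  "name": "my-mlmodel-service",
  "api": "rdk:service:mlmodel",
  "model": "viam:mlmodel-tflite:tflite_cpu",
  "attributes": {
    "model_path": "${packages.ml_model.my-model}/my-model.tflite",
    "label_path": "${packages.ml_model.my-model}/labels.txt",
    "num_threads": 1
  }
}
```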
+ +{{< alert title="Add support for other models" color="tip" >}} +ML models must be designed in particular shapes to work with the `mlmodel` [classification](/operate/reference/services/vision/mlmodel/) or [detection](/operate/reference/services/vision/mlmodel/) models of Viam's [vision service](/operate/reference/services/vision/). +See [ML Model Design](/data-ai/reference/mlmodel-design/) to design a modular ML model service with models that work with vision. +{{< /alert >}} + ### Deploy a specific version of an ML model When you add a model to the ML model service in the app interface, it automatically uses the latest version. In the ML model service panel, you can change the version in the version dropdown. Save your config to use your specified version of the ML model. -## How the ML model service works - -The service works with models trained on Viam or elsewhere: - -- You can [train TensorFlow or TensorFlow lite](/data-ai/train/train-tf-tflite/) or [other model frameworks](/data-ai/train/train/) on data from your machines. -- You can use [ML models](https://app.viam.com/registry?type=ML+Model) from the [registry](https://app.viam.com/registry). -- You can upload externally trained models from a model file on the [**MODELS** tab](https://app.viam.com/models). -- You can use a [model](/data-ai/ai/deploy/#deploy-your-ml-model-on-an-ml-model-service) trained outside the Viam platform whose files are on your machine. See the documentation of the model of ML model service you're using (pick one that supports your model framework) for instructions on this. +## Next steps -On its own the ML model service only runs the model. +On its own, the ML model service only runs the model. After deploying your model, you need to configure an additional service to use the deployed model. For example, you can configure an [`mlmodel` vision service](/operate/reference/services/vision/) to visualize the inferences your model makes. 
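Whatever service consumes the deployed model, the downstream logic typically reduces to filtering the model's predictions by label and confidence before acting on them. A minimal sketch of that step in plain Python — the `Detection` shape and the 0.6 threshold are assumptions for illustration, not the SDK's actual types:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    class_name: str
    confidence: float

def filter_detections(detections, wanted_label, min_confidence=0.6):
    """Keep only detections with the wanted label at or above the confidence threshold."""
    return [
        d for d in detections
        if d.class_name == wanted_label and d.confidence >= min_confidence
    ]

# Only the sufficiently confident "person" detection survives.
results = filter_detections(
    [Detection("person", 0.91), Detection("person", 0.42), Detection("cat", 0.88)],
    "person",
)
print([d.confidence for d in results])  # → [0.91]
```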
Follow our docs to [run inference](/data-ai/ai/run-inference/) to add an `mlmodel` vision service and see inferences. For other use cases, consider [creating custom functionality with a module](/operate/modules/other-hardware/create-module/). - -{{< alert title="Add support for other models" color="tip" >}} -ML models must be designed in particular shapes to work with the `mlmodel` [classification](/operate/reference/services/vision/mlmodel/) or [detection](/operate/reference/services/vision/mlmodel/) model of Viam's [vision service](/operate/reference/services/vision/). -See [ML Model Design](/data-ai/reference/mlmodel-design/) to design a modular ML model service with models that work with vision. -{{< /alert >}} From a360db663442468c607adb9e0ef32398bea195e0 Mon Sep 17 00:00:00 2001 From: Naomi Pentrel <5212232+npentrel@users.noreply.github.com> Date: Fri, 10 Oct 2025 22:29:59 +0200 Subject: [PATCH 2/2] Alert page --- docs/data-ai/ai/act.md | 16 ++ docs/data-ai/ai/alert.md | 147 +++++++++++------- docs/dev/reference/changelog.md | 6 +- .../reference/services/vision/mlmodel.md | 2 +- 4 files changed, 110 insertions(+), 61 deletions(-) diff --git a/docs/data-ai/ai/act.md b/docs/data-ai/ai/act.md index 7a01ca971a..a8f461b9c3 100644 --- a/docs/data-ai/ai/act.md +++ b/docs/data-ai/ai/act.md @@ -15,6 +15,20 @@ At the end of each step, you'll learn how to apply the step to a fictional examp ## Prerequisites +{{% expand "A running machine connected to Viam." %}} + +{{% snippet "setup-both.md" %}} + +{{% /expand%}} + +{{< expand "A configured camera and vision service." >}} + +Follow the instructions to [configure a camera](/operate/reference/components/camera/) and [run inference](/data-ai/ai/run-inference/). + +{{< /expand >}} + +{{% expand "The Viam CLI." 
%}} + {{< table >}} {{% tablestep start=1 %}} **Install the CLI.** @@ -47,6 +61,8 @@ Organizations for "user@viam.com": {{% /tablestep %}} {{< /table >}} +{{% /expand%}} + ## Create a module To program your machine's behavior based on the output of a vision service, create a resource that makes use of the vision service as input and controls the resources that should actuate based on the input. diff --git a/docs/data-ai/ai/alert.md b/docs/data-ai/ai/alert.md index 5354cfda73..a5a5a06122 100644 --- a/docs/data-ai/ai/alert.md +++ b/docs/data-ai/ai/alert.md @@ -5,36 +5,45 @@ weight: 60 layout: "docs" type: "docs" description: "Use machine learning and send alerts when an inference meets certain criteria." +date: "2025-10-10" --- -Triggers can send alerts in the form of email notifications or webhook requests when a new data is synced to the cloud. -If you then configure a filtered camera or another modular resource that uploads data only when a specific detection or classification is made, you get a notification. +Triggers can send alerts in the form of email notifications or webhook requests when new data syncs to the cloud. + +This guide shows you how to set up an alert system that notifies you when specific objects or classifications are detected in your camera feed. +The process involves three resources: + +1. **Filtered Camera**: Filters images passed to the data management service +2. **Data Management**: Syncs filtered images to the cloud +3. **Triggers**: Sends alerts when data syncs For example, a trigger could alert you when a camera feed detects an anomaly. ### Prerequisites +Before setting up alerts, you need: + {{% expand "A running machine connected to Viam." %}} {{% snippet "setup-both.md" %}} -{{% /expand%}} +{{% /expand %}} {{< expand "A configured camera and vision service." >}} +You'll need a working vision service that can detect objects or classifications. +The filtered camera will use this service to determine which images to capture. 
Follow the instructions to [configure a camera](/operate/reference/components/camera/) and [run inference](/data-ai/ai/run-inference/). {{< /expand >}} ## Configure a filtered camera -You can use a camera and vision service to sync only images where an inference is made with the [`filtered-camera`](https://app.viam.com/module/viam/filtered-camera) {{< glossary_tooltip term_id="module" text="module" >}}. -This camera module takes the vision service and applies it to your webcam feed, filtering the output. -With this filtering, you can save only images that contain people who match your filtering criteria. +The [`filtered-camera`](https://app.viam.com/module/viam/filtered-camera) {{< glossary_tooltip term_id="module" text="module" >}} functions as a normal camera unless used with the data management service. +When you configure the data management service to capture and sync images from the camera, the camera will only pass images to the data management service if they meet the defined criteria. +The camera module takes a vision service and applies it to a camera feed using the generated predictions to filter the output for the data management service. -Configure the camera module with classification or object labels according to the labels your ML model provides that you want to alert on. - -Complete the following steps to configure your module: +Configure the filtered camera module to capture images when specific predictions occur: 1. Navigate to your machine's **CONFIGURE** tab. @@ -49,24 +58,26 @@ Complete the following steps to configure your module: {{< tabs >}} {{% tab name="Template" %}} - Replace the `` and `` placeholders with values for your use case: + Replace the `` and `` values with the names of your camera and vision service. 
+ + **Choose your detection type**: + + - For **object detection** (bounding boxes around objects): Use the `objects` configuration with the label you want to alert on and remove `classifications` + - For **classification** (image-level labels): Use the `classifications` configuration with the label you want to alert on and remove `objects` + + The confidence threshold (0.0-1.0) determines how certain the vision model must be before capturing photos. + For example, a confidence threshold of `0.6` only captures photos when the vision model is at least 60% sure that it has correctly identified the desired label. ```json {class="line-numbers linkable-line-numbers"} { - "camera": "", - "vision_services": [ - { - "vision": , - "classifications": ..., - "objects": ... - }, - { - "vision": , - "classifications": ..., - "objects": ... - } - ], - "window_seconds": , + "camera": "", + "vision_services": [ + { + "vision": "", + "classifications": { "