diff --git a/README.md b/README.md
index b4e0c35cb..25333153f 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@
### Overview

-[Deep Learning Streamer](./docs/source/index.md) (**DL Streamer**) Pipeline Framework is an open-source streaming media analytics framework, based on [GStreamer*](https://gstreamer.freedesktop.org) multimedia framework, for creating complex media analytics pipelines for the Cloud or at the Edge.
+[Deep Learning Streamer](./docs/user-guide/index.md) (**DL Streamer**) Pipeline Framework is an open-source streaming media analytics framework, based on the [GStreamer*](https://gstreamer.freedesktop.org) multimedia framework, for creating complex media analytics pipelines for the Cloud or at the Edge.
**Media analytics** is the analysis of audio & video streams to detect, classify, track, identify and count objects, events and people. The analyzed results can be used to take actions, coordinate events, identify patterns and gain insights across multiple domains: retail store and events facilities analytics, warehouse and parking management, industrial inspection, safety and regulatory compliance, security monitoring, and many other.
@@ -15,17 +15,17 @@ DL Streamer Pipeline Framework is optimized for performance and functional inter
* Image processing plugins based on [OpenCV](https://opencv.org/) and [DPC++](https://www.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/oneapi-programming-model/data-parallel-c-dpc.html)
* Hundreds other [GStreamer* plugins](https://gstreamer.freedesktop.org/documentation/plugins_doc.html) built on various open-source libraries for media input and output, muxing and demuxing, decode and encode
-[This page](./docs/source/elements/elements.md) contains a list of elements provided in this repository.
+[This page](./docs/user-guide/elements/elements.md) contains a list of elements provided in this repository.
## Prerequisites
-Please refer to [System Requirements](./docs/source/get_started/system_requirements.md) for details.
+Please refer to [System Requirements](./docs/user-guide/get_started/system_requirements.md) for details.
## Installation
-Please refer to [Install Guide](./docs/source/get_started/install/install_guide_ubuntu.md) for installation options
-1. [Install APT packages](./docs/source/get_started/install/install_guide_ubuntu.md#option-1-install-intel-dl-streamer-pipeline-framework-from-debian-packages-using-apt-repository)
-2. [Run Docker image](./docs/source/get_started/install/install_guide_ubuntu.md#option-2-install-docker-image-from-docker-hub-and-run-it)
-3. [Compile from source code](./docs/source/dev_guide/advanced_install/advanced_install_guide_compilation.md)
-4. [Build Docker image from source code](./docs/source/dev_guide/advanced_install/advanced_build_docker_image.md)
+Please refer to the [Install Guide](./docs/user-guide/get_started/install/install_guide_ubuntu.md) for installation options:
+1. [Install APT packages](./docs/user-guide/get_started/install/install_guide_ubuntu.md#option-1-install-intel-dl-streamer-pipeline-framework-from-debian-packages-using-apt-repository)
+2. [Run Docker image](./docs/user-guide/get_started/install/install_guide_ubuntu.md#option-2-install-docker-image-from-docker-hub-and-run-it)
+3. [Compile from source code](./docs/user-guide/dev_guide/advanced_install/advanced_install_guide_compilation.md)
+4. [Build Docker image from source code](./docs/user-guide/dev_guide/advanced_install/advanced_build_docker_image.md)
To see the full list of installed components check the [dockerfile content for Ubuntu24](https://github.com/open-edge-platform/dlstreamer/blob/main/docker/ubuntu/ubuntu24.Dockerfile)
@@ -35,12 +35,12 @@ To see the full list of installed components check the [dockerfile content for U
## Models
DL Streamer supports models in OpenVINO™ IR and ONNX* formats, including VLMs, object detection, object classification, human pose detection, sound classification, semantic segmentation, and other use cases on SSD, MobileNet, YOLO, Tiny YOLO, EfficientDet, ResNet, FasterRCNN, and other backbones.
-See the full list of [supported models](./docs/source/supported_models.md), including models pre-trained with [Intel® Geti™ Software](), or explore over 70 pre-trained models in [OpenVINO™ Open Model Zoo](https://docs.openvino.ai/latest/omz_models_group_intel.html) with corresponding [model-proc files](https://github.com/dlstreamer/dlstreamer/tree/main/samples/model_proc) (pre- and post-processing specifications).
+See the full list of [supported models](./docs/user-guide/supported_models.md), including models pre-trained with [Intel® Geti™ Software](), or explore over 70 pre-trained models in [OpenVINO™ Open Model Zoo](https://docs.openvino.ai/latest/omz_models_group_intel.html) with corresponding [model-proc files](https://github.com/dlstreamer/dlstreamer/tree/main/samples/model_proc) (pre- and post-processing specifications).
## Other Useful Links
-* [Get Started](./docs/source/get_started/get_started_index.md)
-* [Developer Guide](./docs/source/dev_guide/dev_guide_index.md)
-* [API Reference](./docs/source/api_ref/api_reference.rst)
+* [Get Started](./docs/user-guide/get_started/get_started_index.md)
+* [Developer Guide](./docs/user-guide/dev_guide/dev_guide_index.md)
+* [API Reference](./docs/user-guide/api_ref/api_reference.rst)
---
\* Other names and brands may be claimed as the property of others.
diff --git a/RELEASE_NOTES.md b/RELEASE_NOTES.md
index e565357c8..c400c669b 100644
--- a/RELEASE_NOTES.md
+++ b/RELEASE_NOTES.md
@@ -13,25 +13,25 @@ The complete solution leverages:
| Element | Description |
|---|---|
- | [gvaattachroi](./docs/source/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of full frame. |
- | [gvaaudiodetect](./docs/source/elements/gvaaudiodetect.md) | Performs audio event detection using AclNet model. |
- | [gvaaudiotranscribe](./docs/source/elements/gvaaudiotranscribe.md) | Performs audio transcription using OpenVino GenAI Whisper model. |
- | [gvaclassify](./docs/source/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
- | [gvadetect](./docs/source/elements/gvadetect.md) | Performs object detection on a full-frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN etc. Outputs the ROI for detected objects. |
- | [gvafpscounter](./docs/source/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
- | [gvagenai](./docs/source/elements/gvagenai.md) | Performs inference with Vision Language Models using OpenVINO™ GenAI, accepts video and text prompt as an input, and outputs text description. It can be used to generate text summarization from video. |
- | [gvainference](./docs/source/elements/gvainference.md) | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input. |
- | [gvametaaggregate](./docs/source/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches |
- | [gvametaconvert](./docs/source/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format. |
- | [gvametapublish](./docs/source/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
- | [gvamotiondetect](./docs/source/elements/gvamotiondetect.md) | Performs lightweight motion detection on NV12 video frames and emits motion regions of interest (ROIs) as analytics metadata. |
- | [gvapython](./docs/source/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
- | [gvarealsense](./docs/source/elements/gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
- | [gvatrack](./docs/source/elements/gvatrack.md) | Performs object tracking using zero-term, or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
- | [gvawatermark](./docs/source/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
-
-For the details on supported platforms, please refer to [System Requirements](./docs/source/get_started/system_requirements.md).
-For installing Pipeline Framework with the prebuilt binaries or Docker\* or to build the binaries from the open source, refer to [Intel® DL Streamer Pipeline Framework installation guide](./docs/source/get_started/install/install_guide_index.md).
+ | [gvaattachroi](./docs/user-guide/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of the full frame. |
+ | [gvaaudiodetect](./docs/user-guide/elements/gvaaudiodetect.md) | Performs audio event detection using the AclNet model. |
+ | [gvaaudiotranscribe](./docs/user-guide/elements/gvaaudiotranscribe.md) | Performs audio transcription using the OpenVINO™ GenAI Whisper model. |
+ | [gvaclassify](./docs/user-guide/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
+ | [gvadetect](./docs/user-guide/elements/gvadetect.md) | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
+ | [gvafpscounter](./docs/user-guide/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
+ | [gvagenai](./docs/user-guide/elements/gvagenai.md) | Performs inference with Vision Language Models using OpenVINO™ GenAI. Accepts video and a text prompt as input, and outputs a text description. It can be used to generate text summaries of video. |
+ | [gvainference](./docs/user-guide/elements/gvainference.md) | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input. |
+ | [gvametaaggregate](./docs/user-guide/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches. |
+ | [gvametaconvert](./docs/user-guide/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format. |
+ | [gvametapublish](./docs/user-guide/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
+ | [gvamotiondetect](./docs/user-guide/elements/gvamotiondetect.md) | Performs lightweight motion detection on NV12 video frames and emits motion regions of interest (ROIs) as analytics metadata. |
+ | [gvapython](./docs/user-guide/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
+ | [gvarealsense](./docs/user-guide/elements/gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
+ | [gvatrack](./docs/user-guide/elements/gvatrack.md) | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
+ | [gvawatermark](./docs/user-guide/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
+
+For details on supported platforms, please refer to [System Requirements](./docs/user-guide/get_started/system_requirements.md).
+To install Pipeline Framework from prebuilt binaries or a Docker\* image, or to build it from source, refer to the [Intel® DL Streamer Pipeline Framework installation guide](./docs/user-guide/get_started/install/install_guide_index.md).
### New in this Release
| Title | High-level description |
@@ -97,23 +97,23 @@ The complete solution leverages:
| Element | Description |
|---|---|
- | [gvadetect](./docs/source/elements/gvadetect.md) | Performs object detection on a full-frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN etc. Outputs the ROI for detected objects. |
- | [gvaclassify](./docs/source/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
- | [gvainference](./docs/source/elements/gvainference.md) | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input. |
- | [gvatrack](./docs/source/elements/gvatrack.md) | Performs object tracking using zero-term, or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
- | [gvaaudiodetect](./docs/source/elements/gvaaudiodetect.md) | Performs audio event detection using AclNet model. |
- | [gvagenai](./docs/source/elements/gvagenai.md) | Performs inference with Vision Language Models using OpenVINO™ GenAI, accepts video and text prompt as an input, and outputs text description. It can be used to generate text summarization from video. |
- | [gvaattachroi](./docs/source/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of full frame. |
- | [gvafpscounter](./docs/source/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
- | [gvametaaggregate](./docs/source/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches |
- | [gvametaconvert](./docs/source/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format. |
- | [gvametapublish](./docs/source/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
- | [gvapython](./docs/source/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
- | [gvarealsense](./docs/source/elements/gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
- | [gvawatermark](./docs/source/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
-
-For the details on supported platforms, please refer to [System Requirements](./docs/source/get_started/system_requirements.md).
-For installing Pipeline Framework with the prebuilt binaries or Docker\* or to build the binaries from the open source, refer to [Intel® DL Streamer Pipeline Framework installation guide](./docs/source/get_started/install/install_guide_index.md).
+ | [gvadetect](./docs/user-guide/elements/gvadetect.md) | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
+ | [gvaclassify](./docs/user-guide/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
+ | [gvainference](./docs/user-guide/elements/gvainference.md) | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input. |
+ | [gvatrack](./docs/user-guide/elements/gvatrack.md) | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
+ | [gvaaudiodetect](./docs/user-guide/elements/gvaaudiodetect.md) | Performs audio event detection using the AclNet model. |
+ | [gvagenai](./docs/user-guide/elements/gvagenai.md) | Performs inference with Vision Language Models using OpenVINO™ GenAI. Accepts video and a text prompt as input, and outputs a text description. It can be used to generate text summaries of video. |
+ | [gvaattachroi](./docs/user-guide/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of the full frame. |
+ | [gvafpscounter](./docs/user-guide/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
+ | [gvametaaggregate](./docs/user-guide/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches. |
+ | [gvametaconvert](./docs/user-guide/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format. |
+ | [gvametapublish](./docs/user-guide/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
+ | [gvapython](./docs/user-guide/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
+ | [gvarealsense](./docs/user-guide/elements/gvarealsense.md) | Provides integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
+ | [gvawatermark](./docs/user-guide/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
+
+For details on supported platforms, please refer to [System Requirements](./docs/user-guide/get_started/system_requirements.md).
+To install Pipeline Framework from prebuilt binaries or a Docker\* image, or to build it from source, refer to the [Intel® DL Streamer Pipeline Framework installation guide](./docs/user-guide/get_started/install/install_guide_index.md).
### New in this Release
@@ -130,7 +130,7 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b
| License plate recognition use case support | Added support for models that allow to recognize license plates; [sample](./samples/gstreamer/gst_launch/license_plate_recognition) added as reference. |
| Deep Scenario model support | Commercial 3D model support |
| Anomaly model support | Added support for anomaly model, [sample](./samples/gstreamer/gst_launch/geti_deployment) added as reference. |
-| RealSense element support | New [gvarealsense](./docs/source/elements/gvarealsense.md) element implementation providing basic integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
+| RealSense element support | New [gvarealsense](./docs/user-guide/elements/gvarealsense.md) element providing basic integration with Intel RealSense cameras, enabling video and depth stream capture for use in GStreamer pipelines. |
| OpenVINO 2025.3 version support | Support of recent OpenVINO version added. |
| GStreamer 1.26.6 version support | Support of recent GStreamer version added. |
| NPU 1.19 version driver support | Support of recent NPU driver version added. |
@@ -164,23 +164,23 @@ The complete solution leverages:
| Element| Description|
|--------|------------|
-| [gvadetect](./docs/source/elements/gvadetect.md)| Performs object detection on a full-frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN etc. Outputs the ROI for detected objects. |
-| [gvaclassify](./docs/source/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
-| [gvainference](./docs/source/elements/gvainference.md) | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input.|
-| [gvatrack](./docs/source/elements/gvatrack.md)| Performs object tracking using zero-term, or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
-| [gvaaudiodetect](./docs/source/elements/gvaaudiodetect.md) | Performs audio event detection using AclNet model. |
-| [gvaattachroi](./docs/source/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of full frame.|
-| [gvafpscounter](./docs/source/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
-| [gvametaaggregate](./docs/source/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches |
-| [gvametaconvert](./docs/source/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format.|
-| [gvametapublish](./docs/source/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
-| [gvapython](./docs/source/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks.|
-| [gvawatermark](./docs/source/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
+| [gvadetect](./docs/user-guide/elements/gvadetect.md)| Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4-v11, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
+| [gvaclassify](./docs/user-guide/elements/gvaclassify.md) | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
+| [gvainference](./docs/user-guide/elements/gvainference.md) | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input.|
+| [gvatrack](./docs/user-guide/elements/gvatrack.md)| Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
+| [gvaaudiodetect](./docs/user-guide/elements/gvaaudiodetect.md) | Performs audio event detection using the AclNet model. |
+| [gvaattachroi](./docs/user-guide/elements/gvaattachroi.md) | Adds user-defined regions of interest to perform inference on, instead of the full frame.|
+| [gvafpscounter](./docs/user-guide/elements/gvafpscounter.md) | Measures frames per second across multiple streams in a single process. |
+| [gvametaaggregate](./docs/user-guide/elements/gvametaaggregate.md) | Aggregates inference results from multiple pipeline branches. |
+| [gvametaconvert](./docs/user-guide/elements/gvametaconvert.md) | Converts the metadata structure to the JSON format.|
+| [gvametapublish](./docs/user-guide/elements/gvametapublish.md) | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
+| [gvapython](./docs/user-guide/elements/gvapython.md) | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks.|
+| [gvawatermark](./docs/user-guide/elements/gvawatermark.md) | Overlays the metadata on the video frame to visualize the inference results. |
-For the details of supported platforms, please refer to [System Requirements](./docs/source/get_started/system_requirements.md) section.
+For details on supported platforms, please refer to the [System Requirements](./docs/user-guide/get_started/system_requirements.md) section.
-For installing Pipeline Framework with the prebuilt binaries or Docker\* or to build the binaries from the open source, please refer to [DL Streamer Pipeline Framework installation guide](./docs/source/get_started/install/install_guide_index.md)
+To install Pipeline Framework from prebuilt binaries or a Docker\* image, or to build it from source, please refer to the [DL Streamer Pipeline Framework installation guide](./docs/user-guide/get_started/install/install_guide_index.md).
## New in this Release
@@ -431,7 +431,7 @@ For installing Pipeline Framework with the prebuilt binaries or Docker\* or to b
### System Requirements
-Please refer to [DL Streamer documentation](./docs/source/get_started/system_requirements.md).
+Please refer to [DL Streamer documentation](./docs/user-guide/get_started/system_requirements.md).
## Installation Notes
@@ -441,7 +441,7 @@ There are several installation options for Pipeline Framework:
1. Build Docker image from docker file and run Docker image
1. Build Pipeline Framework from source code
-For more detailed instructions please refer to [DL Streamer Pipeline Framework installation guide](./docs/source/get_started/install/install_guide_index.md).
+For more detailed instructions please refer to [DL Streamer Pipeline Framework installation guide](./docs/user-guide/get_started/install/install_guide_index.md).
## Samples
diff --git a/SPECS/README.md b/SPECS/README.md
index f929806b6..fd4f2818e 100644
--- a/SPECS/README.md
+++ b/SPECS/README.md
@@ -175,8 +175,8 @@ source /opt/intel/dlstreamer/setupvars.sh
After installation, you can try DL Streamer pipelines as described in:
-- [Tutorial](../docs/source/get_started/tutorial.md)
-- [Performance Guide](../docs/source/dev_guide/performance_guide.md)
+- [Tutorial](../docs/user-guide/get_started/tutorial.md)
+- [Performance Guide](../docs/user-guide/dev_guide/performance_guide.md)
These resources provide sample pipelines and performance validation steps for DL Streamer GStreamer elements.
diff --git a/docs/.gitignore b/docs/.gitignore
index 4c08e20e6..61bf5ddb8 100644
--- a/docs/.gitignore
+++ b/docs/.gitignore
@@ -1,6 +1,6 @@
build**/
-source/_doxygen/src/
-source/_doxygen/src-api2.0/
+user-guide/_doxygen/src/
+user-guide/_doxygen/src-api2.0/
model_index.yaml
diff --git a/docs/README.md b/docs/README.md
index 79235ed8d..0e5152d79 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -19,7 +19,7 @@ Follow the steps below:
2. Build `.rst` pages in the format you need, e.g. `html`:
```shell
- sphinx-build -b html ./source ./build-html
+ sphinx-build -b html ./user-guide ./build-html
```
If Running Sphinx shows error like below:
```
@@ -34,35 +34,35 @@ Follow the steps below:
```
To see `html` built documentation open `./build-html/index.html` file.
-3. Run spelling check of `.rst` pages:
- Update conf.py adding `sphinxcontrib.spelling` to
+3. Run a spelling check of the `.rst` pages:
+   Update `conf.py`, adding `sphinxcontrib.spelling` to
```
extensions = ... , 'sphinxcontrib.mermaid', 'sphinxcontrib.spelling']
```
-
+
Dictionary configuration can be done setting up
-
+
```
#Dictionary selected
spelling_lang='en_US'
-
- #Path of file containing a list of words (one word per line) known to be spelled correctly but that
+
+ #Path of file containing a list of words (one word per line) known to be spelled correctly but that
#do not appear in the language dictionary selected
spelling_word_list_filename='/spelling_wordlist.txt'
#Enable suggestions for misspelled words
spelling_show_suggestions=True
```
-
+
```shell
- sphinx-build -b spelling ./source ./build-spelling
+ sphinx-build -b spelling ./user-guide ./build-spelling
```
- Each `.rst` page is accompained by a `.spelling` report file with the list of misspelled words and the location.
+   Each `.rst` page is accompanied by a `.spelling` report file listing the misspelled words and their locations.
4. Run link check of `.rst` pages:
```shell
- sphinx-build -b linkcheck ./source ./build-linkcheck
+ sphinx-build -b linkcheck ./user-guide ./build-linkcheck
```
Two reports file are generated: `output.json` with the complete list of URL checked; `output.txt` with the list of broken links.
diff --git a/docs/_docker/Dockerfile b/docs/_docker/Dockerfile
index da8207f24..053ef9011 100644
--- a/docs/_docker/Dockerfile
+++ b/docs/_docker/Dockerfile
@@ -29,15 +29,15 @@ WORKDIR ${DOCS_DIR}
RUN wget https://github.com/vovkos/doxyrest/releases/download/doxyrest-2.1.3/doxyrest-2.1.3-linux-amd64.tar.xz -P ${HOME} \
&& mkdir -p ${HOME}/doxyrest \
&& tar -xf ${HOME}/doxyrest-2.1.3-linux-amd64.tar.xz -C ${HOME}/doxyrest --strip-components 1 \
- && cp -r ${HOME}/doxyrest/share/doxyrest/frame ${DOCS_DIR}/source/_doxygen
+ && cp -r ${HOME}/doxyrest/share/doxyrest/frame ${DOCS_DIR}/user-guide/_doxygen
# Build Doxygen docs and convert to rst
-RUN cd ${DOCS_DIR}/source/_doxygen && doxygen Doxyfile \
+RUN cd ${DOCS_DIR}/user-guide/_doxygen && doxygen Doxyfile \
&& $HOME/doxyrest/bin/doxyrest -c ./doxyrest-config.lua \
&& cp GVA_API.png ../api_ref/
# Build Doxygen docs for API 2.0 and convert to rst
-RUN cd ${DOCS_DIR}/source/_doxygen && doxygen Doxyfile-api2.0 \
+RUN cd ${DOCS_DIR}/user-guide/_doxygen && doxygen Doxyfile-api2.0 \
&& $HOME/doxyrest/bin/doxyrest -c ./doxyrest-config-api2.0.lua
# Download open_model_zoo and generate supported_models.rst
@@ -47,9 +47,9 @@ RUN pip3 install PyYAML \
&& python3 ${DOCS_DIR}/scripts/models_table_from_yaml.py \
--model_index=${DOCS_DIR}/scripts/all_models.yaml \
--verified_models=${DOCS_DIR}/scripts/supported_models.json \
- --output=${DOCS_DIR}/source/supported_models.rst
+ --output=${DOCS_DIR}/user-guide/supported_models.rst
ENV DOXYREST_SPHINX_DIR=/root/doxyrest/share/doxyrest/sphinx/
#ENTRYPOINT ["/usr/local/bin/sphinx-build"]
-#CMD ["-b", "html", "./source", "./build"]
+#CMD ["-b", "html", "./user-guide", "./build"]
diff --git a/docs/build_html.sh b/docs/build_html.sh
index 56eb2f9f4..990550bb6 100755
--- a/docs/build_html.sh
+++ b/docs/build_html.sh
@@ -1,6 +1,6 @@
#!/bin/bash
# ==============================================================================
-# Copyright (C) 2022-2025 Intel Corporation
+# Copyright (C) 2022-2026 Intel Corporation
#
# SPDX-License-Identifier: MIT
# ==============================================================================
@@ -11,7 +11,7 @@ DOCKER_PRIVATE_REGISTRY=${3:-""}
ROOT="$(realpath "$(dirname "${0}")"/..)"
DOCS_DIR=$ROOT/docs
-DOXYGEN_DIR=$ROOT/docs/source/_doxygen
+DOXYGEN_DIR=$ROOT/docs/user-guide/_doxygen
IMAGE_DOCS_DIR=/root/docs
# Copy necessary files located outside of this folder
diff --git a/docs/scripts/sphinx_build.sh b/docs/scripts/sphinx_build.sh
index 2fb268e1d..b5f9d089b 100755
--- a/docs/scripts/sphinx_build.sh
+++ b/docs/scripts/sphinx_build.sh
@@ -1,12 +1,12 @@
#!/bin/bash
# ==============================================================================
-# Copyright (C) 2023 Intel Corporation
+# Copyright (C) 2023-2026 Intel Corporation
#
# SPDX-License-Identifier: MIT
# ==============================================================================
BUILD_TYPES=${1:-"html,linkcheck,spelling"}
-SOURCE_DIR=${2:-"./source"}
+SOURCE_DIR=${2:-"./user-guide"}
echo "Build types: $BUILD_TYPES"
@@ -38,4 +38,4 @@ echo ""
echo "Final exit code: $EXIT_CODE"
echo "::endgroup::"
-exit $EXIT_CODE
\ No newline at end of file
+exit $EXIT_CODE
diff --git a/docs/source/_doxygen/Doxyfile b/docs/user-guide/_doxygen/Doxyfile
similarity index 100%
rename from docs/source/_doxygen/Doxyfile
rename to docs/user-guide/_doxygen/Doxyfile
diff --git a/docs/source/_doxygen/Doxyfile-api2.0 b/docs/user-guide/_doxygen/Doxyfile-api2.0
similarity index 100%
rename from docs/source/_doxygen/Doxyfile-api2.0
rename to docs/user-guide/_doxygen/Doxyfile-api2.0
diff --git a/docs/source/_doxygen/GVA_API.png b/docs/user-guide/_doxygen/GVA_API.png
similarity index 100%
rename from docs/source/_doxygen/GVA_API.png
rename to docs/user-guide/_doxygen/GVA_API.png
diff --git a/docs/source/_doxygen/doxyrest-config-api2.0.lua b/docs/user-guide/_doxygen/doxyrest-config-api2.0.lua
similarity index 100%
rename from docs/source/_doxygen/doxyrest-config-api2.0.lua
rename to docs/user-guide/_doxygen/doxyrest-config-api2.0.lua
diff --git a/docs/source/_doxygen/doxyrest-config.lua b/docs/user-guide/_doxygen/doxyrest-config.lua
similarity index 100%
rename from docs/source/_doxygen/doxyrest-config.lua
rename to docs/user-guide/_doxygen/doxyrest-config.lua
diff --git a/docs/source/_doxygen/index.md b/docs/user-guide/_doxygen/index.md
similarity index 100%
rename from docs/source/_doxygen/index.md
rename to docs/user-guide/_doxygen/index.md
diff --git a/docs/source/_images/NTT_Logo.png b/docs/user-guide/_images/NTT_Logo.png
similarity index 100%
rename from docs/source/_images/NTT_Logo.png
rename to docs/user-guide/_images/NTT_Logo.png
diff --git a/docs/source/_images/blur-them-all.png b/docs/user-guide/_images/blur-them-all.png
similarity index 100%
rename from docs/source/_images/blur-them-all.png
rename to docs/user-guide/_images/blur-them-all.png
diff --git a/docs/source/_images/c++-interfaces-and-base-classes.svg b/docs/user-guide/_images/c++-interfaces-and-base-classes.svg
similarity index 100%
rename from docs/source/_images/c++-interfaces-and-base-classes.svg
rename to docs/user-guide/_images/c++-interfaces-and-base-classes.svg
diff --git a/docs/source/_images/c++-interfaces-and-classes.svg b/docs/user-guide/_images/c++-interfaces-and-classes.svg
similarity index 100%
rename from docs/source/_images/c++-interfaces-and-classes.svg
rename to docs/user-guide/_images/c++-interfaces-and-classes.svg
diff --git a/docs/source/_images/color-idx-zero.png b/docs/user-guide/_images/color-idx-zero.png
similarity index 100%
rename from docs/source/_images/color-idx-zero.png
rename to docs/user-guide/_images/color-idx-zero.png
diff --git a/docs/source/_images/ffmpeg-to-usm-memory-mappers-chain.svg b/docs/user-guide/_images/ffmpeg-to-usm-memory-mappers-chain.svg
similarity index 100%
rename from docs/source/_images/ffmpeg-to-usm-memory-mappers-chain.svg
rename to docs/user-guide/_images/ffmpeg-to-usm-memory-mappers-chain.svg
diff --git a/docs/source/_images/gst-opencv-pre-processing-and-inference.svg b/docs/user-guide/_images/gst-opencv-pre-processing-and-inference.svg
similarity index 100%
rename from docs/source/_images/gst-opencv-pre-processing-and-inference.svg
rename to docs/user-guide/_images/gst-opencv-pre-processing-and-inference.svg
diff --git a/docs/source/_images/gst-to-usm-memory-mappers-chain.svg b/docs/user-guide/_images/gst-to-usm-memory-mappers-chain.svg
similarity index 100%
rename from docs/source/_images/gst-to-usm-memory-mappers-chain.svg
rename to docs/user-guide/_images/gst-to-usm-memory-mappers-chain.svg
diff --git a/docs/source/_images/gvawatermark-disabled-labels.png b/docs/user-guide/_images/gvawatermark-disabled-labels.png
similarity index 100%
rename from docs/source/_images/gvawatermark-disabled-labels.png
rename to docs/user-guide/_images/gvawatermark-disabled-labels.png
diff --git a/docs/source/_images/high-level-bin-elements-architecture.svg b/docs/user-guide/_images/high-level-bin-elements-architecture.svg
similarity index 100%
rename from docs/source/_images/high-level-bin-elements-architecture.svg
rename to docs/user-guide/_images/high-level-bin-elements-architecture.svg
diff --git a/docs/source/_images/high-level-sequence-element-creation-operation.svg b/docs/user-guide/_images/high-level-sequence-element-creation-operation.svg
similarity index 100%
rename from docs/source/_images/high-level-sequence-element-creation-operation.svg
rename to docs/user-guide/_images/high-level-sequence-element-creation-operation.svg
diff --git a/docs/source/_images/memory-interop.svg b/docs/user-guide/_images/memory-interop.svg
similarity index 100%
rename from docs/source/_images/memory-interop.svg
rename to docs/user-guide/_images/memory-interop.svg
diff --git a/docs/source/_images/object-detect-internal-pipeline.svg b/docs/user-guide/_images/object-detect-internal-pipeline.svg
similarity index 100%
rename from docs/source/_images/object-detect-internal-pipeline.svg
rename to docs/user-guide/_images/object-detect-internal-pipeline.svg
diff --git a/docs/source/_images/only-car-blured.png b/docs/user-guide/_images/only-car-blured.png
similarity index 100%
rename from docs/source/_images/only-car-blured.png
rename to docs/user-guide/_images/only-car-blured.png
diff --git a/docs/source/_images/only-person-blured.png b/docs/user-guide/_images/only-person-blured.png
similarity index 100%
rename from docs/source/_images/only-person-blured.png
rename to docs/user-guide/_images/only-person-blured.png
diff --git a/docs/source/_images/overview_pipeline_example.png b/docs/user-guide/_images/overview_pipeline_example.png
similarity index 100%
rename from docs/source/_images/overview_pipeline_example.png
rename to docs/user-guide/_images/overview_pipeline_example.png
diff --git a/docs/source/_images/overview_sw_stack.png b/docs/user-guide/_images/overview_sw_stack.png
similarity index 100%
rename from docs/source/_images/overview_sw_stack.png
rename to docs/user-guide/_images/overview_sw_stack.png
diff --git a/docs/source/_images/per-roi-inference.svg b/docs/user-guide/_images/per-roi-inference.svg
similarity index 100%
rename from docs/source/_images/per-roi-inference.svg
rename to docs/user-guide/_images/per-roi-inference.svg
diff --git a/docs/source/_images/roi-exclude-car.png b/docs/user-guide/_images/roi-exclude-car.png
similarity index 100%
rename from docs/source/_images/roi-exclude-car.png
rename to docs/user-guide/_images/roi-exclude-car.png
diff --git a/docs/source/_images/roi-include-car.png b/docs/user-guide/_images/roi-include-car.png
similarity index 100%
rename from docs/source/_images/roi-include-car.png
rename to docs/user-guide/_images/roi-include-car.png
diff --git a/docs/source/_images/show-avg-fps.png b/docs/user-guide/_images/show-avg-fps.png
similarity index 100%
rename from docs/source/_images/show-avg-fps.png
rename to docs/user-guide/_images/show-avg-fps.png
diff --git a/docs/source/_images/show-text-background.png b/docs/user-guide/_images/show-text-background.png
similarity index 100%
rename from docs/source/_images/show-text-background.png
rename to docs/user-guide/_images/show-text-background.png
diff --git a/docs/source/_images/simple-complex-font.png b/docs/user-guide/_images/simple-complex-font.png
similarity index 100%
rename from docs/source/_images/simple-complex-font.png
rename to docs/user-guide/_images/simple-complex-font.png
diff --git a/docs/source/_images/text-scale-0-7.png b/docs/user-guide/_images/text-scale-0-7.png
similarity index 100%
rename from docs/source/_images/text-scale-0-7.png
rename to docs/user-guide/_images/text-scale-0-7.png
diff --git a/docs/source/_images/text-scale-2-0.png b/docs/user-guide/_images/text-scale-2-0.png
similarity index 100%
rename from docs/source/_images/text-scale-2-0.png
rename to docs/user-guide/_images/text-scale-2-0.png
diff --git a/docs/source/_images/the-queue-after-inference.svg b/docs/user-guide/_images/the-queue-after-inference.svg
similarity index 100%
rename from docs/source/_images/the-queue-after-inference.svg
rename to docs/user-guide/_images/the-queue-after-inference.svg
diff --git a/docs/source/_images/triplex-font.png b/docs/user-guide/_images/triplex-font.png
similarity index 100%
rename from docs/source/_images/triplex-font.png
rename to docs/user-guide/_images/triplex-font.png
diff --git a/docs/source/_images/vaapi-opencl-pre-processing-and-inference.svg b/docs/user-guide/_images/vaapi-opencl-pre-processing-and-inference.svg
similarity index 100%
rename from docs/source/_images/vaapi-opencl-pre-processing-and-inference.svg
rename to docs/user-guide/_images/vaapi-opencl-pre-processing-and-inference.svg
diff --git a/docs/source/_images/vaapi-pre-processing-and-inference.svg b/docs/user-guide/_images/vaapi-pre-processing-and-inference.svg
similarity index 100%
rename from docs/source/_images/vaapi-pre-processing-and-inference.svg
rename to docs/user-guide/_images/vaapi-pre-processing-and-inference.svg
diff --git a/docs/source/_images/vaapi-surface-sharing-pre-processing-and-inference.svg b/docs/user-guide/_images/vaapi-surface-sharing-pre-processing-and-inference.svg
similarity index 100%
rename from docs/source/_images/vaapi-surface-sharing-pre-processing-and-inference.svg
rename to docs/user-guide/_images/vaapi-surface-sharing-pre-processing-and-inference.svg
diff --git a/docs/source/api_ref/api_reference.rst b/docs/user-guide/api_ref/api_reference.rst
similarity index 100%
rename from docs/source/api_ref/api_reference.rst
rename to docs/user-guide/api_ref/api_reference.rst
diff --git a/docs/source/architecture_2.0/api_ref/index.rst b/docs/user-guide/architecture_2.0/api_ref/index.rst
similarity index 100%
rename from docs/source/architecture_2.0/api_ref/index.rst
rename to docs/user-guide/architecture_2.0/api_ref/index.rst
diff --git a/docs/source/architecture_2.0/architecture_2.0.md b/docs/user-guide/architecture_2.0/architecture_2.0.md
similarity index 100%
rename from docs/source/architecture_2.0/architecture_2.0.md
rename to docs/user-guide/architecture_2.0/architecture_2.0.md
diff --git a/docs/source/architecture_2.0/cpp_elements.md b/docs/user-guide/architecture_2.0/cpp_elements.md
similarity index 100%
rename from docs/source/architecture_2.0/cpp_elements.md
rename to docs/user-guide/architecture_2.0/cpp_elements.md
diff --git a/docs/source/architecture_2.0/cpp_interfaces.md b/docs/user-guide/architecture_2.0/cpp_interfaces.md
similarity index 100%
rename from docs/source/architecture_2.0/cpp_interfaces.md
rename to docs/user-guide/architecture_2.0/cpp_interfaces.md
diff --git a/docs/source/architecture_2.0/dlstreamer-arch-2.0.png b/docs/user-guide/architecture_2.0/dlstreamer-arch-2.0.png
similarity index 100%
rename from docs/source/architecture_2.0/dlstreamer-arch-2.0.png
rename to docs/user-guide/architecture_2.0/dlstreamer-arch-2.0.png
diff --git a/docs/source/architecture_2.0/elements_list.md b/docs/user-guide/architecture_2.0/elements_list.md
similarity index 100%
rename from docs/source/architecture_2.0/elements_list.md
rename to docs/user-guide/architecture_2.0/elements_list.md
diff --git a/docs/source/architecture_2.0/gstreamer_bins.md b/docs/user-guide/architecture_2.0/gstreamer_bins.md
similarity index 100%
rename from docs/source/architecture_2.0/gstreamer_bins.md
rename to docs/user-guide/architecture_2.0/gstreamer_bins.md
diff --git a/docs/source/architecture_2.0/gstreamer_elements.md b/docs/user-guide/architecture_2.0/gstreamer_elements.md
similarity index 100%
rename from docs/source/architecture_2.0/gstreamer_elements.md
rename to docs/user-guide/architecture_2.0/gstreamer_elements.md
diff --git a/docs/source/architecture_2.0/migration_guide.md b/docs/user-guide/architecture_2.0/migration_guide.md
similarity index 100%
rename from docs/source/architecture_2.0/migration_guide.md
rename to docs/user-guide/architecture_2.0/migration_guide.md
diff --git a/docs/source/architecture_2.0/packaging.md b/docs/user-guide/architecture_2.0/packaging.md
similarity index 100%
rename from docs/source/architecture_2.0/packaging.md
rename to docs/user-guide/architecture_2.0/packaging.md
diff --git a/docs/source/architecture_2.0/python_bindings.md b/docs/user-guide/architecture_2.0/python_bindings.md
similarity index 100%
rename from docs/source/architecture_2.0/python_bindings.md
rename to docs/user-guide/architecture_2.0/python_bindings.md
diff --git a/docs/source/architecture_2.0/pytorch_inference.md b/docs/user-guide/architecture_2.0/pytorch_inference.md
similarity index 100%
rename from docs/source/architecture_2.0/pytorch_inference.md
rename to docs/user-guide/architecture_2.0/pytorch_inference.md
diff --git a/docs/source/architecture_2.0/samples_2.0.md b/docs/user-guide/architecture_2.0/samples_2.0.md
similarity index 100%
rename from docs/source/architecture_2.0/samples_2.0.md
rename to docs/user-guide/architecture_2.0/samples_2.0.md
diff --git a/docs/source/conf.py b/docs/user-guide/conf.py
similarity index 97%
rename from docs/source/conf.py
rename to docs/user-guide/conf.py
index 0b2bd5a39..58978610c 100644
--- a/docs/source/conf.py
+++ b/docs/user-guide/conf.py
@@ -1,5 +1,5 @@
# ==============================================================================
-# Copyright (C) 2025 Intel Corporation
+# Copyright (C) 2025-2026 Intel Corporation
#
# SPDX-License-Identifier: MIT
# ==============================================================================
@@ -27,7 +27,7 @@
# -- Project information -----------------------------------------------------
project = "Deep Learning Streamer (DL Streamer)"
-copyright = "2025, Intel Corporation"
+copyright = "2026, Intel Corporation"
author = "Intel Corporation"
diff --git a/docs/source/dev_guide/BottomUP_tab.png b/docs/user-guide/dev_guide/BottomUP_tab.png
similarity index 100%
rename from docs/source/dev_guide/BottomUP_tab.png
rename to docs/user-guide/dev_guide/BottomUP_tab.png
diff --git a/docs/source/dev_guide/Platform_tab.png b/docs/user-guide/dev_guide/Platform_tab.png
similarity index 100%
rename from docs/source/dev_guide/Platform_tab.png
rename to docs/user-guide/dev_guide/Platform_tab.png
diff --git a/docs/source/dev_guide/advanced_install/advanced_build_docker_image.md b/docs/user-guide/dev_guide/advanced_install/advanced_build_docker_image.md
similarity index 100%
rename from docs/source/dev_guide/advanced_install/advanced_build_docker_image.md
rename to docs/user-guide/dev_guide/advanced_install/advanced_build_docker_image.md
diff --git a/docs/source/dev_guide/advanced_install/advanced_install_guide_compilation.md b/docs/user-guide/dev_guide/advanced_install/advanced_install_guide_compilation.md
similarity index 100%
rename from docs/source/dev_guide/advanced_install/advanced_install_guide_compilation.md
rename to docs/user-guide/dev_guide/advanced_install/advanced_install_guide_compilation.md
diff --git a/docs/source/dev_guide/advanced_install/advanced_install_guide_index.md b/docs/user-guide/dev_guide/advanced_install/advanced_install_guide_index.md
similarity index 100%
rename from docs/source/dev_guide/advanced_install/advanced_install_guide_index.md
rename to docs/user-guide/dev_guide/advanced_install/advanced_install_guide_index.md
diff --git a/docs/source/dev_guide/advanced_install/advanced_install_guide_prerequisites.md b/docs/user-guide/dev_guide/advanced_install/advanced_install_guide_prerequisites.md
similarity index 100%
rename from docs/source/dev_guide/advanced_install/advanced_install_guide_prerequisites.md
rename to docs/user-guide/dev_guide/advanced_install/advanced_install_guide_prerequisites.md
diff --git a/docs/source/dev_guide/advanced_install/advanced_install_on_windows.md b/docs/user-guide/dev_guide/advanced_install/advanced_install_on_windows.md
similarity index 96%
rename from docs/source/dev_guide/advanced_install/advanced_install_on_windows.md
rename to docs/user-guide/dev_guide/advanced_install/advanced_install_on_windows.md
index 3f1b92e02..55c773efe 100644
--- a/docs/source/dev_guide/advanced_install/advanced_install_on_windows.md
+++ b/docs/user-guide/dev_guide/advanced_install/advanced_install_on_windows.md
@@ -1,46 +1,46 @@
-# Advanced Installation on Windows - compilation from source files
-
-The instructions below are intended for building Deep Learning Streamer Pipeline Framework
-from the source code provided in
-
-[Open Edge Platform repository](https://github.com/open-edge-platform/dlstreamer).
-
-## Step 1: Clone Deep Learning Streamer repository
-
-```bash
-git clone --recursive https://github.com/open-edge-platform/dlstreamer.git
-cd dlstreamer
-```
-
-## Step 2: Run installation script
-
-Open PowerShell as administrator and run the `build_dlstreamer_dlls.ps1` script.
-
-
-```
-cd ./dlstreamer/
-./scripts/build_dlstreamer_dlls.ps1
-```
-
-### Details of the build script
-
-- The script will install the following dependencies:
- | Required dependency | Path |
- | -------- | ------- |
- | Temporary downloaded files | C:\\dlstreamer_tmp |
- | WinGet PowerShell module from PSGallery | \%programfiles\%\\WindowsPowerShell\\Modules\\Microsoft.WinGet.Client |
- | Visual Studio BuildTools | C:\\BuildTools |
- | Microsoft Windows SDK | \%programfiles(x86)\%\\Windows Kits |
- | GStreamer | C:\\gstreamer |
- | OpenVINO GenAI | C:\\openvino |
- | Git | \%programfiles\%\\Git |
- | vcpkg | C:\\vcpkg |
- | Python | \%programfiles\%\\Python |
- | DL Streamer | C:\\dlstreamer_tmp\\build |
-
-- The script will create or modify following environmental variables:
- - VCPKG_ROOT
- - PATH
- - PKG_CONFIG_PATH
-
-- The script assumes that the proxy is properly configured
+# Advanced Installation on Windows - compilation from source files
+
+The instructions below are intended for building Deep Learning Streamer Pipeline Framework
+from the source code provided in
+
+[Open Edge Platform repository](https://github.com/open-edge-platform/dlstreamer).
+
+## Step 1: Clone Deep Learning Streamer repository
+
+```bash
+git clone --recursive https://github.com/open-edge-platform/dlstreamer.git
+cd dlstreamer
+```
+
+## Step 2: Run installation script
+
+Open PowerShell as administrator and run the `build_dlstreamer_dlls.ps1` script.
+
+
+```powershell
+cd ./dlstreamer/
+./scripts/build_dlstreamer_dlls.ps1
+```
+
+### Details of the build script
+
+- The script will install the following dependencies:
+ | Required dependency | Path |
+ | -------- | ------- |
+ | Temporary downloaded files | C:\\dlstreamer_tmp |
+ | WinGet PowerShell module from PSGallery | \%programfiles\%\\WindowsPowerShell\\Modules\\Microsoft.WinGet.Client |
+ | Visual Studio BuildTools | C:\\BuildTools |
+ | Microsoft Windows SDK | \%programfiles(x86)\%\\Windows Kits |
+ | GStreamer | C:\\gstreamer |
+ | OpenVINO GenAI | C:\\openvino |
+ | Git | \%programfiles\%\\Git |
+ | vcpkg | C:\\vcpkg |
+ | Python | \%programfiles\%\\Python |
+ | DL Streamer | C:\\dlstreamer_tmp\\build |
+
+- The script will create or modify the following environment variables:
+ - VCPKG_ROOT
+ - PATH
+ - PKG_CONFIG_PATH
+
+- The script assumes that the proxy is properly configured.
diff --git a/docs/source/dev_guide/advanced_install/advanced_uninstall_guide.md b/docs/user-guide/dev_guide/advanced_install/advanced_uninstall_guide.md
similarity index 100%
rename from docs/source/dev_guide/advanced_install/advanced_uninstall_guide.md
rename to docs/user-guide/dev_guide/advanced_install/advanced_uninstall_guide.md
diff --git a/docs/source/dev_guide/coding_style.md b/docs/user-guide/dev_guide/coding_style.md
similarity index 100%
rename from docs/source/dev_guide/coding_style.md
rename to docs/user-guide/dev_guide/coding_style.md
diff --git a/docs/source/dev_guide/converting_deepstream_to_dlstreamer.md b/docs/user-guide/dev_guide/converting_deepstream_to_dlstreamer.md
similarity index 100%
rename from docs/source/dev_guide/converting_deepstream_to_dlstreamer.md
rename to docs/user-guide/dev_guide/converting_deepstream_to_dlstreamer.md
diff --git a/docs/source/dev_guide/custom_plugin_installation.md b/docs/user-guide/dev_guide/custom_plugin_installation.md
similarity index 100%
rename from docs/source/dev_guide/custom_plugin_installation.md
rename to docs/user-guide/dev_guide/custom_plugin_installation.md
diff --git a/docs/source/dev_guide/custom_processing.md b/docs/user-guide/dev_guide/custom_processing.md
similarity index 100%
rename from docs/source/dev_guide/custom_processing.md
rename to docs/user-guide/dev_guide/custom_processing.md
diff --git a/docs/source/dev_guide/deepstream_mapping_dlstreamer.png b/docs/user-guide/dev_guide/deepstream_mapping_dlstreamer.png
similarity index 100%
rename from docs/source/dev_guide/deepstream_mapping_dlstreamer.png
rename to docs/user-guide/dev_guide/deepstream_mapping_dlstreamer.png
diff --git a/docs/source/dev_guide/dev_guide_index.md b/docs/user-guide/dev_guide/dev_guide_index.md
similarity index 100%
rename from docs/source/dev_guide/dev_guide_index.md
rename to docs/user-guide/dev_guide/dev_guide_index.md
diff --git a/docs/source/dev_guide/dlstreamer-deepstream-coexistence.md b/docs/user-guide/dev_guide/dlstreamer-deepstream-coexistence.md
similarity index 100%
rename from docs/source/dev_guide/dlstreamer-deepstream-coexistence.md
rename to docs/user-guide/dev_guide/dlstreamer-deepstream-coexistence.md
diff --git a/docs/source/dev_guide/download_public_models.md b/docs/user-guide/dev_guide/download_public_models.md
similarity index 100%
rename from docs/source/dev_guide/download_public_models.md
rename to docs/user-guide/dev_guide/download_public_models.md
diff --git a/docs/source/dev_guide/gpu_device_selection.md b/docs/user-guide/dev_guide/gpu_device_selection.md
similarity index 100%
rename from docs/source/dev_guide/gpu_device_selection.md
rename to docs/user-guide/dev_guide/gpu_device_selection.md
diff --git a/docs/source/dev_guide/gst_va_developer_flow.png b/docs/user-guide/dev_guide/gst_va_developer_flow.png
similarity index 100%
rename from docs/source/dev_guide/gst_va_developer_flow.png
rename to docs/user-guide/dev_guide/gst_va_developer_flow.png
diff --git a/docs/source/dev_guide/how_to_contribute.md b/docs/user-guide/dev_guide/how_to_contribute.md
similarity index 100%
rename from docs/source/dev_guide/how_to_contribute.md
rename to docs/user-guide/dev_guide/how_to_contribute.md
diff --git a/docs/source/dev_guide/how_to_create_model_proc_file.md b/docs/user-guide/dev_guide/how_to_create_model_proc_file.md
similarity index 100%
rename from docs/source/dev_guide/how_to_create_model_proc_file.md
rename to docs/user-guide/dev_guide/how_to_create_model_proc_file.md
diff --git a/docs/source/dev_guide/latency_tracer.md b/docs/user-guide/dev_guide/latency_tracer.md
similarity index 100%
rename from docs/source/dev_guide/latency_tracer.md
rename to docs/user-guide/dev_guide/latency_tracer.md
diff --git a/docs/source/dev_guide/lvms.md b/docs/user-guide/dev_guide/lvms.md
similarity index 100%
rename from docs/source/dev_guide/lvms.md
rename to docs/user-guide/dev_guide/lvms.md
diff --git a/docs/source/dev_guide/metadata.md b/docs/user-guide/dev_guide/metadata.md
similarity index 100%
rename from docs/source/dev_guide/metadata.md
rename to docs/user-guide/dev_guide/metadata.md
diff --git a/docs/source/dev_guide/model_info_xml.md b/docs/user-guide/dev_guide/model_info_xml.md
similarity index 100%
rename from docs/source/dev_guide/model_info_xml.md
rename to docs/user-guide/dev_guide/model_info_xml.md
diff --git a/docs/source/dev_guide/model_preparation.md b/docs/user-guide/dev_guide/model_preparation.md
similarity index 100%
rename from docs/source/dev_guide/model_preparation.md
rename to docs/user-guide/dev_guide/model_preparation.md
diff --git a/docs/source/dev_guide/model_proc_file.md b/docs/user-guide/dev_guide/model_proc_file.md
similarity index 100%
rename from docs/source/dev_guide/model_proc_file.md
rename to docs/user-guide/dev_guide/model_proc_file.md
diff --git a/docs/source/dev_guide/object_tracking.md b/docs/user-guide/dev_guide/object_tracking.md
similarity index 100%
rename from docs/source/dev_guide/object_tracking.md
rename to docs/user-guide/dev_guide/object_tracking.md
diff --git a/docs/source/dev_guide/openvino_custom_operations.md b/docs/user-guide/dev_guide/openvino_custom_operations.md
similarity index 100%
rename from docs/source/dev_guide/openvino_custom_operations.md
rename to docs/user-guide/dev_guide/openvino_custom_operations.md
diff --git a/docs/source/dev_guide/optimizer.md b/docs/user-guide/dev_guide/optimizer.md
similarity index 100%
rename from docs/source/dev_guide/optimizer.md
rename to docs/user-guide/dev_guide/optimizer.md
diff --git a/docs/source/dev_guide/performance_guide.md b/docs/user-guide/dev_guide/performance_guide.md
similarity index 100%
rename from docs/source/dev_guide/performance_guide.md
rename to docs/user-guide/dev_guide/performance_guide.md
diff --git a/docs/source/dev_guide/profiling.md b/docs/user-guide/dev_guide/profiling.md
similarity index 100%
rename from docs/source/dev_guide/profiling.md
rename to docs/user-guide/dev_guide/profiling.md
diff --git a/docs/source/dev_guide/python_bindings.md b/docs/user-guide/dev_guide/python_bindings.md
similarity index 100%
rename from docs/source/dev_guide/python_bindings.md
rename to docs/user-guide/dev_guide/python_bindings.md
diff --git a/docs/source/dev_guide/yolo_models.md b/docs/user-guide/dev_guide/yolo_models.md
similarity index 97%
rename from docs/source/dev_guide/yolo_models.md
rename to docs/user-guide/dev_guide/yolo_models.md
index 5ebc29614..739e07a9e 100644
--- a/docs/source/dev_guide/yolo_models.md
+++ b/docs/user-guide/dev_guide/yolo_models.md
@@ -31,7 +31,7 @@ The directory created by the exporter contains all files required to use the mod
## Other YOLO Models
-> **NOTE:** To obtain ready-to-use versions of the models described below, we recommend using the [`download_public_models.sh`](https://github.com/open-edge-platform/dlstreamer/blob/main/samples/download_public_models.sh) script. See [Download Public Models](https://github.com/open-edge-platform/dlstreamer/blob/main/docs/source/dev_guide/download_public_models.md) for details.
+> **NOTE:** To obtain ready-to-use versions of the models described below, we recommend using the [`download_public_models.sh`](https://github.com/open-edge-platform/dlstreamer/blob/main/samples/download_public_models.sh) script. See [Download Public Models](./download_public_models.md) for details.
### YOLOv7
diff --git a/docs/source/dev_guide/yolov5s_network.png b/docs/user-guide/dev_guide/yolov5s_network.png
similarity index 100%
rename from docs/source/dev_guide/yolov5s_network.png
rename to docs/user-guide/dev_guide/yolov5s_network.png
diff --git a/docs/source/elements/compositor.md b/docs/user-guide/elements/compositor.md
similarity index 100%
rename from docs/source/elements/compositor.md
rename to docs/user-guide/elements/compositor.md
diff --git a/docs/source/elements/elements.md b/docs/user-guide/elements/elements.md
similarity index 100%
rename from docs/source/elements/elements.md
rename to docs/user-guide/elements/elements.md
diff --git a/docs/source/elements/g3dlidarparse.md b/docs/user-guide/elements/g3dlidarparse.md
similarity index 100%
rename from docs/source/elements/g3dlidarparse.md
rename to docs/user-guide/elements/g3dlidarparse.md
diff --git a/docs/source/elements/g3dradarprocess.md b/docs/user-guide/elements/g3dradarprocess.md
similarity index 100%
rename from docs/source/elements/g3dradarprocess.md
rename to docs/user-guide/elements/g3dradarprocess.md
diff --git a/docs/source/elements/gstelements.md b/docs/user-guide/elements/gstelements.md
similarity index 100%
rename from docs/source/elements/gstelements.md
rename to docs/user-guide/elements/gstelements.md
diff --git a/docs/source/elements/gstreamer_compositor_dls_4outputs.png b/docs/user-guide/elements/gstreamer_compositor_dls_4outputs.png
similarity index 100%
rename from docs/source/elements/gstreamer_compositor_dls_4outputs.png
rename to docs/user-guide/elements/gstreamer_compositor_dls_4outputs.png
diff --git a/docs/source/elements/gvaattachroi.md b/docs/user-guide/elements/gvaattachroi.md
similarity index 100%
rename from docs/source/elements/gvaattachroi.md
rename to docs/user-guide/elements/gvaattachroi.md
diff --git a/docs/source/elements/gvaaudiodetect.md b/docs/user-guide/elements/gvaaudiodetect.md
similarity index 100%
rename from docs/source/elements/gvaaudiodetect.md
rename to docs/user-guide/elements/gvaaudiodetect.md
diff --git a/docs/source/elements/gvaaudiotranscribe.md b/docs/user-guide/elements/gvaaudiotranscribe.md
similarity index 100%
rename from docs/source/elements/gvaaudiotranscribe.md
rename to docs/user-guide/elements/gvaaudiotranscribe.md
diff --git a/docs/source/elements/gvaclassify.md b/docs/user-guide/elements/gvaclassify.md
similarity index 100%
rename from docs/source/elements/gvaclassify.md
rename to docs/user-guide/elements/gvaclassify.md
diff --git a/docs/source/elements/gvadetect.md b/docs/user-guide/elements/gvadetect.md
similarity index 100%
rename from docs/source/elements/gvadetect.md
rename to docs/user-guide/elements/gvadetect.md
diff --git a/docs/source/elements/gvafpscounter.md b/docs/user-guide/elements/gvafpscounter.md
similarity index 100%
rename from docs/source/elements/gvafpscounter.md
rename to docs/user-guide/elements/gvafpscounter.md
diff --git a/docs/source/elements/gvafpsthrottle.md b/docs/user-guide/elements/gvafpsthrottle.md
similarity index 100%
rename from docs/source/elements/gvafpsthrottle.md
rename to docs/user-guide/elements/gvafpsthrottle.md
diff --git a/docs/source/elements/gvagenai.md b/docs/user-guide/elements/gvagenai.md
similarity index 100%
rename from docs/source/elements/gvagenai.md
rename to docs/user-guide/elements/gvagenai.md
diff --git a/docs/source/elements/gvainference.md b/docs/user-guide/elements/gvainference.md
similarity index 100%
rename from docs/source/elements/gvainference.md
rename to docs/user-guide/elements/gvainference.md
diff --git a/docs/source/elements/gvametaaggregate.md b/docs/user-guide/elements/gvametaaggregate.md
similarity index 100%
rename from docs/source/elements/gvametaaggregate.md
rename to docs/user-guide/elements/gvametaaggregate.md
diff --git a/docs/source/elements/gvametaconvert.md b/docs/user-guide/elements/gvametaconvert.md
similarity index 100%
rename from docs/source/elements/gvametaconvert.md
rename to docs/user-guide/elements/gvametaconvert.md
diff --git a/docs/source/elements/gvametapublish.md b/docs/user-guide/elements/gvametapublish.md
similarity index 100%
rename from docs/source/elements/gvametapublish.md
rename to docs/user-guide/elements/gvametapublish.md
diff --git a/docs/source/elements/gvamotiondetect.md b/docs/user-guide/elements/gvamotiondetect.md
similarity index 100%
rename from docs/source/elements/gvamotiondetect.md
rename to docs/user-guide/elements/gvamotiondetect.md
diff --git a/docs/source/elements/gvapython.md b/docs/user-guide/elements/gvapython.md
similarity index 100%
rename from docs/source/elements/gvapython.md
rename to docs/user-guide/elements/gvapython.md
diff --git a/docs/source/elements/gvarealsense.md b/docs/user-guide/elements/gvarealsense.md
similarity index 100%
rename from docs/source/elements/gvarealsense.md
rename to docs/user-guide/elements/gvarealsense.md
diff --git a/docs/source/elements/gvatrack.md b/docs/user-guide/elements/gvatrack.md
similarity index 100%
rename from docs/source/elements/gvatrack.md
rename to docs/user-guide/elements/gvatrack.md
diff --git a/docs/source/elements/gvawatermark.md b/docs/user-guide/elements/gvawatermark.md
similarity index 100%
rename from docs/source/elements/gvawatermark.md
rename to docs/user-guide/elements/gvawatermark.md
diff --git a/docs/source/get_started/get_started_index.md b/docs/user-guide/get_started/get_started_index.md
similarity index 100%
rename from docs/source/get_started/get_started_index.md
rename to docs/user-guide/get_started/get_started_index.md
diff --git a/docs/source/get_started/install/gvadetect_sample_help.jpg b/docs/user-guide/get_started/install/gvadetect_sample_help.jpg
similarity index 100%
rename from docs/source/get_started/install/gvadetect_sample_help.jpg
rename to docs/user-guide/get_started/install/gvadetect_sample_help.jpg
diff --git a/docs/source/get_started/install/gvadetect_sample_help.png b/docs/user-guide/get_started/install/gvadetect_sample_help.png
similarity index 100%
rename from docs/source/get_started/install/gvadetect_sample_help.png
rename to docs/user-guide/get_started/install/gvadetect_sample_help.png
diff --git a/docs/source/get_started/install/install_guide_index.md b/docs/user-guide/get_started/install/install_guide_index.md
similarity index 100%
rename from docs/source/get_started/install/install_guide_index.md
rename to docs/user-guide/get_started/install/install_guide_index.md
diff --git a/docs/source/get_started/install/install_guide_ubuntu.md b/docs/user-guide/get_started/install/install_guide_ubuntu.md
similarity index 100%
rename from docs/source/get_started/install/install_guide_ubuntu.md
rename to docs/user-guide/get_started/install/install_guide_ubuntu.md
diff --git a/docs/source/get_started/install/install_guide_ubuntu_wsl2.md b/docs/user-guide/get_started/install/install_guide_ubuntu_wsl2.md
similarity index 100%
rename from docs/source/get_started/install/install_guide_ubuntu_wsl2.md
rename to docs/user-guide/get_started/install/install_guide_ubuntu_wsl2.md
diff --git a/docs/source/get_started/install/install_guide_windows.md b/docs/user-guide/get_started/install/install_guide_windows.md
similarity index 100%
rename from docs/source/get_started/install/install_guide_windows.md
rename to docs/user-guide/get_started/install/install_guide_windows.md
diff --git a/docs/source/get_started/install/uninstall_guide_ubuntu.md b/docs/user-guide/get_started/install/uninstall_guide_ubuntu.md
similarity index 100%
rename from docs/source/get_started/install/uninstall_guide_ubuntu.md
rename to docs/user-guide/get_started/install/uninstall_guide_ubuntu.md
diff --git a/docs/source/get_started/system_requirements.md b/docs/user-guide/get_started/system_requirements.md
similarity index 100%
rename from docs/source/get_started/system_requirements.md
rename to docs/user-guide/get_started/system_requirements.md
diff --git a/docs/source/get_started/tutorial.md b/docs/user-guide/get_started/tutorial.md
similarity index 100%
rename from docs/source/get_started/tutorial.md
rename to docs/user-guide/get_started/tutorial.md
diff --git a/docs/source/index.md b/docs/user-guide/index.md
similarity index 100%
rename from docs/source/index.md
rename to docs/user-guide/index.md
diff --git a/docs/source/release-notes.md b/docs/user-guide/release-notes.md
similarity index 99%
rename from docs/source/release-notes.md
rename to docs/user-guide/release-notes.md
index a90b6a893..737e26460 100644
--- a/docs/source/release-notes.md
+++ b/docs/user-guide/release-notes.md
@@ -3,6 +3,7 @@
## [Preview] Version 2026.0
## Key highlights:
+
* New elements: gvafpsthrottle, g3dradarprocess, g3dlidarparse
* New model support: YOLOv26, YOLO-E, RT-DETR, HuggingFace ViT
* Streamlined integration with Ultralytics and HuggingFace model hubs
diff --git a/docs/source/release-notes/release-notes-2024.md b/docs/user-guide/release-notes/release-notes-2024.md
similarity index 100%
rename from docs/source/release-notes/release-notes-2024.md
rename to docs/user-guide/release-notes/release-notes-2024.md
diff --git a/docs/source/release-notes/release-notes-2025.md b/docs/user-guide/release-notes/release-notes-2025.md
similarity index 100%
rename from docs/source/release-notes/release-notes-2025.md
rename to docs/user-guide/release-notes/release-notes-2025.md
diff --git a/docs/source/spelling_wordlist.txt b/docs/user-guide/spelling_wordlist.txt
similarity index 100%
rename from docs/source/spelling_wordlist.txt
rename to docs/user-guide/spelling_wordlist.txt
diff --git a/docs/source/supported_models.md b/docs/user-guide/supported_models.md
similarity index 99%
rename from docs/source/supported_models.md
rename to docs/user-guide/supported_models.md
index 8dbd50a0b..5cb007ff0 100644
--- a/docs/source/supported_models.md
+++ b/docs/user-guide/supported_models.md
@@ -36,7 +36,7 @@ The table provides links to model preparation instructions describing download a
| **--- Emotion Recognition:** | | | |
| [HSEmotion](https://github.com/av-savchenko/face-emotion-recognition/tree/main) | [download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/main/samples/download_public_models.sh) | [enet_b0_8_va_mtl.onnx](https://github.com/sb-ai-lab/EmotiEffLib/blob/main/models/affectnet_emotions/onnx/enet_b0_8_va_mtl.onnx) | [Custom Post-Processing Library Sample - Classification](https://github.com/open-edge-platform/dlstreamer/tree/main/samples/gstreamer/gst_launch/custom_postproc/classify) |
| **--- Feature Extraction:** | | | |
-| [Mars-small128](https://github.com/ZQPei/deep_sort_pytorch) | [download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/main/samples/download_public_models.sh) | | [Deep SORT Tracking](https://github.com/open-edge-platform/dlstreamer/blob/main/docs/source/dev_guide/object_tracking.md#deep-sort-tracking) |
+| [Mars-small128](https://github.com/ZQPei/deep_sort_pytorch) | [download_public_models.sh](https://github.com/open-edge-platform/dlstreamer/blob/main/samples/download_public_models.sh) | | [Deep SORT Tracking](https://github.com/open-edge-platform/dlstreamer/blob/main/docs/user-guide/dev_guide/object_tracking.md#deep-sort-tracking) |
| **--- Image Classification:** | | | |
| ViTForImageClassification | [Optimum-Intel](https://docs.openedgeplatform.intel.com/2026.0/edge-ai-libraries/dlstreamer/dev_guide/lvms.html) | [dima806/fairface_age_image_detection](https://huggingface.co/dima806/fairface_age_image_detection) | [Face Detection and Classification](https://github.com/open-edge-platform/dlstreamer/tree/main/samples/gstreamer/python/face_detection_and_classification) |
| Mobilenet-V3 | [GETI](https://docs.geti.intel.com/docs/user-guide/getting-started/use-geti/supported-models) | | |
diff --git a/samples/gstreamer/README.md b/samples/gstreamer/README.md
index 07f076d8d..6cd964f5c 100644
--- a/samples/gstreamer/README.md
+++ b/samples/gstreamer/README.md
@@ -4,21 +4,21 @@ Samples are simple applications that demonstrate how to use the Intel® DL Strea
Samples separated into several categories
1. gst_launch command-line samples (samples construct GStreamer pipeline via [gst-launch-1.0](https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html) command-line utility)
- * [Face Detection And Classification Sample](./gst_launch/face_detection_and_classification/README.md) - constructs object detection and classification pipeline example with [gvadetect](../../docs/source/elements/gvadetect.md) and [gvaclassify](../../docs/source/elements/gvaclassify.md) elements to detect faces and estimate age, gender, emotions and landmark points
- * [Audio Event Detection Sample](./gst_launch/audio_detect/README.md) - constructs audio event detection pipeline example with [gvaaudiodetect](../../docs/source/elements/gvaaudiodetect.md) element and uses [gvametaconvert](../../docs/source/elements/gvametaconvert.md), [gvametapublish](../../docs/source/elements/gvametapublish.md) elements to convert audio event metadata with inference results into JSON format and to print on standard out
- * [Audio Transcription Sample](./gst_launch/audio_transcribe/README.md) - performs audio transcription using OpenVino GenAI model (whisper) with [gvaaudiotranscribe](../../docs/source/elements/gvaaudiotranscribe.md)
- * [Vehicle and Pedestrian Tracking Sample](./gst_launch/vehicle_pedestrian_tracking/README.md) - demonstrates object tracking via [gvatrack](../../docs/source/elements/gvatrack.md) element
- * [Human Pose Estimation Sample](./gst_launch/human_pose_estimation/README.md) - demonstrates human pose estimation with full-frame inference via [gvaclassify](../../docs/source/elements/gvaclassify.md) element
- * [Metadata Publishing Sample](./gst_launch/metapublish/README.md) - demonstrates how [gvametaconvert](../../docs/source/elements/gvametaconvert.md) and [gvametapublish](../../docs/source/elements/gvametapublish.md) elements are used for converting metadata with inference results into JSON format and publishing to file or Kafka/MQTT message bus
- * [gvapython face_detection_and_classification Sample](./gst_launch/gvapython/face_detection_and_classification/README.md) - demonstrates pipeline customization with [gvapython](../../docs/source/elements/gvapython.md) element and application provided Python script for inference post-processing
- * [gvapython save frames with ROI Sample](./gst_launch/gvapython/save_frames_with_ROI_only/README.md) - demonstrates [gvapython](../../docs/source/elements/gvapython.md) element for saving video frames with detected objects to disk
+ * [Face Detection And Classification Sample](./gst_launch/face_detection_and_classification/README.md) - constructs object detection and classification pipeline example with [gvadetect](../../docs/user-guide/elements/gvadetect.md) and [gvaclassify](../../docs/user-guide/elements/gvaclassify.md) elements to detect faces and estimate age, gender, emotions and landmark points
+ * [Audio Event Detection Sample](./gst_launch/audio_detect/README.md) - constructs audio event detection pipeline example with [gvaaudiodetect](../../docs/user-guide/elements/gvaaudiodetect.md) element and uses [gvametaconvert](../../docs/user-guide/elements/gvametaconvert.md), [gvametapublish](../../docs/user-guide/elements/gvametapublish.md) elements to convert audio event metadata with inference results into JSON format and to print on standard out
+ * [Audio Transcription Sample](./gst_launch/audio_transcribe/README.md) - performs audio transcription using an OpenVINO™ GenAI model (Whisper) with [gvaaudiotranscribe](../../docs/user-guide/elements/gvaaudiotranscribe.md)
+ * [Vehicle and Pedestrian Tracking Sample](./gst_launch/vehicle_pedestrian_tracking/README.md) - demonstrates object tracking via [gvatrack](../../docs/user-guide/elements/gvatrack.md) element
+ * [Human Pose Estimation Sample](./gst_launch/human_pose_estimation/README.md) - demonstrates human pose estimation with full-frame inference via [gvaclassify](../../docs/user-guide/elements/gvaclassify.md) element
+ * [Metadata Publishing Sample](./gst_launch/metapublish/README.md) - demonstrates how [gvametaconvert](../../docs/user-guide/elements/gvametaconvert.md) and [gvametapublish](../../docs/user-guide/elements/gvametapublish.md) elements are used for converting metadata with inference results into JSON format and publishing to file or Kafka/MQTT message bus
+ * [gvapython face_detection_and_classification Sample](./gst_launch/gvapython/face_detection_and_classification/README.md) - demonstrates pipeline customization with [gvapython](../../docs/user-guide/elements/gvapython.md) element and application provided Python script for inference post-processing
+ * [gvapython save frames with ROI Sample](./gst_launch/gvapython/save_frames_with_ROI_only/README.md) - demonstrates [gvapython](../../docs/user-guide/elements/gvapython.md) element for saving video frames with detected objects to disk
* [Action Recognition Sample](./gst_launch/action_recognition/README.md) - demonstrates action recognition via video_inference bin element
* [Instance Segmentation Sample](./gst_launch/instance_segmentation/README.md) - demonstrates Instance Segmentation via object_detect and object_classify bin elements
* [Detection with Yolo](./gst_launch/detection_with_yolo/README.md) - demonstrates how to use publicly available Yolo models for object detection and classification
* [Deployment of Geti™ models](./gst_launch/geti_deployment/README.md) - demonstrates how to deploy models trained with Intel® Geti™ Platform for object detection, anomaly detection and classification tasks
* [Multi-camera deployments](./gst_launch/multi_stream/README.md) - demonstrates how to handle video streams from multiple cameras with one instance of DL Streamer application
* [gvaattachroi](./gst_launch/gvaattachroi/README.md) - demonstrates how to use gvaattachroi to define the regions on which the inference should be performed
- * [FPS Throttle](./gst_launch/gvafpsthrottle/README.md) - demonstrates how to use [gvafpsthrottle](../../docs/source/elements/gvafpsthrottle.md) element to throttle framerate independent of sink synchronization and without frame duplication or dropping
+ * [FPS Throttle](./gst_launch/gvafpsthrottle/README.md) - demonstrates how to use [gvafpsthrottle](../../docs/user-guide/elements/gvafpsthrottle.md) element to throttle framerate independent of sink synchronization and without frame duplication or dropping
* [Image Embeddings Generation with ViT](./gst_launch/lvm/README.md) - demonstrates how to generate image embeddings using the Vision Transformer component of a CLIP model
* [License Plate Recognition Sample](./gst_launch/license_plate_recognition/README.md) - demonstrates the use of the Yolo detector together with the optical character recognition model
* [Using VLM Models With gvagenai Element](./gst_launch/gvagenai/README.md) - demonstrates how to use the `gvagenai` element with MiniCPM-V for video summarization
@@ -37,7 +37,7 @@ Samples separated into several categories
4. Benchmark
* [Benchmark Sample](./benchmark/README.md) - measures overall performance of single-channel or multi-channel video analytics pipelines
5. Coexistently use of DL Streamer and DeepStream
- * [Coexistently use Sample](./python/coexistence/README.md) - runs pipelines on DL Streamer and/or DeepStream
+ * [Coexistence Sample](./python/coexistence/README.md) - runs pipelines on DL Streamer and/or DeepStream
## How To Build And Run
diff --git a/samples/gstreamer/benchmark/README.md b/samples/gstreamer/benchmark/README.md
index a01d43e12..a72b88f84 100755
--- a/samples/gstreamer/benchmark/README.md
+++ b/samples/gstreamer/benchmark/README.md
@@ -1,6 +1,6 @@
# Benchmark Samples
-Samples `benchmark_one_model.sh` and `benchmark_two_models.sh` demonstrates [gvafpscounter](../../../docs/source/elements/gvafpscounter.md) element used to measure overall performance of multi-stream and multi-process video analytics pipelines.
+Samples `benchmark_one_model.sh` and `benchmark_two_models.sh` demonstrate the [gvafpscounter](../../../docs/user-guide/elements/gvafpscounter.md) element, used to measure overall performance of multi-stream and multi-process video analytics pipelines.
The sample outputs last and average FPS (Frames Per Second) every second and overall FPS on exit.
diff --git a/samples/gstreamer/gst_launch/action_recognition/README.md b/samples/gstreamer/gst_launch/action_recognition/README.md
index 4da5a3b42..7893ca0e5 100644
--- a/samples/gstreamer/gst_launch/action_recognition/README.md
+++ b/samples/gstreamer/gst_launch/action_recognition/README.md
@@ -10,7 +10,7 @@ This sample builds GStreamer pipeline of the following elements
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `video_inference` for converting video frame into custom tensor, inferencing using OpenVINO™ toolkit and post process data.
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for labels visualization
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for labels visualization
* `gvafpscounter` for rendering fps info in terminal
* `autovideosink` for rendering output video into screen
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
diff --git a/samples/gstreamer/gst_launch/audio_detect/README.md b/samples/gstreamer/gst_launch/audio_detect/README.md
index 6bace26ce..1340f6cdf 100755
--- a/samples/gstreamer/gst_launch/audio_detect/README.md
+++ b/samples/gstreamer/gst_launch/audio_detect/README.md
@@ -9,9 +9,9 @@ This sample builds a GStreamer pipeline using the following elements
* `filesrc` or `urisourcebin`
* `decodebin3` for audio decoding
* `audioresample`, `audioconvert` and `audiomixer` for converting and resizing audio input
-* [gvaaudiodetect](../../../../docs/source/elements/gvaaudiodetect.md) for audio event detection using ACLNet
-* [gvametaconvert](../../../../docs/source/elements/gvametaconvert.md) for converting ACLNet detection results into JSON for further processing and display
-* [gvametapublish](../../../../docs/source/elements/gvametapublish.md) for printing detection results to stdout
+* [gvaaudiodetect](../../../../docs/user-guide/elements/gvaaudiodetect.md) for audio event detection using ACLNet
+* [gvametaconvert](../../../../docs/user-guide/elements/gvametaconvert.md) for converting ACLNet detection results into JSON for further processing and display
+* [gvametapublish](../../../../docs/user-guide/elements/gvametapublish.md) for printing detection results to stdout
* `fakesink` for terminating the pipeline
## Model
diff --git a/samples/gstreamer/gst_launch/custom_postproc/classify/README.md b/samples/gstreamer/gst_launch/custom_postproc/classify/README.md
index b7aaea27f..0f0f39180 100644
--- a/samples/gstreamer/gst_launch/custom_postproc/classify/README.md
+++ b/samples/gstreamer/gst_launch/custom_postproc/classify/README.md
@@ -11,9 +11,9 @@ This sample builds GStreamer pipeline of the following elements:
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `vapostproc` (when using GPU) for video format conversion and VA-API memory handling
-* [gvadetect](../../../../../docs/source/elements/gvadetect.md) for face detection
-* [gvaclassify](../../../../../docs/source/elements/gvaclassify.md) for emotion classification using custom post-processing library
-* [gvawatermark](../../../../../docs/source/elements/gvawatermark.md) for bounding boxes and labels visualization
+* [gvadetect](../../../../../docs/user-guide/elements/gvadetect.md) for face detection
+* [gvaclassify](../../../../../docs/user-guide/elements/gvaclassify.md) for emotion classification using custom post-processing library
+* [gvawatermark](../../../../../docs/user-guide/elements/gvawatermark.md) for bounding boxes and labels visualization
* Various sink elements depending on output format (`autovideosink` for display, `filesink` for file output, `fakesink` for performance testing)
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
@@ -56,7 +56,7 @@ Or download all available models:
> **NOTE**: Remember to set the `MODELS_PATH` environment variable, which is needed by both the script that downloads the models and the script that runs the sample.
-These instructions assume that the DLStreamer framework is installed on your local system, along with the Intel® OpenVINO™ model downloader and converter tools, as described in this [tutorial](../../../../../docs/source/get_started/tutorial.md#setup).
+These instructions assume that the DLStreamer framework is installed on your local system, along with the Intel® OpenVINO™ model downloader and converter tools, as described in this [tutorial](../../../../../docs/user-guide/get_started/tutorial.md#setup).
## Running
@@ -197,7 +197,7 @@ The library:
## See also
* [Samples overview](../../../README.md)
-* [DLStreamer documentation](../../../../../docs/source/index.md)
-* [Custom post-processing guide](../../../../../docs/source/dev_guide/custom_processing.md#6-create-custom-post-processing-library)
+* [DLStreamer documentation](../../../../../docs/user-guide/index.md)
+* [Custom post-processing guide](../../../../../docs/user-guide/dev_guide/custom_processing.md#6-create-custom-post-processing-library)
* [GStreamer Analytics Documentation](https://gstreamer.freedesktop.org/documentation/analytics/index.html?gi-language=c)
* [Custom Post-Processing to ROI Sample](../detect/README.md)
diff --git a/samples/gstreamer/gst_launch/custom_postproc/detect/README.md b/samples/gstreamer/gst_launch/custom_postproc/detect/README.md
index 5713756ce..0b6212589 100644
--- a/samples/gstreamer/gst_launch/custom_postproc/detect/README.md
+++ b/samples/gstreamer/gst_launch/custom_postproc/detect/README.md
@@ -11,8 +11,8 @@ This sample builds GStreamer pipeline of the following elements:
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `vapostproc` (when using GPU) for video format conversion and VA-API memory handling
-* [gvadetect](../../../../../docs/source/elements/gvadetect.md) for object detection using YOLOv11 model with custom post-processing library
-* [gvawatermark](../../../../../docs/source/elements/gvawatermark.md) for bounding boxes and labels visualization
+* [gvadetect](../../../../../docs/user-guide/elements/gvadetect.md) for object detection using YOLOv11 model with custom post-processing library
+* [gvawatermark](../../../../../docs/user-guide/elements/gvawatermark.md) for bounding boxes and labels visualization
* Various sink elements depending on output format (`autovideosink` for display, `filesink` for file output, `fakesink` for performance testing)
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
@@ -35,7 +35,7 @@ The pipeline uses the `gvadetect` element with the `custom-postproc-lib` paramet
## Model
-The sample uses the **YOLOv11s** model from Ultralytics, which should be available in the `$MODELS_PATH/public/yolo11s/FP32/` directory. These instructions assume that the DLStreamer framework is installed on your local system, along with the Intel® OpenVINO™ model downloader and converter tools, as described in this [tutorial](../../../../../docs/source/get_started/tutorial.md#setup).
+The sample uses the **YOLOv11s** model from Ultralytics, which should be available in the `$MODELS_PATH/public/yolo11s/FP32/` directory. These instructions assume that the DLStreamer framework is installed on your local system, along with the Intel® OpenVINO™ model downloader and converter tools, as described in this [tutorial](../../../../../docs/user-guide/get_started/tutorial.md#setup).
For the YOLOv11s model, it is also necessary to install the Ultralytics Python package:
@@ -200,7 +200,7 @@ The library:
## See also
* [Samples overview](../../../README.md)
-* [DLStreamer documentation](../../../../../docs/source/index.md)
-* [Custom post-processing guide](../../../../../docs/source/dev_guide/custom_processing.md#6-create-custom-post-processing-library)
+* [DLStreamer documentation](../../../../../docs/user-guide/index.md)
+* [Custom post-processing guide](../../../../../docs/user-guide/dev_guide/custom_processing.md#6-create-custom-post-processing-library)
* [GStreamer Analytics Documentation](https://gstreamer.freedesktop.org/documentation/analytics/index.html?gi-language=c)
* [Custom Post-Processing to Tensor Sample](../classify/README.md)
diff --git a/samples/gstreamer/gst_launch/detection_with_yolo/README.md b/samples/gstreamer/gst_launch/detection_with_yolo/README.md
index d4561b3f1..fc6ffcd97 100644
--- a/samples/gstreamer/gst_launch/detection_with_yolo/README.md
+++ b/samples/gstreamer/gst_launch/detection_with_yolo/README.md
@@ -10,8 +10,8 @@ This sample builds GStreamer pipeline of the following elements
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `videoconvert` for converting video frame into different color formats
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) uses for full-frame object detection and marking objects with labels
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for points and theirs connections visualization
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) used for full-frame object detection and marking objects with labels
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for points and their connections visualization
* `autovideosink` for rendering output video into screen
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
@@ -19,7 +19,7 @@ This sample builds GStreamer pipeline of the following elements
The samples use YOLO models from different repositories as listed in a table below. The model preparation and conversion method depends on the model source.
The instructions assume Intel® DL Streamer framework is installed on the local system along with Intel® OpenVINO™ model downloader and converter tools,
-as described here: [Tutorial](../../../../docs/source/get_started/tutorial.md#setup).
+as described here: [Tutorial](../../../../docs/user-guide/get_started/tutorial.md#setup).
For yolov5su, yolov8s (8n-obb,8n-seg), yolov9c, yolov10s and yolo11s (yolo11s-seg, yolo11s-obb) models it is also necessary to install the ultralytics python package:
diff --git a/samples/gstreamer/gst_launch/g3dlidarparse/README.md b/samples/gstreamer/gst_launch/g3dlidarparse/README.md
index 734fa0df4..35e3db7c3 100644
--- a/samples/gstreamer/gst_launch/g3dlidarparse/README.md
+++ b/samples/gstreamer/gst_launch/g3dlidarparse/README.md
@@ -32,7 +32,7 @@ If the element is found, you should see detailed information about the element,
### 2. Download Lidar Data and Configuration
-Download the sample lidar binary dataset:
+Download the sample lidar binary dataset:
```bash
DATA_DIR=velodyne
@@ -82,6 +82,6 @@ The sample:
* outputs LiDAR parser debug logs and metadata summaries
## See also
-* [Elements overview](../../../../docs/source/elements/elements.md)
-* [g3dlidarparse element](../../../../docs/source/elements/g3dlidarparse.md)
+* [Elements overview](../../../../docs/user-guide/elements/elements.md)
+* [g3dlidarparse element](../../../../docs/user-guide/elements/g3dlidarparse.md)
* [Samples overview](../../README.md)
\ No newline at end of file
diff --git a/samples/gstreamer/gst_launch/g3dradarprocess/README.md b/samples/gstreamer/gst_launch/g3dradarprocess/README.md
index fe229e07d..1ea4fb42a 100644
--- a/samples/gstreamer/gst_launch/g3dradarprocess/README.md
+++ b/samples/gstreamer/gst_launch/g3dradarprocess/README.md
@@ -216,4 +216,4 @@ GST_DEBUG=*:3,g3dradarprocess:5 ./radar_process_sample.sh
## See also
* [Samples overview](../../README.md)
-* [g3dradarprocess element documentation](../../../../docs/source/elements/g3dradarprocess.md)
+* [g3dradarprocess element documentation](../../../../docs/user-guide/elements/g3dradarprocess.md)
diff --git a/samples/gstreamer/gst_launch/geti_deployment/README.md b/samples/gstreamer/gst_launch/geti_deployment/README.md
index 58f588f46..9c90fb029 100644
--- a/samples/gstreamer/gst_launch/geti_deployment/README.md
+++ b/samples/gstreamer/gst_launch/geti_deployment/README.md
@@ -47,9 +47,9 @@ The set of samples demonstrates how to deploy above models to run inference with
The 'geti_sample.sh' script sample builds GStreamer pipeline composed of the following elements:
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) uses for full-frame object detection and marking objects with labels
-* [gvaclassify](../../../../docs/source/elements/gvaclassify.md) uses for full-frame object classficiation
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for points and theirs connections visualization
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) used for full-frame object detection and marking objects with labels
+* [gvaclassify](../../../../docs/user-guide/elements/gvaclassify.md) used for full-frame object classification
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for points and their connections visualization
* `autovideosink` for rendering output video into screen
* `vah264enc` or `vah264lpenc` and `filesink` for encoding video stream and storing in a local file
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
diff --git a/samples/gstreamer/gst_launch/gvaattachroi/README.md b/samples/gstreamer/gst_launch/gvaattachroi/README.md
index 298f37ed9..04f68dd9a 100755
--- a/samples/gstreamer/gst_launch/gvaattachroi/README.md
+++ b/samples/gstreamer/gst_launch/gvaattachroi/README.md
@@ -12,9 +12,9 @@ This sample builds GStreamer pipeline of the following elements
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `videoconvert` for converting video frame into different color formats
-* [gvaattachroi](../../../../docs/source/elements/gvaattachroi.md) for defining the areas of interest (one or more) in the input image
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) uses for roi object detection and marking objects with labels
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for points and theirs connections visualization
+* [gvaattachroi](../../../../docs/user-guide/elements/gvaattachroi.md) for defining the areas of interest (one or more) in the input image
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) used for ROI object detection and marking objects with labels
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for points and their connections visualization
* `autovideosink` for rendering output video into screen
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
@@ -22,7 +22,7 @@ This sample builds GStreamer pipeline of the following elements
## Model
The sample use YOLOv8s model from Ultralytics. The instructions assume Intel® DL Streamer framework is installed on the local system along with Intel® OpenVINO™ model downloader and converter tools,
-as described here: [Tutorial](../../../../docs/source/get_started/tutorial.md#setup).
+as described here: [Tutorial](../../../../docs/user-guide/get_started/tutorial.md#setup).
For yolov8s model it is also necessary to install the ultralytics python package:
diff --git a/samples/gstreamer/gst_launch/gvafpsthrottle/README.md b/samples/gstreamer/gst_launch/gvafpsthrottle/README.md
index 38d632cff..988ac8232 100644
--- a/samples/gstreamer/gst_launch/gvafpsthrottle/README.md
+++ b/samples/gstreamer/gst_launch/gvafpsthrottle/README.md
@@ -4,7 +4,7 @@ This directory contains sample pipelines demonstrating the use of the `gvafpsthr
The `gvafpsthrottle` element throttles (limits) framerate by capping the maximum rate at which buffers pass through. Note: This element does not duplicate or drop frames to match the framerate. It cannot increase FPS, any slowdown in upstream processing cannot be recovered.
-For detailed documentation, see [docs/source/elements/gvafpsthrottle.md](../../../../docs/source/elements/gvafpsthrottle.md).
+For detailed documentation, see [docs/user-guide/elements/gvafpsthrottle.md](../../../../docs/user-guide/elements/gvafpsthrottle.md).
## Features
diff --git a/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification/README.md b/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification/README.md
index 6743053f8..e733521e6 100644
--- a/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification/README.md
+++ b/samples/gstreamer/gst_launch/gvapython/face_detection_and_classification/README.md
@@ -1,6 +1,6 @@
# gvapython Sample
-This sample demonstrates [gvapython](../../../../../docs/source/elements/gvapython.md) element and ability to customize pipeline with application provided Python script for pre- or post-processing of inference operations. It typically used for interpretation of inference results and various application logic, especially if required in the middle of GStreamer pipeline.
+This sample demonstrates the [gvapython](../../../../../docs/user-guide/elements/gvapython.md) element and the ability to customize a pipeline with an application-provided Python script for pre- or post-processing of inference operations. It is typically used for interpretation of inference results and various application logic, especially if required in the middle of a GStreamer pipeline.
## How It Works
In this sample the `gvapython` element is used three times.
diff --git a/samples/gstreamer/gst_launch/gvapython/save_frames_with_ROI_only/README.md b/samples/gstreamer/gst_launch/gvapython/save_frames_with_ROI_only/README.md
index 24e2877cc..222fe455b 100644
--- a/samples/gstreamer/gst_launch/gvapython/save_frames_with_ROI_only/README.md
+++ b/samples/gstreamer/gst_launch/gvapython/save_frames_with_ROI_only/README.md
@@ -1,6 +1,6 @@
# gvapython Sample - Save Frames with ROI Only
-This sample demonstrates [gvapython](../../../../../docs/source/elements/gvapython.md) element with custom Python script to save video frames containing detected objects. It showcases practical post-processing use case where frames with regions of interest (ROI) are automatically saved to disk.
+This sample demonstrates the [gvapython](../../../../../docs/user-guide/elements/gvapython.md) element with a custom Python script that saves video frames containing detected objects. It showcases a practical post-processing use case where frames with regions of interest (ROI) are automatically saved to disk.
## How It Works
In this sample the `gvapython` element is inserted after `gvadetect` element running object detection. The Python script (`simple_frame_saver.py`) processes each frame and saves it to disk when:
@@ -54,13 +54,13 @@ The sample takes three command-line *optional* parameters:
* local video file
* web camera device (ex. `/dev/video0`)
* RTSP camera (URL starting with `rtsp://`) or other streaming source (ex URL starting with `http://`)
-
+
If parameter is not specified, the sample by default streams video example from HTTPS link (utilizing `urisourcebin` element) so requires internet connection.
2. [DEVICE] to specify device for detection. Default GPU.
Please refer to OpenVINO™ toolkit documentation for supported devices.
https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_supported_plugins_Supported_Devices.html
-
+
You can find what devices are supported on your system by running following OpenVINO™ toolkit sample:
https://docs.openvinotoolkit.org/latest/openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README.html
@@ -88,4 +88,4 @@ The saved frames are numbered sequentially and include all detected objects with
## See also
* [Samples overview](../../../README.md)
-* [gvapython element documentation](../../../../../docs/source/elements/gvapython.md)
+* [gvapython element documentation](../../../../../docs/user-guide/elements/gvapython.md)
diff --git a/samples/gstreamer/gst_launch/human_pose_estimation/README.md b/samples/gstreamer/gst_launch/human_pose_estimation/README.md
index b91c5671f..7aea5d55d 100755
--- a/samples/gstreamer/gst_launch/human_pose_estimation/README.md
+++ b/samples/gstreamer/gst_launch/human_pose_estimation/README.md
@@ -10,8 +10,8 @@ This sample builds GStreamer pipeline of the following elements
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `videoconvert` for converting video frame into different color formats
-* [gvaclassify](../../../../docs/source/elements/gvaclassify.md) uses for full-frame inference and post-processing of OpenPose's output
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for points and theirs connections visualization
+* [gvaclassify](../../../../docs/user-guide/elements/gvaclassify.md) used for full-frame inference and post-processing of OpenPose's output
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for visualization of points and their connections
* `autovideosink` for rendering output video into screen
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
diff --git a/samples/gstreamer/gst_launch/instance_segmentation/README.md b/samples/gstreamer/gst_launch/instance_segmentation/README.md
index a6e4a8c4a..a3c8c4980 100644
--- a/samples/gstreamer/gst_launch/instance_segmentation/README.md
+++ b/samples/gstreamer/gst_launch/instance_segmentation/README.md
@@ -12,11 +12,11 @@ The sample constructs GStreamer pipelines with the following elements:
* `filesrc` or `urisourcebin` or `v4l2src` for input from a file, URL, or web camera
* `decodebin3` to construct a decoding pipeline using available decoders and demuxers via auto-plugging
* `vapostproc ! video/x-raw(memory:VAMemory)` to facilitate video processing on the GPU
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) for performing object detection using Mask RCNN models
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) to visualize segmentation masks on the video
-* [gvafpscounter](../../../../docs/source/elements/gvafpscounter.md) to measure and display frames per second
-* [gvametaconvert](../../../../docs/source/elements/gvametaconvert.md) o transform the metadata into JSON format
-* [gvametapublish](../../../../docs/source/elements/gvametapublish.md) to save the metadata as a JSON file
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) for performing object detection using Mask RCNN models
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) to visualize segmentation masks on the video
+* [gvafpscounter](../../../../docs/user-guide/elements/gvafpscounter.md) to measure and display frames per second
+* [gvametaconvert](../../../../docs/user-guide/elements/gvametaconvert.md) to transform the metadata into JSON format
+* [gvametapublish](../../../../docs/user-guide/elements/gvametapublish.md) to save the metadata as a JSON file
* `vah264enc ! h264parse ! mp4mux` or `vah264lpenc ! h264parse ! mp4mux` to encode the raw video into H.264 bitstream, ensure that the stream is correctly formatted and contains the necessary headers and metadata, and to create the MP4 file structure
* `jpegenc` for encoding frames as JPEG images
* `autovideosink` for displaying the output video on the screen
diff --git a/samples/gstreamer/gst_launch/metapublish/README.md b/samples/gstreamer/gst_launch/metapublish/README.md
index e994ce445..db5375d91 100755
--- a/samples/gstreamer/gst_launch/metapublish/README.md
+++ b/samples/gstreamer/gst_launch/metapublish/README.md
@@ -1,6 +1,6 @@
# Metadata Publishing Sample (gst-launch command line)
-This sample demonstrates how [gvametaconvert](../../../../docs/source/elements/gvametaconvert.md) and [gvametapublish](../../../../docs/source/elements/gvametapublish.md) elements are used in a typical pipeline constructed with Intel® Deep Learning Streamer (Intel® DL Streamer) and GStreamer elements. By placing these elements to the end of a pipeline that performs face detection and emotion classification, you will quickly see how these elements enable publishing of pipeline metadata to an output file, in-memory fifo, or a popular message bus.
+This sample demonstrates how the [gvametaconvert](../../../../docs/user-guide/elements/gvametaconvert.md) and [gvametapublish](../../../../docs/user-guide/elements/gvametapublish.md) elements are used in a typical pipeline constructed with Intel® Deep Learning Streamer (Intel® DL Streamer) and GStreamer elements. By placing these elements at the end of a pipeline that performs face detection and emotion classification, you will quickly see how they enable publishing of pipeline metadata to an output file, an in-memory fifo, or a popular message bus.
These elements are useful for cases where you need to record outcomes (e.g., emitting inferences) of your DL Streamer pipeline to applications running locally or across distributed systems.
@@ -11,13 +11,13 @@ The string contains a list of GStreamer elements separated by exclamation mark `
Overall this sample builds GStreamer pipeline of the following elements:
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) for detecting faces using the OpenVINO™ Toolkit Inference Engine
-* [gvaclassify](../../../../docs/source/elements/gvaclassify.md) for recognizing the age and gender of detected faces using the the OpenVINO™ Toolkit Inference Engine.
-* [gvametaconvert](../../../../docs/source/elements/gvametaconvert.md) for conversion of tensor and inference metadata to JSON format.
-* [gvametapublish](../../../../docs/source/elements/gvametapublish.md) for publishing the JSON metadata as output to console, file, fifo, MQTT or Kafka.
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) for detecting faces using the OpenVINO™ Toolkit Inference Engine
+* [gvaclassify](../../../../docs/user-guide/elements/gvaclassify.md) for recognizing the age and gender of detected faces using the OpenVINO™ Toolkit Inference Engine.
+* [gvametaconvert](../../../../docs/user-guide/elements/gvametaconvert.md) for conversion of tensor and inference metadata to JSON format.
+* [gvametapublish](../../../../docs/user-guide/elements/gvametapublish.md) for publishing the JSON metadata as output to console, file, fifo, MQTT or Kafka.
* `fakesink` to terminate the pipeline output without actually rendering video frames.
-> **NOTE**: The sample sets property 'json-indent=4' in [gvametaconvert](../../../../docs/source/elements/gvametaconvert.md) element for generating JSON in pretty print format with 4 spaces indent. Remove this property to generate JSON without pretty print.
+> **NOTE**: The sample sets the property 'json-indent=4' in the [gvametaconvert](../../../../docs/user-guide/elements/gvametaconvert.md) element to generate JSON in pretty-print format with a 4-space indent. Remove this property to generate JSON without pretty print.
## Models
diff --git a/samples/gstreamer/gst_launch/multi_stream/README.md b/samples/gstreamer/gst_launch/multi_stream/README.md
index ad52c9423..b96579f25 100644
--- a/samples/gstreamer/gst_launch/multi_stream/README.md
+++ b/samples/gstreamer/gst_launch/multi_stream/README.md
@@ -8,14 +8,14 @@ This sample utilizes GStreamer command-line tool `gst-launch-1.0` which can buil
The string contains a list of GStreamer elements separated by exclamation mark `!`, each element may have properties specified in the format `property`=`value`.
> **NOTE**: Before running, download the required models to `$MODELS_PATH/public/{model_name}/FP16/`.
-Please follow instruction: [Detection with Yolo](./gst_launch/detection_with_yolo/README.md) how to download YOLO models.
+Please follow the instructions in [Detection with Yolo](../detection_with_yolo/README.md) to download YOLO models.
This sample builds for GStreamer a pipeline of the following elements:
* `filesrc`
* `decodebin3` for video decoding
* `videoconvert` for converting video frame into different color formats
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) uses for full-frame object detection and marking objects with labels
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for points and visualization of their connections
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) used for full-frame object detection and marking objects with labels
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for visualization of points and their connections
* `autovideosink` for rendering output video into screen
**Supported models**: yolox-tiny, yolox_s, yolov7, yolov8s, yolov9c, yolo11s, yolo26s
diff --git a/samples/gstreamer/gst_launch/vehicle_pedestrian_tracking/README.md b/samples/gstreamer/gst_launch/vehicle_pedestrian_tracking/README.md
index 511a905c2..eb7020aa9 100755
--- a/samples/gstreamer/gst_launch/vehicle_pedestrian_tracking/README.md
+++ b/samples/gstreamer/gst_launch/vehicle_pedestrian_tracking/README.md
@@ -1,6 +1,6 @@
# Vehicle and Pedestrian Tracking Sample (gst-launch command line)
-This sample demonstrates [gvatrack](../../../../docs/source/elements/gvatrack.md) element and object tracking capabilities on example of person and vehicle tracking. Object tracking increases performance by running inference on object detection and classification models less frequently (not every frame).
+This sample demonstrates the [gvatrack](../../../../docs/user-guide/elements/gvatrack.md) element and object tracking capabilities, using person and vehicle tracking as an example. Object tracking increases performance by running inference on object detection and classification models less frequently (not on every frame).
## How It Works
The sample utilizes GStreamer command-line tool `gst-launch-1.0` which can build and run GStreamer pipeline described in a string format.
@@ -16,10 +16,10 @@ Overall this sample builds GStreamer pipeline of the following elements
* `filesrc` or `urisourcebin` or `v4l2src` for input from file/URL/web-camera
* `decodebin3` for video decoding
* `videoconvert` for converting video frame into different color formats
-* [gvadetect](../../../../docs/source/elements/gvadetect.md) for person and vehicle detection based on OpenVINO™ Toolkit Inference Engine
-* [gvatrack](../../../../docs/source/elements/gvatrack.md) for tracking objects
-* [gvaclassify](../../../../docs/source/elements/gvaclassify.md) inserted into pipeline twice for person and vehicle classification
-* [gvawatermark](../../../../docs/source/elements/gvawatermark.md) for bounding boxes and labels visualization
+* [gvadetect](../../../../docs/user-guide/elements/gvadetect.md) for person and vehicle detection based on OpenVINO™ Toolkit Inference Engine
+* [gvatrack](../../../../docs/user-guide/elements/gvatrack.md) for tracking objects
+* [gvaclassify](../../../../docs/user-guide/elements/gvaclassify.md) inserted into the pipeline twice, for person and vehicle classification
+* [gvawatermark](../../../../docs/user-guide/elements/gvawatermark.md) for visualization of bounding boxes and labels
* `autovideosink` for rendering output video into screen
> **NOTE**: `sync=false` property in `autovideosink` element disables real-time synchronization so pipeline runs as fast as possible
@@ -59,7 +59,7 @@ If parameter is not specified, the sample by default streams video example from
5. [TRACKING_TYPE] to specify tracking type.
Tracking types available are short-term-imageless, zero-term, zero-term-imageless.
For more information on tracking types and their difference, please turn to
- [this guide](../../../../docs/source/dev_guide/object_tracking.md).
+ [this guide](../../../../docs/user-guide/dev_guide/object_tracking.md).
## Sample Output
diff --git a/samples/gstreamer/python/coexistence/README.md b/samples/gstreamer/python/coexistence/README.md
index 2efd11cc4..3d58b62be 100644
--- a/samples/gstreamer/python/coexistence/README.md
+++ b/samples/gstreamer/python/coexistence/README.md
@@ -1,3 +1,3 @@
These are shell script and Python implementations designed to enable the coexistent execution of DL Streamer and DeepStream pipelines.
-[Coexistently use of DL Streamer and DeepStream](../../../../docs/source/dev_guide/dlstreamer-deepstream-coexistence.md)
\ No newline at end of file
+[Coexistent use of DL Streamer and DeepStream](../../../../docs/user-guide/dev_guide/dlstreamer-deepstream-coexistence.md)
\ No newline at end of file
diff --git a/samples/windows/gstreamer/benchmark/README.md b/samples/windows/gstreamer/benchmark/README.md
index 4cc0a2456..dfa4264ce 100644
--- a/samples/windows/gstreamer/benchmark/README.md
+++ b/samples/windows/gstreamer/benchmark/README.md
@@ -1,6 +1,6 @@
# Benchmark Sample (Windows)
-Sample `benchmark.bat` demonstrates [gvafpscounter](../../../../docs/source/elements/gvafpscounter.md) element used to measure overall performance of video analytics pipelines on Windows.
+Sample `benchmark.bat` demonstrates the [gvafpscounter](../../../../docs/user-guide/elements/gvafpscounter.md) element, used to measure overall performance of video analytics pipelines on Windows.
The sample outputs last and average FPS (Frames Per Second) every second and overall FPS on exit.
diff --git a/samples/windows/gstreamer/gst_launch/audio_detect/README.md b/samples/windows/gstreamer/gst_launch/audio_detect/README.md
index 74edf7316..dda6bd939 100644
--- a/samples/windows/gstreamer/gst_launch/audio_detect/README.md
+++ b/samples/windows/gstreamer/gst_launch/audio_detect/README.md
@@ -9,9 +9,9 @@ This sample builds a GStreamer pipeline using the following elements
* `filesrc` or `urisourcebin`
* `decodebin3` for audio decoding
* `audioresample`, `audioconvert` and `audiomixer` for converting and resizing audio input
-* [gvaaudiodetect](../../../../../docs/source/elements/gvaaudiodetect.md) for audio event detection using ACLNet
-* [gvametaconvert](../../../../../docs/source/elements/gvametaconvert.md) for converting ACLNet detection results into JSON for further processing and display
-* [gvametapublish](../../../../../docs/source/elements/gvametapublish.md) for printing detection results to stdout
+* [gvaaudiodetect](../../../../../docs/user-guide/elements/gvaaudiodetect.md) for audio event detection using ACLNet
+* [gvametaconvert](../../../../../docs/user-guide/elements/gvametaconvert.md) for converting ACLNet detection results into JSON for further processing and display
+* [gvametapublish](../../../../../docs/user-guide/elements/gvametapublish.md) for printing detection results to stdout
* `fakesink` for terminating the pipeline
## Pipeline Architecture
@@ -70,7 +70,7 @@ By default, if no [INPUT_PATH] is specified, the sample uses a local file `how_a
> **NOTE**: The default audio file `how_are_you_doing.wav` is located in the Linux samples folder at:
> `samples/gstreamer/gst_launch/audio_detect/how_are_you_doing.wav`
->
+>
> Copy this file to the same directory as the batch file, or provide your own audio file as input.
### Example
diff --git a/samples/windows/gstreamer/gst_launch/detection_with_yolo/README.md b/samples/windows/gstreamer/gst_launch/detection_with_yolo/README.md
index 0cb17aaf0..ca6a0e333 100644
--- a/samples/windows/gstreamer/gst_launch/detection_with_yolo/README.md
+++ b/samples/windows/gstreamer/gst_launch/detection_with_yolo/README.md
@@ -197,7 +197,7 @@ cd samples
./download_public_models.sh yolo11s coco128
```
-For detailed instructions on downloading models, including the full list of supported models and quantization options, see the [Download Public Models Guide](../../../../../docs/source/dev_guide/download_public_models.md).
+For detailed instructions on downloading models, including the full list of supported models and quantization options, see the [Download Public Models Guide](../../../../../docs/user-guide/dev_guide/download_public_models.md).
> **Note**: Make sure to set your `MODELS_PATH` environment variable in Windows to point to the same location where models were downloaded (e.g., `set MODELS_PATH=C:\models`).
diff --git a/src/monolithic/gst/elements/gvametapublish/Readme.md b/src/monolithic/gst/elements/gvametapublish/Readme.md
index f17ba3df9..0e42056a6 100644
--- a/src/monolithic/gst/elements/gvametapublish/Readme.md
+++ b/src/monolithic/gst/elements/gvametapublish/Readme.md
@@ -14,7 +14,7 @@ A GStreamer element to publish JSON data to a designated file, or a chosen messa
The Docker image built with the Dockerfile includes all necessary dependencies for Kafka/MQTT by default.
If you are building from source according to the provided instructions, all dependencies should already be satisfied.
- You can find the source build instructions [here](../../../../../docs/source/dev_guide/advanced_install/advanced_install_guide_compilation.md).
+ You can find the source build instructions [here](../../../../../docs/user-guide/dev_guide/advanced_install/advanced_install_guide_compilation.md).
If you are not following the source instructions, you may need to run the [install_metapublish_dependencies.sh](https://github.com/open-edge-platform/dlstreamer/tree/main/scripts/install_metapublish_dependencies.sh) script and rebuild DL Streamer with the following parameters enabled: