# Hailo Official integration #16906

Open · wants to merge 16 commits into `dev`
**`docs/docs/configuration/object_detectors.md`** (85 additions, 5 deletions)

@@ -12,7 +12,7 @@ Frigate supports multiple different detectors that work on different types of hardware:
**Most Hardware**

- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8-detector): The Hailo-8 and Hailo-8L AI acceleration modules are available in m.2 format with a HAT for RPi devices, offering a wide range of device compatibility.

**AMD**

@@ -129,15 +129,56 @@ detectors:
```yaml
    type: edgetpu
    device: pci
```
---


## Hailo-8 Detector

This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.

See the [installation docs](../frigate/installation.md#hailo-8) for information on configuring the Hailo hardware.

### Configuration

When configuring the Hailo detector, you have two options for specifying the model: a local **path** or a **URL**.
If both are provided, the detector first checks for the model at the given local path. If the file is not found, it downloads the model from the specified URL. Downloaded model files are cached under `/config/model_cache/hailo`.
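As a rough sketch, that precedence could look like the following (illustrative Python; `resolve_model`, `CACHE_DIR`, and the download details are assumptions for illustration, not Frigate's actual code):

```python
import os
import urllib.request
from typing import Optional

# Assumed cache location, matching the docs above.
CACHE_DIR = "/config/model_cache/hailo"

def resolve_model(path: Optional[str], url: Optional[str]) -> str:
    """Sketch of the path-then-URL precedence described above."""
    # 1. An existing local file always wins.
    if path and os.path.isfile(path):
        return path
    # 2. Otherwise, download from the URL into the cache (once).
    if url:
        os.makedirs(CACHE_DIR, exist_ok=True)
        cached = os.path.join(CACHE_DIR, os.path.basename(url))
        if not os.path.isfile(cached):
            urllib.request.urlretrieve(url, cached)
        return cached
    raise FileNotFoundError("No model path or URL was provided")
```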

#### YOLO (Recommended)

Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model for the detected hardware (a sketch of this check follows the list):

- **Hailo-8 hardware:** uses **YOLOv8s** (default: `yolov8s.hef`)
- **Hailo-8L hardware:** uses **YOLOv6n** (default: `yolov6n.hef`)
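A minimal sketch of that hardware check (assumes the `hailortcli` tool is on PATH; the exact command and output parsing are assumptions, not necessarily what the integration does):

```python
import subprocess

# Hardware-specific defaults quoted above.
DEFAULT_MODELS = {
    "HAILO8L": "yolov6n.hef",  # Hailo-8L
    "HAILO8": "yolov8s.hef",   # Hailo-8
}

def detect_default_model() -> str:
    # `hailortcli fw-control identify` prints device details, including
    # a line such as "Device Architecture: HAILO8L".
    output = subprocess.run(
        ["hailortcli", "fw-control", "identify"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Test the longer token first, since "HAILO8" is a prefix of "HAILO8L".
    arch = "HAILO8L" if "HAILO8L" in output else "HAILO8"
    return DEFAULT_MODELS[arch]
```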

```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 640
  height: 640
  input_tensor: nhwc
  input_pixel_format: rgb
  input_dtype: int
  model_type: hailo-yolo
  # The detector automatically selects the default model based on your hardware:
  # - For Hailo-8 hardware: YOLOv8s (default: yolov8s.hef)
  # - For Hailo-8L hardware: YOLOv6n (default: yolov6n.hef)
  #
  # Optionally, you can specify a local model path to override the default.
  # If a local path is provided and the file exists, it will be used instead of downloading.
  # Example:
  # path: /config/model_cache/hailo/yolov8s.hef
  #
  # You can also override using a custom URL:
  # url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov8s.hef
```

#### SSD

For SSD-based models, provide either a model path or URL to your compiled SSD model. The integration will first check the local path before downloading if necessary.

```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  model_type: ssd
  # Specify the local model path (if available) or URL for SSD MobileNet v1.
  # Example with a local path:
  # path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
  #
  # Or override using a custom URL:
  # url: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8l/ssd_mobilenet_v1.hef
```

#### Custom Models

The Hailo detector supports any YOLO model compiled for Hailo hardware that includes post-processing. You can provide a local path to use a model directly, or a URL to download it. If both are provided, the detector checks the local path first.

```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe
    # Optional: specify a local model path.
    # path: /config/model_cache/hailo/custom_model.hef
    #
    # Alternatively, or as a fallback, provide a custom URL:
    # url: https://custom-model-url.com/path/to/model.hef

model:
  width: 640
  height: 640
  input_tensor: nhwc
  input_pixel_format: rgb
  input_dtype: int
  model_type: hailo-yolo
```

> **Note:**
> If both a model **path** and **URL** are provided, the detector will first check the local model path. If the file is not found, it will download the model from the URL.
>
> *Tested custom models include: yolov5, yolov8, yolov9, yolov11.*

---

In every configuration above, model selection follows the same precedence: an existing local file path is used first, then the URL, and finally the hardware-specific default.


## OpenVINO Detector

The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
**`docs/docs/frigate/hardware.md`** (10 additions, 3 deletions)

@@ -92,11 +92,18 @@ Inference speeds will vary greatly depending on the GPU and the model used.

With the [rocm](../configuration/object_detectors.md#amdrocm-gpu-detector) detector Frigate can take advantage of many discrete AMD GPUs.

### Hailo-8 Detector

Frigate supports both the Hailo-8 and Hailo-8L AI Acceleration Modules on compatible hardware platforms, including the Raspberry Pi 5 with the PCIe hat from the AI kit. The Hailo detector integration in Frigate automatically identifies your hardware type and selects the appropriate default model when a custom model isn't provided.

**Default Model Configuration:**
- **Hailo-8L:** Default model is **YOLOv6n**.
- **Hailo-8:** Default model is **YOLOv8s**.

Additionally, the heavier **YOLOv8m** model has been tested on Hailo-8 hardware for users who require higher accuracy despite increased inference time.

In real-world deployments, even with multiple cameras running concurrently, Frigate has demonstrated consistent performance. Testing on x86 platforms with dual PCIe lanes yields further improvements in FPS, throughput, and latency compared to the Raspberry Pi setup.

The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
---

**Collaborator:** We should leave this in here; perhaps create a table like the other hardware sections so users know what to expect. I am happy to do that separately if you can provide some inference times for the yolov6 model used here.

**Author:** Can you explain how the detector inference speed and detector CPU usage are measured on your system metrics page? We removed the table because we saw a mismatch between the inference speed measured in your metrics and the one we measure.

**Collaborator:** It is the total time of inference, including image preprocessing and detection post-processing.
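For context, that style of end-to-end measurement can be sketched as follows (a minimal illustration; `preprocess`, `infer`, and `postprocess` are hypothetical names, not Frigate's actual API):

```python
import time

def timed_detect(detector, frame):
    # Wall-clock time spanning image preprocessing, inference, and
    # detection post-processing together, as described above.
    start = time.monotonic()
    tensor = detector.preprocess(frame)            # hypothetical step
    raw_output = detector.infer(tensor)            # hypothetical step
    detections = detector.postprocess(raw_output)  # hypothetical step
    elapsed_ms = (time.monotonic() - start) * 1000
    return detections, elapsed_ms
```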


## Community Supported Detectors

**`docs/docs/frigate/installation.md`** (2 additions, 2 deletions)

@@ -100,9 +100,9 @@ By default, the Raspberry Pi limits the amount of memory available to the GPU.

Additionally, the USB Coral draws a considerable amount of power. If using any other USB devices such as an SSD, you will experience instability due to the Pi not providing enough power to USB devices. You will need to purchase an external USB hub with its own power supply. Some have reported success with <a href="https://amzn.to/3a2mH0P" target="_blank" rel="nofollow noopener sponsored">this</a> (affiliate link).

### Hailo-8

The Hailo-8 and Hailo-8L AI accelerators are available in both M.2 and HAT form factors for the Raspberry Pi. The M.2 version typically connects to a carrier board for PCIe, which then interfaces with the Raspberry Pi 5 as part of the AI Kit. The HAT version can be mounted directly onto compatible Raspberry Pi models. Both form factors have also been tested successfully on x86 platforms, making them versatile options for various computing environments.

#### Installation

**`frigate/detectors/detector_config.py`** (1 addition)

```diff
@@ -38,6 +38,7 @@ class ModelTypeEnum(str, Enum):
     yolov9 = "yolov9"
     yolonas = "yolonas"
     dfine = "dfine"
+    hailoyolo = "hailo-yolo"
```
---

**Collaborator:** We don't want these to be detector-specific. I'd suggest just using yolov9, as we may change that to be called yolo-generic or something in the future.

**Author:** Hey Nick,

We've made this change to support all the models in our model zoo, not just YOLO. Any model, regardless of its configuration, can be run in Frigate as long as its post-processing is done on Hailo (which happens during model compilation).

Regarding the default SSD MobileNet: it uses a 300x300 input size. Resizing from 320x320 to 300x300 takes more time than simply moving from 320x320 to 640x640. To optimize this, I've updated the detector to perform the resizing (so you can run Frigate with a 320x320 configuration) and use YOLOv6n with a 640x640 input size.

In summary, we now use the default 320x320 configuration for most models, and we only resize and run YOLOv6n with a 640x640 input size.
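A minimal sketch of the resize strategy described above (assumes OpenCV; the function and constant names are illustrative, not the PR's actual code):

```python
import cv2
import numpy as np

MODEL_INPUT = (640, 640)  # YOLOv6n input size quoted above

def prepare_input(frame: np.ndarray) -> np.ndarray:
    # Frigate can keep its default 320x320 frames; the detector
    # upscales to the model's 640x640 input only when needed.
    if frame.shape[:2] != MODEL_INPUT:
        frame = cv2.resize(frame, MODEL_INPUT, interpolation=cv2.INTER_LINEAR)
    return frame
```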

**Collaborator:** Right, but this is a mixing of concerns: now we have detectors that rely on the model type and a model type that relies on the detector. We either need the different model types to be listed out, or just have the Hailo detector ignore the configured model type when using a model from the Hailo zoo.

**Author:** Did all the required changes and improved performance by 3x. We just need to change something that is not correct and will re-push.


