
Hailo Official integration #16906

Open · wants to merge 16 commits into dev
Conversation

OmriAx
Copy link

@OmriAx OmriAx commented Mar 3, 2025

We are now standardizing on asynchronous inference to improve performance and responsiveness, while also upgrading our detection models to the latest YOLO variants—YOLOv8s for Hailo-8 and YOLOv6n for Hailo-8L—both optimized for 640x640 processing. In addition, documentation and configuration guidance have been updated to reflect these changes.
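The decoupling described here can be sketched generically as a producer/consumer pipeline. This is not the HailoRT API or the PR's actual code; `preprocess` and `infer` are stand-ins for the detector's real stages:

```python
import queue
import threading

def run_pipeline_async(frames, preprocess, infer):
    # Preprocessing runs on a producer thread while inference consumes
    # from a bounded queue, so the two stages overlap in time.
    q = queue.Queue(maxsize=4)

    def producer():
        for frame in frames:
            q.put(preprocess(frame))
        q.put(None)  # sentinel: no more frames

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while (item := q.get()) is not None:
        results.append(infer(item))
    t.join()
    return results
```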

Key Changes:

- Async inference integration:
  - Migrated the inference pipeline to use asynchronous inference as the new standard.
  - Improved throughput and reduced latency by decoupling input preprocessing from model execution.
- Model upgrades:
  - Updated defaults to use YOLOv8s on Hailo-8 and YOLOv6n on Hailo-8L.
  - These models are optimized for 640x640 input dimensions, delivering high accuracy and performance (320 FPS for YOLOv8s and ~300 FPS for YOLOv6n).
- Detection post-processing enhancements:
  - Revised the detect_raw function to use the outer loop index to determine the class label, aligning with our standard detection extraction method.
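The detect_raw change can be sketched as follows. This assumes an on-chip NMS output layout (a list indexed by class id, each entry holding `[y_min, x_min, y_max, x_max, score]` rows) and Frigate's `[class, score, y_min, x_min, y_max, x_max]` detection rows; the names and threshold are illustrative, not the PR's exact code:

```python
import numpy as np

def detect_raw_sketch(raw_output, threshold=0.4, max_detections=20):
    # The outer loop index over the per-class output list *is* the
    # class label, so no separate class field needs to be parsed.
    detections = np.zeros((max_detections, 6), np.float32)
    i = 0
    for class_id, class_dets in enumerate(raw_output):
        for det in class_dets:
            if i == max_detections:
                return detections
            y_min, x_min, y_max, x_max, score = det
            if score < threshold:
                continue  # drop low-confidence boxes
            detections[i] = [class_id, score, y_min, x_min, y_max, x_max]
            i += 1
    return detections
```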


netlify bot commented Mar 3, 2025

Deploy Preview for frigate-docs ready!

Latest commit: b8a20f5
Latest deploy log: https://app.netlify.com/sites/frigate-docs/deploys/67c840f70ccbe8000834f95c
Deploy Preview: https://deploy-preview-16906--frigate-docs.netlify.app

@NickM-27
Collaborator

NickM-27 commented Mar 3, 2025

Hi, thank you for the pull request! Please look at #10717, we will not be able to accept some of these changes

Collaborator

@NickM-27 NickM-27 left a comment


I think we should keep the same default model for the issue I raised above, as well as the fact that Frigate is optimized to work well with the 320x320 input and I think it is preferable to have the default inputs similar across detectors.

@@ -38,6 +38,7 @@ class ModelTypeEnum(str, Enum):
yolov9 = "yolov9"
yolonas = "yolonas"
dfine = "dfine"
hailoyolo = "hailo-yolo"
Collaborator

We don't want these to be detector-specific; I'd suggest just using yolov9, as we may change that to be called yolo-generic or something in the future.

Author

Hey Nick,

We've made this change to support all the models in our model zoo, not just YOLO. Any model, regardless of its configuration, can be run in Frigate as long as its post-processing is done on Hailo (which happens during the model compilation).

Regarding the default SSD MobileNet, it uses a 300x300 input size. Resizing from 320x320 to 300x300 takes more time than simply moving from 320x320 to 640x640. To optimize this, I've updated the detector to perform the resizing (so you can run Frigate with a 320x320 configuration) and use YOLOv6n with a 640x640 input size.

In summary, we now use the default 320x320 configuration for most models, and we only resize and run YOLOv6n with a 640x640 input size.
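Since 640 is an exact multiple of 320, the detector-side upscale can be as cheap as a nearest-neighbor repeat. A minimal numpy sketch of that idea (the actual detector may use an interpolating resize instead):

```python
import numpy as np

def upscale_nn(frame, factor=2):
    # Nearest-neighbor upscale: each pixel is duplicated `factor` times
    # along both spatial axes, e.g. 320x320 -> 640x640 for factor=2.
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)
```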

Collaborator

Right, but this is a mixing of concerns: now we have detectors that rely on the model type and a model type that relies on the detector.

We either need the different model types to be listed out, or just have the Hailo detector ignore the configured model type when using a model from the Hailo zoo
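The second option could look roughly like this. The `.hef` extension is what Hailo-compiled models use; the sentinel return value and function names are purely illustrative, not Frigate code:

```python
def resolve_model_type(configured_type: str, model_path: str):
    # Hailo-compiled models ship as .hef files with post-processing
    # baked in at compile time, so the configured model type can be
    # ignored for them (sketch of the suggestion above).
    if model_path.endswith(".hef"):
        return None  # detector handles output parsing itself
    return configured_type
```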

Author

Made all the required changes and improved performance by 3x. We just need to fix one thing that is not correct, then we will re-push.


The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
Collaborator

We should leave this in here, perhaps create a table like the other hardware sections so users know what to expect.

I am happy to do that separately if you can provide some inference times for the yolov6 model used here


Can you explain how the detector inference speed and detector CPU usage are measured in your system metrics page?
We removed the table because we saw a mismatch between the inference speed measured in your metrics and the one we measure.

Collaborator

It is the total time of inference, including image preprocessing and detection post-processing.
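In other words, the reported number wraps the whole call, roughly like the following sketch (the stage functions are placeholders, not Frigate's actual internals):

```python
import time

def timed_detect(preprocess, infer, postprocess, frame):
    # The reported inference speed covers preprocessing, model
    # execution, and post-processing together.
    start = time.monotonic()
    result = postprocess(infer(preprocess(frame)))
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return result, elapsed_ms
```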

@NickM-27
Collaborator

NickM-27 commented Mar 5, 2025

Looks like ruff needs to be run

class HailoDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY]
device: str = Field(default="PCIe", title="Device Type")
#url: Optional[str] = Field(default=None, title="Custom Model URL")
Collaborator

@hawkeye217 hawkeye217 Mar 5, 2025

This commented line can just be removed if it's unused.
