Understanding Objects and Models #15494
baitinghollow asked this question in Ask A Question
-
All of the TensorRT models are pulled from NVIDIA's pretrained models; you can see that taking place at https://github.com/NateMeyer/tensorrt_demos/blob/master/yolo/download_yolo.sh. Frigate+ supports the MobileDet model for the Coral as well as YOLO-NAS models, which can run on GPUs like NVIDIA's. Aside from that, you can of course train your own model on another architecture, but that is a manual process not involving Frigate+. The TensorRT models support the same labels, though there is a known issue where some models seem to have an incorrect labelmap, leading to some labels not being detected correctly.
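For reference, here is a minimal sketch of the config this points to, assuming a yolov7x-320 build (the -320 suffix is the 320x320 input resolution the model is generated at). The model path, GPU index, and the coco-80 labelmap file are assumptions that may vary with your Frigate version; pinning labelmap_path is the usual workaround if labels come out mapped incorrectly:

```yaml
# docker-compose (tensorrt image): choose which model(s) get built at startup
# environment:
#   - YOLO_MODELS=yolov7x-320

detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index to use for inference

model:
  path: /config/model_cache/tensorrt/yolov7x-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320    # must match the -320 variant
  height: 320
  # Assumed: an 80-class COCO labelmap shipped in the image; adjust the path
  # (or supply your own file) if your version stores it elsewhere.
  labelmap_path: /labelmap/coco-80.txt
```

Note that models generated this way use the standard COCO label set, which does not include a deer class, whereas Frigate+ models can be trained with additional labels.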
-
Greetings All,
I have Frigate running on an HP EliteDesk 800 with a Coral TPU (and Frigate+). It works great with no issues.
I have been learning all about the ins and outs of using the GPU for object tracking on a PC that has an Nvidia RTX card. After stretching my brain to the limit, I now have it working well with a yolov7x-320 model.
Comparing the two units, I understand there are differences in the models. But after hours of searching, I’m having trouble with some fundamental concepts.
First, searching the web, I don't find a yolov7 model that is -320. Is that made by the Frigate team with objects in it? Is there a list of object types for the YOLO model? For instance, the Coral TPU detects and labels a deer correctly, but the YOLO model does not seem to. Both models do well with vehicles and persons, though.
The docs say the YOLO model can be improved over time with more images. Is there a mechanism where you can submit images to improve the YOLO model, as there is with Frigate+?
Anything that helps me understand these concepts would be greatly appreciated.