Copyright (C) 2023, Axis Communications AB, Lund, Sweden. All Rights Reserved.
Axis network cameras can be used for computer vision applications and can run machine learning models for inference. Which model to use depends on your device and your application. This repository contains a collection of models compatible with Axis cameras, together with performance measures (accuracy and speed). Our goal is to keep updating this collection with models for different applications, such as object detection or pose estimation. For easy reproduction, we mostly use publicly available models, and we also share the tools used to benchmark them. We link the model files and, where they are public, the checkpoint (ckpt) files needed to continue training. The speed measurements in the tables are updated with every AXIS OS release.
Note: These are not production-quality models; they are off-the-shelf models used for comparison and demonstration purposes only.
Image classification models:

Platform | Model | TF version | Speed | Accuracy |
---|---|---|---|---|
ARTPEC-7 (Q1615 Mk III) | MobilenetV2 (ckpt) | 1 | 4.46 ms | Top 1: 68.9% Top 5: 88.2% |
ARTPEC-7 (Q1615 Mk III) | MobilenetV2 | 2 | 4.45 ms | Top 1: 69.6% Top 5: 89.1% |
ARTPEC-7 (Q1615 Mk III) | MobilenetV3 | 2 | 4.63 ms | Top 1: 72.7% Top 5: 91.1% |
ARTPEC-8 (P1465-LE) | MobilenetV2 (ckpt) | 1 | 9.83 ms | Top 1: 68.8% Top 5: 88.9% |
ARTPEC-8 (Q1656-LE) | MobilenetV2 (ckpt) | 1 | 5.35 ms | Top 1: 68.8% Top 5: 88.9% |
ARTPEC-9 (Q1728) | MobilenetV2 (ckpt) | 1 | 2.80 ms | Top 1: 69.1% Top 5: 89.0% |
CV25 (M3085-V) | MobilenetV2 (ckpt) | 1 | 5.36 ms | Top 1: 66.8% Top 5: 87.2% |
CV25 (M3085-V) | EfficientNet-Lite0 (ckpt) | 1 | 6.76 ms | Top 1: 71.2% Top 5: 90.3% |
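
For a quick sanity check of one of the classification models above outside the camera, a model in `.tflite` format can be loaded with the `tflite-runtime` Python package. The sketch below is a minimal example assuming a quantized MobileNetV2 file; the file name is a placeholder, not a file shipped in this repository.

```python
# Minimal sketch: one classification inference with tflite-runtime.
# The model file name below is a placeholder; use your own .tflite file.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2_quant.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# MobileNetV2 expects an NHWC input; random data stands in for a real frame here.
height, width = input_details["shape"][1:3]
dummy_input = np.random.randint(0, 256, size=(1, height, width, 3), dtype=np.uint8)

interpreter.set_tensor(input_details["index"], dummy_input)
interpreter.invoke()

scores = interpreter.get_tensor(output_details["index"])[0]
top5 = np.argsort(scores)[-5:][::-1]
print("Top-5 class indices:", top5)
```
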
Object detection models:

Platform | Model | Speed | Accuracy |
---|---|---|---|
ARTPEC-7 (Q1615 Mk III) | SSD MobilenetV2 | 17.21 ms | mAP: 25.6% |
ARTPEC-7 (Q1615 Mk III) | SSDLite MobileDet | 30.49 ms | mAP: 32.9% |
ARTPEC-8 (P1465-LE) | SSD MobilenetV2 | 27.89 ms | mAP: 25.6% |
ARTPEC-8 (P1465-LE) | SSDLite MobileDet | 38.79 ms | mAP: 32.9% |
ARTPEC-8 (P1465-LE) | Yolov5n-Artpec8 (ckpt) | 100.00 ms | mAP: 23.5% |
ARTPEC-8 (Q1656-LE) | SSD MobilenetV2 | 18.42 ms | mAP: 25.6% |
ARTPEC-8 (Q1656-LE) | SSDLite MobileDet | 28.59 ms | mAP: 32.9% |
ARTPEC-8 (Q1656-LE) | Yolov5n-Artpec8 (ckpt) | 55.03 ms | mAP: 23.5% |
ARTPEC-8 (Q1656-LE) | Yolov5s-Artpec8 (ckpt) | 69.50 ms | mAP: 32.3% |
ARTPEC-8 (Q1656-LE) | Yolov5m-Artpec8 (ckpt) | 94.77 ms | mAP: 37.9% |
ARTPEC-9 (Q1728) | SSD MobilenetV2 | 14.36 ms | mAP: 25.6% |
ARTPEC-9 (Q1728) | SSDLite MobileDet | 25.42 ms | mAP: 32.9% |
ARTPEC-9 (Q1728) | Yolov5n-Artpec9 (ckpt) | 54.82 ms | mAP: 23.3% |
ARTPEC-9 (Q1728) | Yolov5s-Artpec9 (ckpt) | 58.93 ms | mAP: 32.2% |
ARTPEC-9 (Q1728) | Yolov5m-Artpec9 (ckpt) | 68.48 ms | mAP: 38.1% |
Values for AXIS OS 12.3.56.
Note: To comply with the licensing terms of Ultralytics, the YOLOv5 model files in the table above are licensed under AGPL-3.0-only. The license file is available together with the models here.
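
For the SSD-style detectors above, exported TFLite files commonly expose the standard detection post-processing outputs (boxes, classes, scores and detection count). The sketch below shows how such outputs could be read off-device; the output ordering and file name are assumptions to verify against your specific model, and this is not the on-camera path used for the speed measurements.

```python
# Minimal sketch: read the four common TFLite SSD post-processing outputs.
# Output ordering varies between exports; verify it for your model.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="ssd_mobilenet_v2_quant.tflite")  # placeholder path
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
height, width = inp["shape"][1:3]
interpreter.set_tensor(inp["index"],
                       np.random.randint(0, 256, (1, height, width, 3), dtype=np.uint8))
interpreter.invoke()

out = interpreter.get_output_details()
boxes = interpreter.get_tensor(out[0]["index"])[0]    # [N, 4] ymin, xmin, ymax, xmax (normalized)
classes = interpreter.get_tensor(out[1]["index"])[0]  # [N] class indices
scores = interpreter.get_tensor(out[2]["index"])[0]   # [N] confidence scores
count = int(interpreter.get_tensor(out[3]["index"])[0])

for i in range(count):
    if scores[i] > 0.5:
        print(f"class {int(classes[i])}, score {scores[i]:.2f}, box {boxes[i]}")
```
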
There are many factors to consider when determining the performance of a machine learning model. This repository aims to showcase two key performance indicators: inference speed and accuracy. In the following sections we will describe how they are measured.
The auto-test-framework directory contains the code for measuring the average inference speed and updating the speed value of each model in the tables above. This test is run for every AXIS OS release. The test is done by installing and running an ACAP application on the Axis camera. To learn more about how it works, see the larod-test directory.
If you want to measure the speed of your own models more conveniently, you can use the script model_performance_tester.py. It connects to the Axis camera via SSH and uses the larod-client command to run a specified number of inferences on random data. When all inferences have been run, the output from larod-client is parsed to find the mean inference time. The script is used as follows:

```sh
python3 ./scripts/model_performance_tester.py \
    --model_path <MODEL_PATH> --test_duration <DURATION> \
    --chip <CHIP> --device_ip <IP> --device_credentials <USER> <PASS> --device_port <SSH_PORT>
```
- `<MODEL_PATH>` is the path to your `.tflite` or `.bin` model.
- `<DURATION>` is the number of inferences to run.
- `<CHIP>` is the larod device to use: `CPU`, `A9-DLPU`, `A8-DLPU`, `A7-GPU`, `A7-TPU`, or `CV25`.
- `<IP>` is the IP address of the device.
- `<USER>`, `<PASS>` are the device credentials.
- `<SSH_PORT>` is the SSH port of the device; the default is port `22`.
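
For example, a hypothetical invocation that runs 1000 inferences on an ARTPEC-8 DLPU could look like the following; the model path, IP address and credentials are placeholders.

```sh
python3 ./scripts/model_performance_tester.py \
    --model_path models/my_model.tflite --test_duration 1000 \
    --chip A8-DLPU --device_ip 192.168.0.90 --device_credentials root secret --device_port 22
```
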
Go to the Test your model page to learn more about testing machine learning models on Axis devices.
There are no automated tests for the accuracy results, and they are not re-evaluated for each AXIS OS release. However, the image classification models have been evaluated by installing and running an ACAP application on an Axis camera. To learn more about how it works, see the accuracy-test directory.
The accuracy of the object detection models has never been evaluated on an Axis camera. Instead, the accuracy results come from the Coral object detection models, except for our custom-trained YOLOv5 models, which were evaluated during the "Evaluate the model accuracy" step in the YOLOv5 on ARTPEC-8 and YOLOv5 on ARTPEC-9 guides.