add benchmark results on VisionFive 2 with conflicts resolved (#173)
Files changed:
- README.md +1 -0
- benchmark/README.md +52 -0
- benchmark/color_table.svg +0 -0
- benchmark/table_config.yaml +4 -0
README.md CHANGED
@@ -31,6 +31,7 @@ Hardware Setup:
 - [Khadas Edge 2](https://www.khadas.com/edge2): Rockchip RK3588S SoC with a CPU of 2.25 GHz Quad Core ARM Cortex-A76 + 1.8 GHz Quad Core Cortex-A55, and a 6 TOPS NPU.
 - [Horizon Sunrise X3](https://developer.horizon.ai/sunrise): an SoC from Horizon Robotics with a quad-core ARM Cortex-A53 1.2 GHz CPU and a 5 TOPS BPU (a.k.a NPU).
 - [MAIX-III AXera-Pi](https://wiki.sipeed.com/hardware/en/maixIII/ax-pi/axpi.html#Hardware): Axera AX620A SoC with a quad-core ARM Cortex-A7 CPU and a 3.6 TOPS @ int8 NPU.
+- [StarFive VisionFive 2](https://doc-en.rvspace.org/VisionFive2/Product_Brief/VisionFive_2/specification_pb.html): `StarFive JH7110` SoC with a quad-core RISC-V CPU that can turbo up to 1.5 GHz, and an Imagination `IMG BXE-4-32 MC1` GPU with a clock frequency of up to 600 MHz.
 - [NVIDIA Jetson Nano B01](https://developer.nvidia.com/embedded/jetson-nano-developer-kit): a Quad-core ARM A57 @ 1.43 GHz CPU, and a 128-core NVIDIA Maxwell GPU.
 - [Khadas VIM3](https://www.khadas.com/vim3): Amlogic A311D SoC with a 2.2GHz Quad core ARM Cortex-A73 + 1.8GHz dual core Cortex-A53 ARM CPU, and a 5 TOPS NPU. Benchmarks are done using **per-tensor quantized** models. Follow [this guide](https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU) to build OpenCV with TIM-VX backend enabled.
 - [Atlas 200 DK](https://e.huawei.com/en/products/computing/ascend/atlas-200): Ascend 310 NPU with 22 TOPS @ INT8. Follow [this guide](https://github.com/opencv/opencv/wiki/Huawei-CANN-Backend) to build OpenCV with CANN backend enabled.
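Several of the boards in this list are only exercised fully when OpenCV is built with the matching backend (TIM-VX for the VIM3 NPU, CANN for the Atlas 200 DK). Before benchmarking, it can be worth confirming which DNN backends and targets the installed OpenCV build actually exposes. The sketch below is a minimal check, assuming an OpenCV 4.x Python build; the backend constants it probes may simply be absent if the corresponding support was not compiled in.

```python
# Minimal check of which DNN targets the local OpenCV build exposes.
# The set of reported targets depends entirely on how OpenCV was compiled
# (e.g. whether the TIM-VX or CANN backend was enabled at build time).
import cv2 as cv

backends = {
    "DNN_BACKEND_OPENCV": cv.dnn.DNN_BACKEND_OPENCV,
    "DNN_BACKEND_TIMVX": getattr(cv.dnn, "DNN_BACKEND_TIMVX", None),
    "DNN_BACKEND_CANN": getattr(cv.dnn, "DNN_BACKEND_CANN", None),
    "DNN_BACKEND_CUDA": getattr(cv.dnn, "DNN_BACKEND_CUDA", None),
}

for name, backend in backends.items():
    if backend is None:
        print(f"{name}: not present in this build")
        continue
    # getAvailableTargets returns the DNN_TARGET_* ids usable with this backend.
    targets = cv.dnn.getAvailableTargets(backend)
    print(f"{name}: targets {targets}")
```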
benchmark/README.md CHANGED
@@ -659,3 +659,55 @@ mean median min input size model
 3001.31    3237.93    2353.81    [1280, 720]  CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
 2887.05    3224.12    2206.89    [1280, 720]  CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
 ```
+
+### StarFive VisionFive 2
+
+Specs: [details_cn](https://doc.rvspace.org/VisionFive2/PB/VisionFive_2/specification_pb.html), [details_en](https://doc-en.rvspace.org/VisionFive2/Product_Brief/VisionFive_2/specification_pb.html)
+- CPU: StarFive JH7110 with a RISC-V quad-core CPU (2 MB L2 cache plus a monitor core), supporting the RV64GC ISA and running at up to 1.5 GHz
+- GPU: IMG BXE-4-32 MC1 with a clock frequency of up to 600 MHz
+
+CPU:
+
+```
+$ python3 benchmark.py --all --cfg_exclude wechat:dasiam --model_exclude license_plate_detection_lpd_yunet_2023mar_int8.onnx:human_segmentation_pphumanseg_2023mar_int8.onnx
+Benchmarking ...
+backend=cv.dnn.DNN_BACKEND_OPENCV
+target=cv.dnn.DNN_TARGET_CPU
+mean       median     min        input size   model
+50.28      50.42      50.08      [160, 120]   YuNet with ['face_detection_yunet_2022mar.onnx']
+44.45      44.84      39.29      [160, 120]   YuNet with ['face_detection_yunet_2022mar_int8.onnx']
+1059.87    1059.79    1058.95    [150, 150]   SFace with ['face_recognition_sface_2021dec.onnx']
+838.07     859.42     658.86     [150, 150]   SFace with ['face_recognition_sface_2021dec_int8.onnx']
+424.55     424.74     424.06     [112, 112]   FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july.onnx']
+350.30     357.95     290.66     [112, 112]   FacialExpressionRecog with ['facial_expression_recognition_mobilefacenet_2022july_int8.onnx']
+314.50     313.75     313.67     [224, 224]   MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+275.80     280.48     243.97     [224, 224]   MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+1131.91    1132.16    1131.08    [192, 192]   PPHumanSeg with ['human_segmentation_pphumanseg_2023mar.onnx']
+1072.77    1073.31    1072.07    [224, 224]   MobileNet with ['image_classification_mobilenetv1_2022apr.onnx']
+811.64     837.32     602.08     [224, 224]   MobileNet with ['image_classification_mobilenetv2_2022apr.onnx']
+692.68     602.74     516.39     [224, 224]   MobileNet with ['image_classification_mobilenetv1_2022apr_int8.onnx']
+596.12     559.52     382.75     [224, 224]   MobileNet with ['image_classification_mobilenetv2_2022apr_int8.onnx']
+8131.86    8132.90    8128.55    [224, 224]   PPResNet with ['image_classification_ppresnet50_2022jan.onnx']
+5412.98    5684.12    3236.35    [224, 224]   PPResNet with ['image_classification_ppresnet50_2022jan_int8.onnx']
+2265.62    2264.83    2263.38    [320, 240]   LPD_YuNet with ['license_plate_detection_lpd_yunet_2023mar.onnx']
+1727.39    1727.31    1726.31    [416, 416]   NanoDet with ['object_detection_nanodet_2022nov.onnx']
+1429.48    1458.69    1189.19    [416, 416]   NanoDet with ['object_detection_nanodet_2022nov_int8.onnx']
+26156.87   26169.88   26134.95   [640, 640]   YoloX with ['object_detection_yolox_2022nov.onnx']
+17151.71   17933.90   9675.03    [640, 640]   YoloX with ['object_detection_yolox_2022nov_int8.onnx']
+316.26     315.72     315.55     [224, 224]   MPHandPose with ['handpose_estimation_mediapipe_2023feb.onnx']
+276.38     280.84     243.11     [224, 224]   MPHandPose with ['handpose_estimation_mediapipe_2023feb_int8.onnx']
+586.18     586.28     585.62     [192, 192]   MPPalmDet with ['palm_detection_mediapipe_2023feb.onnx']
+542.79     546.26     506.12     [192, 192]   MPPalmDet with ['palm_detection_mediapipe_2023feb_int8.onnx']
+910.67     910.62     909.72     [224, 224]   MPPersonDet with ['person_detection_mediapipe_2023mar.onnx']
+7628.31    7624.65    7623.26    [128, 256]   YoutuReID with ['person_reid_youtu_2021nov.onnx']
+4899.76    5171.88    2714.07    [128, 256]   YoutuReID with ['person_reid_youtu_2021nov_int8.onnx']
+486.59     490.33     484.31     [256, 256]   MPPose with ['pose_estimation_mediapipe_2023mar.onnx']
+34888.37   34834.51   34103.30   [640, 480]   DB with ['text_detection_DB_IC15_resnet18_2021sep.onnx']
+35123.00   35996.09   34103.30   [640, 480]   DB with ['text_detection_DB_TD500_resnet18_2021sep.onnx']
+1425.08    1543.33    1413.01    [1280, 720]  CRNN with ['text_recognition_CRNN_CH_2021sep.onnx']
+1455.55    1580.51    1413.01    [1280, 720]  CRNN with ['text_recognition_CRNN_CN_2021nov.onnx']
+1457.01    1484.13    1413.01    [1280, 720]  CRNN with ['text_recognition_CRNN_EN_2021sep.onnx']
+1281.84    1468.77    810.51     [1280, 720]  CRNN with ['text_recognition_CRNN_CH_2022oct_int8.onnx']
+1191.52    1517.48    810.51     [1280, 720]  CRNN with ['text_recognition_CRNN_CN_2021nov_int8.onnx']
+1111.95    1131.27    775.96     [1280, 720]  CRNN with ['text_recognition_CRNN_EN_2022oct_int8.onnx']
+```
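Each row in the table above aggregates repeated forward passes of one model under the printed backend/target pair. As a rough illustration of where the mean/median/min columns come from, the sketch below times a single model with the OpenCV DNN API. The local model path and the NCHW blob shape (1x3x120x160 for the listed [160, 120] input size) are assumptions for illustration only; benchmark.py itself adds model-specific pre/post-processing and warm-up that are omitted here.

```python
# Rough re-creation of one benchmark row (mean/median/min over repeated runs)
# for the YuNet entry. Not the benchmark.py implementation, just a sketch.
import statistics
import time

import cv2 as cv
import numpy as np

net = cv.dnn.readNet("face_detection_yunet_2022mar.onnx")  # assumed local path
net.setPreferableBackend(cv.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)

# Assumed NCHW input layout for the listed [160, 120] (width, height) size.
blob = np.random.rand(1, 3, 120, 160).astype(np.float32)

times_ms = []
for _ in range(10):  # first iteration also pays any lazy-initialization cost
    net.setInput(blob)
    start = time.perf_counter()
    net.forward()
    times_ms.append((time.perf_counter() - start) * 1000.0)

print(f"mean {statistics.mean(times_ms):.2f}  "
      f"median {statistics.median(times_ms):.2f}  "
      f"min {min(times_ms):.2f}  (ms)")
```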
benchmark/color_table.svg CHANGED
benchmark/table_config.yaml CHANGED
@@ -174,6 +174,10 @@ Devices:
   display_info: "Rasberry Pi 4B\nBCM2711\nCPU"
   platform: "CPU"
 
+- name: "StarFive VisionFive 2"
+  display_info: "StarFive VisionFive 2\nStarFive JH7110\nCPU"
+  platform: "CPU"
+
 - name: "Toybrick RV1126"
   display_info: "Toybrick\nRV1126\nCPU"
   platform: "CPU"
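The new entry in table_config.yaml is presumably what adds the VisionFive 2 column to the regenerated color_table.svg. A quick way to sanity-check the YAML after resolving the merge conflicts is to load it and list the configured devices. A minimal sketch, assuming PyYAML is installed and the top-level `Devices` key shown in the hunk header above:

```python
# Sanity check for benchmark/table_config.yaml: list configured devices and
# confirm the new StarFive VisionFive 2 entry is present.
import yaml

with open("benchmark/table_config.yaml") as f:
    config = yaml.safe_load(f)

devices = config["Devices"]  # top-level key as shown in the hunk header
for device in devices:
    print(f'{device["name"]:<25} platform={device.get("platform", "?")}')

assert any(d["name"] == "StarFive VisionFive 2" for d in devices), \
    "VisionFive 2 entry missing"
```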