# Ultra-lightweight face detection model
## Description
This model is a lightweight face detection model designed for edge computing devices.
## Model
| Model | Download | Download (with sample test data) | ONNX version | Opset version |
|---|---|---|---|---|
| version-RFB-320 | 1.21 MB | 1.92 MB | 1.4 | 9 |
| version-RFB-640 | 1.51 MB | 4.59 MB | 1.4 | 9 |
| version-RFB-320-int8 | 0.44 MB | 1.2 MB | 1.14 | 12 |
## Dataset
The training set is in VOC format, generated from the WIDER FACE dataset together with the cleaned WIDER FACE labels provided by RetinaFace.
## Source
You can find the source code here.
## Demo
Run the demo.py Python script for an example.
## Inference
### Input
The input tensor is 1 x 3 x height x width with mean values 127, 127, 127 and a scale factor of 1.0 / 128. The input image has to be converted to RGB format and resized to 320 x 240 pixels for the version-RFB-320 model (or 640 x 480 for the version-RFB-640 model).
### Preprocessing
Given a path image_path to the image you would like to score:

```python
import cv2
import numpy as np

orig_image = cv2.imread(image_path)
image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (320, 240))
image_mean = np.array([127, 127, 127])
image = (image - image_mean) / 128
image = np.transpose(image, [2, 0, 1])
image = np.expand_dims(image, axis=0)
image = image.astype(np.float32)
```
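The normalization and layout steps above can be wrapped in a helper. A minimal NumPy-only sketch (the color conversion and resize from the snippet above are assumed to have already produced an RGB image of the expected size; the function name is illustrative, not part of the project):

```python
import numpy as np

def normalize_input(rgb_image):
    """Turn an RGB uint8 image of shape (H, W, 3) into a 1 x 3 x H x W
    float32 tensor with mean 127 and scale 1/128, as the model expects."""
    image = (rgb_image.astype(np.float32) - 127.0) / 128.0
    image = np.transpose(image, [2, 0, 1])   # HWC -> CHW
    return np.expand_dims(image, axis=0)     # add batch dimension

# Example with a synthetic 240 x 320 image (version-RFB-320 input size)
dummy = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
tensor = normalize_input(dummy)
print(tensor.shape, tensor.dtype)  # (1, 3, 240, 320) float32
```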
### Output
The model outputs two arrays: scores of shape (1 x 4420 x 2) and boxes of shape (1 x 4420 x 4).
### Postprocessing
In postprocessing, score thresholding and non-maximum suppression (NMS) are applied to the scores and boxes arrays.
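The exact thresholds and NMS variant used by the demo may differ; a minimal NumPy sketch of score thresholding followed by greedy (hard) NMS, assuming boxes in (x1, y1, x2, y2) form and that column 1 of the scores holds the face probability:

```python
import numpy as np

def hard_nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression on (N, 4) boxes; returns kept indices."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of the top box with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]
    return np.array(keep)

def postprocess(scores, boxes, prob_threshold=0.7, iou_threshold=0.5):
    """scores: (N, 2), boxes: (N, 4); keep high-scoring, non-overlapping boxes."""
    face_probs = scores[:, 1]
    mask = face_probs > prob_threshold
    boxes, face_probs = boxes[mask], face_probs[mask]
    if boxes.shape[0] == 0:
        return boxes, face_probs
    kept = hard_nms(boxes, face_probs, iou_threshold)
    return boxes[kept], face_probs[kept]

# Synthetic example: two heavily overlapping boxes and one separate box
boxes = np.array([[0.10, 0.10, 0.30, 0.30],
                  [0.11, 0.11, 0.31, 0.31],
                  [0.60, 0.60, 0.80, 0.80]])
probs = np.array([0.9, 0.8, 0.95])
scores = np.stack([1 - probs, probs], axis=1)
out_boxes, out_probs = postprocess(scores, boxes)
print(out_boxes.shape)  # (2, 4): the duplicate box is suppressed
```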
## Quantization
version-RFB-320-int8 is obtained by quantizing the fp32 version-RFB-320 model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
### Prepare Model
Download the model from the ONNX Model Zoo:

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-320.onnx
```
Convert the opset version to 12 for broader quantization support:

```python
import onnx
from onnx import version_converter

model = onnx.load('version-RFB-320.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'version-RFB-320-12.onnx')
```
### Model quantize

```shell
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static

# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```
## Contributors
## License
MIT