# Accuracy evaluation of models in OpenCV Zoo
Make sure you have the following packages installed:

```shell
pip install tqdm
pip install scikit-learn
pip install scipy
```
Generally speaking, evaluation can be done with the following command:

```shell
python eval.py -m model_name -d dataset_name -dr dataset_root_dir
```
Supported datasets:

- ImageNet
- WIDERFace
- LFW

## ImageNet
### Prepare data
Please visit https://image-net.org/ to download the ImageNet dataset, along with the label files from Caffe (`caffe_ilsvrc12.tar.gz`). Organize the files as follows:
```shell
$ tree -L 2 /path/to/imagenet
.
├── caffe_ilsvrc12
│   ├── det_synset_words.txt
│   ├── imagenet.bet.pickle
│   ├── imagenet_mean.binaryproto
│   ├── synsets.txt
│   ├── synset_words.txt
│   ├── test.txt
│   ├── train.txt
│   └── val.txt
├── caffe_ilsvrc12.tar.gz
├── ILSVRC
│   ├── Annotations
│   ├── Data
│   └── ImageSets
├── imagenet_object_localization_patched2019.tar.gz
├── LOC_sample_submission.csv
├── LOC_synset_mapping.txt
├── LOC_train_solution.csv
└── LOC_val_solution.csv
```
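For reference, the ground-truth labels of the validation split live in `caffe_ilsvrc12/val.txt`, one `<image_name> <class_index>` pair per line. A minimal loading sketch under that assumption (`load_val_labels` is an illustrative helper, not part of `eval.py`):

```python
import os

def load_val_labels(dataset_root):
    """Read caffe_ilsvrc12/val.txt: each line is '<image_name> <class_index>'."""
    labels = {}
    with open(os.path.join(dataset_root, "caffe_ilsvrc12", "val.txt")) as f:
        for line in f:
            name, idx = line.split()
            labels[name] = int(idx)
    return labels

# Usage: labels = load_val_labels("/path/to/imagenet")
```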
### Evaluation
Run evaluation with the following command:

```shell
python eval.py -m mobilenet -d imagenet -dr /path/to/imagenet
```
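ImageNet classification accuracy is conventionally reported as top-1/top-5. A minimal sketch of the metric itself, shown for clarity and not taken from `eval.py`:

```python
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """scores: (N, num_classes) class scores; labels: (N,) ground-truth indices."""
    topk = np.argsort(scores, axis=1)[:, -k:]      # indices of the k highest scores
    hits = (topk == labels[:, None]).any(axis=1)   # is the ground truth among them?
    return hits.mean()

# top1 = topk_accuracy(scores, labels, k=1)
# top5 = topk_accuracy(scores, labels, k=5)
```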
## WIDERFace
The evaluation script is adapted from WiderFace-Evaluation.
### Prepare data
Please visit http://shuoyang1213.me/WIDERFACE to download the WIDERFace dataset (Validation Images, Face annotations and eval_tools). Organize the files as follows:
```shell
$ tree -L 2 /path/to/widerface
.
├── eval_tools
│   ├── boxoverlap.m
│   ├── evaluation.m
│   ├── ground_truth
│   ├── nms.m
│   ├── norm_score.m
│   ├── plot
│   ├── read_pred.m
│   └── wider_eval.m
├── wider_face_split
│   ├── readme.txt
│   ├── wider_face_test_filelist.txt
│   ├── wider_face_test.mat
│   ├── wider_face_train_bbx_gt.txt
│   ├── wider_face_train.mat
│   ├── wider_face_val_bbx_gt.txt
│   └── wider_face_val.mat
└── WIDER_val
    └── images
```
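If you want to inspect the annotations, `wider_face_val_bbx_gt.txt` stores, for each image, a file name, a face count, and one annotation line per face whose first four numbers are `x y w h`. A hedged parsing sketch under that assumption (`parse_wider_gt` is a hypothetical helper, not part of `eval.py`):

```python
def parse_wider_gt(path):
    """Parse wider_face_val_bbx_gt.txt into {image_path: [[x, y, w, h], ...]}."""
    boxes = {}
    with open(path) as f:
        lines = iter(f.read().splitlines())
        for name in lines:
            n = int(next(lines))
            faces = []
            for _ in range(max(n, 1)):        # images with 0 faces still carry one line
                vals = next(lines).split()
                faces.append([int(v) for v in vals[:4]])  # keep x, y, w, h only
            boxes[name] = faces if n > 0 else []
    return boxes
```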
### Evaluation
Run evaluation with the following command:

```shell
python eval.py -m yunet -d widerface -dr /path/to/widerface
```
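WIDERFace evaluation matches detections to ground truth by box overlap (IoU, cf. `boxoverlap.m` in `eval_tools`). A minimal illustration of that criterion for `[x, y, w, h]` boxes:

```python
def iou(a, b):
    """Intersection over union of two [x, y, w, h] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```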
## LFW
The evaluation script is adapted from the evaluation code of InsightFace.
This evaluation uses YuNet as the face detector. The structure of the face bounding boxes saved in `lfw_face_bboxes.npy` is shown below. Each row represents the bounding box of the main face that will be used in each image:
```
[
  [x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm],
  ...
  [x, y, w, h, x_re, y_re, x_le, y_le, x_nt, y_nt, x_rcm, y_rcm, x_lcm, y_lcm]
]
```
`x`, `y`, `w` and `h` are the top-left coordinates, width and height of the face bounding box; `{x, y}_{re, le, nt, rcm, lcm}` stand for the coordinates of the right eye, left eye, nose tip, right corner of the mouth and left corner of the mouth, respectively. The data type of this NumPy array is `np.float32`.
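A minimal sketch of loading and unpacking this array, following the row layout described above (the slicing is illustrative, not taken from the evaluation script):

```python
import numpy as np

bboxes = np.load("lfw_face_bboxes.npy")   # shape: (num_images, 14), dtype float32
for row in bboxes[:3]:
    x, y, w, h = row[:4]                  # face bounding box
    landmarks = row[4:].reshape(5, 2)     # right eye, left eye, nose tip, mouth corners
    print(x, y, w, h, landmarks)
```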
### Prepare data
Please visit http://vis-www.cs.umass.edu/lfw to download the LFW dataset: all images (decompress after downloading) and `pairs.txt` (place it in the `view2` folder). Organize the files as follows:
```shell
$ tree -L 2 /path/to/lfw
.
├── lfw
│   ├── Aaron_Eckhart
│   ├── ...
│   └── Zydrunas_Ilgauskas
└── view2
    └── pairs.txt
```
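For reference, `pairs.txt` follows the standard LFW view2 format: after a header line, matched pairs appear as `name n1 n2` (two images of the same person) and mismatched pairs as `name1 n1 name2 n2`. A hedged parsing sketch under that assumption (`parse_pairs` is a hypothetical helper, not part of `eval.py`):

```python
def parse_pairs(path):
    """Yield (image_a, image_b, is_same) triples from the LFW view2 pairs.txt."""
    img = lambda name, n: f"{name}/{name}_{int(n):04d}.jpg"
    with open(path) as f:
        next(f)                              # header line, e.g. "10 300"
        for line in f:
            p = line.split()
            if len(p) == 3:                  # same person: name n1 n2
                yield img(p[0], p[1]), img(p[0], p[2]), True
            elif len(p) == 4:                # different people: name1 n1 name2 n2
                yield img(p[0], p[1]), img(p[2], p[3]), False
```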
### Evaluation
Run evaluation with the following command:

```shell
python eval.py -m sface -d lfw -dr /path/to/lfw
```
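LFW is a face verification benchmark: each pair is judged same/different by thresholding the similarity of the two face embeddings. A minimal illustration using cosine similarity (the threshold value is a placeholder, not the one used by `eval.py`):

```python
import numpy as np

def is_same_person(feat_a, feat_b, threshold=0.3):
    """Compare two face embeddings by cosine similarity."""
    cos = np.dot(feat_a, feat_b) / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return cos >= threshold
```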