Re-quantize some models from per_channel mode to per_tensor mode (#90) 323da84 Wanli committed on Jan 6, 2023
Revise quantize-ort to quantize CRNN into int8 with high accuracy (#94) d30c3db Yiyao Wang committed on Oct 13, 2022
Use QuantFormat.QOperator by default to avoid fake quantization (#88) 4b236af ytfeng committed on Aug 29, 2022
Add the missing yaml config for quantizing MP-PalmDet and improve quantized MP-PalmDet (#60) 83c563e ytfeng committed on Jun 8, 2022
Use YuNet of fixed input shape to avoid 'parseShape' error (#45) 62917b7 ytfeng committed on Apr 1, 2022
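Several of the entries above adjust ONNX Runtime static-quantization settings: #90 moves models from per_channel to per_tensor scales, and #88 makes QuantFormat.QOperator the default so the exported graph contains real int8 operators rather than QuantizeLinear/DequantizeLinear ("fake quantization") pairs. The sketch below shows where those options live in onnxruntime.quantization; the model paths, input name, and random-data calibration reader are illustrative assumptions, not the repo's actual quantize-ort setup.

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random tensors as calibration data (real images would be used in practice)."""
    def __init__(self, input_name, shape, num_samples=8):
        self._batches = iter(
            {input_name: np.random.rand(*shape).astype(np.float32)}
            for _ in range(num_samples)
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    "model_fp32.onnx",                                    # hypothetical input model
    "model_int8.onnx",                                    # hypothetical output model
    RandomCalibrationReader("input", (1, 3, 120, 160)),   # assumed input name and shape
    quant_format=QuantFormat.QOperator,   # fused int8 ops instead of QDQ "fake quant" pairs (#88)
    per_channel=False,                    # per_tensor scales, as in the re-quantization in #90
    weight_type=QuantType.QInt8,
)
```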
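The last entry swaps in a YuNet model with a fixed input shape so that OpenCV's ONNX importer does not fail with a 'parseShape' error on dynamic dimensions. Below is a minimal sketch of pinning an ONNX model's input dimensions with the onnx Python API, assuming a hypothetical file name and an illustrative 1x3x120x160 shape; the model in the zoo may simply have been re-exported with a fixed shape instead.

```python
import onnx

model = onnx.load("face_detection_yunet.onnx")   # hypothetical path
fixed_dims = [1, 3, 120, 160]                    # illustrative NCHW shape, not necessarily YuNet's

# Replace each (possibly symbolic) input dimension with a concrete value.
for dim, value in zip(model.graph.input[0].type.tensor_type.shape.dim, fixed_dims):
    dim.ClearField("dim_param")                  # drop the symbolic name, if present
    dim.dim_value = value

onnx.checker.check_model(model)
onnx.save(model, "face_detection_yunet_fixed.onnx")
```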