---
language: en
license: apache-2.0
model_name: efficientnet-lite4-11-qdq.onnx
tags:
- validated
- vision
- classification
- efficientnet-lite4
---
<!--- SPDX-License-Identifier: MIT -->

# EfficientNet-Lite4

## Use Cases
EfficientNet-Lite4 is an image classification model that achieves state-of-the-art accuracy. It is designed to run on mobile CPU, GPU, and EdgeTPU devices, enabling applications on mobile and IoT devices where computational resources are limited.

## Description
EfficientNet-Lite4 is the largest and most accurate variant of the EfficientNet-Lite family of models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet-Lite models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real time (e.g. 30 ms/image) on a Pixel 4 CPU.

## Model

|Model |Download |Download (with sample test data)|ONNX version|Opset version|Top-1 accuracy (%)|
|-------------|:--------------|:--------------|:--------------|:--------------|:--------------|
|EfficientNet-Lite4 |[51.9 MB](model/efficientnet-lite4-11.onnx) |[48.6 MB](model/efficientnet-lite4-11.tar.gz)|1.7.0|11|80.4|
|EfficientNet-Lite4-int8 |[13.0 MB](model/efficientnet-lite4-11-int8.onnx) |[12.2 MB](model/efficientnet-lite4-11-int8.tar.gz)|1.9.0|11|77.56|
|EfficientNet-Lite4-qdq |[12.9 MB](model/efficientnet-lite4-11-qdq.onnx) |[9.72 MB](model/efficientnet-lite4-11-qdq.tar.gz)|1.10.0|11|76.90|
> The fp32 Top-1 accuracy obtained with [Intel® Neural Compressor](https://github.com/intel/neural-compressor) is 77.70%. Relative to this value, the int8 EfficientNet-Lite4 Top-1 accuracy drop ratio is 0.18% ((77.70 - 77.56) / 77.70) and the performance improvement is 1.12x.
>
> **Note**
>
> The performance depends on the test hardware. Performance data here was collected with an Intel® Xeon® Platinum 8280 Processor (1 socket, 4 cores per instance), CentOS Linux 8.3, and a data batch size of 1.

### Source
Tensorflow EfficientNet-Lite4 => ONNX EfficientNet-Lite4
ONNX EfficientNet-Lite4 => Quantized ONNX EfficientNet-Lite4

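As a rough illustration of the first conversion step, the sketch below uses `tf2onnx.convert.from_saved_model`; the SavedModel directory and output file name are assumptions, and the conversion notebook listed under References documents the exact procedure used for this model.

```python
# Illustrative sketch only: paths and the exact tf2onnx API surface may differ by version;
# see the conversion notebook in the References for the authoritative steps.
import tf2onnx

model_proto, _ = tf2onnx.convert.from_saved_model(
    "efficientnet-lite4/saved_model",         # hypothetical SavedModel export directory
    opset=11,                                 # matches the opset listed in the table above
    output_path="efficientnet-lite4-11.onnx"  # hypothetical output file name
)
```
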
<hr>

## Inference

### Running Inference
The following code shows how to run inference using onnxruntime.

```python
import onnxruntime as rt

# load the model
# Starting from ORT 1.10, ORT requires explicitly setting the providers parameter if you want to use execution
# providers other than the default CPU provider (as opposed to the previous behavior of providers getting
# set/registered by default based on the build flags) when instantiating InferenceSession.
# For example, if an NVIDIA GPU is available and the ORT Python package is built with CUDA, call the API as follows:
# rt.InferenceSession(path/to/model, providers=['CUDAExecutionProvider'])
sess = rt.InferenceSession(MODEL + ".onnx")  # MODEL holds the model file name without the .onnx extension

# run inference
results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
```

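If you want a non-default execution provider on ORT 1.10 or newer, a minimal sketch is below; the file name `efficientnet-lite4-11.onnx` is an assumption, so substitute whichever model file you downloaded.

```python
import onnxruntime as rt

# prefer CUDA when this onnxruntime build exposes it, otherwise fall back to the CPU provider
available = rt.get_available_providers()
providers = ["CUDAExecutionProvider"] if "CUDAExecutionProvider" in available else ["CPUExecutionProvider"]
sess = rt.InferenceSession("efficientnet-lite4-11.onnx", providers=providers)  # assumed local file name
```
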
### Input to model
The input image is resized to shape `float32[1,224,224,3]`: a batch size of 1, with height and width dimensions of 224 x 224. The input is an RGB image with 3 channels: red, green, and blue. Inference was done using a jpg image.

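To read the expected input and output names and shapes directly from the model instead of hard-coding them, a small sketch (assuming the fp32 model file sits in the working directory):

```python
import onnxruntime as rt

sess = rt.InferenceSession("efficientnet-lite4-11.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
out = sess.get_outputs()[0]
print(inp.name, inp.shape, inp.type)  # expected: images:0 [1, 224, 224, 3] tensor(float)
print(out.name, out.shape)            # expected: Softmax:0 [1, 1000]
```
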
### Preprocessing steps
The following code shows how to preprocess the input image. For more details visit [this conversion notebook](https://github.com/onnx/tensorflow-onnx/blob/master/tutorials/efficientnet-lite.ipynb).

```python
import numpy as np
import math
import matplotlib.pyplot as plt
import onnxruntime as rt
import cv2
import json

# load the labels text file
labels = json.load(open("labels_map.txt", "r"))

# set image file dimensions to 224x224 by resizing and cropping image from center
def pre_process_edgetpu(img, dims):
    output_height, output_width, _ = dims
    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)
    img = center_crop(img, output_height, output_width)
    img = np.asarray(img, dtype='float32')
    # convert jpg pixel values from [0 - 255] to float array [-1.0 - 1.0]
    img -= [127.0, 127.0, 127.0]
    img /= [128.0, 128.0, 128.0]
    return img

# resize the image with a proportional scale
# (with scale=87.5, a 224x224 crop target maps to a 256-pixel shorter side before cropping)
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
    height, width, _ = img.shape
    new_height = int(100. * out_height / scale)
    new_width = int(100. * out_width / scale)
    if height > width:
        w = new_width
        h = int(new_height * height / width)
    else:
        h = new_height
        w = int(new_width * width / height)
    img = cv2.resize(img, (w, h), interpolation=inter_pol)
    return img

# crop the image around the center based on given height and width
def center_crop(img, out_height, out_width):
    height, width, _ = img.shape
    left = int((width - out_width) / 2)
    right = int((width + out_width) / 2)
    top = int((height - out_height) / 2)
    bottom = int((height + out_height) / 2)
    img = img[top:bottom, left:right]
    return img

# read the image
fname = "image_file"  # path to the input jpg image
img = cv2.imread(fname)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# pre-process the image like mobilenet and resize it to 224x224
img = pre_process_edgetpu(img, (224, 224, 3))
plt.axis('off')
plt.imshow(img)
plt.show()

# create a batch of 1 (that batch size is burned into the saved_model)
img_batch = np.expand_dims(img, axis=0)
```

### Output of model
The model outputs an array of inference scores with shape `float32[1,1000]`, one score per class. Each index maps to a label through the `labels_map.txt` file, which is used to classify the image.

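As a quick example, the top-1 prediction can be read straight from the softmax scores; this reuses `results` and `labels` from the snippets above.

```python
import numpy as np

top1 = int(np.argmax(results[0]))  # index of the highest softmax score
print(top1, labels[str(top1)], results[0][top1])
```
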
### Postprocessing steps
The following code shows how to print the top results of the model.

```python
# load the model
# Starting from ORT 1.10, ORT requires explicitly setting the providers parameter if you want to use execution
# providers other than the default CPU provider (as opposed to the previous behavior of providers getting
# set/registered by default based on the build flags) when instantiating InferenceSession.
# For example, if an NVIDIA GPU is available and the ORT Python package is built with CUDA, call the API as follows:
# rt.InferenceSession(path/to/model, providers=['CUDAExecutionProvider'])
sess = rt.InferenceSession(MODEL + ".onnx")

# run inference and print the top-5 results
results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
result = reversed(results[0].argsort()[-5:])
for r in result:
    print(r, labels[str(r)], results[0][r])
```
<hr>

## Dataset (Train and validation)
The model was trained and validated on the [ImageNet (ILSVRC 2012)](https://image-net.org/) dataset, which is the dataset used for the Top-1 accuracy figures above.
<hr>

## Validation
Refer to the [efficientnet-lite4 conversion notebook](https://github.com/onnx/tensorflow-onnx/blob/master/tutorials/efficientnet-lite.ipynb) for details on how to use the model and reproduce its accuracy.
<hr>

## Quantization
EfficientNet-Lite4-int8 and EfficientNet-Lite4-qdq are obtained by quantizing the fp32 EfficientNet-Lite4 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. View the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/efficientnet/quantization/ptq/README.md) to understand how to use Intel® Neural Compressor for quantization.

### Environment
* onnx: 1.9.0
* onnxruntime: 1.8.0

### Prepare model
```shell
wget https://github.com/onnx/models/raw/main/vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx
```
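
Optionally, you can confirm the downloaded graph is well formed before quantizing; a small check, assuming the file was saved under the name above:

```python
import onnx

model = onnx.load("efficientnet-lite4-11.onnx")
onnx.checker.check_model(model)  # raises if the graph violates the ONNX spec
print(model.opset_import)        # expect opset 11, as listed in the table above
```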

### Model quantize
Make sure to specify the appropriate dataset path in the configuration file.
```bash
# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --config=efficientnet.yaml \
                   --output_model=path/to/save
```
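
To sanity-check that the quantized output is in QDQ form, a quick inspection sketch with the `onnx` package; the file name is an assumption.

```python
import onnx

m = onnx.load("efficientnet-lite4-11-qdq.onnx")  # assumed quantized model file name
op_types = {node.op_type for node in m.graph.node}
# a QDQ-format model carries explicit QuantizeLinear/DequantizeLinear pairs around quantized operators
print("QuantizeLinear" in op_types, "DequantizeLinear" in op_types)
```
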
<hr>

## References
* TensorFlow to ONNX conversion [tutorial](https://github.com/onnx/tensorflow-onnx/blob/master/tutorials/efficientnet-lite.ipynb). The Jupyter notebook shows how to run an evaluation of the efficientnet-lite4 model and export it as a saved model. It also details how to convert the TensorFlow model to ONNX and how to run the preprocessing and postprocessing code for its inputs and outputs.

* Refer to this [paper](https://arxiv.org/abs/1905.11946) for more details on the model.

* [Intel® Neural Compressor](https://github.com/intel/neural-compressor)

<hr>

## Contributors
* [Shirley Su](https://github.com/shirleysu8)
* [mengniwang95](https://github.com/mengniwang95) (Intel)
* [airMeng](https://github.com/airMeng) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

<hr>

## License
MIT License
<hr>