---
title: Submission Template
emoji: 🔥
colorFrom: yellow
colorTo: green
sdk: docker
pinned: false
---
# Object Detector for forest fire smoke

## Model Description
This is a frugal object detector used to detect forest fire smoke, built as part of the Frugal AI Challenge 2024. It is based on the YOLO model series.
### Intended Use
- **Primary intended uses**: Detect fire smoke in photos of forests, across different natural settings (see the inference sketch below)
- **Primary intended users**: Researchers and developers participating in the Frugal AI Challenge
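
A minimal inference sketch, assuming the `ultralytics` package and a locally available fine-tuned checkpoint; the `best.pt` and `forest.jpg` names are placeholders, not files guaranteed to exist in this repository:

```python
# Minimal sketch: run the smoke detector on a single forest image.
# "best.pt" and "forest.jpg" are placeholder names.
from ultralytics import YOLO

model = YOLO("best.pt")                           # load the fine-tuned detector
results = model.predict("forest.jpg", conf=0.25)  # detect smoke in one image

for r in results:
    for box in r.boxes:
        # each box exposes a class id, a confidence score, and xyxy coordinates
        print(int(box.cls), float(box.conf), box.xyxy.tolist())
```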
## Training Data
The model uses the pyronear/pyro-sdis dataset; a short loading sketch follows the list below.
- Size: ~33,600 examples
- Split: 88% train, 12% test
- Images labeled as containing smoke or no smoke
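
A short sketch of loading the dataset with the Hugging Face `datasets` library; this is only an exploratory example, and the exact split names and feature columns depend on the dataset configuration:

```python
# Sketch: load pyronear/pyro-sdis and inspect its splits and features.
from datasets import load_dataset

ds = load_dataset("pyronear/pyro-sdis")
print(ds)  # shows the available splits, their sizes, and the feature columns
```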
### Labels
- Smoke
## Performance
### Metrics
All metrics are reported on the test set; a sketch of computing them at the image level follows this list.
- **Accuracy**: ~90.8%
- **Precision**: ~91.7%
- **Recall**: ~97.8%
- **Environmental Impact**:
  - Emissions: 0.205 gCO2eq
  - Energy consumption: 3.66 Wh
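
As a hedged illustration, the image-level metrics above can be computed once detections are reduced to a binary smoke / no-smoke decision per image; the arrays below are dummy values, not the challenge data:

```python
# Sketch: image-level accuracy, precision, and recall from binary predictions.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 1, 0, 1, 0]  # ground truth: 1 = smoke present, 0 = no smoke
y_pred = [1, 1, 0, 1, 1]  # predictions after thresholding the detector output

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```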
### Model Architecture
Based on YOLOv11 (see https://arxiv.org/abs/2410.17725), fine-tuned on the Pyronear dataset. The network is pruned and quantized to be as compressed as possible.
Inference should ideally be performed on a GPU: the speedup is drastic, and it is more energy-efficient than CPU inference, which takes much longer. A device-selection sketch is shown below.
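
A minimal device-selection sketch with the `ultralytics` API, assuming a CUDA-capable PyTorch install; `half=True` enables FP16 inference on the GPU, and the file names are placeholders:

```python
# Sketch: prefer GPU inference when available, falling back to CPU.
import torch
from ultralytics import YOLO

model = YOLO("best.pt")                             # placeholder checkpoint name
device = 0 if torch.cuda.is_available() else "cpu"  # first CUDA device, else CPU
results = model.predict("forest.jpg", device=device, half=(device != "cpu"))
```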
## Environmental Impact
Environmental impact is tracked using CodeCarbon, measuring:
- Carbon emissions during inference
- Energy consumption during inference

This tracking helps establish a baseline for the environmental impact of model deployment and inference.
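
A minimal sketch of wrapping inference with CodeCarbon's `EmissionsTracker`; the checkpoint and image names are placeholders, and `stop()` returns the emissions of the tracked block in kg CO2eq:

```python
# Sketch: track emissions and energy use while the detector runs.
from codecarbon import EmissionsTracker
from ultralytics import YOLO

model = YOLO("best.pt")        # placeholder checkpoint name
tracker = EmissionsTracker()
tracker.start()
model.predict("forest.jpg")    # the inference being measured
emissions_kg = tracker.stop()  # emissions for the tracked block, in kg CO2eq
print(f"emissions: {emissions_kg * 1000:.3f} gCO2eq")
```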
## Limitations
- Quantization was performed to FP16. INT8 could compress the model further, but the accuracy drop was too large; a smarter INT8 quantization and calibration scheme could be worth exploring.
- To maximize inference speed further, the model can be converted to TensorRT. This is not done in this repository, as the same type of GPU must be used both for exporting the TensorRT engine and for running inference with it. A hedged export sketch follows this list.
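
For reference, a hedged sketch of how such exports could be run with the `ultralytics` export API; it is not executed in this repository, and the calibration data file name is illustrative:

```python
# Sketch: possible export paths discussed in the limitations above.
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder checkpoint name

# FP16 TensorRT engine (must be built on the same GPU type used for inference)
model.export(format="engine", half=True)

# INT8 TensorRT engine with a calibration dataset (path is illustrative)
# model.export(format="engine", int8=True, data="pyro-sdis.yaml")
```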
## Ethical Considerations
- Environmental impact is tracked to promote awareness of AI's carbon footprint
| ``` | |