---
title: Afri Wildlife Classify
emoji: 📊
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.38.2
app_file: app.py
pinned: false
license: mit
short_description: Classifies pictures of four African wildlife animals
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Wild Animal Prediction App 🦓🐘🦏🐃

A deep learning-powered web application for classifying African wildlife images using DenseNet-201. This application can identify four key African savanna species: Buffalo, Elephant, Rhinoceros, and Zebra.

## 🔬 Research Background

This application is based on research from "Evaluating Deep Learning Models for African Wildlife Image Classification: From DenseNet to Vision Transformers" published at DeepLearningIndaba 2025. The work addresses the critical need for automated wildlife monitoring tools in African conservation contexts, where traditional field surveys are labor-intensive and time-consuming.

### Key Research Findings

- DenseNet-201 achieved 67% accuracy on the African Wildlife dataset
- Best-performing CNN among the tested architectures (ResNet-152, EfficientNet-B4)
- Optimized for deployment in resource-constrained conservation settings
- Trained on a balanced dataset of 1,504 images (376 per species)

πŸ† Model Performance

Metric DenseNet-201 Performance
Overall Accuracy 67.0%
Macro F1-Score 0.67
Buffalo F1 0.72
Elephant F1 0.61
Rhino F1 0.60
Zebra F1 0.76
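The macro F1-score is just the unweighted mean of the four per-class scores, so the reported numbers can be sanity-checked directly (values copied from the table above):

```python
# Per-class F1 scores for DenseNet-201, copied from the table above.
per_class_f1 = {"buffalo": 0.72, "elephant": 0.61, "rhino": 0.60, "zebra": 0.76}

# Macro F1 = unweighted mean over classes; each species counts equally,
# which matches the balanced 376-images-per-class dataset.
macro_f1 = sum(per_class_f1.values()) / len(per_class_f1)
print(round(macro_f1, 2))  # → 0.67
```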

### Model Comparison

While Vision Transformer (ViT-H/14) achieved 99% accuracy in our experiments, DenseNet-201 was selected for deployment due to:

- **Efficiency:** 20M parameters vs 632M for ViT
- **Speed:** 92.5s training time vs 6574.2s for ViT
- **Deployability:** 4.29 GFLOPs vs 1016.72 for ViT
- **Resource Requirements:** Suitable for edge deployment and offline use
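Those trade-offs can be expressed as cost ratios computed directly from the figures quoted above:

```python
# Reported deployment costs for the two models (figures from the list above).
densenet201 = {"params (M)": 20, "train time (s)": 92.5, "GFLOPs": 4.29}
vit_h14 = {"params (M)": 632, "train time (s)": 6574.2, "GFLOPs": 1016.72}

# How many times more expensive ViT-H/14 is on each axis.
for metric in densenet201:
    ratio = vit_h14[metric] / densenet201[metric]
    print(f"{metric}: ViT-H/14 costs {ratio:.1f}x DenseNet-201")
```

Roughly 32× the parameters, 71× the training time, and 237× the FLOPs, which is why the smaller CNN is the practical choice for edge and offline use.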

## 🚀 How to Use

### Online Demo

🤗 **Try it now on Hugging Face Spaces:** Simply upload an image of a buffalo, elephant, rhinoceros, or zebra and click **Submit**!

### Requirements

    gradio
    torch
    torchvision
    Pillow

## 📸 Best Practices for Image Upload

For optimal results:

- Use clear, well-lit images of the animal
- Ensure the animal is prominently featured in the frame
- Avoid heavily cropped or blurry images
- JPG or PNG formats work best
- File size under 5MB recommended
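The format and size constraints are easy to check before uploading. A minimal sketch with Pillow (the helper name `looks_uploadable` and the exact limits are illustrative, not part of the app itself):

```python
from io import BytesIO
from PIL import Image

MAX_BYTES = 5 * 1024 * 1024       # 5 MB recommendation from above
ALLOWED_FORMATS = {"JPEG", "PNG"}  # JPG/PNG work best

def looks_uploadable(data: bytes) -> bool:
    """Return True if the bytes decode as a JPEG/PNG under the size limit."""
    if len(data) > MAX_BYTES:
        return False
    try:
        img = Image.open(BytesIO(data))
        return img.format in ALLOWED_FORMATS
    except Exception:
        return False

# Example: a tiny in-memory PNG passes the check.
buf = BytesIO()
Image.new("RGB", (64, 64), "gray").save(buf, format="PNG")
print(looks_uploadable(buf.getvalue()))  # → True
```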

## 🌍 Conservation Impact

This tool supports:

- **Biodiversity Monitoring:** Automated species identification from camera traps
- **Anti-Poaching Efforts:** Rapid wildlife population assessment
- **Citizen Science:** Enabling non-experts to contribute to conservation data
- **Field Research:** Reducing manual image classification workload for researchers

## ⚠️ Limitations & Considerations

### Current Limitations

- **Domain Shift:** Performance may decline on images from different environments or cameras
- **Species Scope:** Limited to 4 species (buffalo, elephant, rhino, zebra)
- **Image Quality:** Sensitive to lighting conditions and image resolution
- **67% Accuracy:** Not suitable for critical conservation decisions without human verification

### Ethical Considerations

- **Human-in-the-loop:** Always verify AI predictions with expert knowledge
- **Bias Awareness:** A model trained on a curated dataset may not generalize to all conditions
- **Privacy:** Ensure no human subjects appear in uploaded images
- **Responsible Use:** The tool is designed to assist, not replace, conservation professionals

## 🔧 Technical Details

### Architecture

- **Base Model:** DenseNet-201 pretrained on ImageNet
- **Classification Head:** Custom 4-class classifier with dropout (p=0.2)
- **Input Processing:** Images resized to 64×64 pixels, normalized to [0,1]
- **Framework:** PyTorch with Gradio interface

### Training Configuration

- **Dataset:** African Wildlife Dataset (Ferreira, 2020)
- **Training Split:** 80% (1,203 images) / Test: 20% (301 images)
- **Optimizer:** Adam (lr=0.001)
- **Loss Function:** CrossEntropyLoss
- **Batch Size:** 32
- **Epochs:** 10
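As a quick consistency check, the split sizes above follow directly from the dataset total:

```python
# Dataset and split figures quoted above.
images_per_class, num_classes = 376, 4
total = images_per_class * num_classes   # 1,504 images
train_size = round(total * 0.80)         # 80% training split
test_size = total - train_size           # remaining 20% held out for test
print(total, train_size, test_size)  # → 1504 1203 301
```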

## 📊 Dataset Information

The model was trained on the public African Wildlife Dataset containing:

- 1,504 total images (balanced across species)
- 376 images per class (buffalo, elephant, rhino, zebra)
- High-quality color images from African nature reserves
- Representative of savanna ecosystem species

## 🤝 Contributing

We welcome contributions to improve this conservation tool:

- **Dataset expansion:** Adding more species or diverse image conditions
- **Model improvements:** Testing new architectures or training techniques
- **UI enhancements:** Improving user experience and accessibility
- **Documentation:** Helping others understand and use the tool

πŸ“ Citation

If you use this application in your research or conservation work, please cite:

@article{aliyu2025wildlife,
  title={Evaluating Deep Learning Models for African Wildlife Image Classification: From DenseNet to Vision Transformers},
  author={Aliyu, Lukman Jibril and Muhammad, Umar Sani and Ismail, Bikisu and Wakili, Almustapha A and Yimam, Seid Muhie and Muhammad, Shamsuddeen Hassan and Abdullahi, Mustapha},
  journal={DeepLearningIndaba 2025 Conference},
  year={2025},
  pages={1-13}
}

## 🌟 Future Development

Planned improvements include:

- Expanded species coverage (40+ species, like Snapshot Serengeti)
- Enhanced robustness through data augmentation and domain adaptation
- Mobile deployment for offline field use
- Active learning integration for continuous model improvement
- Camera trap integration for automated monitoring systems

## 📞 Support & Contact

For questions, issues, or collaboration opportunities:

- 🤗 **Hugging Face Space:** Try the live demo and report issues in the community tab
- 🔬 **Research:** Contact the development team for academic collaboration
- 🌍 **Conservation:** Reach out for deployment in conservation projects

**Disclaimer:** This is a research prototype designed to assist conservation efforts. Always verify AI predictions with expert knowledge before making critical conservation decisions.

**License:** This project supports open, Africa-centric AI research and follows ethical AI practices.