# resnet18 (AST-Trained)

Trained with 65% less energy than standard training ⚡
## Model Details
- Architecture: resnet18
- Dataset: CIFAR-10
- Training Method: Adaptive Sparse Training (AST)
- Target Activation Rate: 35%
## Performance

- Accuracy: 68.09%
- Energy Savings: 65%
- Training Epochs: 10
## Sustainability Report

This model was trained using Adaptive Sparse Training, which dynamically selects the most important training samples. This resulted in:

- ⚡ 65% energy savings compared to standard training
- 🌍 Lower carbon footprint
- ⏱️ Faster training time
- 🎯 Maintained accuracy (minimal degradation)
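The sample-selection idea described above can be sketched in a few lines. This is an illustrative top-k rule over per-sample losses, not the package's actual implementation; the function name and selection criterion are assumptions:

```python
# Minimal sketch of the core AST idea: in each batch, keep only the
# highest-loss ("most important") samples and skip the rest, so the
# backward pass touches roughly 35% of the data. The top-k rule below
# is an illustrative assumption, not the package's API.

def select_important(losses, activation_rate=0.35):
    """Return sorted indices of the top `activation_rate` fraction by loss."""
    k = max(1, int(len(losses) * activation_rate))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])

batch_losses = [0.1, 2.3, 0.4, 1.8, 0.05, 0.9, 1.2, 0.3]
print(select_important(batch_losses))  # trains on 2 of the 8 samples
```

A real implementation would compute these per-sample losses during the forward pass and skip the backward pass (and its energy cost) for the unselected samples.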
## How to Use

```python
import torch
from torchvision import models

# Load the architecture and the trained weights
model = models.resnet18(num_classes=10)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"))
model.eval()

# Inference
# ... (your inference code)
```
## Training Details

AST Configuration:

- Target Activation Rate: 35%
- Adaptive PI Controller: Enabled
- Mixed Precision (AMP): Enabled
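The "Adaptive PI Controller" steers the activation rate toward the 35% target. A toy sketch of how such a controller might adjust a loss threshold batch by batch (the gains, update rule, and variable names are illustrative assumptions, not the package's API):

```python
# Toy PI controller driving the fraction of activated (trained-on)
# samples toward the 35% target by adjusting a loss threshold.
import random

random.seed(0)

TARGET = 0.35          # target activation rate from the model card
kp, ki = 0.5, 0.05     # assumed proportional/integral gains
threshold = 1.0        # samples with loss >= threshold are trained on
integral = 0.0

for step in range(200):
    # Simulated per-sample losses for one batch of 256.
    losses = [random.expovariate(1.0) for _ in range(256)]
    rate = sum(1 for l in losses if l >= threshold) / len(losses)

    # PI update: raise the threshold when too many samples activate,
    # lower it when too few do.
    error = rate - TARGET
    integral += error
    threshold += kp * error + ki * integral

print(f"final activation rate ~ {rate:.2f}")
```

The integral term removes steady-state error, so the realized activation rate settles near the target even as the loss distribution shifts during training.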
## Reproducing This Model

```bash
pip install adaptive-sparse-training

python -c "
from adaptive_sparse_training import AdaptiveSparseTrainer, ASTConfig
config = ASTConfig(target_activation_rate=0.35)
# ... (full training code)
"
```
## Citation

If you use this model or AST, please cite:

```bibtex
@software{adaptive_sparse_training,
  title={Adaptive Sparse Training},
  author={Idiakhoa, Oluwafemi},
  year={2024},
  url={https://github.com/oluwafemidiakhoa/adaptive-sparse-training}
}
```
## Acknowledgments

Trained using the adaptive-sparse-training package. Special thanks to the PyTorch and HuggingFace communities.

*This model card was auto-generated by the AST Training Dashboard.*