---
license: cc-by-nc-4.0
task_categories:
- tabular-classification
language:
- en
tags:
- evaluation
- metrics
- setfit
- water-conflict
- multi-label-classification
size_categories:
- n<1K
pretty_name: Water Conflict Classifier Evaluation Metrics
---
# Water Conflict Classifier Evaluation Metrics
Evaluation metrics tracking the performance of the [Water Conflict Classifier](https://huggingface.co/baobabtech/water-conflict-classifier) across multiple training iterations and model configurations.
## Dataset Summary
This dataset contains evaluation results from training runs of the Water Conflict Classifier, a multi-label SetFit model that identifies water-related conflict events in news headlines. Each row represents one model version with comprehensive performance metrics across three classification labels: Trigger, Casualty, and Weapon.
**Related Links:**
- 🤗 [Model Collection](https://huggingface.co/collections/baobabtech/water-conflict-classifier)
- 🐙 [GitHub Repository](https://github.com/baobab-tech/waterconflict)
- 📦 [PyPI Package](https://pypi.org/project/water-conflict-classifier/)
- 🌊 [Pacific Institute Water Conflict Chronology](https://www.worldwater.org/water-conflict/)
## Dataset Structure
### Fields
| Field | Type | Description |
|-------|------|-------------|
| `version` | string | Model version identifier (v1.0, v2.0, etc.) |
| `timestamp` | string | Training completion timestamp |
| `base_model` | string | Base embedding model used |
| `train_size` | int | Number of training examples |
| `test_size` | int | Number of test examples |
| `f1_micro` | float | Micro-averaged F1 score |
| `f1_macro` | float | Macro-averaged F1 score |
| `accuracy` | float | Overall accuracy |
| `trigger_*` | float | Precision/recall/F1 for Trigger label |
| `casualty_*` | float | Precision/recall/F1 for Casualty label |
| `weapon_*` | float | Precision/recall/F1 for Weapon label |
| `model_repo` | string | HuggingFace model repository |
### Model Versions
The dataset tracks performance across different configurations:
- Base models: BAAI/bge-small-en-v1.5, sentence-transformers/all-MiniLM-L6-v2
- Training strategies: undersampling for class balance
- Hyperparameter variations: batch size, epochs, sample size
## Usage
```python
from datasets import load_dataset
import pandas as pd

# Load the evaluation metrics
evals = load_dataset("baobabtech/water-conflict-classifier-evals")

# Compare model versions
df = pd.DataFrame(evals["train"])
print(df[["version", "f1_macro", "accuracy"]].sort_values("f1_macro", ascending=False))
```
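Because the per-label metrics are stored as flat columns (`trigger_*`, `casualty_*`, `weapon_*`), reshaping them into long form makes label-level comparison across versions straightforward. The sketch below uses dummy rows that mimic the schema, and assumes the per-label F1 columns are named `trigger_f1`, `casualty_f1`, and `weapon_f1` (inferred from the field descriptions above):

```python
import pandas as pd

# Dummy rows mimicking the dataset schema; real values come from
# load_dataset("baobabtech/water-conflict-classifier-evals").
# Column names trigger_f1 / casualty_f1 / weapon_f1 are assumed from
# the `trigger_*`, `casualty_*`, `weapon_*` field descriptions.
df = pd.DataFrame([
    {"version": "v1.0", "trigger_f1": 0.81, "casualty_f1": 0.74, "weapon_f1": 0.69},
    {"version": "v2.0", "trigger_f1": 0.86, "casualty_f1": 0.79, "weapon_f1": 0.77},
])

# Melt the per-label F1 columns into long form, then pivot so each
# label is a row and each model version a column.
per_label = df.melt(id_vars="version", var_name="label", value_name="f1")
per_label["label"] = per_label["label"].str.replace("_f1", "", regex=False)
print(per_label.pivot(index="label", columns="version", values="f1"))
```

The same pattern extends to the precision and recall columns by widening the `value_vars` passed to `melt`.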
## Citation
If you use this dataset or the Water Conflict Classifier in your research, please cite:
```bibtex
@misc{baobab_water_conflict_classifier,
author = {Mills, Olivier},
title = {Water Conflict Classifier: Few-Shot Learning for Water-Related Conflict Event Detection},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/baobabtech/water-conflict-classifier}}
}
```
## License
CC-BY-NC-4.0 (Non-commercial use only)
## Contact
**Olivier Mills**
Website: [baobabtech.ai](https://baobabtech.ai)
LinkedIn: [oliviermills](https://www.linkedin.com/in/oliviermills/)