non_green_as_train_context_roberta-large_TEST
This model is a fine-tuned version of FacebookAI/roberta-large on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2360
- Accuracy: 0.9798
- Recall: 0.7497
- F1: 0.7375
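The task and label set are not documented, but the metrics above suggest a classification objective, so the checkpoint can presumably be loaded as a sequence-classification model. A minimal inference sketch under that assumption (the input text is a placeholder):

```python
# Minimal inference sketch, assuming the checkpoint carries a sequence-classification head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "kghanlon/non_green_as_train_context_roberta-large_TEST"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example input sentence."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```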
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows the list):
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
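The sketch below shows one way these hyperparameters could be expressed as `transformers.TrainingArguments`; the output directory, the per-epoch evaluation strategy, and the `Trainer` wiring are assumptions, as they are not documented in this card.

```python
# Sketch of the listed hyperparameters as TrainingArguments (assumed, not the author's script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="non_green_as_train_context_roberta-large_TEST",  # assumed output directory
    learning_rate=5e-06,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="epoch",  # assumed, to match the per-epoch results table below
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default
    # optimizer settings, so no explicit optimizer override is needed here.
)
```

With a model and tokenized train/eval datasets in hand, these arguments would be passed to `Trainer` in the usual way.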
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 |
|---|---|---|---|---|---|---|
| 0.0581 | 1.0 | 7739 | 0.0937 | 0.9758 | 0.7851 | 0.7105 |
| 0.0429 | 2.0 | 15478 | 0.0931 | 0.9778 | 0.7417 | 0.7160 |
| 0.0276 | 3.0 | 23217 | 0.1040 | 0.9775 | 0.7134 | 0.7056 |
| 0.019 | 4.0 | 30956 | 0.1323 | 0.9783 | 0.6276 | 0.6862 |
| 0.0143 | 5.0 | 38695 | 0.1369 | 0.9781 | 0.7265 | 0.7154 |
| 0.0102 | 6.0 | 46434 | 0.1819 | 0.9783 | 0.7366 | 0.7196 |
| 0.0051 | 7.0 | 54173 | 0.1870 | 0.9786 | 0.7053 | 0.7140 |
| 0.0047 | 8.0 | 61912 | 0.2024 | 0.9790 | 0.7467 | 0.7287 |
| 0.0 | 9.0 | 69651 | 0.2323 | 0.9796 | 0.6983 | 0.7212 |
| 0.0001 | 10.0 | 77390 | 0.2360 | 0.9798 | 0.7497 | 0.7375 |
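One plausible way the Accuracy, Recall, and F1 columns above could be produced is a `compute_metrics` hook along these lines; the use of scikit-learn and the macro averaging are assumptions, since the card does not state how the metrics were computed.

```python
# Hedged sketch of a compute_metrics hook for the Trainer; the averaging mode is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "recall": recall_score(labels, predictions, average="macro"),  # assumed averaging
        "f1": f1_score(labels, predictions, average="macro"),          # assumed averaging
    }
```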
Framework versions
- Transformers 4.38.2
- PyTorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2