---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-analysis-roberta-base-V1.5_ima_ds
  results: []
---

# sentiment-analysis-roberta-base-V1.5_ima_ds

This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3346
- Accuracy: 0.5869
- Precision Macro: 0.5474
- Recall Macro: 0.5539
- F1 Macro: 0.5472
- Precision Weighted: 0.5941
- Recall Weighted: 0.5869
- F1 Weighted: 0.5860

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal inference sketch appears under "How to use" below.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `TrainingArguments` sketch appears under "Reproducing the training setup" below):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 3407
- optimizer: adamw_8bit (`OptimizerNames.ADAMW_8BIT`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 1.5983 | 0.4 | 20 | 1.6093 | 0.2418 | 0.0484 | 0.2 | 0.0779 | 0.0585 | 0.2418 | 0.0942 |
| 1.6121 | 0.8 | 40 | 1.6067 | 0.2418 | 0.0484 | 0.2 | 0.0779 | 0.0585 | 0.2418 | 0.0942 |
| 1.5924 | 1.2 | 60 | 1.5340 | 0.4358 | 0.3628 | 0.3576 | 0.3453 | 0.4328 | 0.4358 | 0.4141 |
| 1.3138 | 1.6 | 80 | 1.3488 | 0.4736 | 0.4587 | 0.4645 | 0.4236 | 0.5236 | 0.4736 | 0.4577 |
| 1.2226 | 2.0 | 100 | 1.2556 | 0.4786 | 0.4708 | 0.4714 | 0.4551 | 0.5293 | 0.4786 | 0.4958 |
| 0.7537 | 2.4 | 120 | 1.1485 | 0.5718 | 0.5461 | 0.5623 | 0.5453 | 0.5897 | 0.5718 | 0.5754 |
| 0.9658 | 2.8 | 140 | 1.1688 | 0.5466 | 0.5373 | 0.5496 | 0.5230 | 0.6096 | 0.5466 | 0.5643 |
| 0.7872 | 3.2 | 160 | 1.1887 | 0.5718 | 0.5398 | 0.5341 | 0.5285 | 0.5901 | 0.5718 | 0.5756 |
| 1.1199 | 3.6 | 180 | 1.1970 | 0.5743 | 0.5774 | 0.5609 | 0.5404 | 0.6265 | 0.5743 | 0.5847 |
| 1.0137 | 4.0 | 200 | 1.2344 | 0.4987 | 0.5382 | 0.5293 | 0.4768 | 0.6059 | 0.4987 | 0.5158 |
| 0.6547 | 4.4 | 220 | 1.1846 | 0.5919 | 0.5737 | 0.5997 | 0.5751 | 0.6139 | 0.5919 | 0.5972 |
| 0.7685 | 4.8 | 240 | 1.2460 | 0.5970 | 0.5759 | 0.6021 | 0.5749 | 0.6252 | 0.5970 | 0.6035 |
| 0.5167 | 5.2 | 260 | 1.2670 | 0.5919 | 0.5863 | 0.5959 | 0.5697 | 0.6268 | 0.5919 | 0.5980 |
| 0.6995 | 5.6 | 280 | 1.3015 | 0.5743 | 0.5686 | 0.5830 | 0.5497 | 0.6167 | 0.5743 | 0.5775 |
| 0.8019 | 6.0 | 300 | 1.3346 | 0.5869 | 0.5474 | 0.5539 | 0.5472 | 0.5941 | 0.5869 | 0.5860 |

The reported evaluation metrics match the last logged row (step 300, epoch 6.0). The log stops well before the configured 20 epochs, which suggests training ended early, though the card does not record why.

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
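
## How to use

A minimal inference sketch. The repository id below is a placeholder (the card does not state where the model is hosted), and the returned labels follow whatever `id2label` mapping was saved with the model; the macro recall of exactly 0.2 at the degenerate early checkpoints hints at a five-class label scheme, but the card does not confirm the class names.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub location of this model.
classifier = pipeline(
    "text-classification",
    model="your-username/sentiment-analysis-roberta-base-V1.5_ima_ds",
)

# Labels come from the model's saved id2label mapping (e.g. LABEL_0 ... LABEL_4),
# which this card does not document.
print(classifier("The battery life is great, but the screen scratches easily."))
```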
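
## Reproducing the training setup

A sketch of `TrainingArguments` matching the hyperparameters listed above, assuming single-device training (so the per-device batch sizes equal the logged ones). The output directory is a placeholder, and the dataset, model wiring, and metric computation are not documented on this card, so they are omitted.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sentiment-analysis-roberta-base-V1.5_ima_ds",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,   # logged train_batch_size: 32
    per_device_eval_batch_size=8,     # logged eval_batch_size: 8
    seed=3407,
    optim="adamw_8bit",               # OptimizerNames.ADAMW_8BIT; requires bitsandbytes
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    num_train_epochs=20,
)
```

These arguments would be passed to a `Trainer` along with the (undocumented) model, tokenizer, datasets, and metric function.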