Update README.md

README.md CHANGED
```diff
@@ -98,7 +98,19 @@ In general, model would be trained on `train_all`, the most representative train
 
 ### Models
 
-
+| model | training data | F1 | F1 (macro) | Accuracy |
+|:------|:--------------|---:|-----------:|---------:|
+| [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all) | all (2020 + 2021) | 0.763104 | 0.620257 | 0.536629 |
+| [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all) | all (2020 + 2021) | 0.751814 | 0.600782 | 0.531864 |
+| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all) | all (2020 + 2021) | 0.762513 | 0.603533 | 0.547945 |
+| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all) | all (2020 + 2021) | 0.759917 | 0.59901 | 0.536033 |
+| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all) | all (2020 + 2021) | 0.764767 | 0.618702 | 0.548541 |
+| [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020) | 2020 only | 0.732366 | 0.579456 | 0.493746 |
+| [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020) | 2020 only | 0.725229 | 0.561261 | 0.499107 |
+| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only | 0.73671 | 0.565624 | 0.513401 |
+| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020) | 2020 only | 0.729446 | 0.534799 | 0.50268 |
+| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020) | 2020 only | 0.731106 | 0.532141 | 0.509827 |
+
 
 Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py).
 
```
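For reference, the three score columns in the added table match the standard multi-label metrics; a minimal sketch, assuming "F1" is micro-averaged F1, "F1 (macro)" is macro-averaged F1, and "Accuracy" is subset (exact-match) accuracy, computed here with scikit-learn on small hypothetical label matrices:

```python
import numpy as np
from sklearn.metrics import f1_score, accuracy_score

# Hypothetical multi-label predictions for 4 tweets over 3 topics,
# as binary indicator matrices (one row per tweet, one column per topic).
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 0, 1]])

f1_micro = f1_score(y_true, y_pred, average="micro")  # "F1" column: pools TP/FP/FN over all labels
f1_macro = f1_score(y_true, y_pred, average="macro")  # "F1 (macro)" column: unweighted mean of per-label F1
subset_acc = accuracy_score(y_true, y_pred)           # "Accuracy" column: fraction of rows matched exactly

print(f1_micro, f1_macro, subset_acc)  # 0.8333..., 0.8222..., 0.5
```

Subset accuracy is the strictest of the three, since a prediction only counts when every topic label for a tweet is correct, which is why that column is the lowest in the table.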