Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ inference: false
# RoBERTa Tagalog Large
This is a Tagalog RoBERTa model trained as an improvement over our previous Tagalog pretrained Transformers. It was trained on TLUnified, a newer, larger, and more topically-varied pretraining corpus for Filipino. This model is part of a larger research project; we open-source it to allow greater usage within the Filipino NLP community.
This is a cased model. We do not release uncased RoBERTa models.
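
As a quick usage sketch (assumptions: the model is published on the Hugging Face Hub, and the repo id `jcblaise/roberta-tagalog-large` below is illustrative; substitute the actual path), the model can be loaded with the `transformers` library for masked-token prediction:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Hypothetical Hub repo id used for illustration; substitute the
# actual path this model is published under.
model_name = "jcblaise/roberta-tagalog-large"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# RoBERTa models use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# "The Philippines is a <mask> in Southeast Asia."
for prediction in fill_mask("Ang Pilipinas ay isang <mask> sa Timog-Silangang Asya."):
    print(prediction["token_str"], prediction["score"])
```

Since only a cased model is released, keep the natural casing of input text rather than lowercasing it before tokenization.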
## Citations
All model details and training setups can be found in our papers. If you use our model or find it useful in your projects, please cite our work: