Tasks: Fill-Mask
Sub-tasks: masked-language-modeling
Formats: csv
Size: 1M - 10M
Tags: afrolm, active learning, language modeling, research papers, natural language processing, self-active learning
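Given the task and format listed above (CSV files used for masked-language-modeling), here is a minimal loading sketch. The repository id below is a placeholder assumption, not an identifier confirmed by this card; substitute the dataset's actual Hub name.

```python
# Minimal sketch: load the AfroLM pretraining corpus from the Hugging Face Hub.
# "bonadossou/afrolm-dataset" is an assumed placeholder id; replace it with
# the repository name shown on this dataset card.
from datasets import load_dataset

dataset = load_dataset("bonadossou/afrolm-dataset")
print(dataset)  # inspect splits; the card reports 1M-10M rows overall
```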
Update README.md
README.md (updated):
# AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages

This repository contains the dataset for our paper `AfroLM: A Self-Active Learning-based Multilingual Pretrained Language Model for 23 African Languages`, which will appear at the Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP) at EMNLP 2022.

## Our self-active learning framework

![]()

AfroLM has been pretrained from scratch on 23 African Languages: Amharic, Afan Oromo, Bambara, Ghomalá, Éwé, Fon, Hausa, Ìgbò, Kinyarwanda, Lingala, Luganda, Luo, Mooré, Chewa, Naija, Shona, Swahili, Setswana, Twi, Wolof, Xhosa, Yorùbá, and Zulu.

## Evaluation Results

AfroLM was evaluated on the MasakhaNER1.0 (10 African languages) and MasakhaNER2.0 (21 African languages) datasets, as well as on text classification and sentiment analysis tasks. AfroLM outperformed AfriBERTa, mBERT, and XLMR-base, and was very competitive with AfroXLMR. AfroLM is also very data-efficient: it was pretrained on a dataset more than 14x smaller than those of its competitors. Below is the average performance of the models across these datasets; please consult our paper for per-language results.

Model | MasakhaNER | MasakhaNER2.0* | Text Classification (Yoruba/Hausa) | Sentiment Analysis (YOSM) | OOD Sentiment Analysis (Twitter -> YOSM) |
|:---: |:---: |:---: | :---: |:---: | :---: |
… | … | … | … | … | … |
`XLMR-base` | 79.16 | 83.09 | --- | --- | --- |
`AfroXLMR-base` | `81.90` | `84.55` | --- | --- | --- |

- (*) The evaluation was conducted on the 11 additional languages of the dataset.
- Bold numbers represent the performance of the model with the **smallest pretraining data**.

## Pretrained Models and Dataset
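A minimal usage sketch, assuming an XLM-R-style checkpoint on the Hugging Face Hub: the Hub id `bonadossou/afrolm_active_learning` and the tokenizer/model classes are assumptions, while `tokenizer.model_max_length = 256` follows the repository's own setting. Substitute the checkpoint this section actually publishes.

```python
# Minimal sketch: load AfroLM and query it with the fill-mask pipeline.
# The Hub id and the XLM-R tokenizer/model classes are assumptions;
# substitute whatever the repository actually documents.
from transformers import XLMRobertaForMaskedLM, XLMRobertaTokenizer, pipeline

model_id = "bonadossou/afrolm_active_learning"  # assumed Hub id
tokenizer = XLMRobertaTokenizer.from_pretrained(model_id)
model = XLMRobertaForMaskedLM.from_pretrained(model_id)
tokenizer.model_max_length = 256  # matches the repository's setting

# Swahili prompt: "The child likes <mask>." (<mask> is XLM-R's mask token)
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("Mtoto anapenda <mask>."))
```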

## Citation

We will share the proceedings citation as soon as possible. Stay tuned, and if you like our work, please give it a star.

## Reach out