---
inference: false
---
|
|
# Model Card for Token-DI-50

[GitHub: AMR-KELEG/ALDi](https://github.com/AMR-KELEG/ALDi)

<!-- Provide a quick summary of what the model is/does. -->

A BERT-based model fine-tuned to tag each token in a sentence as Modern Standard Arabic (MSA) or Dialectal Arabic (DA).
|
|
|
|
|
| Model | Link on 🤗 |
|---|---|
| **Sentence-ALDi** (random seed: 42) | https://huggingface.co/AMR-KELEG/Sentence-ALDi |
| Sentence-ALDi (random seed: 30) | https://huggingface.co/AMR-KELEG/Sentence-ALDi-30 |
| Sentence-ALDi (random seed: 50) | https://huggingface.co/AMR-KELEG/Sentence-ALDi-50 |
| **Token-DI** (random seed: 42) | https://huggingface.co/AMR-KELEG/ALDi-Token-DI |
| Token-DI (random seed: 30) | https://huggingface.co/AMR-KELEG/ALDi-Token-DI-30 |
| Token-DI (random seed: 50) | https://huggingface.co/AMR-KELEG/ALDi-Token-DI-50 |
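
For a quick check, any checkpoint in the table can also be loaded through the `transformers` pipeline API. This is a minimal sketch; note that the tags may surface as generic `LABEL_0`-`LABEL_5` names unless the checkpoint's config defines `id2label` (the explicit tag mapping is given in the Usage section below):

```python
from transformers import pipeline

# Minimal sketch: load the token-level DI tagger via the pipeline API.
# Caveat: tags may appear as LABEL_0..LABEL_5 unless the checkpoint's
# config defines id2label; see the Usage section for the tag mapping.
tagger = pipeline("token-classification", model="AMR-KELEG/ALDi-Token-DI-50")
print(tagger("أنت متأكد من كده؟"))
```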
|
|
|
|
|
### Usage
|
|
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification


def tokenize_text(text):
    """Tokenize a string on whitespace."""
    tokens = text.split()
    return tokens


def tag_sentence(text, print_tags=False):
    logits = model(
        **tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    ).logits

    # Ignore the labels for [CLS] and [SEP]
    subwords_labels = logits.argmax(axis=-1).numpy()[0][1:-1]

    # Recover each whitespace token's label from its first subword
    tokens = tokenize_text(text)
    subwords = [
        tokenizer.tokenize(token) for token in tokens if tokenizer.tokenize(token)
    ]
    n_subwords = [len(token_subwords) for token_subwords in subwords]
    first_subword_indices = [sum(n_subwords[0:i]) for i in range(len(n_subwords))]

    # 510 = 512 minus the [CLS] and [SEP] special tokens
    tokens_labels = [
        subwords_labels[index] for index in first_subword_indices if index < 510
    ]
    tokens_tags = [INDICES_TO_TAGS[label] for label in tokens_labels]

    if print_tags:
        for token, token_tag in zip(tokens, tokens_tags):
            print(f"{token} -> {token_tag}")
        print()

    # Compute the CMI (Code-Mixing Index)
    # Ignore: "ambiguous", "ne" (named entity), "other" (emojis, ...)
    n_msa_tokens = sum([t == "lang1" for t in tokens_tags])
    n_da_tokens = sum([t in ["lang2", "mixed"] for t in tokens_tags])

    if n_msa_tokens + n_da_tokens != 0:
        return n_da_tokens / (n_msa_tokens + n_da_tokens)
    else:
        return 0


if __name__ == "__main__":
    model_name = "AMR-KELEG/ALDi-Token-DI-50"

    # lang1 -> MSA, lang2 -> Dialectal Arabic (DA)
    TAGS = ["ambiguous", "lang1", "lang2", "mixed", "ne", "other"]
    INDICES_TO_TAGS = {i: tag for i, tag in enumerate(TAGS)}

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForTokenClassification.from_pretrained(
        model_name, num_labels=len(TAGS)
    )

    # Example usage: a sentence mixing MSA with Egyptian Arabic
    # ("What is this, boy? Are you sure about that?")
    sentence = "ما هذا يا فتى؟ أنت متأكد من كده؟"
    CMI_index = tag_sentence(sentence, print_tags=True)
    print(f"CMI Index (as a proxy for ALDi): {CMI_index:.2f}")
```
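
The returned score is the Code-Mixing Index, `CMI = n_DA / (n_MSA + n_DA)`: the fraction of dialectal tokens among the MSA and DA tokens, after discarding tokens tagged `ambiguous`, `ne`, or `other`. For the example sentence above, the script prints one tag per whitespace token followed by the index. The output below is illustrative only; the exact tags depend on the checkpoint:

```
ما -> lang1
هذا -> lang1
يا -> lang1
فتى؟ -> lang1
أنت -> lang1
متأكد -> lang1
من -> lang2
كده؟ -> lang2

CMI Index (as a proxy for ALDi): 0.25
```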
|
|
|
|
|
### Model Description

<!-- Provide a longer summary of what this model is. -->

<!-- - **Developed by:** Amr Keleg -->
- **Model type:** Token classification head on top of a BERT-based model (see the sketch below).
- **Language(s) (NLP):** Arabic.
<!-- - **License:** [More Information Needed] -->
- **Finetuned from model:** [MarBERT](https://huggingface.co/UBC-NLP/MARBERT)
- **Dataset:** [MSA-DA code-switched dataset](https://aclanthology.org/W16-5805.pdf)
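
"Token classification head" here means a per-token linear classifier applied to the encoder's final hidden states. As a minimal sketch (an illustration, not the authors' exact training setup), attaching such a head to MarBERT looks like this:

```python
from transformers import AutoModelForTokenClassification

# Illustration only (not the authors' exact training code): MarBERT with a
# freshly initialized 6-way per-token classification head, matching the six
# Token-DI tags listed in the Usage section.
model = AutoModelForTokenClassification.from_pretrained(
    "UBC-NLP/MARBERT", num_labels=6
)
```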
|
|
|
|
|
### Citation

If you find the model useful, please cite the following paper:
|
|
```bibtex
@inproceedings{keleg-etal-2023-aldi,
    title = "{ALD}i: Quantifying the {A}rabic Level of Dialectness of Text",
    author = "Keleg, Amr  and
      Goldwater, Sharon  and
      Magdy, Walid",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.655",
    doi = "10.18653/v1/2023.emnlp-main.655",
    pages = "10597--10611",
    abstract = "Transcribed speech and user-generated text in Arabic typically contain a mixture of Modern Standard Arabic (MSA), the standardized language taught in schools, and Dialectal Arabic (DA), used in daily communications. To handle this variation, previous work in Arabic NLP has focused on Dialect Identification (DI) on the sentence or the token level. However, DI treats the task as binary, whereas we argue that Arabic speakers perceive a spectrum of dialectness, which we operationalize at the sentence level as the Arabic Level of Dialectness (ALDi), a continuous linguistic variable. We introduce the AOC-ALDi dataset (derived from the AOC dataset), containing 127,835 sentences (17{\%} from news articles and 83{\%} from user comments on those articles) which are manually labeled with their level of dialectness. We provide a detailed analysis of AOC-ALDi and show that a model trained on it can effectively identify levels of dialectness on a range of other corpora (including dialects and genres not included in AOC-ALDi), providing a more nuanced picture than traditional DI systems. Through case studies, we illustrate how ALDi can reveal Arabic speakers{'} stylistic choices in different situations, a useful property for sociolinguistic analyses.",
}
```