ReadMe++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment
Paper • 2305.14463 • Published
AraBERT-base (aubmindlab/bert-base-arabertv02) model fine-tuned on the Arabic portion of the ReadMe++ corpus for sentence-level readability prediction on the 6-level CEFR scale.

How to use tareknaous/readabert-ar with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-classification", model="tareknaous/readabert-ar")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("tareknaous/readabert-ar")
model = AutoModelForSequenceClassification.from_pretrained("tareknaous/readabert-ar")
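The text-classification pipeline returns generic labels such as "LABEL_3" unless the model config defines an id2label mapping. A small helper can translate these into CEFR level names — a minimal sketch, assuming class indices 0-5 correspond to CEFR levels A1 through C2 in order (check the model's config.json to confirm the actual mapping):

# Hypothetical index-to-level mapping; verify against the model's id2label.
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

def to_cefr(prediction):
    """Convert one pipeline result, e.g. {"label": "LABEL_3", "score": 0.91},
    into a CEFR level string by parsing the trailing class index."""
    idx = int(prediction["label"].split("_")[-1])
    return CEFR_LEVELS[idx]

# Example with a mocked pipeline output (real usage: to_cefr(pipe(text)[0])):
print(to_cefr({"label": "LABEL_3", "score": 0.91}))  # B2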
GitHub (dataset and Python package): https://github.com/tareknaous/readme