HumBert

HumBert (Humanitarian Bert) is an XLM-RoBERTa model trained on humanitarian texts - approximately 50 million textual examples (roughly 2 billion tokens) from public humanitarian reports, law cases and news articles. Data were collected from three main sources: ReliefWeb, UNHCR Refworld and the Europe Media Monitor News Brief. Although XLM-RoBERTa was pretrained on 100 languages, this continued training was performed on only three - English, French and Spanish - because humanitarian data of this kind could not be found in sufficient quantity for other languages.

Developed by Nicolò Tamagnone, Data Friendly Space

Intended uses

To the best of our knowledge, HumBert is the first language model adapted to the humanitarian domain, which often uses very specific language, making adaptation to downstream tasks (such as disaster-response text classification) more effective. This model is primarily intended to be fine-tuned on tasks such as sequence classification or token classification.
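As a sketch of the fine-tuning setup described above, the checkpoint can be loaded with a sequence-classification head via the standard `transformers` API. The number of labels (3) and the example sentence are purely illustrative assumptions, not part of the released model; the classification head is randomly initialized until you train it on your own labeled data.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load HumBert with a (randomly initialized) classification head.
# num_labels=3 is a hypothetical choice for illustration only.
tokenizer = AutoTokenizer.from_pretrained("nlp-thedeep/humbert")
model = AutoModelForSequenceClassification.from_pretrained(
    "nlp-thedeep/humbert", num_labels=3
)

# Tokenize a sample sentence and run a forward pass.
inputs = tokenizer(
    "Flooding has displaced thousands of families.", return_tensors="pt"
)
logits = model(**inputs).logits  # shape: (1, num_labels)
```

From here, the model can be trained as usual, e.g. with the `Trainer` API or a plain PyTorch loop over your labeled dataset.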

Benchmarks

Soon...

Usage

Here is how to use this model to get the features of a given text in PyTorch:

from transformers import AutoTokenizer, AutoModelForMaskedLM

# load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nlp-thedeep/humbert")
model = AutoModelForMaskedLM.from_pretrained("nlp-thedeep/humbert")
# prepare input
text = "YOUR TEXT"
encoded_input = tokenizer(text, return_tensors="pt")
# forward pass
output = model(**encoded_input)
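Since the model carries a masked-language-modelling head, it can also be queried directly with the `fill-mask` pipeline. The example sentence below is an illustrative assumption; note that XLM-RoBERTa tokenizers use `<mask>` as the mask token.

```python
from transformers import pipeline

# Query HumBert's masked-LM head through the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="nlp-thedeep/humbert")

# The pipeline returns the top candidates for the masked position,
# each with a token string and a probability score.
predictions = fill_mask("Humanitarian aid reached the <mask> camp.")
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```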