How to use raduion/bert-medium-luxembourgish with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="raduion/bert-medium-luxembourgish")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("raduion/bert-medium-luxembourgish")
model = AutoModelForMaskedLM.from_pretrained("raduion/bert-medium-luxembourgish")
```
BERT Medium for Luxembourgish
Trained on a dataset of 1M Luxembourgish sentences from Wikipedia, a corpus of approx. 16M words, using the masked language modeling (MLM) objective. The model follows the BERT Medium configuration with L=8 layers and hidden size H=512, and its vocabulary contains 70K word pieces.
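For a sense of scale, the L=8, H=512, 70K-vocabulary configuration can be turned into a rough parameter count. This is a sketch assuming the standard BERT layout (per-layer cost of roughly 12H² + 13H, plus token/position/segment embeddings with assumed defaults of 512 positions and 2 segment types); the model card itself does not report a parameter count, so treat the result as an estimate only:

```python
# Rough parameter-count estimate for a BERT with L=8, H=512, vocab=70K.
# Assumes the standard BERT layout; the actual total may differ slightly
# (pooler head, tied MLM decoder weights, etc.).
L, H, V = 8, 512, 70_000
MAX_POS, SEGMENTS = 512, 2  # assumed defaults, not stated in the card

# Token, position, and segment embeddings, plus the embedding LayerNorm.
embeddings = (V + MAX_POS + SEGMENTS) * H + 2 * H
# Self-attention (4H^2 + 4H) + feed-forward (8H^2 + 5H) + two LayerNorms (4H).
per_layer = 12 * H**2 + 13 * H
total = embeddings + L * per_layer

print(f"embeddings ~ {embeddings/1e6:.1f}M, per layer ~ {per_layer/1e6:.2f}M, total ~ {total/1e6:.1f}M")
```

Most of the budget sits in the embedding table, which is expected given the large 70K word-piece vocabulary relative to the 8-layer encoder.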
Final loss scores, after 3 epochs:
- Final train loss: 4.230
- Final train perplexity: 68.726
- Final validation loss: 4.074
- Final validation perplexity: 58.765
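The reported perplexities are consistent with the usual relation perplexity = exp(mean cross-entropy loss); a quick sanity check (the small residual differences come from rounding of the reported losses):

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss,
# so each reported loss/perplexity pair should match up to rounding.
train_loss, val_loss = 4.230, 4.074
train_ppl = math.exp(train_loss)  # ~68.7, vs. reported 68.726
val_ppl = math.exp(val_loss)      # ~58.8, vs. reported 58.765

print(f"train perplexity ~ {train_ppl:.3f}, validation perplexity ~ {val_ppl:.3f}")
```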