toukmaji-flanigan-gem25
Collection
Models and datasets from the ACL GEM paper (Toukmaji and Flanigan, 2025)
49 items
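The collection's contents can be listed programmatically with `huggingface_hub`. A minimal sketch, assuming a placeholder slug, since the owner namespace and slug suffix are not shown on this page:

```python
# Sketch: enumerating the items in this collection via the Hub API.
# "<namespace>/toukmaji-flanigan-gem25" is a placeholder slug; real
# collection slugs include the owner namespace and an ID suffix.
from huggingface_hub import get_collection

collection = get_collection("<namespace>/toukmaji-flanigan-gem25")
for item in collection.items:  # 49 models and datasets
    print(item.item_type, item.item_id)
```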
@misc{toukmaji2025prompttranslatefinetunereinitialize,
title={Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages},
author={Christopher Toukmaji and Jeffrey Flanigan},
year={2025},
eprint={2506.19187},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.19187},
}
This model is a fine-tuned version of final_models/focus_lug_mpt_after_focus_reinit on the mozilla-foundation/common_voice_11_0 lg (Luganda) dataset. Per-epoch validation losses on the evaluation set are reported in the training results table below.
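A minimal loading sketch with `datasets` and `transformers`. The model path is the local path quoted above; whether it resolves to a Hub repository is an assumption, and Common Voice requires accepting the dataset terms on the Hub first:

```python
# Sketch: loading the Common Voice 11.0 Luganda split and the MPT-style
# checkpoint named on this card. The model path below is the local path
# from the card, used here as a placeholder identifier.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# "lg" is the Luganda configuration of Common Voice 11.0.
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "lg")

model_id = "final_models/focus_lug_mpt_after_focus_reinit"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_id)
# MPT models ship custom modeling code, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```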
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.7812        | 1.0   | 697  | 6.0611          |
| 5.9688        | 2.0   | 1394 | 5.6902          |
| 5.375         | 3.0   | 2091 | 5.5077          |
| 4.5625        | 4.0   | 2788 | 5.3659          |
| 3.8594        | 5.0   | 3485 | 5.5141          |
| 2.3594        | 6.0   | 4182 | 5.8420          |
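A hedged sketch of a `Trainer` setup consistent with the table (6 epochs, evaluation once per epoch, roughly 697 optimizer steps per epoch). The card does not record the actual training hyperparameters, so the learning rate and batch size below are placeholders, not the values used for this model:

```python
# Sketch: per-epoch logging/evaluation matching the table above.
# Reuses model/tokenizer/dataset from the loading sketch earlier on
# this card. Placeholder hyperparameters are marked as such.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="focus_lug_mpt_finetuned",
    num_train_epochs=6,               # matches the 6 epochs in the table
    evaluation_strategy="epoch",      # one validation-loss row per epoch
    logging_strategy="epoch",         # one training-loss row per epoch
    per_device_train_batch_size=8,    # placeholder; real value unknown
    learning_rate=5e-5,               # placeholder; real value unknown
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
```

Note that newer `transformers` releases rename `evaluation_strategy` to `eval_strategy`; use whichever your installed version accepts.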