Update README.md
README.md
@@ -12,6 +12,21 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
+# Paper and Citation
+Paper: [Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages](https://arxiv.org/abs/2506.19187)
+```
+@misc{toukmaji2025prompttranslatefinetunereinitialize,
+    title={Prompt, Translate, Fine-Tune, Re-Initialize, or Instruction-Tune? Adapting LLMs for In-Context Learning in Low-Resource Languages},
+    author={Christopher Toukmaji and Jeffrey Flanigan},
+    year={2025},
+    eprint={2506.19187},
+    archivePrefix={arXiv},
+    primaryClass={cs.CL},
+    url={https://arxiv.org/abs/2506.19187},
+}
+```
+
 # focus_bur_phi_focus_trained
 
 This model is a fine-tuned version of [final_models/focus_bur_phi_after_focus_reinit](https://huggingface.co/final_models/focus_bur_phi_after_focus_reinit) on the mc4 my dataset.