Update README.md
README.md
@@ -20,7 +20,7 @@ To enhance efficiency, we replaced the causal self-attention layers with bidirec
 Finally, our implementation integrates FlashAttention V2 for faster inference.
 
 
-- **Paper:** [
+- **Paper:** [MoSE: Hierarchical Self-Distillation Enhances Early Layer Embeddings](https://arxiv.org/abs/2503.03008)
 - **Languages:** 600+ Programming languages
 
 
@@ -86,8 +86,8 @@ The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can
 
 # Citation
 ```
-@article{
-title={
+@article{gurioli2025mosehierarchicalselfdistillationenhances,
+title={MoSE: Hierarchical Self-Distillation Enhances Early Layer Embeddings},
 author={Andrea Gurioli and Federico Pennino and João Monteiro and Maurizio Gabbrielli},
 year={2025},
 eprint={2503.03008},
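The context of the first hunk notes that the model uses bidirectional attention and integrates FlashAttention V2. As a minimal sketch of how such a checkpoint is typically loaded with Hugging Face `transformers` (the model ID below is a placeholder, not the actual repository path, which this diff does not show):

```python
# Minimal sketch: loading an encoder with FlashAttention 2 enabled
# via Hugging Face transformers. "org/mose-encoder" is hypothetical;
# substitute the real checkpoint name.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "org/mose-encoder"  # placeholder model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,                # FlashAttention 2 needs fp16/bf16
    attn_implementation="flash_attention_2",   # requires the flash-attn package
).to("cuda")

# Embed a code snippet (the model targets 600+ programming languages)
inputs = tokenizer("def hello(): pass", return_tensors="pt").to("cuda")
with torch.no_grad():
    embeddings = model(**inputs).last_hidden_state
```

FlashAttention 2 only supports half-precision inputs, hence the `bfloat16` dtype; on hardware without flash-attn support, dropping the `attn_implementation` argument falls back to the default attention kernel.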