Commit · 6802a39
Parent(s): aece249
Improve wording, add link
README.md CHANGED
@@ -72,7 +72,7 @@ If a question does not make any sense, or is not factually coherent, explain why
 ### Credits & Special Thanks
 
 - Thanks to [Meta AI](https://ai.meta.com/) for training and releasing the Llama2 model.
-
+- Distributed training support was provided by EPFL's [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/), and [Natural Language Processing Lab](https://nlp.epfl.ch/).
 - The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning.
 - [rombodawg](https://huggingface.co/rombodawg) curated the [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored) dataset.
 - [ehartford](https://huggingface.co/ehartford) generated and published the [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the [ehartford/oa_leet10k](https://huggingface.co/datasets/ehartford/oa_leet10k) datasets.