Update README.md
README.md CHANGED
@@ -5,6 +5,11 @@ license: apache-2.0
---

This is unsloth/llama-3-8b-Instruct trained on the Replete-AI/code-test-dataset with the code below, using Unsloth and Google Colab in under 15 GB of VRAM. Training completed in about 40 minutes total.
Copied from my announcement in my Discord:

```
If anyone wants to train their own llama-3-8b model for free on any dataset with around 1,500 lines of data or less, you can now do it easily using the code I provided in the model card for my test model in this repo and Google Colab. The training for this model uses Unsloth + QLoRA + GaLore to make training possible with such low VRAM.
```
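The author's full training code appears further down this model card. As a rough orientation for how the three pieces named above usually fit together, here is a minimal sketch of an Unsloth + QLoRA + GaLore run. The model and dataset names come from this repo; every hyperparameter, the dataset's `text` field, and the specific GaLore optimizer setting are illustrative assumptions, not a reproduction of the author's exact notebook. The `max_seq_length` knob discussed next appears at the top of the sketch.

```python
# Hedged sketch of an Unsloth + QLoRA + GaLore setup for a Colab T4-class GPU.
# Model and dataset names are from this repo; all hyperparameters, the "text"
# column name, and the GaLore settings are assumptions for illustration.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

max_seq_length = 8192  # edit to match the max tokens of your dataset or model

# Load the base model with 4-bit quantized weights (the "Q" in QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices receive gradient updates.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumed adapter rank
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)

dataset = load_dataset("Replete-AI/code-test-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes the dataset exposes a "text" column
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,              # free-tier Colab T4 GPUs lack bf16 support
        logging_steps=1,
        output_dir="outputs",
        # GaLore: low-rank gradient projection, here with 8-bit AdamW
        # (requires the galore_torch package and transformers >= 4.39).
        optim="galore_adamw_8bit",
        optim_target_modules=["attn", "mlp"],
    ),
)
trainer.train()
```

Whether GaLore and the LoRA adapters interact in exactly this way in the original run is also an assumption; the sketch only shows where each piece plugs in.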
For anyone who is new to coding and training AI, all you really have to edit is:

1. (max_seq_length = 8192) to match the max tokens of the dataset or model you are using