Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ The chatbot has been fine-tuned on the **PHR Therapy Dataset** using **LLaMA 3.2
 - **Base Model**: LLaMA 3.2 3B Instruct
 - **Dataset**: PHR Therapy Dataset (contains therapist-patient conversations for empathetic response generation)
 - **Fine-Tuning Framework**: Unsloth (optimized training for efficiency)
-- **Training Environment**:
+- **Training Environment**: Google Colab (free tier)
 - **Optimization Techniques**:
   - LoRA (Low-Rank Adaptation) for parameter-efficient tuning
   - Mixed Precision Training for speed and memory efficiency
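The LoRA bullet in the hunk above can be illustrated with a minimal, dependency-free numeric sketch (toy matrix sizes chosen for illustration; this is not the actual Unsloth configuration used in the repo). The idea: the base weight matrix `W` stays frozen, and only two small matrices `A` and `B` are trained, whose product `B @ A` is a low-rank update to `W`.

```python
# Minimal sketch of LoRA (Low-Rank Adaptation) with toy sizes.
# Instead of updating a d_out x d_in weight matrix W, LoRA freezes W and
# learns B (d_out x r) and A (r x d_in) with r << d, so far fewer
# parameters are trained than in full fine-tuning.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(W, A, B, x, scale=1.0):
    """Apply the effective weight W + scale * (B @ A) to input vector x."""
    BA = matmul(B, A)
    W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, BA)]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_eff]

# Frozen 2x2 base weight; rank-1 adapters (r=1): B is 2x1, A is 1x2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
x = [1.0, 1.0]

print(lora_forward(W, A, B, x))  # → [2.0, 3.0]
```

With `scale=0.0` the adapter is disabled and the frozen base weight alone is applied, which is why LoRA adapters can be merged or dropped without retraining the base model.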