---
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
---

# Qwen3-0.6B-Diagnosis-Expert

This project performs full fine-tuning of the **Qwen3-0.6B** language model to enhance its clinical diagnosis interpretation and reasoning capabilities. The model was trained in bfloat16 (bf16) precision.

## Training Procedure

1. **Dataset Preparation**

   * Dataset: paired clinical patient histories with step-by-step diagnostic conclusions.

2. **Model Loading and Configuration**

   * Base model: **Qwen3-0.6B**, loaded with the `unsloth` library in bf16 precision.
   * Full fine-tuning (`full_finetuning=True`) applied to all layers to adapt the model for medical diagnostic tasks.

3. **Supervised Fine-Tuning (SFT)**

   * Trained with the Hugging Face TRL library's supervised fine-tuning (SFT) trainer.
   * The model was trained to generate both intermediate reasoning steps and final diagnostic statements.
   * Training hyperparameters:
     * Epochs: 2
     * Learning rate: 2e-5
     * Batch size: 8
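
The loading and SFT steps above can be sketched as follows. This is a minimal outline under stated assumptions, not the exact training script: the dataset file name and its `text` column are hypothetical, and the argument names follow recent `trl` releases.

```python
# Sketch of steps 2-3: full fine-tuning of Qwen3-0.6B with unsloth + TRL.
# The dataset file name and column layout are assumptions, not from this card.

def hyperparameters() -> dict:
    """Hyperparameters stated above, keyed by TRL argument names."""
    return {
        "num_train_epochs": 2,
        "learning_rate": 2e-5,
        "per_device_train_batch_size": 8,
    }

def train():
    # Heavy imports stay inside the function: this path needs a GPU plus
    # the unsloth, trl, and datasets packages installed.
    import torch
    from unsloth import FastLanguageModel
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Load the base model in bf16 and enable full fine-tuning of all layers.
    model, tokenizer = FastLanguageModel.from_pretrained(
        "Qwen/Qwen3-0.6B",
        dtype=torch.bfloat16,
        full_finetuning=True,
    )
    # Hypothetical dataset: one JSON record per case, with a "text" field that
    # already concatenates the patient history and the diagnostic reasoning.
    dataset = load_dataset("json", data_files="clinical_sft.jsonl", split="train")
    trainer = SFTTrainer(
        model=model,
        processing_class=tokenizer,  # named `tokenizer` in older trl releases
        train_dataset=dataset,
        args=SFTConfig(bf16=True, **hyperparameters()),
    )
    trainer.train()

# train()  # uncomment to launch training (requires a CUDA GPU)
```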

## Purpose and Outcome

* Significantly improved the model's ability to interpret clinical information and propose accurate, structured diagnoses.
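
A minimal inference sketch with `transformers` follows. The repo id is a placeholder (not taken from this card), and passing a chat-style message list directly to the pipeline assumes a recent `transformers` release.

```python
# Minimal inference sketch. The repo id below is a placeholder; substitute
# the actual checkpoint location.

def build_prompt(history: str) -> list:
    """Wrap a patient history in a chat-style message list."""
    return [{
        "role": "user",
        "content": f"Patient history:\n{history}\n\nProvide a step-by-step diagnosis.",
    }]

def diagnose(history: str) -> str:
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="your-username/Qwen3-0.6B-Diagnosis-Expert",  # placeholder repo id
        torch_dtype=torch.bfloat16,
    )
    out = pipe(build_prompt(history), max_new_tokens=512)
    return out[0]["generated_text"][-1]["content"]  # last message = the reply

# print(diagnose("65-year-old male, crushing chest pain radiating to the left arm"))
```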

## Evaluation

* Performance was measured on a held-out validation set with the following metric:
  * **Diagnostic Similarity:** 71.68% similarity compared to the DeepSeek V3-0324 baseline.
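
The card does not state how Diagnostic Similarity is computed. As a purely illustrative stand-in, a normalized text similarity between the model's diagnosis and the baseline's can be computed with Python's `difflib`:

```python
from difflib import SequenceMatcher

def diagnostic_similarity(prediction: str, reference: str) -> float:
    """Illustrative stand-in metric: SequenceMatcher ratio in [0, 1] between
    two diagnosis texts. The card's actual metric is unspecified."""
    return SequenceMatcher(None, prediction.lower(), reference.lower()).ratio()

pred = "Acute myocardial infarction; recommend ECG and troponin testing."
ref = "Likely acute myocardial infarction; order an ECG and serial troponins."
print(f"{diagnostic_similarity(pred, ref):.2%}")
```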

## License

This project is licensed under the Apache License 2.0. See the [LICENSE](./LICENSE) file for details.