---
license: apache-2.0
tags:
  - unsloth
  - trl
  - sft
datasets:
  - suayptalha/Diagnose-Instructions
language:
  - en
base_model:
  - Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
---

# Qwen3-0.6B-Diagnosis-Expert

This project performs full fine-tuning of the Qwen3-0.6B language model to enhance its clinical diagnosis interpretation and reasoning capabilities. The model was trained in bfloat16 (bf16) precision.
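
Below is a minimal inference sketch using the transformers library. The repository id (taken from this model card's location) and the example prompt are illustrative assumptions, not part of the original training or evaluation setup.

```python
# Minimal inference sketch; repository id and prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "suayptalha/Qwen3-0.6B-Diagnose"  # assumed id of this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": "A 54-year-old patient presents with chest pain radiating "
                   "to the left arm, sweating, and nausea. What is the most "
                   "likely diagnosis, and what is your reasoning?",
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```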

## Training Procedure

1. **Dataset Preparation**
   - Dataset: `suayptalha/Diagnose-Instructions`, containing paired clinical patient histories and step-by-step diagnostic conclusions.
2. **Model Loading and Configuration**
   - Base model: Qwen3-0.6B, loaded with the unsloth library in bf16 precision.
   - Full fine-tuning (`full_finetuning=True`) applied to all layers to adapt the model to medical diagnostic tasks.
3. **Supervised Fine-Tuning (SFT)**
   - Used the Hugging Face TRL library's supervised fine-tuning (SFT) trainer; a sketch of how steps 2 and 3 fit together follows this list.
   - The model was trained to generate both intermediate reasoning steps and final diagnostic statements.
   - Training hyperparameters:
     - Epochs: 2
     - Learning rate: 2e-5
     - Batch size: 8
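
The sketch below shows one way to wire these steps together with unsloth and TRL. The sequence length, output directory, dataset split, and column handling are assumptions; this is not the original training script.

```python
# Sketch of the training setup described above; max_seq_length, output_dir,
# split name, and dataset formatting are assumptions.
from unsloth import FastLanguageModel  # import unsloth first, as it recommends

import torch
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Step 2: load Qwen3-0.6B in bf16 with all layers trainable (full fine-tuning, not LoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-0.6B",
    max_seq_length=2048,          # assumption
    dtype=torch.bfloat16,
    full_finetuning=True,
    load_in_4bit=False,
)

# Step 1: the instruction dataset of patient histories and diagnoses.
# How its columns map to training text depends on the dataset schema and is omitted here.
dataset = load_dataset("suayptalha/Diagnose-Instructions", split="train")

# Step 3: supervised fine-tuning with TRL, using the hyperparameters listed above.
trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="Qwen3-0.6B-Diagnosis-Expert",  # assumption
        num_train_epochs=2,
        learning_rate=2e-5,
        per_device_train_batch_size=8,
        bf16=True,
    ),
)
trainer.train()
```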

## Purpose and Outcome

- Fine-tuning significantly improved the model's ability to interpret clinical information and propose accurate, structured diagnoses.

## Evaluation

- Performance was measured on a held-out validation set with the following metric:
  - Diagnostic Similarity: 71.68% similarity compared to a DeepSeek V3-0324 baseline.
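
This card does not specify how Diagnostic Similarity is computed. One common way to score the similarity between two diagnosis texts is cosine similarity of sentence embeddings, sketched below with sentence-transformers; the embedding model choice is an assumption, and this is not necessarily how the 71.68% figure was obtained.

```python
# Hypothetical illustration of a text-similarity score between the model's
# diagnosis and a baseline's output; embedding model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def diagnostic_similarity(model_output: str, baseline_output: str) -> float:
    """Cosine similarity between sentence embeddings of two diagnosis texts."""
    embeddings = embedder.encode([model_output, baseline_output], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()
```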

## License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.