# 🧠 Fine-Tuning Gemma 3 4B on Healthcare Admin Tasks
This repository demonstrates how to fine-tune the instruction-tuned `google/gemma-3-4b-it` model on a custom dataset covering administrative tasks in the healthcare industry.
## 📋 Project Overview
We use the Unsloth framework to:
- Load and quantize the base Gemma model in 4-bit precision.
- Apply LoRA (Low-Rank Adaptation) for efficient parameter tuning.
- Train the model using Hugging Face's `trl` library and its `SFTTrainer`.
This setup significantly reduces the memory footprint and training cost, making it feasible to train on a single widely available GPU (e.g. a Colab T4 or an A100).
## 🩺 Dataset: Healthcare Admin
- Source: `xgalaxy/healthcare_admin`
- Format: ShareGPT-style JSON with structured `user` and `assistant` roles
- Coverage:
  - Appointment scheduling, cancellation, and rescheduling
  - Edge cases involving follow-ups, missing info, and ambiguous requests
  - Multi-turn conversations to emulate real-world interactions
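A minimal sketch of loading and inspecting the dataset with Hugging Face `datasets`; the `conversations` column name and the sample turns in the comments are assumptions about the ShareGPT-style layout, not guaranteed field names.

```python
from datasets import load_dataset

# Pull the dataset from the Hugging Face Hub.
dataset = load_dataset("xgalaxy/healthcare_admin", split="train")

# Each record is expected to carry a ShareGPT-style list of turns, e.g.:
# {
#   "conversations": [
#     {"role": "user", "content": "I need to reschedule my appointment on Friday."},
#     {"role": "assistant", "content": "Sure - which clinic is the appointment with?"}
#   ]
# }
print(dataset[0])
```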
## 🛠️ Key Components
### ✅ Model Setup
- `google/gemma-3-4b-it` loaded with Unsloth's `FastModel.from_pretrained()`
- 4-bit quantization enabled via `load_in_4bit=True`
- LoRA adapters injected for memory-efficient tuning
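A hedged sketch of the setup above using Unsloth's `FastModel` API; the sequence length, LoRA rank, and other hyperparameters are illustrative placeholders, not the exact values used for the released checkpoint.

```python
from unsloth import FastModel

# Load the instruction-tuned base model with 4-bit quantization.
model, tokenizer = FastModel.from_pretrained(
    model_name="google/gemma-3-4b-it",
    max_seq_length=2048,  # illustrative; size to fit your conversations
    load_in_4bit=True,
)

# Inject LoRA adapters so only a small set of low-rank weights is trained.
model = FastModel.get_peft_model(
    model,
    r=8,             # LoRA rank (illustrative)
    lora_alpha=8,
    lora_dropout=0.0,
    random_state=3407,
)
```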
### ✅ Training
- Supervised fine-tuning with `SFTTrainer`
- Larger effective batch size simulated with `gradient_accumulation_steps`
- Linear learning-rate scheduler with warmup
- Training capped at a fixed number of steps for fast iteration
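A sketch of the training step with `trl`'s `SFTTrainer`, assuming the `model`, `tokenizer`, and `dataset` objects from the snippets above and that the ShareGPT turns have already been rendered into a plain `text` column (e.g. via the tokenizer's chat template); all hyperparameters shown are placeholders.

```python
from trl import SFTConfig, SFTTrainer

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",       # assumes pre-rendered chat text
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,   # simulates an effective batch size of 8
        warmup_steps=5,
        max_steps=60,                    # fixed step cap for fast iteration
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        logging_steps=1,
        output_dir="outputs",
    ),
)

trainer.train()
```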
## 🚀 Trained Model
The fine-tuned model is available on Hugging Face: 👉 `xgalaxy/gemma-3`
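A quick inference sketch, assuming a recent `transformers` release with chat-aware text-generation pipelines and that the uploaded checkpoint is a merged, standalone model (if only LoRA adapters were pushed, they would need to be applied with `peft` instead).

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
generator = pipeline("text-generation", model="xgalaxy/gemma-3")

messages = [
    {"role": "user", "content": "Hi, I'd like to cancel my appointment for next Tuesday."},
]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```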
## 📚 Resources
- Unsloth GitHub
- Gemma on Hugging Face
- Healthcare Admin Dataset