AllM Assistant

AllM Assistant is a lightweight instruction-tuned LLM focused on fitness and lifestyle guidance. This repository contains training scripts, inference wrappers, a Gradio demo for Hugging Face Spaces, and a sample instruction-response dataset to fine-tune a causal LM (GPT-2 by default).

Contents

  • src/ — model, inference, and training utilities
  • data/ — sample train.jsonl and val.jsonl
  • hf_space/ — Gradio demo app
  • requirements.txt — exact package versions to reproduce the environment
  • README.md, model_card.md, LICENSE, .gitignore
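
Each line of the sample dataset is a single JSON object. A minimal sketch of what a record might look like (the field names here are an assumption; check data/train.jsonl for the actual schema):

```python
import json

# Hypothetical instruction-response record; field names are an
# assumption, not necessarily the repo's exact schema.
record = {
    "instruction": "Create a 10-minute beginner home workout for fat loss.",
    "response": "Warm up for 2 minutes, then alternate jumping jacks and bodyweight squats.",
}

line = json.dumps(record)   # each line of train.jsonl holds one such object
parsed = json.loads(line)   # round-trips cleanly for streaming readers
```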

Quick start (local)

  1. Create and activate a virtual env:
    python -m venv venv
    source venv/bin/activate   # Windows: venv\Scripts\activate
    pip install -r requirements.txt
    
  2. Train (small quick demo):
    python src/trainer.py --model_name_or_path gpt2 --train_file data/train.jsonl --validation_file data/val.jsonl --output_dir outputs/allm --num_train_epochs 1 --per_device_train_batch_size 1
    
  3. Inference:
    python src/inference.py --model_dir outputs/allm --prompt "Create a 10-minute beginner home workout for fat loss."
    
  4. Run the demo locally:
    python hf_space/app.py
    

Notes

  • This project uses GPT-2 by default for speed. After testing, you can replace the base model with larger OSS LLMs.
  • For efficient fine-tuning on limited hardware, consider using PEFT/LoRA (PEFT is included in requirements).
  • The dataset included is synthetic sample data for demo and testing only — expand with high-quality real data for production.