# SmolLM3-3B • Indian-Recipe LoRA (NF4 4-bit)
## Summary

A lightweight LoRA adapter (≈ 200 MB) that teaches SmolLM3-3B to generate detailed, step-by-step Indian recipes given a dish name and ingredient list. Trained on the open-source [EmTpro01/indian-recipe-cleaned](https://huggingface.co/datasets/EmTpro01/indian-recipe-cleaned) corpus (6,871 recipes).
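To inspect the training corpus, the dataset can be pulled straight from the Hub. A minimal sketch (the exact column names depend on the dataset's schema):

```python
from datasets import load_dataset

# Load the recipe corpus referenced above
ds = load_dataset("EmTpro01/indian-recipe-cleaned", split="train")

print(ds)     # column names and row count
print(ds[0])  # one recipe record
```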
## Model Details

| Field | Value |
|---|---|
| Developer | Susant-Achary |
| Base model | HuggingFaceTB/SmolLM3-3B |
| Adapter type | LoRA (r=16, α=32, dropout 0.05) |
| Quantisation | 4-bit NF4, bfloat16 compute (BitsAndBytes) |
| Languages | English (culinary domain) |
| License | Apache-2.0 (inherits base-model license) |
| Finetuning data | 6,871 Indian recipes (CC-BY-SA-4.0) |
| Hardware | 1 × A100-40 GB |
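For reference, the settings in the table map roughly onto the following `peft`/`bitsandbytes` configuration. This is a sketch reconstructed from the table, not the actual training script; in particular, `target_modules` is an assumption (the usual attention projections for a decoder-only model):

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantisation with bfloat16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16)

# LoRA hyper-parameters from the table; target_modules is assumed
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
```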
## Model Sources
- Weights & tokenizer: this repository
- Dataset: see link above
## Uses

### Direct Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# The adapter repo; the base model (HuggingFaceTB/SmolLM3-3B) is resolved
# automatically from the adapter config when peft is installed.
lora = "Susant-Achary/smollm3-indian-recipes"

tok = AutoTokenizer.from_pretrained(lora)
model = AutoModelForCausalLM.from_pretrained(
    lora,
    load_in_4bit=True,
    device_map="auto",
    torch_dtype=torch.bfloat16)

prompt = ("Give me a detailed, step-by-step recipe for Paneer Butter Masala "
          "using these ingredients: paneer, tomato, butter, cream, garam masala.")

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```
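Newer `transformers` releases prefer an explicit `BitsAndBytesConfig` over the `load_in_4bit` shortcut. An equivalent explicit load, assuming the repo stores a standard LoRA adapter on top of `HuggingFaceTB/SmolLM3-3B`:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit NF4 quantisation config (matches the Model Details table)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16)

base_model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM3-3B",
    quantization_config=bnb,
    device_map="auto")

# Attach the LoRA adapter on top of the quantised base
model = PeftModel.from_pretrained(base_model, "Susant-Achary/smollm3-indian-recipes")
```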
Ask in Spanish and it still responds in Spanish:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# 1. Load the base + LoRA adapter (4-bit)
lora = "Susant-Achary/smollm3-indian-recipes"
tok = AutoTokenizer.from_pretrained(lora)
model = AutoModelForCausalLM.from_pretrained(
    lora,
    device_map="auto",
    load_in_4bit=True,
    torch_dtype=torch.bfloat16)

# 2. Ask for a Spanish recipe
# ("You are a chef who is an expert in Indian cuisine. Always answer in Spanish.")
system = "Eres un chef experto en cocina india. Responde siempre en español."
# ("Give me a detailed, step-by-step recipe for 'Chole Bhature' using the
#  following ingredients: chickpeas, onion, tomato, chole masala, wheat flour,
#  yogurt, oil.")
usuario = ("Dame una receta detallada, paso a paso, para hacer 'Chole Bhature' "
           "utilizando los siguientes ingredientes: garbanzos, cebolla, tomate, "
           "masala de garbanzos, harina de trigo, yogur, aceite.")

chat = tok.apply_chat_template(
    [{"role": "system", "content": system},
     {"role": "user", "content": usuario}],
    tokenize=False, add_generation_prompt=True)

inputs = tok(chat, return_tensors="pt").to(model.device)
out_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,   # sampling must be on for temperature/top_p to apply
    temperature=0.7,
    top_p=0.9)

# 3. Decode only the newly generated tokens
print(tok.decode(out_ids[0][inputs.input_ids.shape[1]:],
                 skip_special_tokens=True))
```
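To serve the model without `peft` at inference time, the adapter can be merged into the base weights. A sketch, assuming the base is loaded in bf16 (merging into 4-bit quantised weights is not supported):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base in full bf16 precision for merging
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM3-3B", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, "Susant-Achary/smollm3-indian-recipes")

# Fold the LoRA deltas into the base weights and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("smollm3-indian-recipes-merged")
```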
## Citation

If you find this adapter useful, please cite:
```bibtex
@misc{smollm3_indian_recipes_2025,
  title        = {SmolLM3-3B: Indian-Recipe LoRA},
  author       = {Susant-Achary},
  year         = {2025},
  howpublished = {HuggingFace Hub},
  url          = {https://huggingface.co/Susant-Achary/smollm3-indian-recipes}
}
```