# Model Card for Susant-Achary/smolLM2-recipes
This is a 360-million-parameter causal language model, fine-tuned with LoRA adapters on a cleaned Indian recipe dataset. It generates step-by-step cooking instructions in English.
## Model Details
### Model Description
A compact decoder-only transformer causal language model (360M parameters), derived from HuggingFaceTB/SmolLM2-360M by low-rank adaptation (LoRA) fine-tuning on ~22K Indian recipes (EmTpro01/indian-recipe-cleaned). It is well suited to generating detailed, structured recipe instructions; a sketch of a comparable adapter setup follows the details below.
- Developed by: Susant Achary
- Model type: Causal Language Model (decoder-only transformer)
- Language(s): English
- License: Apache-2.0
- Finetuned from: HuggingFaceTB/SmolLM2-360M
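The training recipe itself is not published on this card. As a minimal sketch, the adapter setup for a comparable LoRA fine-tune of the base model might look like the following; the rank, alpha, dropout, and target-module names are illustrative assumptions, not the card's actual configuration:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-360M")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-360M")
tokenizer.pad_token = tokenizer.eos_token  # SmolLM2 ships without a dedicated pad token

# Illustrative LoRA settings; the values actually used for this model are not stated.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # adapters train only a small fraction of the 360M weights

recipes = load_dataset("EmTpro01/indian-recipe-cleaned", split="train")  # ~22K rows
```

From here, any standard causal-LM training loop (for example, `transformers.Trainer` over the tokenized recipes) produces the adapter weights.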
### Model Sources
- Hugging Face Hub: https://huggingface.co/Susant-Achary/smolLM2-recipes
- Dataset: EmTpro01/indian-recipe-cleaned on Hugging Face Datasets
## Uses
### Direct Use
- Generate detailed, step-by-step Indian recipe instructions given a recipe title and ingredient list.
- Integrate into cooking chatbots or recipe suggestion systems.
### Downstream Use
- Fine-tune further for multilingual or specialized dietary versions.
- Plug into kitchen assistant apps or voice-controlled cooking aids (a hypothetical integration helper is sketched after this list).
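For the chatbot and assistant integrations above, a thin wrapper can enforce the prompt format shown under "How to Get Started with the Model" below. This is a hypothetical sketch; `suggest_recipe` and its defaults are illustrative, not an API shipped with the model:

```python
from transformers import pipeline

# Loads the model and tokenizer directly from the Hub.
generator = pipeline("text-generation", model="Susant-Achary/smolLM2-recipes")

def suggest_recipe(title: str, ingredients: list[str]) -> str:
    """Build the model's expected prompt and return only the generated instructions."""
    prompt = (
        f"### Recipe: {title}\n"
        "### Ingredients:\n"
        f"{', '.join(ingredients)}\n\n"
        "### Instructions:\n"
    )
    out = generator(prompt, max_new_tokens=200, do_sample=True, top_p=0.9)
    # generated_text echoes the prompt, so strip it off before returning.
    return out[0]["generated_text"][len(prompt):]

print(suggest_recipe("Masala Dosa", ["rice", "urad dal", "potato", "onion"]))
```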
### Out-of-Scope Use
- Nutritional analysis or precise caloric computation (model does not predict nutritional values).
- Medical or allergy advice; consult a qualified professional instead.
## Bias, Risks, and Limitations
The model reflects the distribution of its training data, which is dominated by North and South Indian recipes. It may under-represent other regional cuisines and occasionally produce inconsistent ingredient quantities or cooking times.
### Recommendations
- Validate generated recipes before use (a minimal automated check is sketched after this list).
- Provide user feedback loops to catch and correct errors.
- Augment with nutritional or safety checks from external sources.
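As one illustration of the first recommendation, a simple sanity check can verify that the generated instructions mention every prompted ingredient before the recipe is shown to a user. The helper below is a hypothetical sketch, not part of the model or this card:

```python
def missing_ingredients(ingredients: list[str], instructions: str) -> list[str]:
    """Return the prompted ingredients that never appear in the generated text."""
    text = instructions.lower()
    return [item for item in ingredients if item.lower() not in text]

ingredients = ["rice", "urad dal", "potato", "mustard seeds"]
generated = "Soak the rice and urad dal overnight. Boil the potato and temper the mustard seeds."
print(missing_ingredients(ingredients, generated))  # [] when every ingredient is mentioned
```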
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("Susant-Achary/smolLM2-recipes")
tokenizer.pad_token = tokenizer.eos_token  # the tokenizer has no dedicated pad token

model = AutoModelForCausalLM.from_pretrained(
    "Susant-Achary/smolLM2-recipes", device_map="auto"
)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Prompt format used throughout this card: title, ingredient list, then instructions.
prompt = (
    "### Recipe: Masala Dosa\n"
    "### Ingredients:\n"
    "rice, urad dal, fenugreek seeds, potato, onion, mustard seeds, curry leaves\n\n"
    "### Instructions:\n"
)
# max_new_tokens bounds only the generated continuation, not the prompt length.
output = generator(prompt, max_new_tokens=200, do_sample=True, top_p=0.9)
print(output[0]["generated_text"])
```
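Because `do_sample=True` with `top_p=0.9` samples from the nucleus of the token distribution, the generated recipe varies between runs; pass `do_sample=False` for deterministic, greedy output.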