# AureusERP Fine-Tuned Gemma-3 Model
This is a fine-tuned version of the Gemma-3 open-source LLM, specialized for answering questions based on AureusERP developer documentation.
## Model Details

- Base Model: gemma-3b
- Fine-tuned On: Internal AureusERP developer docs
- Precision: Float16 (merged & optimized for deployment)
- Tokenizer: Same as the base model
- Training Objective: Supervised fine-tuning on synthetically generated question-answer (QA) pairs
## Use Case
This model is optimized for answering technical questions related to AureusERP, such as:
- Filament integration and actions
- Module architecture (e.g., FollowerAction, notifications, dev hooks)
- Deployment/configuration steps
- Developer-specific usage patterns
## Data Source
Training data was automatically generated using Gemini 2.5-flash and then curated.
- Source: Internal AureusDevDocs (Markdown)
- Process: QA pairs extracted from the docs using Gemini
- Format: JSON list of question-answer pairs
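The exact internal schema is not published; as a rough sketch, a JSON list of QA pairs in this style can be loaded like so (the `question`/`answer` field names and the sample content are assumptions, not the actual training file):

```python
import json

# Hypothetical sample in the format described above: a JSON list of
# question-answer pairs. Field names are illustrative assumptions.
sample = """
[
  {
    "question": "How do I register a FollowerAction in a Filament resource?",
    "answer": "Add the action to the resource's header actions array."
  }
]
"""

qa_pairs = json.loads(sample)
print(len(qa_pairs), "pairs;", "first question:", qa_pairs[0]["question"])
```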
## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("webkul/Aureus-gemma-finetuned", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("webkul/Aureus-gemma-finetuned")

prompt = "How can I add followers using AureusERP FollowerAction?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
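If the fine-tune preserved the base Gemma chat template (an assumption; the plain-prompt call above works either way), wrapping the question with `tokenizer.apply_chat_template` usually yields better-formatted answers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "webkul/Aureus-gemma-finetuned"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-style prompt; assumes the tokenizer still ships
# the base model's chat template.
messages = [
    {"role": "user", "content": "How can I add followers using AureusERP FollowerAction?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```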
## Maintained By

[Webkul AI Research Team](https://webkul.com)