
Aureus ERP Fine-Tuned Gemma-3 Model

This is a fine-tuned version of the Gemma-3 open-source LLM, specialized for answering questions based on AureusERP developer documentation.

Aureus ERP is an open-source Enterprise Resource Planning (ERP) platform built with Laravel. It offers a modular, scalable architecture that lets businesses of all sizes manage operations such as inventory, sales, and projects. Aureus ERP is designed to be flexible, customizable, and developer-friendly, so businesses can tailor the platform to their specific needs.


🔧 Model Details

  • Base Model: Gemma-3 (4B)
  • Fine-tuned On: Internal AureusERP Developer Docs
  • Precision: bfloat16 (merged & optimized for deployment)
  • Tokenizer: Same as base model
  • Training Objective: Supervised fine-tuning on synthetically generated QA pairs to align the model with the documentation (see the sketch below)
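The exact training recipe is not published; the following is a minimal sketch of what such a supervised fine-tuning run could look like with the TRL library, assuming the curated QA pairs (see the Data Source section below) have been flattened into a single text column. The base checkpoint name, file name, and hyperparameters are illustrative, not the actual configuration.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative only: QA pairs flattened into a single "text" column per example.
dataset = load_dataset("json", data_files="qa_pairs.json", split="train")

trainer = SFTTrainer(
    model="google/gemma-3-4b-it",  # assumed base checkpoint, not confirmed by this card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="gemma3-aureus-sft",
        num_train_epochs=3,                 # hypothetical hyperparameters
        per_device_train_batch_size=2,
        learning_rate=2e-5,
    ),
)
trainer.train()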

🧠 Use Case

This model is optimized for answering technical questions related to AureusERP, such as:

  • Filament integration and actions
  • Module architecture (e.g., FollowerAction, notifications, dev hooks)
  • Deployment/configuration steps
  • Developer-specific usage patterns

πŸ“ Data Source

Training data was automatically generated with Gemini 2.5 Flash and then curated.

  • 📄 Source: Internal AureusDevDocs (markdown)
  • 🤖 Process: QA pairs extracted from the docs using Gemini
  • 🔒 Format: JSON list of question-answer pairs (a hypothetical example follows below)
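The exact schema of the generated pairs is not published; the snippet below is a hypothetical sketch of the JSON list and of how each pair could be flattened into Gemma-style training text (field names, file name, and turn formatting are assumptions).

import json

# Hypothetical QA-pair format; the real field names may differ.
qa_pairs = [
    {
        "question": "How can I add followers using AureusERP FollowerAction?",
        "answer": "...",  # curated answer text taken from the developer docs
    },
]

# Flatten each pair into one training string using Gemma's turn markers.
def to_text(pair):
    return (
        f"<start_of_turn>user\n{pair['question']}<end_of_turn>\n"
        f"<start_of_turn>model\n{pair['answer']}<end_of_turn>\n"
    )

with open("qa_pairs.json", "w") as f:
    json.dump([{"text": to_text(p)} for p in qa_pairs], f, indent=2)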

🔌 How to Use

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("webkul/Aureus-gemma-finetuned", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("webkul/Aureus-gemma-finetuned")

prompt = "How can I add followers using AureusERP FollowerAction?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate an answer and print it without special tokens.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
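
Gemma-3 checkpoints are usually chat-tuned, so wrapping the question with the tokenizer's chat template (assuming the fine-tune kept the base template) may produce cleaner answers; a minimal variant of the snippet above:

messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))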

👥 Maintained By

Webkul AI Research Team (https://webkul.com)
