# Aryabhata-1.0 GGUF Models
## Model Generation Details
This model was generated using llama.cpp at commit `221c0e0c`.
## Quantization Beyond the IMatrix
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I use the `--tensor-type` option in llama.cpp to manually "bump" important layers to higher precision. You can see the implementation here:
👉 Layer bumping with llama.cpp
While this does increase model file size, it significantly improves precision for a given quantization level.
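To make the idea concrete, here is a minimal sketch of what such a per-tensor bump can look like when driven from Python. The tensor patterns, quant types, and file names are illustrative assumptions, not the exact recipe used for these files:

```python
import subprocess

# Hypothetical invocation: re-quantize a GGUF while "bumping" selected
# tensors to higher precision via --tensor-type. The tensor patterns,
# quant types, and file names below are illustrative, not the recipe
# actually used for these files.
cmd = [
    "./llama-quantize",
    "--imatrix", "imatrix.dat",         # importance matrix from calibration data
    "--tensor-type", "attn_v=q8_0",     # bump attention V projections to 8-bit
    "--tensor-type", "ffn_down=q6_k",   # bump FFN down projections to 6-bit
    "Aryabhata-1.0-f16.gguf",           # full-precision input
    "Aryabhata-1.0-Q4_K_M.gguf",        # quantized output
    "Q4_K_M",                           # base type for all remaining tensors
]
subprocess.run(cmd, check=True)
```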
I'd love your feedback—have you tried this? How does it perform for you?
Click here to get info on choosing the right GGUF model format
# Aryabhata 1.0: An exam-focused language model for JEE Math
## Overview
Aryabhata 1.0 is a 7B parameter small language model for mathematics developed by Physics Wallah AI Research, optimized for high-stakes Indian competitive exams like JEE Mains. Despite its compact size, Aryabhata 1.0 achieves state-of-the-art performance on exam-centric reasoning tasks with impressive token efficiency and low inference cost.
🚧 Aryabhata 1.0 is an experimental release. We are actively seeking feedback — please contribute in the Discussion tab of this repo.
## 🧠 Key Features
- **Architecture:** 7B-parameter causal decoder model.
- **Exam-Centric Optimization:** specifically tuned for JEE-level mathematics reasoning.
- **High Accuracy:**
  - 86% on the JEE Mains January 2025 session.
  - 90.2% on the JEE Mains April 2025 session.
- **Token Efficiency:** operates effectively within a ~2K-token window, compared to the ~8K required by other reasoning models.
- **Compute Efficiency:** trained on a 1×2 NVIDIA H100 GPU setup using an optimized pipeline.
## 🛠️ Training Details
- **Training Data:** ~130K problem-solution pairs curated from proprietary Physics Wallah exam datasets.
- **Training Pipeline:**
  - Model Merging
  - Rejection Sampling
  - Supervised Fine-Tuning (SFT)
  - Reinforcement Learning with Verifiable Rewards (RLVR)
### 🔀 Model Merging
We began with model merging (weighted averaging) to build a strong initialization, Aryabhata 0.5, by combining diverse model capabilities (see the sketch after this list):
- **Qwen 2.5 Math:** a robust math-centric LLM with solid symbolic-math foundations.
- **Ace Math:** an enhanced version of Qwen 2.5 Math, fine-tuned by NVIDIA for improved accuracy on mathematics benchmarks.
- **DeepSeek R1 Distill Qwen:** a long-form reasoning model, fine-tuned on reasoning traces distilled from DeepSeek R1.
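A minimal sketch of weighted averaging follows, assuming architecturally compatible checkpoints. The specific checkpoint variants and merge coefficients shown are illustrative placeholders; the exact recipe behind Aryabhata 0.5 is not published here:

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative parent checkpoints and merge weights; the exact variants
# and coefficients are assumptions. Naive averaging also assumes the
# checkpoints share architecture and tokenizer.
parents = {
    "Qwen/Qwen2.5-Math-7B-Instruct": 0.4,
    "nvidia/AceMath-7B-Instruct": 0.4,
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B": 0.2,
}

merged = None
for model_id, weight in parents.items():
    state = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float32
    ).state_dict()
    if merged is None:
        merged = {name: weight * tensor for name, tensor in state.items()}
    else:
        for name, tensor in state.items():
            merged[name] += weight * tensor

# Load the averaged weights into one parent's architecture and save.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-7B-Instruct", torch_dtype=torch.float32
)
base.load_state_dict(merged)
base.save_pretrained("aryabhata-0.5-merged")
```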
### 📚 Data Curation + Rejection Sampling
We extracted ~250K raw questions from Physics Wallah's internal database and applied aggressive filtering and cleaning:
- **Removed:** diagram-based, non-English, and option-heavy questions.
- **Kept:** questions matching the distribution of JEE Main 2019–2024.

Final curated dataset: ~130K high-quality questions.
For each question:
- Generated 4 CoTs using Aryabhata 0.5.
- Retained only those leading to correct final answers.
Resulting Dataset:
- ~100K questions
- ~350K high-quality CoTs
We used this dataset for SFT.
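The filtering loop itself is simple. Below is a minimal sketch of the rejection-sampling step; `generate_cots` and `answers_match` are hypothetical placeholders for the actual model call and the answer-equivalence check:

```python
def rejection_sample(questions, generate_cots, answers_match, n_samples=4):
    """Keep only chains of thought whose final answer is correct.

    `generate_cots(question, n)` should return a list of
    (cot_text, final_answer) pairs; `answers_match(pred, target)` should
    return True when the answers are equivalent. Both are hypothetical
    placeholders, not the team's actual tooling.
    """
    sft_dataset = []
    for q in questions:
        # Sample n candidate solutions per question (4 in the setup above).
        for cot_text, final_answer in generate_cots(q["question"], n=n_samples):
            # Retain the trace only if it reaches the correct final answer.
            if answers_match(final_answer, q["answer"]):
                sft_dataset.append({"question": q["question"], "cot": cot_text})
    return sft_dataset
```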
### 🎯 Reinforcement Learning with Verifiable Rewards (RLVR)
We used a custom in-house variant of Group Relative Policy Optimization (GRPO), adapted for math-specific reward functions, with two simplifications:
- Removed KL-divergence penalty
- Removed clipping
We used RLVR on the remaining ~30K questions.
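With the KL penalty and clipping removed, the per-group objective reduces to an advantage-weighted log-likelihood. Here is a minimal sketch, assuming binary verifiable rewards and group-normalized advantages; it illustrates the two simplifications above rather than reproducing the exact in-house implementation:

```python
import torch

def grpo_step_loss(logprobs, rewards):
    """Policy loss for one group of sampled responses, with the
    KL-divergence penalty and PPO-style clipping removed.

    logprobs: (G,) summed token log-probs of each response under the
              current policy.
    rewards:  (G,) verifiable rewards, e.g. 1.0 if the final boxed
              answer is correct, else 0.0.
    """
    # Group-relative advantage: normalize rewards within the group.
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Without KL and clipping, the objective reduces to a plain
    # advantage-weighted log-likelihood.
    return -(advantages * logprobs).mean()

# Example: a group of 4 sampled solutions, 2 verified correct.
logprobs = torch.tensor([-120.0, -95.0, -110.0, -130.0], requires_grad=True)
rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
loss = grpo_step_loss(logprobs, rewards)
loss.backward()
```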
This multi-phase training strategy allows Aryabhata 1.0 to capture pedagogy-aligned reasoning patterns, making it highly effective for solving real student queries in mathematics.
## 📊 Performance Highlights
### Evaluation Setup
All evaluations were performed with temperature = 0.0, and we report pass@1 accuracy.
### Evaluation Datasets
We evaluated the model on two sets of official JEE Mains 2025 mathematics papers:
- January Session: 10 question papers containing 250 questions.
- April Session: 9 question papers containing 225 questions.
Each paper includes a mix of:
- Multiple Choice Questions (MCQs) with one correct option
- Numeric Answer Type (NAT) questions requiring precise numerical responses
### Evaluation Metric
We used a composite evaluation metric to reflect real-world grading rigor and reduce false positives (a sketch of the first two checks follows the list):
- **Float Match**
  - Compares predicted and target answers within a tolerance (±1e-9).
  - Handles rounding artifacts and small numerical errors robustly.
- **String Match**
  - Used for symbolic answers (e.g., fractions, radicals).
  - Uses strict exact match: predictions must match the ground truth character-for-character.
- **LLM-as-Judge (GPT-4o-mini)**
  - Used to judge mathematical equivalence for ambiguous formats.
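A minimal sketch of the first two stages, with the LLM-as-judge fallback omitted; the helper name is ours:

```python
import math

def composite_match(pred: str, target: str, tol: float = 1e-9) -> bool:
    """Float match within ±1e-9, falling back to strict string match.
    The LLM-as-judge stage (GPT-4o-mini) is omitted from this sketch."""
    try:
        # Float match: numeric comparison within the stated tolerance.
        return math.isclose(float(pred), float(target), rel_tol=0.0, abs_tol=tol)
    except ValueError:
        # String match: symbolic answers must match character-for-character.
        return pred.strip() == target.strip()

assert composite_match("2.0", "2.000000000")          # within tolerance
assert not composite_match("0.500000002", "0.5")      # off by 2e-9
assert composite_match(r"\sqrt{2}", r"\sqrt{2}")      # symbolic exact match
```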
### 🔹 Accuracy Comparison Across Models
*Figure: Aryabhata has the best accuracy on JEE Main Maths, on par with frontier models.*
### 🔹 Accuracy vs Token Usage
*Figure: Aryabhata is on par with frontier models in accuracy relative to token usage.*
## 🔧 Intended Use
Primary Use Cases:
- Competitive exam preparation (JEE Main level mathematics problems)
- Question answering and doubt-solving systems
- Educational tutoring and concept explanation
## 💡 How to Use
### 🧪 Using with 🤗 Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_id = "PhysicsWallahAI/Aryabhata-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Define stop strings
stop_strings = ["<|im_end|>", "<|end|>", "<im_start|>", "```python\n", "<|im_start|>", "]}}]}}]"]

def strip_bad_tokens(s, stop_strings):
    # Remove a trailing stop string from the decoded text, if present.
    for suffix in stop_strings:
        if s.endswith(suffix):
            return s[:-len(suffix)]
    return s

# Create generation config (can also set temperature, top_p, etc.)
generation_config = GenerationConfig(
    max_new_tokens=4096,
    stop_strings=stop_strings,
)

query = 'Find all the values of \\sqrt[3]{1}'
messages = [
    {'role': 'system', 'content': 'Think step-by-step; put only the final answer inside \\boxed{}.'},
    {'role': 'user', 'content': query},
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer([text], return_tensors="pt")

# The tokenizer must be passed to generate() for stop_strings to take effect.
outputs = model.generate(**inputs, generation_config=generation_config, tokenizer=tokenizer)
print(strip_bad_tokens(tokenizer.decode(outputs[0], skip_special_tokens=True), stop_strings))
```
### ⚡ Using with vLLM
To run the model efficiently using vLLM:
```python
from vllm import LLM, SamplingParams

# Initialize model (downloads from Hugging Face if not local)
llm = LLM(model="PhysicsWallahAI/Aryabhata-1.0")

# Define prompt and sampling configuration
query = 'Find all the values of \\sqrt[3]{1}'
messages = [
    {'role': 'system', 'content': 'Think step-by-step; put only the final answer inside \\boxed{}.'},
    {'role': 'user', 'content': query},
]
sampling_params = SamplingParams(
    temperature=0.0,
    max_tokens=4 * 1024,
    stop=["<|im_end|>", "<|end|>", "<im_start|>", "```python\n", "<|im_start|>", "]}}]}}]"],
)

# Run inference
results = llm.chat(messages, sampling_params)

# Print result
print(results[0].outputs[0].text.strip())
```
## 🚀 Roadmap
Aryabhata 2.0 (Upcoming):
- Extending domain coverage to Physics and Chemistry
- Supporting the JEE Advanced, NEET, and Foundation syllabi
- Further optimization for affordability and accuracy in real-time deployments
## 🤝 Citation
If you use this model, please cite:
```bibtex
@misc{Aryabhata2025,
  title  = {Aryabhata 1.0: A compact, exam-focused language model tailored for mathematics in Indian competitive exams, especially JEE Main},
  author = {{Physics Wallah AI Research}},
  year   = {2025},
  note   = {\url{https://huggingface.co/PhysicsWallahAI/Aryabhata-1.0}},
}
```
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder).
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limits, since the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token; for this reason, token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊