Moxin is a family of fully open-source and reproducible LLMs
AI & ML interests: AI, LLMs, agents
Organization Card
Introducing Moxin 7B: The truly open, SOTA-performing LLM and VLM that's redefining transparency.
We have open-sourced everything: pre-training code, data, and models, including our GRPO-enhanced reasoning model. Moxin outperforms Mistral, Qwen, and LLaMA on zero-shot and few-shot tasks and delivers superior reasoning on complex math benchmarks, all at an efficient full-pretraining cost of roughly $160K.
We unleash the power of reproducible AI 🚀. Interested? Explore the models and code on our GitHub, and read the full paper on arXiv.
models (6)

moxin-org/DeepSeek-V3-0324-Moxin-GGUF • Text Generation • 671B • 374 downloads • 2 likes
moxin-org/Moxin-7B-VLM • 7 downloads • 1 like
moxin-org/Moxin-7B-Reasoning • 8B • 4 downloads
moxin-org/Moxin-7B-Instruct • 2 downloads • 4 likes
moxin-org/Moxin-7B-Chat • 8B • 76 downloads • 34 likes
moxin-org/Moxin-7B-LLM • Text Generation • 8B • 214 downloads • 19 likes
datasets (0) • none public yet