# Model Card for LEMMA-LLAMA-3-70B
## Key Takeaways
- 💡 Systematic analysis of error types: Categorizes common model-generated mathematical reasoning errors, revealing consistent error patterns across models and guiding targeted improvements.
- 💡 Error-type-grounded error augmentation: Introduces diverse and meaningful errors by leveraging a teacher model to intentionally inject representative mistakes, with types sampled from the analyzed distribution, enhancing the model's ability to learn from failures.
- 💡 Two complementary self-correction mechanisms: Combines Fix & Continue (correcting mistakes within the original reasoning) and Fresh & Restart (restarting the reasoning process from scratch) to generate effective revision trajectories.
- ✅ LEMMA: A novel framework that fine-tunes LLMs on error-corrective trajectories, enabling autonomous error detection and correction during mathematical reasoning.
- 🚀 Result: Up to 13.3% average accuracy improvement for LLaMA3-8B with fewer than 90k synthesized examples.
The LEMMA series models are trained on the LEMMA Dataset, which is built from the training sets of MATH and GSM8K. For each question, the student model (LLaMA3-8B) produces its own erroneous solutions, and the teacher model (GPT-4o) deliberately introduces additional errors whose types are sampled from the student model's error-type distribution. Both the "Fix & Continue" and "Fresh & Restart" correction strategies are then applied to these errors to create error-corrective revision trajectories, and trajectories with incorrect final answers are filtered out. Fine-tuning on this dataset achieves up to 13.3% average accuracy improvement for LLaMA3-8B with fewer than 90k synthesized examples. For more details, please refer to our paper [LEMMA: Learning from Errors for MatheMatical Advancement in LLMs](https://arxiv.org/abs/2503.17439).
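The generation pipeline can be summarized in pseudocode. Below is a minimal sketch, assuming hypothetical `student`/`teacher` model wrappers; the helper names (`inject_error`, `fix_and_continue`, `fresh_and_restart`) and the error-type distribution are illustrative placeholders, not the released implementation.

```python
import random

# Assumed example distribution; the paper derives the real one
# from an analysis of the student model's mistakes.
ERROR_TYPE_DIST = {
    "calculation": 0.4,
    "misinterpretation": 0.3,
    "logical_gap": 0.3,
}

def extract_final_answer(text):
    # Toy extractor: assumes the final answer follows "The answer is".
    return text.rsplit("The answer is", 1)[-1].strip(" .")

def build_trajectories(question, gold_answer, student, teacher):
    trajectories = []
    # 1) Self-generated errors: sample a (possibly wrong) student solution.
    student_attempt = student.generate(question)
    # 2) Teacher-injected errors: sample an error type from the student's
    #    error distribution and have the teacher introduce it on purpose.
    error_type = random.choices(
        list(ERROR_TYPE_DIST), weights=list(ERROR_TYPE_DIST.values())
    )[0]
    injected_attempt = teacher.inject_error(question, error_type)
    for erroneous in (student_attempt, injected_attempt):
        # 3a) Fix & Continue: correct the mistake inside the original reasoning.
        # 3b) Fresh & Restart: point out the mistake, then re-solve from scratch.
        for revision in (
            teacher.fix_and_continue(question, erroneous),
            teacher.fresh_and_restart(question, erroneous),
        ):
            # 4) Keep only revisions whose final answer is correct.
            if extract_final_answer(revision) == gold_answer:
                trajectories.append({"question": question, "response": revision})
    return trajectories
```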
## Model Details

### Model Description

- Finetuned from model: Llama-3-70B

### Model Sources
- Repository: https://github.com/pzs19/LEMMA/
- Paper: https://arxiv.org/abs/2503.17439
## Direct Use

Usage is the same as for Llama-3-70B.
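For example, the model can be loaded with the standard `transformers` API, exactly as one would load Llama-3-70B. The repo id below is a placeholder; substitute the actual Hub id of this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LEMMA-LLAMA-3-70B"  # placeholder; use the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether "
    "in April and May?"
)
inputs = tokenizer(question, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```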
## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
## Training Details
The LEMMA series models are trained on the LEMMA Dataset using LLaMA-Factory. For more details, please refer to our paper.
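For illustration only, here is a minimal sketch of the supervised fine-tuning objective on a single revision trajectory, written with plain `transformers` rather than LLaMA-Factory; the prompt masking follows standard SFT practice, and none of the strings or values below are the paper's actual data or hyperparameters.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"  # the student base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Question: ...\nAnswer: "
trajectory = "Step 1 ... Wait, step 2 is wrong because ... Corrected: ... The answer is 42."

# Tokenize the prompt and the full sequence (assumes the prompt tokens are a
# prefix of the full tokenization, which holds for typical QA templates).
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + trajectory, return_tensors="pt").input_ids

# Standard SFT masking: compute the loss only on the revision trajectory.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

loss = model(input_ids=full_ids, labels=labels).loss
loss.backward()  # one illustrative step; real training uses batches, optimizers, schedules
```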
## Results
| Model | Checkpoint | Paper | GSM8K | MATH | License |
|---|---|---|---|---|---|
| LEMMA-LLAMA-3-8B | 🤗 HF Link | 📃 [LEMMA] | 79.2 | 38.3 | Llama 3 |
| LEMMA-LLAMA-3-70B | 🤗 HF Link | 📃 [LEMMA] | 91.5 | 51.8 | Llama 3 |
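For reference, exact-match scores like the GSM8K and MATH numbers above are typically computed by extracting the final answer from the generated solution and comparing it to the gold label. A toy scorer (the regex heuristic below is an assumption, not the paper's exact parser):

```python
import re

def extract_number(text):
    # Grab the last number in the text, ignoring thousands separators.
    nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return nums[-1] if nums else None

def exact_match_accuracy(predictions, references):
    correct = sum(
        extract_number(p) == extract_number(r)
        for p, r in zip(predictions, references)
    )
    return correct / len(references)

# Example: exact_match_accuracy(["... The answer is 72."], ["#### 72"]) -> 1.0
```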
## Citation

Please cite our paper if you use our model, code, or data.
@article{LEMMA,
  title={LEMMA: Learning from Errors for MatheMatical Advancement in LLMs},
  author={Zhuoshi Pan and Yu Li and Honglin Lin and Qizhi Pei and Zinan Tang and Wei Wu and Chenlin Ming and H. Vicky Zhao and Conghui He and Lijun Wu},
  journal={arXiv preprint arXiv:2503.17439},
  year={2025}
}