The base Qwen2.5-Math-7B model used by ReLIFT. We change rope_theta from 10000 to 40000 and extend the context window to 16k. We also modify the chat_template for the system prompt.
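As a minimal sketch, the configuration changes above would amount to patching two fields in the model's config.json (`rope_theta` and `max_position_embeddings` are the standard Qwen2 config keys; the original context length shown and the exact 16k value of 16384 are assumptions, not stated on this card):

```python
import json
import os
import tempfile

# Hypothetical excerpt of the base model's config.json; only the two
# fields touched by the changes described above are shown.
config = {
    "rope_theta": 10000.0,          # base model value, per the card
    "max_position_embeddings": 4096,  # assumed original context length
}

# Apply the modifications: rope_theta 10000 -> 40000, context -> 16k.
config["rope_theta"] = 40000.0
config["max_position_embeddings"] = 16384  # 16k assumed to mean 16 * 1024

# Write the patched config back out, as one would before re-saving a model.
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)

with open(path) as f:
    patched = json.load(f)
print(patched["rope_theta"], patched["max_position_embeddings"])
```

In practice the same effect is achieved by editing config.json in the model repository before loading it with `transformers`.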

Github: https://github.com/TheRoadQaQ/ReLIFT

Citation

If you find our model, data, or evaluation code useful, please kindly cite our paper:

@article{ma2025learning,
  title={Learning What Reinforcement Learning Can't: Interleaved Online Fine-Tuning for Hardest Questions},
  author={Ma, Lu and Liang, Hao and Qiang, Meiyi and Tang, Lexiang and Ma, Xiaochen and Wong, Zhen Hao and Niu, Junbo and Shen, Chengyu and He, Runming and Cui, Bin and others},
  journal={arXiv preprint arXiv:2506.07527},
  year={2025}
}
Model size: 8B params (Safetensors, F32)

Collection including RoadQAQ/Qwen2.5-Math-7B-16k-think