<div align="center">

# Open Reasoner Zero

<img src="figure/logo.jpg" width="300"/>

<div>

An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model
</div>
</div>

<div align="center" style="line-height: 1;">
<a href="https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero" style="margin: 2px;"><img alt="Code" src="https://img.shields.io/badge/Open%20Reasoner%20Zero-000000?style=for-the-badge&logo=github&logoColor=white" style="display: inline-block; vertical-align: middle;"/></a>

<a href="https://huggingface.co/Open-Reasoner-Zero" target="_blank"><img alt="Hugging Face" src="https://img.shields.io/badge/HuggingFace-fcd022?style=for-the-badge&logo=huggingface&logoColor=000"/></a>

<a href="https://yasminezhang.notion.site/Open-Reasoner-Zero-19e12cf72d418007b9cdebf44b0e7903" target="_blank"><img alt="Notion Page" src="https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white"/></a>

<br>
<a href="https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf"><b>Paper PDF Link [WIP]</b></a>
</div>

<div>
<br>

</div>

## Overview

We introduce **Open-Reasoner-Zero**, the first open-source implementation of large-scale reasoning-oriented RL training, focused on scalability, simplicity, and accessibility.

To enable broader participation in this pivotal moment and to accelerate research towards artificial general intelligence (AGI), we release our source code, parameter settings, training data, and model weights. Please refer to our [paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) for more insights across various model sizes.

**Let the Reasoner-Zero tide rise!**

## Main Results

![](figure/teaser.png)

*Figure 1 | Evaluation performance of Open-Reasoner-Zero-\{7B, 32B\} on benchmarks (averaged over 16 responses) during training. Using the same base model as DeepSeek-R1-Zero-Qwen-32B, Open-Reasoner-Zero-32B achieves superior performance on the AIME2024, MATH500, and GPQA Diamond benchmarks, requiring only a tenth of the training steps.*

![](figure/train_curve.png)

*Figure 2 | Train-time scale-up on train reward and response length of Open-Reasoner-Zero (ORZ)-\{0.5B, 1.5B, 7B, 32B\}. Train reward and response length increase steadily, demonstrating consistent scalability across model sizes. Interestingly, the ORZ-32B response length exhibits fluctuations without negatively impacting training stability, highlighting the robustness of our minimalist recipe.*

## Releases

<strong>[2025/03/31]</strong>
We announce a major milestone for `Open-Reasoner-Zero`:

- [Updated Paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) with new results.
- [Easy-to-use Training Scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/playground):
  - [ORZ-1.5B training scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/playground/orz_1p5b_ppo.py) and [ORZ-0.5B training scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/playground/orz_0p5b_ppo.py) (main results in Figure 2).
  - [Minimal-resource training scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/playground/orz_0p5b_ppo_1gpu.py): ORZ-0.5B can be run on a single A800/H800 GPU!
- [Updated Curated Datasets](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/data):
  - 129k samples in total:
    - [original 57k samples](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_57k_collected.json).
    - [extended 72k samples](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_72k_collection_extended.json).
  - [13k hard samples](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_13k_collection_hard.json) mined from the above 129k samples,
    - used in the "annealing" stage of ORZ-32B training: **AIME2024 from ~41% to ~48%**!
- More HF Models:
  - Updated HF Models: [`Open-Reasoner-Zero-7B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-7B) and [`Open-Reasoner-Zero-32B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-32B).
  - Released HF Models: [`Open-Reasoner-Zero-1.5B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-1.5B) and [`Open-Reasoner-Zero-0.5B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-0.5B).
- Full Suite of Critic Models for in-depth research: `Open-Reasoner-Zero-Critic-`{[0.5B](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-Critic-0.5B), [1.5B](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-Critic-1.5B), [7B](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-Critic-7B), [32B](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-Critic-32B)}.

<strong>[2025/02/18]</strong>
We release `Open-Reasoner-Zero`.

As part of this release, we open-source:
- [Paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf) on our comprehensive analysis and insights in Reasoner-Zero training.
- HF Models [`Open-Reasoner-Zero-7B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-7B) and [`Open-Reasoner-Zero-32B`](https://huggingface.co/Open-Reasoner-Zero/Open-Reasoner-Zero-32B).
- [Our curated 57k training data](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/data).
- [Training Scripts](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/playground) to enjoy your own Reasoner-Zero journey!

## Key Features in Codebase

- Single-controller trainer design: flexible and researcher-friendly.
- Training and generation colocated on the same GPUs to maximize GPU utilization.

## Getting Started

### Data

We release all of our curated high-quality training data in the [`data`](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/data) folder:

* curated 129k samples:
  * [original 57k](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_57k_collected.json), collected from various sources, including AIME (up to 2023), MATH, the Numina-Math collection, and Tulu3 MATH.
  * [extended 72k](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_72k_collection_extended.json), mainly cleaned from OpenR1-Math-220k.
* [hard 13k](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/data/orz_math_13k_collection_hard.json), mined from the first stage of ORZ-32B training.

The details of how we collect the data are described in our [paper](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/ORZ_paper.pdf).

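As a quick sanity check of the downloaded files, here is a minimal sketch assuming each dataset is a top-level JSON array and that `jq` is installed (adjust the paths if you store the data elsewhere):

```bash
# Count the records in each released dataset (paths assume the repo's data/ folder).
jq length data/orz_math_57k_collected.json           # expect roughly 57k entries
jq length data/orz_math_72k_collection_extended.json # expect roughly 72k entries
jq length data/orz_math_13k_collection_hard.json     # expect roughly 13k entries
```
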
### Installation & Training Scripts

We release our [Dockerfile](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/blob/main/docker/Dockerfile) in the [docker](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero/tree/main/docker) folder to facilitate the reproducibility of our training.

To install the package, run:
```bash
pip install -e .
```

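If you prefer a containerized environment, here is a minimal sketch of building and entering the image from the released Dockerfile (the `orz` tag and the mount path are illustrative placeholders, not part of the release):

```bash
# Build the training image from the released Dockerfile
# (the image tag "orz" is an arbitrary placeholder).
docker build -t orz -f docker/Dockerfile .

# Start an interactive container with all GPUs visible and the repo mounted
# (requires the NVIDIA Container Toolkit for --gpus).
docker run --gpus all -it -v "$(pwd)":/workspace/Open-Reasoner-Zero orz bash
```
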
#### Start ORZ-32B PPO Training

Here are the commands to start training on 16 nodes.

First, on the master node, run:
```bash
ray start --head
# you will see logging like:
# Next steps
#   To add another node to this Ray cluster, run
#     ray start --address='<master-node-ip>:<master-node-port>'
```

Then, on all other nodes, run:
```bash
ray start --address='<master-node-ip>:<master-node-port>' # <master-node-ip> and <master-node-port> are from the logging above!
```

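Before launching the job, it can be worth confirming that all 16 nodes have actually joined the cluster; `ray status` ships with Ray and prints the node and resource summary:

```bash
# On the master node: verify the cluster sees all 16 nodes before launching training.
ray status
```
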
Finally, on the master node, run:
```bash
python -m playground.orz_32b_ppo
```
Your training log will be shown in the master node terminal.

------

#### Start ORZ-0.5B PPO Training

You can start ORZ-0.5B PPO training on a single A800/H800 node:
```bash
python -m playground.orz_0p5b_ppo
```

You can even run it on **a single A800/H800 GPU**:
```bash
python -m playground.orz_0p5b_ppo_1gpu
```

Note: since this is not a multi-node setting, no `ray start`-style setup is needed.

------

#### Start ORZ-7B PPO Training

Multi-node training on 4 nodes:
```bash
# set up for multi-node training
ray start --head # on master node
ray start --address='<master-node-ip>:<master-node-port>' # then on other nodes

# then on master node, run:
python -m playground.orz_7b_ppo
```

Your training log will be shown in the master node terminal.

-----

#### Start ORZ-1.5B PPO Training

Multi-node training on 2 nodes:
```bash
# set up for multi-node training
ray start --head # on master node
ray start --address='<master-node-ip>:<master-node-port>' # then on other nodes

# then on master node, run:
python -m playground.orz_1p5b_ppo
```

----

#### Debug Settings

In the code, we leave an environment variable `DEBUG_MODE` for researchers to iterate in a debug setting. (Though for now, we recommend using `python -m playground.orz_0p5b_ppo_1gpu` for debugging.)

Example debug commands:
```bash
# NOTE: just for debugging, not the final setting!

## Debug command on a single GPU with `EleutherAI/pythia-14m`
DEBUG_MODE=True python -m playground.orz_14m_ppo_mini
## Debug command on a single node (8 GPUs) with `Qwen/Qwen2.5-7B`
DEBUG_MODE=True python -m playground.orz_7b_ppo
```

## Acknowledgements

- This work was supported by computing resources and valuable feedback provided by [StepFun](https://www.stepfun.com/) and Tsinghua University.
- Our training framework is built on [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [vllm](https://github.com/vllm-project/vllm), [DeepSpeed](https://github.com/deepspeedai/DeepSpeed) and [ray](https://github.com/ray-project/ray).
- Our models are based on the [Qwen2.5 Series](https://qwenlm.github.io/blog/qwen2.5-llm/) of **base models**, including [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B), [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B), [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) and [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B).
- We thank [Project Numina](https://projectnumina.ai/), [Tulu3](https://allenai.org/blog/tulu-3-technical) and [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) for their open-sourced data collections.

## Advertisement Time

We are hiring talented researchers and engineers to join our team. If you are interested in our project and would like to contribute to scaling reasoners up all the way to AGI, please feel free to reach out to us at [email protected].

[![Star History Chart](https://api.star-history.com/svg?repos=Open-Reasoner-Zero/Open-Reasoner-Zero&type=Timeline)](https://star-history.com/#Open-Reasoner-Zero/Open-Reasoner-Zero&Timeline)

## Community Discussions

We have several WeChat groups for discussion and sharing; you can scan the QR code below to join the latest group.

<img src="figure/WeChatGroup.png" width="300" style="display: block; margin: 0 auto;"/>

## Citation

```bibtex
@misc{OpenReasonerZero2025,
  title={Open-Reasoner-Zero: An Open Source Approach to Scaling Reinforcement Learning on the Base Model},
  author={Jingcheng Hu and Yinmin Zhang and Qi Han and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
  year={2025},
  howpublished={\url{https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero}},
}
```