Abstract
MetaStone-S1, a reflective generative model built on a self-supervised process reward model, achieves efficient reasoning and scalable test-time performance with a far smaller reward model than existing PRMs.
We introduce our first reflective generative model, MetaStone-S1, which obtains OpenAI o3-mini's performance via a self-supervised process reward model (SPRM). By sharing the backbone network and using task-specific heads for next-token prediction and process scoring respectively, SPRM integrates the policy model and the process reward model (PRM) into a unified interface without extra process annotation, cutting PRM parameters by over 99% for efficient reasoning. Equipped with SPRM, MetaStone-S1 is naturally suited to test-time scaling (TTS), and we provide three reasoning effort modes (low, medium, and high) based on controllable thinking length. Moreover, we empirically establish a scaling law relating total thinking computation to TTS performance. Experiments demonstrate that MetaStone-S1 achieves performance comparable to the OpenAI o3-mini series with only 32B parameters. To support the research community, we have open-sourced MetaStone-S1 at https://github.com/MetaStone-AI/MetaStone-S1.
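To make the shared-backbone design concrete, here is a minimal PyTorch-style sketch of the unified interface: one trunk, a language-modeling head for next-token prediction, and a lightweight scoring head for process rewards. This is an illustration under stated assumptions, not the released implementation; the module names (`ReflectiveModel`, `SPRMHead`), the two-layer head, and the sigmoid readout are assumptions.

```python
import torch
import torch.nn as nn

class SPRMHead(nn.Module):
    """Lightweight process-scoring head riding on the shared policy backbone.

    Only this head is PRM-specific, which is how the PRM can add on the
    order of 53M parameters instead of a full separate reward model.
    """
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.SiLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # One scalar in (0, 1) per token position; in practice the score is
        # read out at step-boundary positions of the reasoning trace.
        return torch.sigmoid(self.score(hidden_states)).squeeze(-1)

class ReflectiveModel(nn.Module):
    """Unified interface: one shared backbone, two task-specific heads."""
    def __init__(self, backbone: nn.Module, hidden_size: int, vocab_size: int):
        super().__init__()
        self.backbone = backbone                            # shared transformer trunk
        self.lm_head = nn.Linear(hidden_size, vocab_size)   # next-token prediction
        self.sprm_head = SPRMHead(hidden_size)              # process scoring

    def forward(self, input_ids: torch.Tensor):
        h = self.backbone(input_ids)                        # (batch, seq, hidden)
        return self.lm_head(h), self.sprm_head(h)
```

Because the scoring head shares the same forward pass as generation, best-of-N test-time scaling (sampling several thinking trajectories and keeping the one the SPRM scores highest) needs no second model; the low/medium/high effort modes can then be read as different candidate budgets per query.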
Community
We introduce MetaStone-S1, a reflective generative model designed to significantly enhance test-time scaling (TTS) through a new reflective generative form. This work provides three major contributions:
- Reflective Generative Form: By sharing the backbone network between the policy model and the process reward model (PRM), we develop a unified interface that efficiently integrates reasoning and evaluation (as sketched in the code above), introducing a PRM of only 53M parameters for efficient inference.
- Self-supervised Process Reward Model: We introduce a novel self-supervised learning strategy that dynamically assigns outcome rewards to individual reasoning steps without the need for process-level annotations (see the loss sketch after this list).
- Scaling Law and Aha Moment: We empirically demonstrate a scaling law between reasoning computation and TTS performance, and identify the aha moment of the reflective generative form. Extensive evaluations on benchmarks such as AIME24, AIME25, LiveCodeBench, and C-EVAL show that MetaStone-S1 consistently achieves state-of-the-art performance compared with larger open-source and closed-source models.
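For the self-supervised PRM in the second bullet, one plausible reading of the training signal is: every step of a sampled trajectory inherits the verified final-answer label, and steps whose current score already disagrees with that label are masked out so the model is not forced onto noisy step-level pseudo-labels. The sketch below follows that reading; the function name, the 0.5 agreement threshold, and the masking rule are assumptions rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def sprm_self_supervised_loss(step_scores: torch.Tensor, outcome: float) -> torch.Tensor:
    """Sketch of outcome-to-step credit assignment without process labels.

    step_scores: (num_steps,) SPRM scores in (0, 1) for one trajectory.
    outcome: 1.0 if the final answer was verified correct, else 0.0.
    """
    targets = torch.full_like(step_scores, outcome)    # broadcast the outcome label
    # Self-supervised filtering (assumption): keep only steps whose current
    # prediction already agrees with the outcome label at a 0.5 threshold.
    agree = ((step_scores > 0.5).float() == targets).float()
    weights = agree.detach()                           # no gradient through the mask
    per_step = F.binary_cross_entropy(step_scores, targets, reduction="none")
    # Clamp avoids division by zero when no step agrees
    # (the trajectory then contributes zero loss).
    denom = weights.sum().clamp(min=1.0)
    return (weights * per_step).sum() / denom
```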
To foster community-driven research, we have open-sourced MetaStone-S1. Code, models, and resources are available at https://github.com/MetaStone-AI/MetaStone-S1.