arXiv:2510.22543

FAPO: Flawed-Aware Policy Optimization for Efficient and Reliable Reasoning

Published on Oct 26 · Submitted by Ding on Oct 30
AI-generated summary

Flawed-Aware Policy Optimization (FAPO) improves reinforcement learning with verifiable rewards by penalizing flawed-positive rollouts, enhancing reasoning capability and training stability without increasing computational cost.

Abstract

Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising paradigm for enhancing the reasoning capabilities of large language models (LLMs). In this setting, models explore reasoning trajectories and exploit rollouts with correct answers as positive signals for policy optimization. However, these rollouts may contain flawed patterns such as answer-guessing and jump-in-reasoning. Such flawed-positive rollouts are rewarded identically to fully correct ones, causing policy models to internalize these unreliable reasoning patterns. In this work, we first conduct a systematic study of flawed-positive rollouts in RL and find that they enable rapid capability gains during the early optimization stage, but later constrain reasoning capability by reinforcing unreliable patterns. Building on these insights, we propose Flawed-Aware Policy Optimization (FAPO), which introduces a parameter-free reward penalty for flawed-positive rollouts. This allows the policy to exploit them as useful shortcuts in the warm-up stage, securing stable early gains, while gradually shifting optimization toward reliable reasoning in the later refinement stage. To detect flawed-positive rollouts accurately and comprehensively, we introduce a generative reward model (GenRM) with a process-level reward that precisely localizes reasoning errors. Experiments show that FAPO is effective across broad domains, improving outcome correctness, process reliability, and training stability without increasing the token budget.
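
The abstract describes the mechanism only at a high level, so the snippet below is a minimal Python sketch of the general idea, not the paper's actual formulation: the names `fapo_style_reward` and `first_flawed_step` and the constant `flaw_penalty` are hypothetical, the paper's penalty is described as parameter-free (which this fixed constant does not reproduce), and in practice the flawed-step location would come from the GenRM's process-level feedback.

```python
from typing import Optional

def fapo_style_reward(answer_correct: bool,
                      first_flawed_step: Optional[int],
                      flaw_penalty: float = 0.5) -> float:
    """Illustrative outcome reward with a penalty for flawed-positive rollouts.

    - Incorrect final answer: reward 0, as in standard RLVR.
    - Correct answer and no flawed step detected: full reward 1.
    - Correct answer but a flawed step detected (e.g. answer-guessing or
      jump-in-reasoning located by a GenRM): reduced reward, so flawed
      positives still provide signal early on but are down-weighted
      relative to fully reliable reasoning.

    `flaw_penalty` is a placeholder constant for illustration only.
    """
    if not answer_correct:
        return 0.0
    if first_flawed_step is None:
        return 1.0
    return 1.0 - flaw_penalty

# Example usage (hypothetical values):
print(fapo_style_reward(answer_correct=True, first_flawed_step=None))  # 1.0
print(fapo_style_reward(answer_correct=True, first_flawed_step=3))     # 0.5
print(fapo_style_reward(answer_correct=False, first_flawed_step=None)) # 0.0
```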

Community


Project homepage: https://fapo-rl.github.io


Models citing this paper 2

Datasets citing this paper 2

Spaces citing this paper 0


Collections including this paper 1