Reflect, Retry, Reward: Self-Improving LLMs via Reinforcement Learning
Abstract
A method using self-reflection and reinforcement learning improves the performance of large language models, especially with limited feedback, by rewarding self-reflections that lead to better task performance.
We explore a method for improving the performance of large language models through self-reflection and reinforcement learning. By incentivizing the model to generate better self-reflections when it answers incorrectly, we demonstrate that a model's ability to solve complex, verifiable tasks can be enhanced even when generating synthetic data is infeasible and only binary feedback is available. Our framework operates in two stages: first, upon failing a given task, the model generates a self-reflective commentary analyzing its previous attempt; second, the model is given another attempt at the task with the self-reflection in context. If the subsequent attempt succeeds, the tokens generated during the self-reflection phase are rewarded. Our experimental results show substantial performance gains across a variety of model architectures, as high as 34.7% improvement at math equation writing and 18.1% improvement at function calling. Notably, smaller fine-tuned models (1.5 billion to 7 billion parameters) outperform models in the same family that are 10 times larger. Our novel paradigm is thus an exciting pathway to more useful and reliable language models that can self-improve on challenging tasks with limited external feedback.
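To make the two-stage procedure concrete, here is a minimal Python sketch of one failed-task iteration. All names (`model.generate`, `verify`, `reflect_retry_reward`, and the prompt strings) are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of the reflect-retry-reward loop, under the assumption of a
# simple `model.generate(prompt) -> str` interface and a task-specific
# `verify(task, answer) -> bool` checker (both hypothetical).

def reflect_retry_reward(model, task, verify):
    """Return (reflection, reward) for one task, or (None, None) if no retry is needed."""
    # First attempt at the task.
    first_answer = model.generate(task.prompt)
    if verify(task, first_answer):
        return None, None  # already correct: nothing to reinforce

    # Stage 1: on failure, ask the model to reflect on its previous attempt.
    reflection_prompt = (
        task.prompt
        + "\nPrevious attempt:\n" + first_answer
        + "\nReflect on what went wrong and how to fix it."
    )
    reflection = model.generate(reflection_prompt)

    # Stage 2: retry the task with the self-reflection in context.
    retry_prompt = task.prompt + "\nSelf-reflection:\n" + reflection
    second_answer = model.generate(retry_prompt)

    # Binary reward: 1 if the retry passes the verifier, else 0.
    # Only the reflection tokens are reinforced with this reward.
    reward = 1.0 if verify(task, second_answer) else 0.0
    return reflection, reward
```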
Community
We present a two-stage framework where language models improve by generating self-reflective commentary after making mistakes and then retrying tasks with that reflection, receiving reinforcement if they succeed; notably, only the reflection tokens are rewarded, with other tokens masked out, to reinforce generalisable self-reflection rather than task-specific solutions.
Nice work and thank you for sharing! Will this implementation be publicly available on GitHub?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Learning to Reason without External Rewards (2025)
- Incentivizing LLMs to Self-Verify Their Answers (2025)
- Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? (2025)
- DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning (2025)
- SeRL: Self-Play Reinforcement Learning for Large Language Models with Limited Data (2025)
- ShorterBetter: Guiding Reasoning Models to Find Optimal Inference Length for Efficient Reasoning (2025)
- NOVER: Incentive Training for Language Models via Verifier-Free Reinforcement Learning (2025)
Nice to see more evidence that bigger isn't always better (especially with the right techniques)
Nice work and thank you for sharing this! I am curious: did you also run vanilla GRPO without self-reflection? I wonder how its accuracy would compare to the second try with self-reflection.
Will this implementation be publicly available on GitHub?
I have carefully read this paper, but I still feel a bit confused about the specific process it describes, especially the role of the "mask" mentioned in Section 4.4, "Multi-Step GRPO". Below is the process I derived after discussing it with Gemini. Could you please confirm whether this interpretation is correct?
We believe the process is a decoupled, two-step procedure orchestrated by a custom `GRPOTrainer`.
For a single training step on a batch of failed tasks:
Step 1: Reflection Generation (The core RL step, managed by `GRPOTrainer`)
- Input: The `GRPOTrainer` receives a batch of prompts. Each prompt consists of the original query, the first failed attempt, and an instruction to reflect (e.g., "Reflect on what went wrong...").
- Generation: For each prompt, the trainer uses the model to generate a group of `k` different self-reflections. These are the "initial completions" in the RL context. The trainer calculates and stores the `log_probs` for the tokens of each generated reflection.
- Masking: At this stage, a standard TRL mask is used to differentiate the input prompt from the generated self-reflection. This ensures that any future gradient update will only apply to the reflection tokens. (This is my biggest confusion: in TRL's `GRPOTrainer`, the prompt tokens are already excluded when the loss is calculated, so in theory a special mask should not be needed.) My rough sketch of this step is below.
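Concretely, this is how I picture the prompt construction and the reflection-only mask, assuming a Hugging Face-style tokenizer; all helper and field names are my own guesses, not the paper's or TRL's code:

```python
import torch

def build_reflection_batch(tokenizer, query, failed_attempt, reflections):
    """Tokenize prompt + reflection and mark which tokens should be trained on."""
    prompt = (
        query
        + "\nPrevious (incorrect) attempt:\n" + failed_attempt
        + "\nReflect on what went wrong:\n"
    )
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids[0]

    batch = []
    for reflection in reflections:  # the k sampled self-reflections
        reflection_ids = tokenizer(
            reflection, return_tensors="pt", add_special_tokens=False
        ).input_ids[0]
        input_ids = torch.cat([prompt_ids, reflection_ids])
        # 0 over prompt tokens, 1 over reflection tokens: only the
        # reflection tokens should receive a policy-gradient update.
        completion_mask = torch.cat([
            torch.zeros_like(prompt_ids),
            torch.ones_like(reflection_ids),
        ])
        batch.append({"input_ids": input_ids, "completion_mask": completion_mask})
    return batch
```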
Step 2: Answer Generation & Reward Calculation (The custom `second_step` function)
- Execution: After the reflections are generated, a custom function (the `second_step` mentioned in the paper) is executed. This function operates outside the main gradient computation graph.
- Iteration: It iterates through each of the `k` self-reflections generated in Step 1.
- Second Attempt: For each `reflection_i`, it constructs a new temporary prompt (`original_query + reflection_i`) and performs a standard, non-RL model generation to produce a `final_answer_i`.
- Verification: Each `final_answer_i` is passed to a task-specific verifier (e.g., does the math equation evaluate correctly?). The verifier returns a scalar reward `reward_i` (e.g., 1 for success, 0 for failure). My rough sketch of this function is below.
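Here, `model_generate` and `verify` are placeholder callables of mine, not the authors' API:

```python
def second_step(model_generate, verify, original_query, reflections):
    """Retry the task once per reflection and return one binary reward each."""
    rewards = []
    for reflection in reflections:  # the k self-reflections from Step 1
        # New temporary prompt: original query plus the reflection in context.
        retry_prompt = original_query + "\nSelf-reflection:\n" + reflection
        # Plain generation; no gradients are tracked in this step.
        final_answer = model_generate(retry_prompt)
        # Task-specific verifier: 1 for success, 0 for failure.
        rewards.append(1.0 if verify(original_query, final_answer) else 0.0)
    return rewards
```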
Step 3: RL Policy Update (Back within `GRPOTrainer`)
- Return Rewards: The `second_step` function returns the list of scalar rewards `{reward_1, reward_2, ..., reward_k}` to the `GRPOTrainer`.
- Calculate Advantage: The trainer uses this group of rewards to calculate the advantage for each of the `k` reflections (e.g., `Advantage_i = (reward_i - mean(rewards)) / std(rewards)`).
- Backpropagation: The trainer performs the policy gradient update. `Advantage_i` is used to scale the loss calculated from the stored `log_probs` of `reflection_i`. Thanks to the mask from Step 1, the update correctly and exclusively affects the policy for generating the self-reflection tokens. My sketch of this masked update is below.
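This is a simplified REINFORCE-style sketch that ignores GRPO's clipping and KL terms; the function and tensor names are mine:

```python
import torch

def masked_policy_loss(log_probs, completion_mask, rewards, eps=1e-4):
    """
    log_probs:       (k, T) per-token log-probabilities of the k reflections
    completion_mask: (k, T) 1 on reflection tokens, 0 on prompt/padding tokens
    rewards:         (k,)   binary rewards returned by second_step
    """
    # Group-relative advantage: normalize rewards within the group of k.
    advantages = (rewards - rewards.mean()) / (rewards.std() + eps)
    # Scale each reflection's token log-probs by its advantage; prompt
    # tokens contribute nothing because their mask entries are zero.
    per_token = -advantages[:, None] * log_probs * completion_mask
    return per_token.sum() / completion_mask.sum().clamp(min=1)
```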
I'm not sure if my understanding is correct. It would be much better if the GitHub source code could be provided. Thank you for your clarification.