user | created_at | body | issue_number | __index_level_0__ |
---|---|---|---|---|
qgallouedec
| 2025-04-26T02:50:32 |
Are you using a modified version of GRPO?
| 3,310 | 11,105 |
ahatamiz
| 2025-04-26T02:57:32 |
Yes! But it works without any issues with the previous version, which is basically TP=1 and DP=1.
| 3,310 | 11,106 |
qgallouedec
| 2025-04-26T03:03:38 |
Try adding this line:
```diff
self.vllm_client = VLLMClient(args.vllm_server_host, args.vllm_server_port, connection_timeout=args.vllm_server_timeout)
+ self.vllm_client.init_communicator()
```
| 3,310 | 11,107 |
ahatamiz
| 2025-04-26T04:08:56 |
Thanks @qgallouedec! I kick-started another training with TP=1 and DP=8 and have not noticed any issues, at least for now.
Hopefully the issue is resolved.
Thanks again for your amazing work !
| 3,310 | 11,108 |
HuggingFaceDocBuilderDev
| 2025-04-16T20:43:40 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3309). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,309 | 11,109 |
HuggingFaceDocBuilderDev
| 2025-05-02T02:32:22 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3307). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,307 | 11,110 |
LeonEricsson
| 2025-04-16T12:29:16 |
~~I've replicated your issue and have identified the root cause as:~~
~~DPO concatenates chosen + rejected inputs to avoid performing multiple forward passes, and in doing so we naturally perform padding. But it seems this padding is always right-padding; it doesn't adhere to the tokenizer's `padding_side`. I'm still investigating if there are any side effects of doing left-padding instead, but wanted to update you in the meantime.~~
~~EDIT: if you want a quick fix I was able to run error-free with `padding_free=True`~~
EDIT 2: This has been an issue before: #1217. Because we are not doing generation, our padding side should not matter; we just need to pass `use_cache=False` in the forward pass and we will avoid the `is_padding_right` check completely.
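For reference, a minimal sketch of the idea (the model and inputs below are placeholders, not DPOTrainer's actual code):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch, not DPOTrainer's actual code: DPO only needs logits for the
# loss (no generation), so the KV cache is unnecessary; passing use_cache=False
# also sidesteps flash-attention's right-padding check mentioned above.
model_name = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
batch = tokenizer(["a chosen answer", "a rejected answer"], padding=True, return_tensors="pt")
outputs = model(**batch, use_cache=False)  # no cache needed when only computing a loss
print(outputs.logits.shape)
```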
| 3,306 | 11,111 |
benjamin-marie
| 2025-04-16T17:20:50 |
Thanks for your investigation!
I wonder whether we have a similar issue of the padding side being overridden in the other trainers as well.
Last week I was running SFT with Llama 3.1 using the SFTTrainer in the same environment, and when it reached the validation steps the trainer returned an error about the padding side being set to right. There was no error when I set the eval batch size to 1.
That's why I also tagged SFT, but this seems like another issue.
| 3,306 | 11,112 |
LeonEricsson
| 2025-04-17T05:50:47 |
Interesting, I'll see if I can replicate it for other trainers.
EDIT: SFTTrainer works for me; I tried both Qwen 2.5 and Llama with this config:
```python
training_args = SFTConfig(
    output_dir=f"{model_id}-codeforces-SFT",
    logging_steps=10,
    bf16=True,
    use_liger_kernel=True,
    max_length=500,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,
    dataset_num_proc=32,
    num_train_epochs=1,
    eval_strategy="steps",
    do_eval=True,
    optim="paged_adamw_8bit",
    eval_steps=10,
    max_steps=10,
)
```
| 3,306 | 11,113 |
qgallouedec
| 2025-04-18T17:29:51 |
Thank you for this information. So if I understand correctly, the error occurs when this minimum set of conditions is met:
- with DPO
- during evaluation
- with FA2
- with Qwen2
Is it possible for you to share full, minimal code to reproduce?
| 3,306 | 11,114 |
LeonEricsson
| 2025-04-18T20:29:08 |
Yes I agree with those conditions, although I think I was getting the error immediately on the first training step (can't confirm tonight, can check tomorrow).
This reproduces the error (this dataset requires a bunch of preprocessing so probably best to replace it with something simpler if you've got one on hand).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import TrainingArguments
from trl import DPOTrainer, DPOConfig
from utils import *  # provides get_datasets and apply_chat_template (see the notebook linked below)


def main():
    ######## MODEL #############
    model_name = "Qwen/Qwen2.5-1.5B-Instruct"
    model = AutoModelForCausalLM.from_pretrained(model_name, attn_implementation="flash_attention_2")
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    ############ DATASET ############
    # source: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Zephyr_(7B)-DPO.ipynb#scrollTo=AqkY_wHdKyOl
    raw_datasets = get_datasets(
        {"HuggingFaceH4/ultrafeedback_binarized": 0.005},  # 0.5% sampled
        splits=["train_prefs", "test_prefs"],
    )
    column_names = list(raw_datasets["train"].features)
    raw_datasets = raw_datasets.map(
        apply_chat_template,
        fn_kwargs={"tokenizer": tokenizer, "task": "dpo"},
        num_proc=12,
        remove_columns=column_names,
        desc="Formatting comparisons with prompt template",
    )

    # Replace column names with what TRL needs, text_chosen -> chosen and text_rejected -> rejected
    for split in ["train", "test"]:
        raw_datasets[split] = raw_datasets[split].rename_columns(
            {"text_prompt": "prompt", "text_chosen": "chosen", "text_rejected": "rejected"}
        )

    config = DPOConfig(
        output_dir="outputs",
        eval_strategy="steps",
        do_eval=True,
        optim="paged_adamw_8bit",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=1,
        per_device_eval_batch_size=1,
        log_level="debug",
        save_strategy="steps",
        save_steps=200,
        logging_steps=25,
        learning_rate=5e-6,
        bf16=True,
        beta=0.1,
        eval_steps=10,
        max_steps=10,
        warmup_ratio=0.1,
        lr_scheduler_type="linear",
        model_adapter_name="DPO",
        ref_adapter_name="reference",
        max_length=256,
        max_prompt_length=128,
    )

    dpo_trainer = DPOTrainer(
        model=model,
        args=config,
        train_dataset=raw_datasets["train"],
        processing_class=tokenizer,
        eval_dataset=raw_datasets["test"],
    )
    dpo_trainer.train()


if __name__ == "__main__":
    main()
```
| 3,306 | 11,115 |
h7878778h
| 2025-04-16T08:09:00 |
I know, should run `pip install trl[vllm]` first
| 3,304 | 11,116 |
HuggingFaceDocBuilderDev
| 2025-04-16T05:50:22 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3303). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,303 | 11,117 |
qgallouedec
| 2025-04-24T22:15:27 |
Closing as resolved by #3310
| 3,303 | 11,118 |
HuggingFaceDocBuilderDev
| 2025-04-16T05:16:00 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3302). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,302 | 11,119 |
HuggingFaceDocBuilderDev
| 2025-04-15T18:29:47 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3300). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,300 | 11,120 |
qgallouedec
| 2025-04-15T20:42:01 |
@AMindToThink are your latest commits for this PR?
| 3,300 | 11,121 |
AMindToThink
| 2025-04-15T21:47:41 |
Oops, sorry, no. Reverted.
I'm working to allow users to save the value model along with the policy model.
I'm running into some odd errors. I'll make a bug report.
| 3,300 | 11,122 |
AMindToThink
| 2025-04-15T23:02:32 |
Hi @qgallouedec,
It turns out that [this issue was already raised a year and a half ago](https://github.com/huggingface/trl/issues/3293), and you decided to keep the value model Optional with a None default to keep the documentation easier to read. Sorry for rehashing old questions!
I have made [an issue with the bugs](https://github.com/huggingface/trl/issues/3301) I ran into while trying to implement saving the value model. I would appreciate your help!
Thank you,
Matthew (@AMindToThink)
| 3,300 | 11,123 |
HuggingFaceDocBuilderDev
| 2025-04-15T12:17:45 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3299). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,299 | 11,124 |
HuggingFaceDocBuilderDev
| 2025-04-15T11:23:30 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3297). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,297 | 11,125 |
LeonEricsson
| 2025-04-15T12:39:45 |
Vision models are not supported; there is an open PR working on it.
| 3,296 | 11,126 |
LeonEricsson
| 2025-04-15T11:17:47 |
The easiest way is to specify it as an environment variable when launching your script:
`CUDA_VISIBLE_DEVICES=1 python ...`
If you want to set it in code, try:
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import trl
```
| 3,295 | 11,127 |
AMindToThink
| 2025-04-15T19:02:13 |
Note that you have to set the visible devices before any other imports, or else the libraries might remember your other GPUs!
| 3,295 | 11,128 |
shirinyamani
| 2025-04-24T19:46:12 |
@Aristomd
A general note: first, based on your resources, see if you can use vLLM for generation. For example, if you have 4 GPUs in total, then depending on your model/data you can use 2 for generation (vLLM) and the remaining 2 for training. Once you've decided on the resources, the easiest way to dictate which GPUs should be used for training vs. generation is the following:
```sh
CUDA_VISIBLE_DEVICES=0,1 trl vllm-serve --model Qwen/Qwen2.5-7B
```
Then, run the training script, passing `use_vllm=True` in the training arguments (this example allocates 2 GPUs for training):
```sh
CUDA_VISIBLE_DEVICES=2,3 accelerate launch train.py
```
| 3,295 | 11,129 |
LeonEricsson
| 2025-04-15T06:21:24 |
At each step $\pi_{\theta_{old}}$ and $\pi_{\theta}$ are the same up until line 10 in Algorithm 1 when the GRPO iterations start updating $\pi_{\theta}$ away from $\pi_{\theta_{old}}$. If you are using `num_iterations=1` (not performing multiple policy updates per batch) in the config then you are correct that the ratio is always 1.
| 3,294 | 11,130 |
YooSungHyun
| 2025-04-15T08:47:34 |
> At each step $\pi_{\theta_{old}}$ and $\pi_{\theta}$ are the same up until line 10 in Algorithm 1 when the GRPO iterations start updating $\pi_{\theta}$ away from $\pi_{\theta_{old}}$. If you are using `num_iterations=1` (not performing multiple policy updates per batch) in the config then you are correct that the ratio is always 1.
@LeonEricsson thx for reply!
Given that old_model and model are effectively the same in the current implementation, wouldn’t the probability ratio remain 1 regardless of `num_iterations`?
I would like to point out that in the trl code, there doesn't appear to be a meaningful distinction between old_model and model.
Or is there something I might have misunderstood in the implementation?
| 3,294 | 11,131 |
LeonEricsson
| 2025-04-15T11:08:23 |
Yes, you are misunderstanding the implementation. Let me attempt to clarify this further with an example:
Assume `num_iterations=2`. I'll walk through the process step by step:
1. At the beginning of each step (referencing Algorithm 1), we generate samples and completions, then score these completions to compute the loss and update the policy ($\pi_{\theta}$). This is orchestrated from
https://github.com/huggingface/trl/blob/d625c5533a6b1c84d3565c8080857f6bb81c538a/trl/trainer/grpo_trainer.py#L735-L750
2. In the first step (`self.state.global_step = 0`), since `self.state.global_step % self.num_iterations == 0` evaluates to `True`, we generate and score completions for the entire batch. Here we also store the old model's token probabilities (`old_per_token_logps`):
https://github.com/huggingface/trl/blob/d625c5533a6b1c84d3565c8080857f6bb81c538a/trl/trainer/grpo_trainer.py#L840-L848
3. Next, we compute the loss and update the policy ($\pi_{\theta}$). Note that `per_token_logps = old_per_token_logps` for now:
https://github.com/huggingface/trl/blob/d625c5533a6b1c84d3565c8080857f6bb81c538a/trl/trainer/grpo_trainer.py#L1043-L1065
4. However, on the next iteration (`global_step = 1`), we **do not** generate new completions. Instead, we reuse the inputs and completions from the previous iteration to perform a second policy update on the same batch. This step aligns with the intended GRPO iterations mentioned in Algorithm 1.
5. In this second iteration, `old_per_token_logps` remains as originally computed at `global_step = 0`, but now `per_token_logps` is computed using the updated policy, causing these probabilities to diverge.
6. On the subsequent iteration (`global_step = 2`), we generate a new batch of prompts and completions, restarting the cycle.
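To make the `num_iterations=1` case concrete, here is a tiny illustrative snippet (not TRL's exact code):
```python
import torch

# Illustrative only (not TRL's implementation): with num_iterations=1 the policy
# is unchanged between generation/scoring and the loss computation, so
# per_token_logps == old_per_token_logps and the ratio exp(logp - old_logp) is 1.
per_token_logps = torch.tensor([-1.2, -0.7, -2.3])
old_per_token_logps = per_token_logps.clone()  # same policy, same completions
ratio = torch.exp(per_token_logps - old_per_token_logps)
print(ratio)  # tensor([1., 1., 1.])
```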
| 3,294 | 11,132 |
YooSungHyun
| 2025-04-15T23:31:31 |
@LeonEricsson
Damn! I completely misunderstood!!!
I totally missed this line:
`if self.state.global_step % self.num_iterations == 0 or buffered_inputs is None:`
Sorry for the trouble, and thank you so much for pointing it out!
| 3,294 | 11,133 |
genghisun
| 2025-05-14T08:27:37 |
so many bugs 😢
| 3,292 | 11,134 |
qgallouedec
| 2025-04-15T18:55:00 |
For the record, setting the `pad_token` when it's not set doesn't seem to me to be good practice. For more context, see https://github.com/huggingface/trl/pull/3200.
Nevertheless, I can't think of a better solution here. But as you point out, the user must be informed. The best thing to do is to add a line to the `processing_class` argument doc, informing that if the processing class has no `pad_token`, then it will be set to `eos`.
Let's avoid a warning, as it can be the desired behaviour.
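For clarity, this is the fallback being discussed, as a minimal sketch:
```python
from transformers import AutoTokenizer

# Minimal sketch of the fallback: only set pad_token when the processing class
# doesn't define one, reusing the eos token for padding.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # gpt2 ships without a pad_token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
print(tokenizer.pad_token)  # <|endoftext|>
```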
| 3,290 | 11,135 |
LeonEricsson
| 2025-04-16T07:09:25 |
on second thought how about matching the approach of `SFTTrainer` and defining it as a config parameter?
https://github.com/huggingface/trl/blob/1e61f6cc5a43185a5c169e351223b4dd48f9d1ca/trl/trainer/sft_config.py#L52-L54
this makes the default behaviour very explicit; I fear it would get drowned out in the `processing_class` doc.
My only gripe with the above is stuffing the config with too many parameters; how common is it for a user to want to use a custom padding token?
| 3,290 | 11,136 |
qgallouedec
| 2025-04-16T13:53:49 |
> on second thought how about matching the approach of `SFTTrainer` and defining it as a config parameter?
Yes, that's an option as well. But this one would require changing the way we pad the inputs. Currently we rely on the tokenizer for padding.
> my only gripe with the above is stuffing the config with too many parameters, how common is it for a user to want to use a custom padding token.
Hardly ever, I would guess.
I'm good with both options
| 3,290 | 11,137 |
LeonEricsson
| 2025-04-17T06:33:53 |
Chose to document the default behavior of `pad_token = eos_token` in the `processing_class` docstring. As you mentioned, the use case for a custom padding token doesn't seem common enough to justify adding a dedicated parameter.
| 3,290 | 11,138 |
HuggingFaceDocBuilderDev
| 2025-04-21T23:48:27 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3290). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,290 | 11,139 |
HuggingFaceDocBuilderDev
| 2025-04-14T10:03:38 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3289). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,289 | 11,140 |
qgallouedec
| 2025-04-15T21:54:52 |
Thanks @jarrelscy
I understand the motivation. Just for clarification, if
> Just like with gradient accumulation, we can call .backwards on the loss for each completion separately.
then why not use gradient accumulation? Is it because the generation will also be done on smaller batches, which then makes things slower?
| 3,288 | 11,141 |
jarrelscy
| 2025-04-15T22:08:31 |
Hi @qgallouedec, as @JamesBowerXanda pointed out [here](https://github.com/huggingface/trl/issues/3017), the quality of the loss depends on the group size. In this [paper](https://arxiv.org/abs/2502.18548) they point out that you need a large group size to approximate the expected reward normalised by the standard deviation of the reward of an output sampled from the previous policy.
In GRPO each generation is assigned a relative advantage against other generations, so if the group size is small, this can lead to erratic losses.
In gradient accumulation (per batch), we are still comparing the advantage of each generation against other generations within that batch.
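To make the group-size point concrete, here is a small illustrative snippet (not TRL's implementation) of the group-relative advantage:
```python
import torch

# Illustrative only: GRPO normalizes each reward against the other completions
# in the same group, so the mean/std estimates (and hence the advantages) are
# much noisier for small groups.
def group_relative_advantage(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    return (rewards - rewards.mean()) / (rewards.std() + eps)

small_group = torch.tensor([0.9, 1.1])  # 2 completions: advantages are just +/- ~0.7
large_group = torch.tensor([0.2, 0.9, 1.1, 0.4, 0.8, 1.0, 0.3, 0.7])  # 8 completions
print(group_relative_advantage(small_group))
print(group_relative_advantage(large_group))
```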
| 3,288 | 11,142 |
qgallouedec
| 2025-04-21T23:55:05 |
FYI, now you can pass a group as large as `gradient_accumulation * per_device_batch_size * num_devices` thanks to #3283
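(For example, with 8 GPUs, `per_device_batch_size=4`, and gradient accumulation of 2, that allows a group of up to 2 × 4 × 8 = 64 generations.)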
| 3,288 | 11,143 |
qgallouedec
| 2025-04-28T21:19:29 |
Closing this as I believe the motivation behind this PR has been addressed by #3283
| 3,288 | 11,144 |
jarrelscy
| 2025-04-28T21:45:01 |
@qgallouedec this PR is not exactly the same as 3283; it's akin to the comment [here](https://github.com/huggingface/trl/pull/3283#discussion_r2044304049) on 3283, which states that this functionality is not implemented in 3283.
| 3,288 | 11,145 |
qgallouedec
| 2025-04-28T21:47:44 |
Ok, sorry for the misjudgement, so I'm reopening the PR.
| 3,288 | 11,146 |
jiangix-paper
| 2025-07-01T09:14:52 |
Hello, any update on this? Have you merged it into the master branch?
| 3,288 | 11,147 |
jarrelscy
| 2025-07-01T13:43:32 |
@jiangix-paper there are some new changes in the trl main branch which I think are not compatible - the entropy masking implementation. I've just done a merge; feel free to try cloning and testing it.
| 3,288 | 11,148 |
qgallouedec
| 2025-04-13T19:34:24 |
Thanks for reporting!
You could do something like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import GRPOConfig, GRPOTrainer
training_args = GRPOConfig(output_dir="some_path", logging_steps=1)
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
trainer = GRPOTrainer(
    model=model,
    processing_class=tokenizer,
    reward_funcs=reward_func,
    args=training_args,
    train_dataset=some_dataset,
)
trainer.train()
```
But ideally, your code should be supported, please keep this issue opened until it's resolved.
| 3,287 | 11,149 |
HuggingFaceDocBuilderDev
| 2025-04-15T21:59:52 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3286). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,286 | 11,150 |
qgallouedec
| 2025-04-15T22:22:06 |
For the record: I can't test for Ascend NPU, but this PR introduces no regression for cuda
| 3,286 | 11,151 |
qgallouedec
| 2025-04-15T21:22:49 |
Great job on all the work you've done, and thanks for sharing it with the TRL community!
This likely involves significant changes to TRL, but your motivations seem solid and well thought out—it feels like the right time to explore this direction.
Would it be possible to break your work into several smaller PRs? That would make the review process much smoother. For example, you could start with a PR focused on leveraging vLLM server, followed by another that integrates the tools/agents. (Of course, feel free to divide it differently if you think there's a better approach.)
| 3,284 | 11,152 |
BjarniHaukur
| 2025-04-16T13:02:18 |
Will do!
I believe I’ve found a clean abstraction that minimizes the impact on existing code. Specifically, I’m exploring repurposing `vllm_client` to become a proper client interface, rather than just a wrapper around `vllm_serve.py`. With this in place, users could extend it and implement their own `.generate()` as needed.
The only other change would be to pass full data dictionaries (rather than just prompts) into `.generate()`, and expect modified dictionaries in return.
I’ll keep iterating on this until I find something that’s both elegant and fits my specific use case. Once it’s settled, I’ll split it into smaller, reviewable PRs.
I believe this could meaningfully lower the barrier to entry in this specific domain of RL training.
Here’s a minimal example showing how my use case looks now. With this, the "normal" `GRPOTrainer` setup can directly train on SWE-GYM with no extra scaffolding:
```python
import os, multiprocessing as mp
from contextlib import redirect_stdout, redirect_stderr

from datasets import load_dataset
from aider.coders import Coder
from aider.models import Model
from aider.io import InputOutput

from trl import GRPOConfig, GRPOTrainer
from trl.extras.vllm_client import VLLMClient


class AiderClient(VLLMClient):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        os.environ["OPENAI_API_BASE"] = f"http://{self.host}:{self.server_port}/v1/completions"

    def process_one(self, data: dict[str, any]) -> tuple[str, list]:
        orig = os.getcwd()
        try:
            temp = clone_repo_at_commit(data["repo_url"], data["base_commit"])
            os.chdir(temp)
            with open(os.devnull, "w") as d, redirect_stdout(d), redirect_stderr(d):
                coder = Coder.create(main_model=Model("openai/our-model"), io=InputOutput(yes=True), suggest_shell_commands=False)
                coder.run(data["problem_statement"])
                messages = coder.format_chat_chunks().all_messages()
            diff = get_head_commit_diff(temp)
        finally:
            clean_repo_dir(temp)
            os.chdir(orig)
        return diff, messages

    def generate(self, data: list[dict[str, any]], timeout: int = 300, **kwargs) -> list[dict[str, any]]:
        with mp.Pool(min(len(data), mp.cpu_count())) as p:
            results = p.map_async(self.process_one, data).get(timeout=timeout)
        for i, (d, m) in zip(data, results):
            i["generated_diff"] = d
            i["messages"] = m
        return data


trainer = GRPOTrainer(
    args=GRPOConfig(use_vllm=True),
    client=AiderClient(host="0.0.0.0", server_port=8000),
    train_dataset=load_dataset("SWE-Gym/SWE-Gym", split="train")
)
trainer.train()
```
| 3,284 | 11,153 |
qgallouedec
| 2025-04-18T17:25:08 |
> I believe I’ve found a clean abstraction that minimizes the impact on existing code. Specifically, I’m exploring repurposing vllm_client to become a proper client interface, rather than just a wrapper around vllm_serve.py. With this in place, users could extend it and implement their own .generate() as needed.
Ok, modifying the client-server seems acceptable to me, especially if it can allow easier customization.
I look forward to hearing about your progress, keep us posted!
| 3,284 | 11,154 |
kwanUm
| 2025-05-14T15:02:42 |
Hey @BjarniHaukur – I’m also looking at migrating GRPO roll-outs to an online vLLM setup for better performance and agent-style usability.
Also, really like the idea of letting the server keep the full conversation so the client doesn’t have to resend context; we’re doing that today in a separate RolloutManager on the client side, but server-side seems cleaner.
Do you already have a branch/PR that swaps the current LLM() offline path for AsyncLLMEngine? Have you been able to test throughput or latency compared to the batched .generate() route?
Would love to follow the work and possibly help test / contribute.
| 3,284 | 11,155 |
BjarniHaukur
| 2025-05-14T15:48:09 |
Hey @kwanUm, still working on it. I've been super close to finishing this for a while now. The main problem resides in the online weight-syncing behavior for `AsyncLLM`. I've tried a bunch of things, but the system ends up in a deadlock somewhere internally in vLLM. That isn't too surprising: vLLM only just recently merged a PR adding `collective_rpc_async`, which is the method we would need to use to initialize the updates. I'm also checking out SGLang as an async rollout client. That seems a bit more mature, but it's still a WIP on my end.
I've put quite a lot of thought into how it would be best to integrate custom clients, and I'm relatively convinced of my approach. It decouples all the generation logic from the GRPOTrainer and offloads it to `client.generate()`. It receives the `inputs` dictionary containing everything from your HF dataset and returns `prompts`/`completions` plus anything the user wants to pass to the reward functions.
(overly simplified example)
```python
from abc import ABC, abstractmethod
from typing import TypedDict


class GenerationResult(TypedDict, total=False):
    """GRPO payload with required prompt/completion; extras allowed."""

    # Shared inputs across N rollouts in GRPO (across many GenerationResults)
    prompt: list[dict[str, str]]  # {role: str, content: str}
    # This comes after that, N different rollouts of the same prompt
    completion: list[dict[str, str]]  # {role: str, content: str}
    # Extra keys and values are forwarded to the user-specified reward functions


class VLLMClient(ABC):
    @abstractmethod
    def generate(self, data: list[dict], **kwargs) -> list[GenerationResult]:
        pass


# Inside GRPOTrainer
...
output = client.generate(inputs)
...
rewards = reward_func(**output)
```
You can check out my working branch (though, cautionary warning, it's not stable / ready at all); it might help you.
https://github.com/ASSERT-KTH/trl/tree/dev/trl
I'll post here again when I have something more concrete. Would love some help in integrating this type of behavior into TRL though! There's some semblance of it in `verl`, `openpipe/art`, and `verifiers`, but nothing that quite checks all the boxes. Most existing approaches miss one of the following: full OpenAI compatibility (true async multi-step, support tool-calling etc.), or just general ease of use.
| 3,284 | 11,156 |
BjarniHaukur
| 2025-05-19T15:58:48 |
Hey @qgallouedec
Finally got it working and found an abstraction that I believe could fit in TRL (#3469).
The new `vllm_serve_async.py` script behaves exactly like `vllm_serve.py`. It supports tensor and data parallelism but runs into the same error you mentioned here: [vllm-project/vllm#17079](https://github.com/vllm-project/vllm/issues/17079).
Instead of extending `VLLMClient`, I opted for adding a `rollout_func` to the `GRPOTrainer`. This gives users full control over how rollouts are generated and enables users to forward reward signals that are not present in the completions (e.g. a program's runtime, test coverage, or other environment-based feedback).
When `vllm_mode` is set to `"async_server"`, the server that is launched exposes a fully featured `/v1/` OpenAI-compatible endpoint with tool calling support. All endpoint complexity is offloaded to vLLM’s upstream implementation; the script just mirrors their design and adds our weight syncing logic.
```bash
trl vllm-serve-async \
--model Qwen/Qwen3-8B \
--max_model_len 8192 \
--enable-auto-tool-choice \
--reasoning_parser deepseek_r1 \
--tool-call-parser hermes
```
This allows any LLM-powered application with measurable reward metrics to be trained with `GRPOTrainer` by minimally wrapping the app in a `rollout_func` which interacts with this server.
My [CodeRepairRL](https://github.com/ASSERT-KTH/CodeRepairRL) project provides an example of a [rollout_func](https://github.com/ASSERT-KTH/CodeRepairRL/blob/master/src/agents/nano_agent.py) using a terminal-based coding agent ([Nano-Agent](https://github.com/ASSERT-KTH/nano-agent)).
@kwanUm, this might be of interest to you too!
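For readers following along, a purely hypothetical sketch of the shape such a `rollout_func` could take (the actual signature is defined in #3469; the endpoint URL, field names, and model below are assumptions):
```python
import requests

# Hypothetical sketch only: the real rollout_func contract lives in PR #3469.
# Assumed here: the function receives dataset rows with a "prompt" field and
# returns prompt/completion pairs plus any extra reward signals.
def rollout_func(rows: list[dict]) -> list[dict]:
    outputs = []
    for row in rows:
        # Query the OpenAI-compatible endpoint exposed by `trl vllm-serve-async`
        resp = requests.post(
            "http://localhost:8000/v1/chat/completions",  # assumed host/port
            json={
                "model": "Qwen/Qwen3-8B",
                "messages": [{"role": "user", "content": row["prompt"]}],
            },
        ).json()
        completion = resp["choices"][0]["message"]["content"]
        # Extra keys (e.g. runtime, test coverage) can be forwarded to reward functions
        outputs.append({"prompt": row["prompt"], "completion": completion})
    return outputs
```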
| 3,284 | 11,157 |
HuggingFaceDocBuilderDev
| 2025-04-12T00:07:58 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3283). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,283 | 11,158 |
qgallouedec
| 2025-04-12T16:13:26 |
## Regression
The following experiments aim to ensure that the results match when the same configuration is used. For all graphs,
- red is for the current main (d625c55)
- green, this branch (a5f0f03)
*Outline:*
- `gradient_accumulation_steps=1` and `num_iterations=1`
- `gradient_accumulation_steps=1` and `num_iterations=2`
- `gradient_accumulation_steps=1` and `num_iterations=4`
- `gradient_accumulation_steps=4` and `num_iterations=1`
- `gradient_accumulation_steps=4` and `num_iterations=2`
- `gradient_accumulation_steps=4` and `num_iterations=4`
- `gradient_accumulation_steps=16` and `num_iterations=1`
- `gradient_accumulation_steps=16` and `num_iterations=2`
- `gradient_accumulation_steps=16` and `num_iterations=4`
### `gradient_accumulation_steps=1` and `num_iterations=1`
<img width="1087" alt="1-1" src="https://github.com/user-attachments/assets/9da7e705-d085-413b-b82a-6f9225c5d360" />
### `gradient_accumulation_steps=1` and `num_iterations=2`
<img width="1087" alt="2-1" src="https://github.com/user-attachments/assets/4a26ad1d-0959-4487-91c4-ce7b23948c0b" />
### `gradient_accumulation_steps=1` and `num_iterations=4`
<img width="1087" alt="4-1" src="https://github.com/user-attachments/assets/be8477d5-e329-436e-b2eb-4dbe709067dc" />
### `gradient_accumulation_steps=4` and `num_iterations=1`
<img width="1087" alt="1-4" src="https://github.com/user-attachments/assets/3202b2fa-5e17-4001-af62-989b15d1eddb" />
### `gradient_accumulation_steps=4` and `num_iterations=2`
<img width="1087" alt="2-4" src="https://github.com/user-attachments/assets/59ebb562-d8b5-4576-8924-7b9533dabf56" />
### `gradient_accumulation_steps=4` and `num_iterations=4`
<img width="1087" alt="4-4" src="https://github.com/user-attachments/assets/4ab2d314-838d-4fbe-8264-15df1da78259" />
### `gradient_accumulation_steps=16` and `num_iterations=1`
<img width="1087" alt="1-16" src="https://github.com/user-attachments/assets/8606ce17-7539-4009-847c-bf2ab312fc12" />
### `gradient_accumulation_steps=16` and `num_iterations=2`
<img width="1087" alt="2-16" src="https://github.com/user-attachments/assets/019eb4d2-0954-40bc-bbf4-3f21b66b8e61" />
### `gradient_accumulation_steps=16` and `num_iterations=4`
<img width="1087" alt="4-16" src="https://github.com/user-attachments/assets/8e65dd40-0b67-417d-b22c-8318162ebc29" />
| 3,283 | 11,159 |
zhiqihuang
| 2025-05-07T18:14:55 |
From the TRL logging, I noticed that sometimes the generations are not correctly paired with prompts. It could be a logging issue in multi-GPU training, or the order could have changed during the one call per batch to the vLLM server. I will try to find more traces and post here.
| 3,283 | 11,160 |
HuggingFaceDocBuilderDev
| 2025-04-15T19:58:50 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3282). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,282 | 11,161 |
HuggingFaceDocBuilderDev
| 2025-04-11T19:47:30 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3281). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,281 | 11,162 |
wybryan
| 2025-04-12T08:33:07 |
Hi @qgallouedec, is it possible for you to review this PR please?
| 3,280 | 11,163 |
qgallouedec
| 2025-05-19T23:57:47 |
Hi, so sorry for the late review. We are trying to mimic the vLLM server as much as possible. Do you know if it supports it? If so, how?
| 3,280 | 11,164 |
wybryan
| 2025-05-24T07:55:35 |
> Hi, so sorry for the late review. We are trying to mimic the vLLM server as much as possible. Do you know if it supports it? If so, how?

The vLLM engine natively supports input token ids instead of input strings. My PR just exposes this feature through the wrapper in TRL.
| 3,280 | 11,165 |
wybryan
| 2025-05-24T08:02:55 |
The rationale is that sometimes we want the training code to take care of tokenization, i.e. we may manipulate token ids directly, and we want the vLLM rollout generation to take the same manipulated token ids directly, as opposed to the original input string with standard tokenization inside vLLM, which would cause inconsistency between training and rollout generation.
That's what this PR is about: feeding raw token ids directly to vLLM (the vLLM engine supports this already, but it is not accessible without this PR).
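As a standalone illustration of the engine feature being exposed (vLLM used directly, not through TRL's client; the exact API varies slightly across vLLM versions):
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt

model_name = "Qwen/Qwen2.5-1.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name)

# Tokenize (and possibly manipulate) the ids on the training side ...
ids = tokenizer("Write a haiku about GPUs.", add_special_tokens=False).input_ids

# ... then hand the raw token ids to vLLM so no re-tokenization happens in the engine.
outputs = llm.generate(TokensPrompt(prompt_token_ids=ids), SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```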
| 3,280 | 11,166 |
benjamin-marie
| 2025-04-14T20:11:46 |
Do you mean training on completions only?
There is this:
https://huggingface.co/docs/trl/en/sft_trainer#train-on-completions-only
| 3,279 | 11,167 |
ParadoxZW
| 2025-04-15T02:50:40 |
> Do you mean training on completions only? There is this: https://huggingface.co/docs/trl/en/sft_trainer#train-on-completions-only
Thanks! It seems `DataCollatorForCompletionOnlyLM` is a convenient interface to implement what I want. I didn't know about this interface before. But do you think my implementation can also work properly?
I also notice
> Note that this works only in the case when packing=False.
while my implementation can set `packing=True`.
Do you think I can PR my implementation to the TRL repo?
| 3,279 | 11,168 |
qgallouedec
| 2025-04-15T18:32:56 |
@ParadoxZW I've been wanting to make the completions-only training simpler for a while, but haven't had the time to get around to it yet. I like your approach and the use of `return_assistant_tokens_mask`. Ideally, we'd use it in the `trl/data_utils.py` functions. If you feel like opening a PR it would be very welcome!
EDIT: make the completions-only training simpler = find a better option than `DataCollatorForCompletionOnlyLM`
| 3,279 | 11,169 |
ParadoxZW
| 2025-04-22T08:39:44 |
@qgallouedec Sure, I'd love to do that.
I've spent days modifying my training script and examining whether the implementation works as expected. The following is my main modification:
```python
# <omit>

##################
# Data Processing
##################
chat_template = """\
{%- for message in messages %}
{%- if (message.role == "user") %}
{{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role + '\\n'}}
{%- generation %}
{{- message.content + '<|im_end|>'}}
{%- endgeneration %}
{{- '\\n' }}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\\n' }}
{%- endif %}
"""


def formatting_func(
    example,
    tokenizer,
):
    output = tokenizer.apply_chat_template(
        example["messages"], tokenize=True, return_dict=True, return_tensors="pt",
        chat_template=chat_template, add_generation_prompt=False, truncation=True,
        return_assistant_tokens_mask=True,
    )
    labels = output.input_ids.clone()
    labels[output.input_ids == tokenizer.pad_token_id] = -100
    labels[output.assistant_masks == 0] = -100  # assign -100 to non-assistant part
    data = {
        "input_ids": output.input_ids[0].tolist(),
        "attention_mask": output.attention_mask[0].tolist(),
        "labels": labels[0].tolist(),
    }
    return data


raw_dataset = load_dataset("./ultrachat_200k")
train_dataset = raw_dataset["train_sft"]
column_names = list(train_dataset.features)
processed_train_dataset = train_dataset.map(
    formatting_func, fn_kwargs={"tokenizer": tokenizer},
    remove_columns=column_names, num_proc=10,
    desc="Applying chat template to train_sft",
)


# do packing by ourselves.
def pack_examples_without_fragmentation(
    examples: dict[str, list[list]], seq_length: int
) -> dict[str, list[list]]:
    """
    Pack examples into chunks of size less than `seq_length`. The difference with `pack_examples` is
    that this function does not continue to concatenate a new sample if the packed length will be
    greater than `seq_length`, but instead starts a new pack.
    """
    new_examples = {}
    for k in examples.keys():
        first_data = examples[k][0]
        new_examples[k] = [first_data[:seq_length]]
        # start at 1: the first example already seeds the first pack above
        for i in range(1, len(examples[k])):
            data = examples[k][i]
            last_pack = new_examples[k][-1]
            if len(last_pack) + len(data) <= seq_length:
                new_examples[k][-1] = last_pack + data
            else:
                new_examples[k].append(data)
    return new_examples


processed_train_dataset = processed_train_dataset.map(
    pack_examples_without_fragmentation,
    batched=True, fn_kwargs={"seq_length": max_seq_length},
    new_fingerprint="packing_2",
)


###########
# Training
###########
def length_variable_data_collator(batch):
    max_sample_length = max([len(x["input_ids"]) for x in batch])
    input_ids = []
    attention_mask = []
    labels = []
    for i in range(len(batch)):
        input_ids.append(batch[i]["input_ids"] + [tokenizer.pad_token_id] * (max_sample_length - len(batch[i]["input_ids"])))
        attention_mask.append(batch[i]["attention_mask"] + [0] * (max_sample_length - len(batch[i]["attention_mask"])))
        labels.append(batch[i]["labels"] + [-100] * (max_sample_length - len(batch[i]["labels"])))
    return {
        "input_ids": torch.tensor(input_ids),
        "attention_mask": torch.tensor(attention_mask),
        "labels": torch.tensor(labels),
    }


trainer = SFTTrainer(
    model=model,
    args=train_conf,  # train_conf["packing"]=False, because we do packing ourselves.
    data_collator=length_variable_data_collator,
    train_dataset=processed_train_dataset,
)

# <omit>
```
To conclude, I've done the following things:
1. used `return_assistant_tokens_mask` in the function `formatting_func` to realize completions-only training
2. done packing on my own. `pack_examples_without_fragmentation` concatenates samples without breaking any of them apart (the original pack function in `trl` may break a sample apart into two packed sequences)
3. since packed samples have variable length, implemented a new collator, `length_variable_data_collator`
Do you think I can move the above modifications into the main branch of `trl`? Or do you have any suggestions, like which parts are unnecessary or need to be improved, and how to merge my code into `trl` in an elegant way?
I am looking forward to your reply and want to commit a PR as soon as possible.
| 3,279 | 11,170 |
pasztorb
| 2025-04-11T10:47:20 |
My bad. I realised that it is handled before `truncate_right` is called.
| 3,278 | 11,171 |
HuggingFaceDocBuilderDev
| 2025-04-10T23:13:01 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3277). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,277 | 11,172 |
qgallouedec
| 2025-04-10T15:18:06 |
Thanks @I-l-l-I!
What version of vllm do you use? I get this error when trying to initialise the client:
```
Traceback (most recent call last):
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/routing.py", line 74, in app
await response(scope, receive, send)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/responses.py", line 160, in __call__
await self.background()
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/background.py", line 41, in __call__
await task()
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/background.py", line 28, in __call__
await run_in_threadpool(self.func, *self.args, **self.kwargs)
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 967, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/vllm/entrypoints/llm.py", line 496, in collective_rpc
return self.llm_engine.collective_rpc(method, timeout, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py", line 291, in collective_rpc
return self.engine_core.collective_rpc(method, timeout, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 555, in collective_rpc
return self.call_utility("collective_rpc", method, timeout, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 508, in call_utility
return future.result()
^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
Exception: Call to collective_rpc method failed: 'Worker' object has no attribute 'pynccl_comm'
```
| 3,276 | 11,173 |
I-l-l-I
| 2025-04-10T15:54:52 |
@qgallouedec I use vllm 0.8.3. The error seems to occur because `pynccl_comm` is not initialized, but I've added a check to each function in `WeightSyncWorkerExtension`, so this shouldn't happen. Everything works fine when I use it to train.
| 3,276 | 11,174 |
qgallouedec
| 2025-04-10T17:01:26 |
It's now working on my side 👍 no idea what I did wrong in the first place
| 3,276 | 11,175 |
qgallouedec
| 2025-04-11T00:08:02 |
Massive speed-up!

| 3,276 | 11,176 |
HuggingFaceDocBuilderDev
| 2025-04-11T00:17:53 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3276). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,276 | 11,177 |
I-l-l-I
| 2025-04-11T00:25:54 |
You're welcome @qgallouedec. By the way, from your test results it is obviously faster to generate a large number of answers at once, but currently `GRPOTrainer` has to generate `gradient_accumulation_steps` times before one optimizer step. Why not merge these generation tasks and generate everything up front? I think this could greatly improve speed when we need gradient accumulation.
| 3,276 | 11,178 |
qgallouedec
| 2025-04-11T00:41:21 |
It makes sense, but two remarks on why it's not currently done:
- it's not necessarily faster; see the plot. You can see that using a mini-batch size of 64 is mostly equivalent to 256
- the implementation is rather tricky because we rely on transformers trainer, and the sampling logic doesn't natively allow this. You'd have to hack the sampler/batch size or the dataloader. I'm not sure how to do that at this point, but it would undoubtedly introduce additional complexity into the code.
| 3,276 | 11,179 |
HuggingFaceDocBuilderDev
| 2025-04-10T02:30:26 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3275). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,275 | 11,180 |
HuggingFaceDocBuilderDev
| 2025-04-10T14:08:06 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3274). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,274 | 11,181 |
HuggingFaceDocBuilderDev
| 2025-04-09T16:56:15 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3273). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,273 | 11,182 |
HuggingFaceDocBuilderDev
| 2025-05-06T04:33:34 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,272 | 11,183 |
qgallouedec
| 2025-04-09T17:59:58 |
You can install the package with all its dependencies with
```
pip install trl
```
If you encounter an error, please provide a MRE, the full traceback and the full system info 🙏
| 3,271 | 11,184 |
lebronjamesking
| 2025-04-10T01:33:09 |
> You can install the package with all its dependencies with
>
> ```
> pip install trl
> ```
>
> If you encounter an error, please provide a MRE, the full traceback and the full system info 🙏
Many thanks for the reply. Basically, I cannot access the Hugging Face server during the experiment. I was wondering if fully operating on a local Python interpreter is allowed.
As an alternative, I found that transformers has a built-in Python interpreter by default. What's the difference between that and https://huggingface.co/spaces/lvwerra/python-interpreter/, please?
| 3,271 | 11,185 |
qgallouedec
| 2025-04-09T18:09:02 |
Hi, thanks for the feedback.
That sounds more like a question about transformers, doesn't it?
`Trainer` still exposes a [`training_step`](https://github.com/huggingface/transformers/blob/f834ca2c19215f1e4fb0959cc3faafeaf56cd4f7/src/transformers/trainer.py#L3698) method, and looking at the [codebase from a year ago](https://github.com/huggingface/transformers/blob/9322576e2f49d1014fb0c00a7a7c8c34b6a5fd35/src/transformers/trainer.py), `Trainer` didn't expose either a `step` method or a `generate` method back then either. What are you referring to exactly? Or maybe an even older version?
| 3,270 | 11,186 |
eryawww
| 2025-04-10T03:56:41 |
Hello, thank you for the answer.
I believe I have a similar question. In the earlier version (trl==0.11), there was a [PPOTrainer.step(query, response, score)](https://github.com/huggingface/trl/blob/v0.11-release/trl/trainer/ppo_trainer.py#L655) method that was really handy for online/iterative RL scenarios. From what I see in the current implementation, it looks like everything is now wrapped into [PPOTrainer.train](https://github.com/huggingface/trl/blob/main/trl/trainer/ppo_trainer.py).
I’m wondering, what is the recommended way to implement an iterative scenario with the new version?
| 3,270 | 11,187 |
jskaf34
| 2025-04-10T07:14:22 |
Hey thank you for your answer.
In `trl==0.11`, `PPOTrainer` had a `generate` and a `step` method that were really convenient for customising our RL loops. In fact, many examples on the internet rely on this old version of TRL. Those methods were removed afterwards; I was wondering if there was any reason for that removal, in case I have to implement them again to upgrade to TRL's new version.
Have a great day !
| 3,270 | 11,188 |
qgallouedec
| 2025-04-10T15:23:56 |
Ah ok, you're talking about `PPOTrainer`. It wasn't clear; we have more than 15 trainers in this repo.
The motivation was that we wanted our trainers to all inherit from `transformers.Trainer`; this allows us to benefit from all its great features and reduces the maintenance effort considerably. You have two options: either pin trl 0.11, or implement these methods in the current `PPOTrainer`, and possibly open a PR.
| 3,270 | 11,189 |
jskaf34
| 2025-04-11T08:37:31 |
Okay, cristal clear, I'll let you know ! Thx for your answers 😊
| 3,270 | 11,190 |
kevinyee628
| 2025-04-10T11:39:58 |
same error.
| 3,269 | 11,191 |
githigher
| 2025-05-10T15:50:09 |
same error
| 3,269 | 11,192 |
kiritoxkiriko
| 2025-06-25T06:59:38 |
same error, I'm using interlm2-1.8b-reward for the reward
| 3,269 | 11,193 |
qgallouedec
| 2025-04-09T23:13:43 |
Try with `SFTConfig(..., dataset_kwargs={"skip_prepare_dataset": True})`
| 3,268 | 11,194 |
Lexie-gjr
| 2025-04-10T02:11:31 |
> Try with `SFTConfig(..., dataset_kwargs={"skip_prepare_dataset": True})`
Thank you so much! It works!!
However, I met another problem:
` raise ValueError(f"Could not make a flat list of images from {images}")
ValueError: Could not make a flat list of images from `
I used the "HuggingFaceH4/llava-instruct-mix-vsft" dataset, and the examples are passed correctly to collate_fn, I couldn't find anything related to this anywhere, could you help me with this
`Examples sample:
[0] keys: dict_keys(['messages', 'images'])
[1] keys: dict_keys(['messages', 'images'])`
| 3,268 | 11,195 |
HuggingFaceDocBuilderDev
| 2025-04-09T00:30:45 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3266). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,266 | 11,196 |
HuggingFaceDocBuilderDev
| 2025-04-08T22:41:42 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3265). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,265 | 11,197 |
HuggingFaceDocBuilderDev
| 2025-04-08T21:53:17 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3264). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,264 | 11,198 |
HuggingFaceDocBuilderDev
| 2025-04-08T12:29:35 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3262). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,262 | 11,199 |
kashif
| 2025-04-09T11:42:01 |
Running on a 2-GPU node via:
```
torchrun --nproc_per_node 2 trl/scripts/sft.py --model_name_or_path Qwen/Qwen2-0.5B --dataset_name trl-lib/Capybara --learning_rate 2.0e-5 --num_train_epochs 1 --per_device_train_batch_size 1 --gradient_accumulation_steps 8 --gradient_checkpointing --logging_steps 25 --eval_strategy steps --eval_steps 100 --output_dir Qwen2-0.5B-SFT --sequence_parallel_degree 2 --heads_k_stride 2 --attn-implementation flash_attention_2 --bf16 --torch_dtype bfloat16
```
| 3,262 | 11,200 |
qgallouedec
| 2025-06-11T13:14:51 |
cc @sunmarc
| 3,262 | 11,201 |
qgallouedec
| 2025-06-12T17:25:32 |
Use https://github.com/huggingface/accelerate/tree/cp-dataloader
```python
import tempfile

import torch
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from transformers.trainer_pt_utils import AcceleratorConfig

ring_attn = True
AcceleratorConfig.cp = ring_attn

dataset = load_dataset("roneneldan/TinyStories", split="train[:10000]")

with tempfile.TemporaryDirectory() as tmp_dir:
    training_args = SFTConfig(
        output_dir=tmp_dir,
        model_init_kwargs={"attn_implementation": "flash_attention_2", "torch_dtype": torch.bfloat16},
        sequence_parallel_size=8 if ring_attn else 1,
        per_device_train_batch_size=64 if ring_attn else 8,
        accelerator_config={"split_batches": ring_attn},
        run_name="ring_attn" if ring_attn else "no_ring_attn",
    )
    trainer = SFTTrainer(model="Qwen/Qwen3-0.6B-Base", args=training_args, train_dataset=dataset)
    trainer.train()
```





| 3,262 | 11,202 |
qgallouedec
| 2025-04-09T23:15:13 |
Thanks for suggesting, I'm closing this issue in favour of https://github.com/huggingface/trl/issues/3258 (duplicate). We're currently working on it
| 3,261 | 11,203 |
kashif
| 2025-04-11T09:44:38 |
testing using:
```py
import torch
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
import torch.distributed as dist
from torch.profiler import profile, record_function, ProfilerActivity
from transformers import TrainerCallback
import os

# from torch.distributed.fsdp import FSDPConfig, AutoWrapPolicy

# dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train")
dataset = load_dataset("trl-lib/ultrafeedback-gpt-3.5-turbo-helpfulness", split="train")
# only keep the prompt column
dataset = dataset.map(lambda x: {"prompt": x["prompt"]}, remove_columns=dataset.column_names)

training_args = GRPOConfig(
    output_dir="./scratch_dir",
    learning_rate=0.001,  # increase the learning rate to speed up the test
    per_device_train_batch_size=3,  # reduce the batch size to reduce memory usage
    num_generations=3,  # reduce the number of generations to reduce memory usage
    report_to=["tensorboard"],
    max_completion_length=256,  # reduce the completion length to reduce memory usage
    logging_steps=1,
    save_strategy="no",
    max_steps=50,
    use_liger_loss=True,
)

trainer = GRPOTrainer(
    model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
    reward_funcs="trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5",
    args=training_args,
    train_dataset=dataset,
)


class ProfCallback(TrainerCallback):
    def __init__(self, prof):
        self.prof = prof

    def on_step_end(self, args, state, control, **kwargs):
        self.prof.step()


# Create directory for profiling outputs
os.makedirs("profiling_results", exist_ok=True)


# Define profiling context manager
def train_with_profiling(enable_profiling=True):
    if enable_profiling:
        with profile(
            activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
            record_shapes=True,
            profile_memory=True,
            with_stack=True,
            with_flops=True,
            on_trace_ready=torch.profiler.tensorboard_trace_handler("profiling_results") if trainer.accelerator.is_main_process else None,
            schedule=torch.profiler.schedule(
                wait=1,
                warmup=1,
                active=2,
                repeat=1),
        ) as prof:
            trainer.add_callback(ProfCallback(prof))
            trainer.train()
            # Print profiling results summary
            # print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=20))
    else:
        trainer.train()


# trainer.train()
train_with_profiling(enable_profiling=False)

# destroy process group
if dist.is_initialized():
    dist.destroy_process_group()
```
| 3,260 | 11,204 |