user · string · lengths 3–28
created_at · timestamp[us] · 2020-04-01 09:48:12 – 2025-07-30 20:59:07
body · string · lengths 1–173k
issue_number · int64 · 1–3.81k
__index_level_0__ · int64 · 0–11.8k
qgallouedec
2025-04-03T14:57:46
If I rephrase, you want to be able to define reward functions that use the tokenized version of the prompts and completions instead of their textual version, right?
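Something along these lines, for instance (just a sketch; the `completion_ids` kwarg below is illustrative, not an existing argument of the current trainer):

```python
# Hypothetical reward function operating on token ids rather than text.
# Assumes the trainer would forward the tokenized completions via kwargs.
def reward_token_efficiency(completions, completion_ids=None, **kwargs):
    # Favor completions that use fewer tokens (purely illustrative).
    return [-float(len(ids)) for ids in completion_ids]
```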
3,222
11,305
MohamedAliRashad
2025-04-03T15:13:35
@qgallouedec Correct
3,222
11,306
LeonEricsson
2025-04-05T09:40:23
I read the discussion in [open-r1](https://github.com/huggingface/open-r1/pull/567), but I agree with OP that the completion tokens are valuable: they would enable more reward functions, and we've already got them tokenized anyway.
3,222
11,307
MohamedAliRashad
2025-05-04T07:52:51
@qgallouedec Any updates on this issue?
3,222
11,308
qgallouedec
2025-05-09T18:57:02
Solved by #3272
3,222
11,309
wilrop
2025-04-03T09:57:00
I've been running into the same issues over and over. This now seems to happen both with and without setting the flag `export NCCL_P2P_DISABLE=1`, so I believe this may not be the problem after all.
3,221
11,310
wilrop
2025-04-03T10:01:05
This issue seems to be impacting many people. I believe the following issues are also dealing with the same problem:
- https://github.com/huggingface/trl/issues/3157
- https://github.com/huggingface/trl/issues/3214
3,221
11,311
thesillystudent
2025-04-04T09:36:21
@wilrop were you able to fix this?
3,221
11,312
wilrop
2025-04-04T09:41:08
No, not yet. I hope this will get additional attention soon because it enormously hinders training. Anecdotally, it seems that this issue only occurs in the setting where 1 GPU is used for vLLM generation and 1 other GPU for training. My training run where 1 GPU is used for vLLM and another 4 for training has been running for longer than it took my other experiments to crash. Have others also noticed that?
3,221
11,313
thesillystudent
2025-04-04T11:23:26
For me, 1 GPU for vLLM and 3 GPUs for training ran for more than one pass of the train set and then failed.
3,221
11,314
Techie5879
2025-04-10T14:11:37
Same. Running trl serve on GPU 0 and GRPO on GPU 1, I have the same DistStoreError after ~80 steps.
3,221
11,315
cii030
2025-04-11T08:43:15
same issue :(
3,221
11,316
PrinceJayJiao
2025-04-21T08:36:12
I hope this can be fixed quickly
3,221
11,317
leonardtang
2025-05-19T22:22:52
what's going on here.
3,221
11,318
shanghongsim
2025-05-21T11:26:53
Hi everyone, I just got a similar error but managed to fix it by upgrading to trl 0.17.0.

## The setup

TRL version: 0.16.0
Accelerate version: 1.6.0
Deepspeed version: 0.16.4
vllm version: 0.7.3
Transformers version: 4.51.3

## Commands

Starting vllm server:
```bash
CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model shanghong/stage1
```

Launching training script:
```bash
#!/bin/bash
export TORCH_CPP_LOG_LEVEL=INFO NCCL_DEBUG=None
export CUDA_VISIBLE_DEVICES="2,3,4,5"
task="grpo"

ACCELERATE_LOG_LEVEL=info accelerate launch \
    --main_process_port 25678 --config_file training_configs/deepspeed_zero3_cpu.yaml \
    ${task}.py \
    training_configs/${task}_full.yaml
```

## The error

![Image](https://github.com/user-attachments/assets/c7752740-7f69-4567-a4f5-95ab73b50ac9)

## The fix

Start a fresh env. Then:
```bash
pip install trl
pip install "trl[vllm]"
```

TRL version: 0.17.0
Accelerate version: 1.7.0
Deepspeed version: 0.16.8
vllm version: 0.8.5.post1
Transformers version: 4.52.2
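To double-check that the fresh environment actually picked up the new versions, a quick printout like this works (any equivalent check is fine):

```python
# Print the installed versions of the relevant packages.
import accelerate, deepspeed, transformers, trl, vllm

for module in (trl, accelerate, deepspeed, vllm, transformers):
    print(module.__name__, module.__version__)
```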
3,221
11,319
HuggingFaceDocBuilderDev
2025-04-03T00:07:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3219). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,219
11,320
qgallouedec
2025-04-03T00:50:34
```python
from datasets import load_dataset
from trl import GRPOTrainer, GRPOConfig

dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train")

# Dummy reward function: count the number of unique characters in the completions
def reward_num_unique_chars(completions, **kwargs):
    return [len(set(c)) for c in completions]

training_args = GRPOConfig(
    log_completions=True,
    gradient_accumulation_steps=2,
    num_generations=3,
    per_device_train_batch_size=2,
    logging_steps=4,
    max_completion_length=8,
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_num_unique_chars,
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```

```
accelerate launch --num_processes 3 3219.py
```

```
wandb: Syncing run trainer_output
wandb: ⭐️ View project at https://wandb.ai/huggingface/huggingface
wandb: 🚀 View run at https://wandb.ai/huggingface/huggingface/runs/47nd8ofl
[rank0]:[W403 00:42:49.628634618 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
{'loss': 0.0, 'grad_norm': 37.730892181396484, 'learning_rate': 6.666666666666666e-07, 'num_tokens': 594.0, 'mean_completion_length': 8.0, 'min_completion_length': 8.0, 'max_completion_length': 8.0, 'clipped_completions_ratio': 1.0, 'mean_terminated_completion_length': 0.0, 'min_terminated_completion_length': 0.0, 'max_terminated_completion_length': 0.0, 'rewards/reward_num_unique_chars/mean': 16.5625, 'rewards/reward_num_unique_chars/std': 2.098927229642868, 'reward': 16.562500476837158, 'reward_std': 1.9038410782814026, 'kl': 0.000272930920800718, 'clip_ratio': 0.0, 'epoch': 0.89}
[Step 4: rich table of Prompt / Completion / reward_num_unique_chars for the 12 logged completions]
{'loss': 0.0009, 'grad_norm': 37.897972106933594, 'learning_rate': 3.333333333333333e-07, 'num_tokens': 1188.0, 'mean_completion_length': 8.0, 'min_completion_length': 8.0, 'max_completion_length': 8.0, 'clipped_completions_ratio': 1.0, 'mean_terminated_completion_length': 0.0, 'min_terminated_completion_length': 0.0, 'max_terminated_completion_length': 0.0, 'rewards/reward_num_unique_chars/mean': 16.770833492279053, 'rewards/reward_num_unique_chars/std': 2.1402209401130676, 'reward': 16.770833730697632, 'reward_std': 2.0098465979099274, 'kl': 0.021565582719631493, 'clip_ratio': 0.0, 'epoch': 1.89}
[Step 8: rich table of Prompt / Completion / reward_num_unique_chars for the 12 logged completions]
{'loss': 0.002, 'grad_norm': 41.002220153808594, 'learning_rate': 0.0, 'num_tokens': 1770.0, 'mean_completion_length': 8.0, 'min_completion_length': 8.0, 'max_completion_length': 8.0, 'clipped_completions_ratio': 1.0, 'mean_terminated_completion_length': 0.0, 'min_terminated_completion_length': 0.0, 'max_terminated_completion_length': 0.0, 'rewards/reward_num_unique_chars/mean': 16.37499976158142, 'rewards/reward_num_unique_chars/std': 2.165749952197075, 'reward': 16.375000596046448, 'reward_std': 2.0700144916772842, 'kl': 0.051057277189102024, 'clip_ratio': 0.0, 'epoch': 2.89}
[Step 12: rich table of Prompt / Completion / reward_num_unique_chars for the 12 logged completions]
{'train_runtime': 19.9479, 'train_samples_per_second': 2.557, 'train_steps_per_second': 0.602, 'train_loss': 0.0009722735073106984, 'epoch': 2.89}
[Step 12, end of training: rich table of Prompt / Completion / reward_num_unique_chars for the 12 logged completions]
wandb: 🚀 View run trainer_output at: https://wandb.ai/huggingface/huggingface/runs/47nd8ofl
wandb: Find logs at: wandb/run-20250403_004247-47nd8ofl/logs
[rank0]:[W403 00:43:08.870327835 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
```

<img width="1436" alt="Screenshot 2025-04-02 at 17 54 46" src="https://github.com/user-attachments/assets/8ff5175a-362e-4f46-b8c7-04d2060e52d1" />
<img width="1436" alt="Screenshot 2025-04-02 at 17 54 50" src="https://github.com/user-attachments/assets/6494b214-200f-4b47-9b40-b3ee2206f5dc" />
3,219
11,321
qgallouedec
2025-04-03T00:58:58
With `num_completions_to_print=2` we get:

```
wandb: Syncing run trainer_output
wandb: ⭐️ View project at https://wandb.ai/huggingface/huggingface
wandb: 🚀 View run at https://wandb.ai/huggingface/huggingface/runs/yij5hg8w
{'loss': 0.0, 'grad_norm': 37.730892181396484, 'learning_rate': 6.666666666666666e-07, 'num_tokens': 594.0, 'mean_completion_length': 8.0, 'min_completion_length': 8.0, 'max_completion_length': 8.0, 'clipped_completions_ratio': 1.0, 'mean_terminated_completion_length': 0.0, 'min_terminated_completion_length': 0.0, 'max_terminated_completion_length': 0.0, 'rewards/reward_num_unique_chars/mean': 16.5625, 'rewards/reward_num_unique_chars/std': 2.098927229642868, 'reward': 16.562500476837158, 'reward_std': 1.9038410782814026, 'kl': 0.000272930920800718, 'clip_ratio': 0.0, 'epoch': 0.89}
[Step 4: rich table of Prompt / Completion / reward_num_unique_chars for the 2 logged completions]
{'loss': 0.0009, 'grad_norm': 37.897972106933594, 'learning_rate': 3.333333333333333e-07, 'num_tokens': 1188.0, 'mean_completion_length': 8.0, 'min_completion_length': 8.0, 'max_completion_length': 8.0, 'clipped_completions_ratio': 1.0, 'mean_terminated_completion_length': 0.0, 'min_terminated_completion_length': 0.0, 'max_terminated_completion_length': 0.0, 'rewards/reward_num_unique_chars/mean': 16.770833492279053, 'rewards/reward_num_unique_chars/std': 2.1402209401130676, 'reward': 16.770833730697632, 'reward_std': 2.0098465979099274, 'kl': 0.021565582719631493, 'clip_ratio': 0.0, 'epoch': 1.89}
[Step 8: rich table of Prompt / Completion / reward_num_unique_chars for the 2 logged completions]
{'loss': 0.002, 'grad_norm': 41.002220153808594, 'learning_rate': 0.0, 'num_tokens': 1770.0, 'mean_completion_length': 8.0, 'min_completion_length': 8.0, 'max_completion_length': 8.0, 'clipped_completions_ratio': 1.0, 'mean_terminated_completion_length': 0.0, 'min_terminated_completion_length': 0.0, 'max_terminated_completion_length': 0.0, 'rewards/reward_num_unique_chars/mean': 16.37499976158142, 'rewards/reward_num_unique_chars/std': 2.165749952197075, 'reward': 16.375000596046448, 'reward_std': 2.0700144916772842, 'kl': 0.051057277189102024, 'clip_ratio': 0.0, 'epoch': 2.89}
[Step 12: rich table of Prompt / Completion / reward_num_unique_chars for the 2 logged completions]
{'train_runtime': 15.5779, 'train_samples_per_second': 3.274, 'train_steps_per_second': 0.77, 'train_loss': 0.0009722735073106984, 'epoch': 2.89}
[Step 12, end of training: rich table of Prompt / Completion / reward_num_unique_chars for the 2 logged completions]
```
3,219
11,322
qgallouedec
2025-04-03T01:01:23
And with `wandb_log_unique_prompts=True` <img width="1208" alt="Screenshot 2025-04-02 at 18 00 53" src="https://github.com/user-attachments/assets/88986956-2ba8-41b4-aec0-c60b432a3fc8" />
3,219
11,323
qgallouedec
2025-04-09T23:10:38
You should try with `gradient_checkpointing`.
3,218
11,324
HuggingFaceDocBuilderDev
2025-04-02T19:50:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3217). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,217
11,325
HuggingFaceDocBuilderDev
2025-04-02T18:36:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3216). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,216
11,326
qgallouedec
2025-04-02T18:37:32
```python
import tempfile

from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-internal-testing/zen", "standard_prompt_only", split="train").to_iterable_dataset()

def dummy_reward_func(completions, **kwargs):
    return [0.0] * len(completions)

with tempfile.TemporaryDirectory() as tmp_dir:
    training_args = GRPOConfig(
        output_dir=tmp_dir,
        learning_rate=0.1,  # increase the learning rate to speed up the test
        per_device_train_batch_size=3,  # reduce the batch size to reduce memory usage
        num_generations=3,  # reduce the number of generations to reduce memory usage
        max_completion_length=32,  # reduce the completion length to reduce memory usage
        report_to="none",
    )
    trainer = GRPOTrainer(
        model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
        reward_funcs=dummy_reward_func,
        args=training_args,
        train_dataset=dataset,
    )
```

```
Traceback (most recent call last):
  File "/fsx/qgallouedec/trl/tests/test_raise.py", line 20, in <module>
    trainer = GRPOTrainer(
              ^^^^^^^^^^^^
  File "/fsx/qgallouedec/trl/trl/trainer/grpo_trainer.py", line 418, in __init__
    raise NotImplementedError(
NotImplementedError: Iterable datasets are not yet supported in GRPOTrainer. Please use a standard dataset instead.
```
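Until iterable datasets are supported, one possible workaround (a sketch, assuming the data fits in memory) is to materialize the stream back into a standard `Dataset` before handing it to the trainer:

```python
from datasets import Dataset

# Materialize the iterable dataset (defined above) into a regular, map-style Dataset.
materialized = Dataset.from_list(list(dataset))

trainer = GRPOTrainer(
    model="trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
    reward_funcs=dummy_reward_func,
    args=training_args,
    train_dataset=materialized,
)
```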
3,216
11,327
thesillystudent
2025-04-03T17:17:54
@qgallouedec any quick fixes for this one? Any help would be really appreciated.
3,214
11,328
jglaser
2025-04-04T18:38:56
see https://github.com/huggingface/trl/pull/3186 (increasing the `keepalive_timeout` in the server to 60s) ....
3,214
11,329
qgallouedec
2025-04-02T18:26:51
Thanks for this detailed report! Let's raise an error first and then work on the support for this case
3,213
11,330
wilrop
2025-04-02T18:29:26
> Thanks for this detailed report! Let's raise an error first and then work on the support for this case

What do you mean by raising an error first? As of right now, it just silently fails due to all the shapes being correct but the underlying sampler not sampling the assumed data.
3,213
11,331
qgallouedec
2025-04-02T18:32:55
Sorry, it wasn't clear. This is what I mean: https://github.com/huggingface/trl/pull/3216

The goal is to avoid silent failures until we truly support iterable datasets.
3,213
11,332
wilrop
2025-04-02T21:36:06
Okay, sounds good! Which of the two suggested fixes (or any other approach) do you prefer? I think it will be easier, although slightly more convoluted, to work around it by adding the duplicates after sampling. Having custom samplers for an iterable dataset seems to be a hard no in Torch, see: https://github.com/pytorch/pytorch/blob/1017927c83dd95a4be6074c48e0fb38f0a1bd8f3/torch/utils/data/dataloader.py#L301

> \# Arg-check dataset related before checking samplers because we want to
> \# tell users that iterable-style datasets are incompatible with custom
> \# samplers first, so that they don't learn that this combo doesn't work
> \# after spending time fixing the custom sampler errors.
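Concretely, the second option could look roughly like this (just a sketch of the idea, not actual trainer code; the repetitions are added in the stream instead of in a sampler):

```python
# Sketch: repeat each example `num_generations` times directly in the iterable stream,
# since a custom (repeat) sampler cannot be combined with an IterableDataset.
def repeat_examples(examples, num_generations):
    for example in examples:
        for _ in range(num_generations):
            yield example

# Usage sketch: wrap the iterable dataset before building the dataloader.
# repeated_stream = repeat_examples(iterable_dataset, num_generations=8)
```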
3,213
11,333
leoyuppieqnew
2025-04-08T08:25:43
I would like to ask whether the problem I encountered is the same as the one you mentioned: https://github.com/huggingface/trl/issues/3098
3,213
11,334
HuggingFaceDocBuilderDev
2025-04-02T14:07:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3212). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,212
11,335
jiangix-paper
2025-04-08T01:52:30
Hello, can I use this to run TRL GRPO on an NPU? This is my question: https://github.com/vllm-project/vllm-ascend/issues/459
3,211
11,336
ji-huazhong
2025-04-11T09:23:10
@jiangix-paper The current draft uses torch's collective communication interface, and some problems were found during testing, such as not supporting multi-NPU training. To solve this problem, I plan to refer to the existing implementation and introduce `PyNcclCommunicator` to support the creation of weight communication groups.
3,211
11,337
shaipranesh2
2025-04-03T13:44:15
@qgallouedec This PR is ready for review :)
3,210
11,338
kanishkg
2025-05-30T04:07:27
Can this just use the `SamplingParams` object from vLLM instead? Or something more flexible, since the flags for sampling in vLLM will keep changing? PS: Was about to create a related issue and found this PR :)
3,210
11,339
chaodreaming
2025-05-08T09:24:23
+1
3,209
11,340
yimuu
2025-05-15T07:59:43
+1
3,209
11,341
qgallouedec
2025-04-02T04:57:43
```diff
- num_processes:1
+ num_processes:3
```
3,208
11,342
SydWingss
2025-04-02T05:14:40
> - num_processes:1
> + num_processes:3

Thanks for your answer. I found that the yaml file I uploaded the first time was wrong (now corrected). In fact, I set `num_processes: 3` and still two of the GPUs did not work, which is the most confusing part.
3,208
11,343
qgallouedec
2025-04-02T05:28:46
Please share your train.py; currently I can't reproduce this.
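In the meantime, a quick diagnostic like this (just a sketch, saved as e.g. check_gpus.py) can confirm that each of the 3 processes actually sees its own device:

```python
# Run with: accelerate launch --num_processes 3 check_gpus.py
import torch
from accelerate import Accelerator

accelerator = Accelerator()
print(
    f"process {accelerator.process_index}/{accelerator.num_processes}",
    f"device={accelerator.device}",
    f"visible_cuda_devices={torch.cuda.device_count()}",
)
```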
3,208
11,344
2018211801
2025-04-21T09:53:02
> > - num_processes:1
> > + num_processes:3
>
> Thanks for your answer. I found that the yaml file I uploaded the first time was wrong (now corrected). In fact, I set `num_processes: 3` and still two of the GPUs did not work, which is the most confusing part.

I'm hitting this issue too. Did you solve it?
3,208
11,345
HuggingFaceDocBuilderDev
2025-04-01T22:14:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3206). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,206
11,346
qgallouedec
2025-04-01T23:15:15
We can safely ignore the failing CI; it's related to the Hub timeout, fixed in #3174.
3,206
11,347
HuggingFaceDocBuilderDev
2025-04-01T21:51:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3205). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,205
11,348
qgallouedec
2025-04-01T23:21:59
I hope this helps:

```python
per_device_train_batch_size = 4
gradient_accumulation_steps = 2
num_devices = 3
num_generations = 6
dataset_len = 8000
num_epoch = 2

batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices  # 24
num_prompts_per_batch = batch_size / num_generations  # 4
num_steps = dataset_len / num_prompts_per_batch * num_epoch  # 4000
```
3,204
11,349
qgallouedec
2025-04-01T23:24:38
This may also help to understand why `per_device_train_batch_size * num processes` must be divisible by the number of generations: https://github.com/huggingface/trl/blob/9f3702f6be24505c58aadbbf8651b24ef10363b6/trl/trainer/grpo_trainer.py#L543-L561
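In numbers, a small illustration of that constraint (the values are just examples):

```python
# The global batch (per-device batch size x number of processes) must be divisible
# by num_generations so that every prompt's group of completions stays together.
per_device_train_batch_size = 4
num_processes = 3
num_generations = 6

global_batch_size = per_device_train_batch_size * num_processes  # 12
assert global_batch_size % num_generations == 0
print(global_batch_size // num_generations, "unique prompts per step")  # 2
```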
3,204
11,350
Tavish9
2025-04-02T02:12:27
A simpler explanation: `num_generations = 6` means that every data point is duplicated 6 times, so `dataset_len = 6 * 8000 = 48000`. Now treat it as the normal setting:

```
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * num_devices = 24  (24 data points per update)
global_steps_per_epoch = dataset_len / effective_batch_size = 2000
```
3,204
11,351
qgallouedec
2025-04-02T19:01:44
Yes, this function was removed in #2480 because it wasn't used anymore in the codebase.
3,203
11,352
alexnikulkov
2025-04-10T18:12:18
I've been getting the same error with DeepSpeed ZeRO stage 3. Switching to stage 2 removed the error. Not sure about the root cause.
3,202
11,353
alexnikulkov
2025-04-10T19:09:15
My best guess: the policy model is unwrapped for generation: https://github.com/huggingface/trl/blob/5e2e9cb4420023b73de317aabc307f6d9daf9533/trl/trainer/grpo_trainer.py#L811

But the reward model is not, so it remains sharded in ZeRO stage 3. ZeRO stage 2 doesn't shard the model, which is why it works without unwrapping: https://github.com/huggingface/trl/blob/5e2e9cb4420023b73de317aabc307f6d9daf9533/trl/trainer/grpo_trainer.py#L894
3,202
11,354
vienmai
2025-04-17T08:19:07
Modifying [this block](https://github.com/huggingface/trl/blob/5e2e9cb4420023b73de317aabc307f6d9daf9533/trl/trainer/grpo_trainer.py#L586-L587) into the below solves my issue:

```python
if isinstance(reward_func, PreTrainedModel):
    if self.is_deepspeed_enabled:
        self.reward_funcs[i] = prepare_deepspeed(reward_func, self.accelerator)
    else:
        self.reward_funcs[i] = self.accelerator.prepare_model(reward_func, evaluation_mode=True)
```
3,202
11,355
shaipranesh2
2025-04-01T09:07:59
This feature will allow models like qwen-2.5 to be run correctly
3,201
11,356
qgallouedec
2025-04-01T15:49:32
Thanks, feel free to open a PR :)
3,201
11,357
qgallouedec
2025-04-01T17:55:06
Experiments: The following code was run on both branch main and this branch, both with and without packing.

```python
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig
from accelerate import PartialState
from transformers import AutoTokenizer


def main():
    dataset = load_dataset("trl-lib/Capybara", split="train")
    model_id = "meta-llama/Llama-3.2-3B"

    def func(example):
        messages = example["messages"]
        messages = [f"{message['role']}: {message['content']}" for message in messages]
        text = "\n".join(messages)
        return {"text": text}

    with PartialState().main_process_first():
        dataset = dataset.map(func, remove_columns=dataset.column_names)

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # The bug occurs when the pad token is set to the eos token.
    # We intentionally keep this line to verify that the fix works.
    tokenizer.pad_token = tokenizer.eos_token

    trainer = SFTTrainer(
        model=model_id,
        args=SFTConfig(
            output_dir="Llama-3.2-3B-556-2-fix-pack",
            max_length=4096,
            gradient_checkpointing=True,
            per_device_train_batch_size=4,
            logging_steps=5,
            save_steps=20,
            bf16=True,
            dataset_num_proc=16,
            num_train_epochs=1,
            packing=True,
        ),
        train_dataset=dataset,
        processing_class=tokenizer,
    )
    trainer.train()


if __name__ == "__main__":
    main()
```

```
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml sandbox/3200.py
```

The learning curves match, as expected:

### No packing

<img width="1057" alt="Screenshot 2025-04-01 at 10 46 11" src="https://github.com/user-attachments/assets/7c79266b-3768-4641-b39e-171a7250609a" />

### Packing

<img width="1057" alt="Screenshot 2025-04-01 at 10 46 25" src="https://github.com/user-attachments/assets/8e4dde33-bcd3-4b24-be46-29bf6bb75776" />

The length distribution after training, which validates that the bug is fixed:

![completion_lengths](https://github.com/user-attachments/assets/8d830e8d-b30f-48b7-a76f-f0fe6427f3d2)
3,200
11,358
HuggingFaceDocBuilderDev
2025-04-01T18:25:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3200). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,200
11,359
HuggingFaceDocBuilderDev
2025-04-01T22:23:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3199). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,199
11,360
qgallouedec
2025-03-31T23:13:14
Thanks for asking! There is no particular reason, these optimisations are just not implemented for now. But we are open to contributions
3,198
11,361
qgallouedec
2025-03-31T20:59:19
That's an interesting point. So, you're saying that we should think of μ as the number of mini-batches the batch is divided into, rather than as the number of times the same batch is reused? By the way, have you observed what is described in the paper “it often leads to destructively large policy updates”? I'm curious to know if the same thing happens with GRPO.
3,197
11,362
SonuDixit
2025-04-01T07:54:47
“it often leads to destructively large policy updates”: I have not tried it with GRPO, but I think it is true for any on-policy policy-gradient algorithm. I can think of two reasons: 1) overfitting to a batch, and 2) once the policy has been updated, the previous batch of trajectories is no longer from the policy we want to update. Since the PPO (hence GRPO) objective is a lower bound on the unclipped PG objective $\log \pi_\theta(a_t|s_t) A_t$, the condition can be slightly relaxed: it won't hurt as much if $KM$ is slightly higher than $NT$ (ref: PPO paper, Appendix, Table 4, Roboschool_Locomotion).

In GRPO, all the tokens get the same advantage as the last token. It is much more aggressive in pushing the probability of good trajectories up and the probability of bad trajectories down. I think here we need to be much stricter about keeping $KM \approx NT$. From DeepSeekMath, Section 5.2.2: "it seems that the improvement is attributed to **boosting the correct response from TopK** rather than the enhancement of fundamental capabilities." If we fit $\mu$ times on the same batch, we might be overfitting to the correct response. I think we should train on $\mu$ mini-batches from $D_b$.
3,197
11,363
avishaiElmakies
2025-07-20T16:01:57
@qgallouedec I was wondering about this as well; is this being handled? I did see that in my case, using `num_iterations > 1` gives much worse results than using `num_iterations == 1`, which was kind of weird to me since that makes the clipping of PPO/GRPO irrelevant; for PPO the clipping was needed, and without it the model would be very bad. I also believe this is how verl's GRPO behaves (https://verl.readthedocs.io/en/latest/algo/grpo.html), as seen with the argument `actor_rollout_ref.actor.ppo_mini_batch_size`; they also have `ppo_epochs`, which seems to be similar to our `num_iterations`. So I was wondering if a change to how `num_iterations` behaves is being planned. Maybe we don't have to change `num_iterations` and can just add `ppo_mini_batch_size` as another option?
3,197
11,364
avishaiElmakies
2025-07-21T12:10:19
Ok, after looking into this more, it seems I missed `steps_per_generation`, which does the `mini_batch_size` thing I want. I will admit that the documentation for this argument is not very clear.
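For anyone landing here later, roughly the configuration I had in mind (a sketch, assuming `steps_per_generation` behaves as described above):

```python
from trl import GRPOConfig

# Generate one large batch of completions, then consume it over several smaller
# optimization steps (PPO-style mini-batches), without reusing any completion twice.
training_args = GRPOConfig(
    output_dir="grpo-mini-batch-sketch",  # illustrative name
    per_device_train_batch_size=4,
    num_generations=8,
    steps_per_generation=4,  # one generation round feeds 4 optimization steps
    num_iterations=1,        # each completion is used for exactly one update
)
```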
3,197
11,365
qgallouedec
2025-07-21T17:59:49
Thanks for the feedback. The doc could be clearer; if you can think of a way to make it clearer, feel free to suggest it.
3,197
11,366
avishaiElmakies
2025-07-21T18:02:25
I will try to think of what can be done and open a PR if I have an idea. Thanks!
3,197
11,367
qgallouedec
2025-03-31T16:24:08
Thanks @LoganVegnaSHOP. I wonder in what situation one would want this flag to be set `False`?
3,196
11,368
LoganVegnaSHOP
2025-03-31T16:48:39
> Thanks @LoganVegnaSHOP. I wonder in what situation one would want this flag to be set `False`?

If you are initializing brand new LoRA adapters, it would be more efficient to set the flag to `False`, since the ref model would then be slightly smaller by excluding the LoRA weights. @qgallouedec
3,196
11,369
qgallouedec
2025-03-31T14:49:52
5min! That's huge! What value do you use for TP and `max_completion_len`?
3,195
11,370
lewtun
2025-03-31T14:58:23
> vlm does not support DDP natively

True, although they do provide an example using multi-processing: https://docs.vllm.ai/en/latest/getting_started/examples/data_parallel.html
3,195
11,371
qgallouedec
2025-03-31T20:26:30
I've done a bit of benchmarking, and the results are quite interesting: it seems that running a bigger batch doesn't always give a higher throughput. In fact, the opposite is true above a certain threshold, here 32, for all TP values. Consequently, in this scenario, generating all gradient accumulation steps at once (the mega-batch of point 3) would not necessarily give better results.

EDIT: This threshold value seems to be the same for different values of `max_completion_length`.

![Image](https://github.com/user-attachments/assets/8d04ff4e-760c-4dc0-850f-4887cfeb9604)

```
trl vllm-serve --model deepseek-ai/DeepSeek-R1-Distill-Qwen-7B --tensor_parallel_size 2
```

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from trl.extras.vllm_client import VLLMClient
import time
from tqdm import tqdm

dataset = load_dataset("open-r1/OpenR1-Math-cn_k12-86k", split="train[:256]")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")


def get_prompt(example, tokenizer):
    prompt = [{"role": "user", "content": example["problem"]}]
    return {"prompt": tokenizer.apply_chat_template(prompt, tokenize=False)}


dataset = dataset.map(get_prompt, fn_kwargs={"tokenizer": tokenizer}, remove_columns=dataset.column_names)
prompts = dataset["prompt"]
client = VLLMClient()

for mini_batch_size in [4, 8, 16, 32, 64, 128, 256]:
    start = time.time()
    for idx in tqdm(range(0, len(prompts), mini_batch_size)):
        batch = prompts[idx : idx + mini_batch_size]
        completions = client.generate(batch, n=8, max_tokens=1024)
    print(f"Generated with mini_batch_size={mini_batch_size} in {time.time() - start:.2f} seconds")
```
3,195
11,372
qgallouedec
2025-03-31T20:45:04
> I believe in the 7B setting, it is possible to host the model on a single device (H100), vlm does not support DDP natively, but perhaps we could implement something. One idea is that in the 2 node setting, there are 8 accelerate processes on the node running the optimization loop. We could spawn 8 independent vllm instances on the second node and have each accelerate process send prompts to its own dedicated vllm instance, which would be specified by a unique port per process.

> True, although they do provide an example using multi-processing: https://docs.vllm.ai/en/latest/getting_started/examples/data_parallel.html

I like the idea! I don't see any major difficulty in implementing DP in our server.
3,195
11,373
edbeeching
2025-04-02T19:57:38
@qgallouedec

> 5min! That's huge! What value do you use for TP and max_completion_len?

TP 4, 26k completion length
3,195
11,374
edbeeching
2025-04-03T09:59:26
> It seems that running a bigger batch doesn't always give a higher throughput. In fact, the opposite is true above a certain threshold, here 32, for all TP values. Consequently, in this scenario, generating all gradient accumulation steps at once (the mega-batch of point 3) would not necessarily give better results.

I am not sure the benchmark is valid for `deepseek-ai/DeepSeek-R1-Distill-Qwen-7B`, as it is likely that all generations will terminate with EOS at 1024 tokens; reasoning models generate answers that are 10k+ tokens, so vllm will not benefit from continuous batching. I will run it for a higher max_tokens.
3,195
11,375
edbeeching
2025-04-03T15:20:39
@qgallouedec here is the benchmark with up to 32k tokens, I ran on `[8, 16, 32, 64]` as it takes forever. I strongly believe that generating one mega batch and amortizing the generation time over the k accumulation steps would lead to significant speedups. ![Image](https://github.com/user-attachments/assets/aa31acb0-441e-44c0-8321-8aa5781fcd22)
3,195
11,376
edbeeching
2025-04-03T15:32:56
I exposed PP but unfortunately it would require quite a refactor: `NotImplementedError: Pipeline parallelism is only supported through AsyncLLMEngine as performance will be severely degraded otherwise.`
3,195
11,377
HuggingFaceDocBuilderDev
2025-03-31T12:40:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3193). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,193
11,378
edbeeching
2025-04-02T07:03:48
I benchmarked this, and the performance is comparable but speed improvement is marginal, so it's not worth adding the complexity to the codebase. ![image](https://github.com/user-attachments/assets/4c02c7b9-5cbc-4fc0-8bbf-20136cdc9eb6)
3,193
11,379
HuggingFaceDocBuilderDev
2025-03-31T12:17:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3192). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,192
11,380
HuggingFaceDocBuilderDev
2025-03-31T12:07:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3191). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,191
11,381
HuggingFaceDocBuilderDev
2025-03-31T10:16:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3190). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,190
11,382
Serge-weihao
2025-03-31T12:36:12
Could you provide an example code for us to verify this bug?
3,189
11,383
chi2liu
2025-03-31T13:53:32
> Could you provide an example code for us to verify this bug?

This is very obvious. I started a separate trl vllm process to provide the generation service. This vLLM server was started from the original model and has been serving without interruption, while the training process relies on it to generate the completions for each group. Even after one epoch has been trained, the vLLM process still serves the original model it started with. I want to find out whether this is taken into account in our code: the independent vLLM service has never changed during the training process, which means generation has always used the original model.
3,189
11,384
Serge-weihao
2025-03-31T14:33:17
> > Could you provide an example code for us to verify this bug?
>
> This is very obvious. I started a separate trl vllm process to provide the generation service. This vLLM server was started from the original model and has been serving without interruption, while the training process relies on it to generate the completions for each group. Even after one epoch has been trained, the vLLM process still serves the original model it started with. I want to find out whether this is taken into account in our code: the independent vLLM service has never changed during the training process, which means generation has always used the original model.

It seems the server weights are updated in this line: https://github.com/huggingface/trl/blob/5b586da3cc7f8212fa63ec9e7721435fe414a732/trl/trainer/grpo_trainer.py#L646

Have you seen the acc reward grow during training? If it grows, the weights of the vLLM model must have been updated.
3,189
11,385
lfranceschetti
2025-04-09T20:27:26
I'm experiencing the same issue, with evidence suggesting parameter synchronization problems:

- During training, my train/loss increases exponentially, but train/reward doesn't increase, indicating the policy updates aren't affecting generation.

For me it was related to #2856.
3,189
11,386
HuggingFaceDocBuilderDev
2025-03-31T09:12:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3188). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,188
11,387
qgallouedec
2025-03-31T15:08:06
Thanks for this question. It is intended, see the documentation: https://huggingface.co/docs/trl/main/en/grpo_trainer#computing_the_loss
3,187
11,388
qgallouedec
2025-04-08T00:56:07
Thanks @jglaser. I'm not sure I understand why FSDP requires one vLLM instance per node?
3,186
11,389
jglaser
2025-04-08T00:59:24
> Thanks @jglaser. I'm not sure I understand why FSDP requires one vLLM instance per node?

It does not... The FSDP changes and the vLLM scaling in this PR are not strictly related; however, they arose in the same stream of work, as I was trying to train a 14B model, which also required sharding (in addition to data parallelism). If the FSDP feature complicates review unnecessarily, this can be factored out into a separate PR. Suggestions?
3,186
11,390
qgallouedec
2025-04-08T01:05:32
OK, that makes more sense. To make the review easier, can you split it into two separate PRs 🙏
3,186
11,391
LeonEricsson
2025-04-24T08:01:10
hey @jglaser, just checking in. really appreciate the work on this! there's been some interest in getting the FSDP part merged. would you be cool with someone helping to split into separate PRs as suggested?
3,186
11,392
jglaser
2025-04-24T15:02:54
> hey @jglaser, just checking in. really appreciate the work on this! there's been some interest in getting the FSDP part merged. would you be cool with someone helping to split into separate PRs as suggested?

Working on #3354.
3,186
11,393
HuggingFaceDocBuilderDev
2025-04-01T21:25:35
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3185). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,185
11,394
shivam15s
2025-04-02T23:26:45
liger-kernel [v0.5.6](https://github.com/linkedin/Liger-Kernel/tree/v0.5.6) is out which has the changes needed for grpo, so we can officially test this integration!
3,184
11,395
HuggingFaceDocBuilderDev
2025-04-03T07:48:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3184). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
3,184
11,396
daijing5763
2025-06-10T05:54:37
I have the same problem with DPO training.
3,183
11,397
qgallouedec
2025-06-10T06:34:32
Thanks for reporting. Can you possibly minimize the code so that we have a better idea of where to look?
3,183
11,398
binary-husky
2025-03-30T13:33:58
Oh, there is another detail worth mentioning: I add a `version` param and do `self.version += 1` whenever `update_model_params` is called. On the server side, I add some lines to ensure there is no ongoing generation, using some async sleep logic.

<div align="center">
<img src="https://github.com/user-attachments/assets/42449e7e-eeb2-4aa6-a42d-841bea50e452" width="400" >
</div>
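If it helps to picture it, a minimal sketch of that bookkeeping (hypothetical names, not the actual server code):

```python
import asyncio

class WeightSyncState:
    # Sketch of the version counter plus the async wait for in-flight generations.
    def __init__(self):
        self.version = 0    # bumped every time update_model_params is called
        self.inflight = 0   # number of generation requests currently running

    async def wait_until_idle(self, poll_interval=0.1):
        # Block the weight update until no generation is in flight.
        while self.inflight > 0:
            await asyncio.sleep(poll_interval)

    async def update_model_params(self, apply_update):
        await self.wait_until_idle()
        apply_update()       # push the new weights into the inference engine
        self.version += 1    # generations from now on are tagged with the new version
```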
3,182
11,399
fabianlim
2025-03-31T03:01:48
@binary-husky the speedups you posted look great, though I have a question on how you parallelize the computation. The picture shows a data dependency between rollouts and model training (and the vLLM update).

- Are you saying that within gradient accumulation steps the rollouts do not change?
- The `completion_ids` are futures; are you saying they will return enough rollouts for you to complete the grad accum step? In other words, this achieves parallelization within grad accum steps, and works only if grad accum > 1?

![image](https://github.com/user-attachments/assets/2b5f4e71-ac4a-47c6-b04e-f0b4dc165255)
3,182
11,400
binary-husky
2025-03-31T06:23:11
@fabianlim Yes, works only if the grad acc step > 1. ![image](https://github.com/user-attachments/assets/11ce663b-7982-40fc-9f07-37debd377197)
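In other words, it boils down to a prefetching pattern; a toy sketch of it (not the actual implementation, and ignoring the weight-staleness bookkeeping discussed above):

```python
import time
from concurrent.futures import ThreadPoolExecutor

prompt_batches = [["p1", "p2"], ["p3", "p4"], ["p5", "p6"]]  # toy data

def generate(prompts):
    time.sleep(1.0)  # stand-in for the async vLLM generation call
    return [p + " -> completion" for p in prompts]

def train_step(completions):
    time.sleep(0.5)  # stand-in for one gradient-accumulation step
    print(f"trained on {len(completions)} completions")

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(generate, prompt_batches[0])  # kick off the first rollout
    for step in range(len(prompt_batches)):
        completions = future.result()  # rollout needed for this accumulation step
        if step + 1 < len(prompt_batches):
            # prefetch the next rollout while we train on the current one
            future = executor.submit(generate, prompt_batches[step + 1])
        train_step(completions)
```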
3,182
11,401
tcapelle
2025-06-16T13:37:04
We should make this happen! The new Mistral reasoning model uses a pipeline like this one.
3,182
11,402
tcapelle
2025-06-17T17:54:33
Shouldn't populating `_buffered_inputs` asynchronously be enough?
3,182
11,403
Tuziking
2025-03-30T02:31:06
> I want to continue training from the last checkpoint. How should I do it?

I set `resume_from_checkpoint=True` in the GRPOConfig, but based on the output, it seems to start training from the first step. Do I also need to change the model to the checkpoint path?
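Is the expected usage something like this instead (placeholder model/dataset names), i.e. keeping `model` as the base model and passing the checkpoint to `train()`?

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder setup, just to illustrate the question about resuming.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    return [-abs(20 - len(c)) for c in completions]

training_args = GRPOConfig(output_dir="my-grpo-run")  # same output_dir as the interrupted run
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # keep the original base model here?
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train(resume_from_checkpoint=True)  # or a path like "my-grpo-run/checkpoint-500"
```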
3,179
11,404