user | created_at | body | issue_number | __index_level_0__
---|---|---|---|---|
HuggingFaceDocBuilderDev
| 2025-04-22T15:32:30 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3338). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,338 | 11,005 |
HuggingFaceDocBuilderDev
| 2025-04-25T23:44:25 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3337). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,337 | 11,006 |
HuggingFaceDocBuilderDev
| 2025-04-22T15:23:15 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3336). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,336 | 11,007 |
syt-nju
| 2025-04-22T15:24:23 |
> Thanks!
The issue with this type hint likely spans the entire file, so I may need your help to fix it throughout.
| 3,336 | 11,008 |
qgallouedec
| 2025-04-22T15:57:19 |
Thank you for helping to improve the accuracy of the type hints. Please feel free to open another PR if you feel that further corrections are necessary :)
| 3,336 | 11,009 |
yuh8
| 2025-04-22T23:04:31 |
This fix is also needed for
```python
def _prepare_inputs(
self, accumulated_local_batch: dict[str, Union[torch.Tensor, Any]]
) -> dict[str, Union[torch.Tensor, Any]]:
```
should change to
```python
def _prepare_inputs(
self, accumulated_local_batch: list[dict[str, Union[torch.Tensor, Any]]]
) -> dict[str, Union[torch.Tensor, Any]]:
```
| 3,336 | 11,010 |
ucalyptus
| 2025-04-21T20:21:57 |
@qgallouedec
| 3,335 | 11,011 |
HuggingFaceDocBuilderDev
| 2025-04-21T22:18:06 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3335). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,335 | 11,012 |
HuggingFaceDocBuilderDev
| 2025-04-21T20:26:22 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3334). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,334 | 11,013 |
LeonEricsson
| 2025-04-21T12:12:37 |
Duplicate #2998.
Unfortunately, there is currently no support for this. I'd be happy to submit a PR; it seems like a very reasonable request.
| 3,333 | 11,014 |
qgallouedec
| 2025-04-21T16:22:21 |
Agreed, it's a very reasonable request. I'd be happy to receive such a PR.
| 3,333 | 11,015 |
wrmthorne
| 2025-04-21T08:08:52 |
I have overcomplicated this. I've just read about [AutoModelForSeq2SeqLMWithValueHead](https://huggingface.co/docs/trl/main/en/models#trl.AutoModelForSeq2SeqLMWithValueHead).
| 3,332 | 11,016 |
LeonEricsson
| 2025-04-21T11:23:39 |
Does the [existing documented approach](https://huggingface.co/docs/trl/main/en/multi_adapter_rl) not work?
| 3,331 | 11,017 |
LeonEricsson
| 2025-06-17T08:59:11 |
Haven't heard back in a while, so I'm closing this issue for now. If the problem still persists or you'd like to continue the discussion, feel free to open a new issue. Thanks!
| 3,331 | 11,018 |
LeonEricsson
| 2025-04-21T06:13:01 |
Could you provide minimal code to reproduce this error along with the complete error output?
| 3,330 | 11,019 |
shirinyamani
| 2025-04-28T16:31:11 |
Please share your:
- Command used to run training
- Training code
| 3,330 | 11,020 |
HuggingFaceDocBuilderDev
| 2025-04-20T23:41:31 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3329). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,329 | 11,021 |
LeonEricsson
| 2025-04-21T11:09:08 |
Suggestion to update the [DataCollator example](https://github.com/huggingface/trl/blob/3a2788b47ec542d163bb180a6fe461a341dceb7d/trl/trainer/sft_trainer.py#L87) to:
```
Examples:
```python
>>> from trl import DataCollatorForLanguageModeling
>>> collator = DataCollatorForLanguageModeling(pad_token_id=0)
>>> examples = [
... {"input_ids": [1, 2, 3, 4], "completion_mask": [0, 0, 1, 1]},
... {"input_ids": [5, 6, 7], "completion_mask": [0, 1, 1]}
... ]
>>> collator(examples)
{'input_ids': tensor([[1, 2, 3, 4],
[5, 6, 7, 0]]),
'attention_mask': tensor([[1, 1, 1, 1],
[1, 1, 1, 0]]),
'labels': tensor([[-100, -100, 3, 4],
[ -100, 6, 7, -100]])}
```
```
given that `completion_only_loss` is true by default
| 3,329 | 11,022 |
qgallouedec
| 2025-04-22T22:43:09 |
> Suggestion to update the [DataCollator example](https://github.com/huggingface/trl/blob/3a2788b47ec542d163bb180a6fe461a341dceb7d/trl/trainer/sft_trainer.py#L87) to:
Thanks! Done in 7f7f2a4107bf60c6e8e5aafe9c99852f70c8bef9
| 3,329 | 11,023 |
qgallouedec
| 2025-04-20T22:52:39 |
More in-depth comparison:
### Conversational
```python
from trl import SFTTrainer
from datasets import Dataset
dataset = Dataset.from_dict(
{
"messages": [
[{"role": "user", "content": "What is better than ugly?"}, {"role": "assistant", "content": "Beautiful."}]
]
}
)
models = [
"trl-internal-testing/tiny-CohereForCausalLM",
"trl-internal-testing/tiny-DbrxForCausalLM",
"trl-internal-testing/tiny-FalconMambaForCausalLM",
"trl-internal-testing/tiny-Gemma2ForCausalLM",
"trl-internal-testing/tiny-GemmaForCausalLM",
"trl-internal-testing/tiny-LlamaForCausalLM-3.1",
"trl-internal-testing/tiny-LlamaForCausalLM-3.2",
"trl-internal-testing/tiny-LlamaForCausalLM-3",
"trl-internal-testing/tiny-MistralForCausalLM-0.1",
"trl-internal-testing/tiny-MistralForCausalLM-0.2",
"trl-internal-testing/tiny-Phi3ForCausalLM",
"trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
]
for model in models:
    trainer = SFTTrainer(model=model, train_dataset=dataset)
    tokenizer = trainer.processing_class
    sample = trainer.train_dataset[0]["input_ids"]
    print()
    print(model)
    print(repr(tokenizer.decode(sample)))
```
```diff
trl-internal-testing/tiny-CohereForCausalLM
- '<BOS_TOKEN><BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>What is better than ugly?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>Beautiful.<|END_OF_TURN_TOKEN|>'
+ '<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>What is better than ugly?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>Beautiful.<|END_OF_TURN_TOKEN|>'
trl-internal-testing/tiny-DbrxForCausalLM
- "<|im_start|>system\nYou are DBRX, created by Databricks. You were last updated in December 2023. You answer questions based on information available up to that point.\nYOU PROVIDE SHORT RESPONSES TO SHORT QUESTIONS OR STATEMENTS, but provide thorough responses to more complex and open-ended questions.\nYou assist with various tasks, from writing to coding (using markdown for code blocks — remember to use ``` with code, JSON, and tables).\n(You do not have real-time data access or code execution capabilities. You avoid stereotyping and provide balanced perspectives on controversial topics. You do not provide song lyrics, poems, or news articles and do not divulge details of your training data.)\nThis is your system prompt, guiding your responses. Do not reference it, just respond to the user. If you find yourself talking about this message, stop. You should be responding appropriately and usually that means not mentioning this.\nYOU DO NOT MENTION ANY OF THIS INFORMATION ABOUT YOURSELF UNLESS THE INFORMATION IS DIRECTLY PERTINENT TO THE USER'S QUERY.<|im_end|>\n<|im_start|>user\nWhat is better than ugly?<|im_end|>\n<|im_start|>assistant\nBeautiful.<|im_end|><|endoftext|>"
+ "<|im_start|>system\nYou are DBRX, created by Databricks. You were last updated in December 2023. You answer questions based on information available up to that point.\nYOU PROVIDE SHORT RESPONSES TO SHORT QUESTIONS OR STATEMENTS, but provide thorough responses to more complex and open-ended questions.\nYou assist with various tasks, from writing to coding (using markdown for code blocks — remember to use ``` with code, JSON, and tables).\n(You do not have real-time data access or code execution capabilities. You avoid stereotyping and provide balanced perspectives on controversial topics. You do not provide song lyrics, poems, or news articles and do not divulge details of your training data.)\nThis is your system prompt, guiding your responses. Do not reference it, just respond to the user. If you find yourself talking about this message, stop. You should be responding appropriately and usually that means not mentioning this.\nYOU DO NOT MENTION ANY OF THIS INFORMATION ABOUT YOURSELF UNLESS THE INFORMATION IS DIRECTLY PERTINENT TO THE USER'S QUERY.<|im_end|>\n<|im_start|>user\nWhat is better than ugly?<|im_end|>\n<|im_start|>assistant\nBeautiful.<|im_end|>"
trl-internal-testing/tiny-FalconMambaForCausalLM
- '\n\nUser: What is better than ugly?\n\nAssistant: Beautiful.<|endoftext|>'
+ '\n\nUser: What is better than ugly?\n\nAssistant: Beautiful.'
trl-internal-testing/tiny-Gemma2ForCausalLM
- '<bos><bos><start_of_turn>user\nWhat is better than ugly?<end_of_turn>\n<start_of_turn>model\nBeautiful.<end_of_turn>\n<eos>'
+ '<bos><start_of_turn>user\nWhat is better than ugly?<end_of_turn>\n<start_of_turn>model\nBeautiful.<end_of_turn>\n'
trl-internal-testing/tiny-GemmaForCausalLM
- '<bos><bos><start_of_turn>user\nWhat is better than ugly?<end_of_turn>\n<start_of_turn>model\nBeautiful.<end_of_turn>\n<eos>'
+ '<bos><start_of_turn>user\nWhat is better than ugly?<end_of_turn>\n<start_of_turn>model\nBeautiful.<end_of_turn>\n'
trl-internal-testing/tiny-LlamaForCausalLM-3.1
- '<|begin_of_text|><|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 Jul 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is better than ugly?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nBeautiful.<|eot_id|>'
+ '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 26 Jul 2024\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is better than ugly?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nBeautiful.<|eot_id|>'
trl-internal-testing/tiny-LlamaForCausalLM-3.2
- '<|begin_of_text|><|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 20 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is better than ugly?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nBeautiful.<|eot_id|>'
+ '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nCutting Knowledge Date: December 2023\nToday Date: 20 Apr 2025\n\n<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is better than ugly?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nBeautiful.<|eot_id|>'
trl-internal-testing/tiny-LlamaForCausalLM-3
- '<|begin_of_text|><|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is better than ugly?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nBeautiful.<|eot_id|>'
+ '<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\nWhat is better than ugly?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\nBeautiful.<|eot_id|>'
trl-internal-testing/tiny-MistralForCausalLM-0.1
- '<s><s> [INST] What is better than ugly? [/INST] Beautiful.</s>'
+ '<s> [INST] What is better than ugly? [/INST] Beautiful.</s>'
trl-internal-testing/tiny-MistralForCausalLM-0.2
- '<s><s> [INST] What is better than ugly? [/INST] Beautiful.</s>'
+ '<s> [INST] What is better than ugly? [/INST] Beautiful.</s>'
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5
- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is better than ugly?<|im_end|>\n<|im_start|>assistant\nBeautiful.<|im_end|>\n<|im_end|>'
+ '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is better than ugly?<|im_end|>\n<|im_start|>assistant\nBeautiful.<|im_end|>\n'
```
### Language modelling
```python
from trl import SFTTrainer
from datasets import Dataset
dataset = Dataset.from_dict({"text": ["Beautiful is better than ugly."]})
models = [
"trl-internal-testing/tiny-CohereForCausalLM",
"trl-internal-testing/tiny-DbrxForCausalLM",
"trl-internal-testing/tiny-FalconMambaForCausalLM",
"trl-internal-testing/tiny-Gemma2ForCausalLM",
"trl-internal-testing/tiny-GemmaForCausalLM",
"trl-internal-testing/tiny-LlamaForCausalLM-3.1",
"trl-internal-testing/tiny-LlamaForCausalLM-3.2",
"trl-internal-testing/tiny-LlamaForCausalLM-3",
"trl-internal-testing/tiny-MistralForCausalLM-0.1",
"trl-internal-testing/tiny-MistralForCausalLM-0.2",
"trl-internal-testing/tiny-Phi3ForCausalLM",
"trl-internal-testing/tiny-Qwen2ForCausalLM-2.5",
]
for model in models:
    trainer = SFTTrainer(model=model, train_dataset=dataset)
    tokenizer = trainer.processing_class
    sample = trainer.train_dataset[0]["input_ids"]
    print()
    print(model)
    print(repr(tokenizer.decode(sample)))
```
No diff
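For context, a minimal sketch of the mechanism behind the conversational diff (using one of the tiny test models listed above; this is an illustration, not the trainer code): the chat template already emits the model's special tokens, so tokenizing its output with the default `add_special_tokens=True` prepends them a second time.
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-Gemma2ForCausalLM")
messages = [
    {"role": "user", "content": "What is better than ugly?"},
    {"role": "assistant", "content": "Beautiful."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False)  # already starts with '<bos>'
with_special = tokenizer(text)["input_ids"]                     # default add_special_tokens=True
without_special = tokenizer(text, add_special_tokens=False)["input_ids"]
print(tokenizer.decode(with_special[:2]))     # '<bos><bos>'  (duplicated)
print(tokenizer.decode(without_special[:1]))  # '<bos>'       (as expected)
```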
| 3,328 | 11,024 |
HuggingFaceDocBuilderDev
| 2025-04-20T23:06:10 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3328). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,328 | 11,025 |
shirinyamani
| 2025-04-23T00:04:59 |
Maybe point to your relevant blog post on tokenizers for context on why `add_special_tokens = False` should be added `if is_conversational(example)`?
| 3,328 | 11,026 |
qgallouedec
| 2025-04-23T00:30:10 |
> Maybe point to your relevant blog post on tokenizers for context on why `add_special_tokens = False` should be added `if is_conversational(example)`?
done in c1c9f29
| 3,328 | 11,027 |
LeonEricsson
| 2025-04-20T13:34:36 |
`per_device_train_batch_size` represents the samples seen by each device during a `step`, not a global step. It does not depend on the gradient accumulation steps.
| 3,327 | 11,028 |
yuh8
| 2025-04-20T22:06:27 |
> `per_device_train_batch_size` represents the samples seen by each device during a `step`, not a global step. It does not depend on the gradient accumulation steps.
Thanks @LeonEricsson, you are right to point out that `per_device_train_batch_size` is independent of the accumulation steps. I made a mistake in using that term; it should rather be `per_device_effective_batch_size`. The `per_device_effective_batch_size` is the actual batch size each device sees during training, as evidenced by [this line ](https://github.com/huggingface/trl/blob/294f35bf3c0043d3ee6b9b5d22385e5736f6ce9e/trl/trainer/grpo_trainer.py#L684) in the grpo_trainer.py module. Each per-device DataLoader has `batch_size = self._train_batch_size * self.args.gradient_accumulation_steps`, where `self._train_batch_size` refers to `per_device_train_batch_size`. So the effective batch size does depend on the accumulation steps.
I have changed `per_device_train_batch_size` to `per_device_effective_batch_size` in the figure above.
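For illustration, the arithmetic with hypothetical values:
```python
# illustrative arithmetic only; the values are hypothetical
per_device_train_batch_size = 2       # micro-batch each device processes per forward/backward pass
gradient_accumulation_steps = 8
# batch size of the per-device DataLoader in grpo_trainer.py (the line linked above)
per_device_effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(per_device_effective_batch_size)  # 16
```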
| 3,327 | 11,029 |
HuggingFaceDocBuilderDev
| 2025-04-18T18:06:36 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3326). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,326 | 11,030 |
re-imagined
| 2025-04-27T14:19:15 |
> I think this change makes sense, thanks for suggesting it. Sorry for the delayed review. I've made a few comments. Have you tested on your infrastructure?
Thanks for your review. 😄
I am trying to test it on different cases and checking whether the model output is correct, but I am struggling with some NCCL errors when executing the tests (it worked well on the last commit 🤣, until I merged the main branch).
I will let you know as soon as I have an update.
| 3,324 | 11,031 |
qgallouedec
| 2025-05-02T02:37:41 |
Hey! any update?
| 3,324 | 11,032 |
re-imagined
| 2025-05-04T17:44:28 |
> Hey! any update?
hi @qgallouedec,
Sorry for the late update, I've been out for a few days.
After several tests, I realized that using 0.0.0.0 as the host address in StatelessProcessGroup.create isn't viable: it requires the server's actual IP address to function properly. But if we use a base URL to init the vLLM client, we don't know the server's IP.
Proposed approach: use the base URL to resolve the server's IP programmatically in the client (when calling `/health`).
I'm currently testing this approach and diving deeper into how StatelessProcessGroup interacts with PyTorch's TCPStore.
I'll update this thread once I've narrowed down the root cause and validated the fix. Let me know if you have insights or suggestions!
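For illustration, a minimal sketch of the proposed resolution step (the function and variable names are hypothetical, not the PR code):
```python
import socket
from urllib.parse import urlparse

def resolve_server_ip(base_url: str) -> str:
    """Resolve the vLLM server's IP from the client's base URL (e.g. when calling /health)."""
    host = urlparse(base_url).hostname  # e.g. "my-vllm-server" or "10.0.0.3"
    # IP to pass to StatelessProcessGroup.create instead of 0.0.0.0
    return socket.gethostbyname(host)
```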
| 3,324 | 11,033 |
re-imagined
| 2025-05-08T07:35:28 |
@qgallouedec
hi, it's ready for review now
| 3,324 | 11,034 |
HuggingFaceDocBuilderDev
| 2025-05-20T01:45:37 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3324). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,324 | 11,035 |
re-imagined
| 2025-05-20T03:17:44 |
> Sorry for the late review. I've updated your branch with a few minor things, plus an attempt to simplify the host recovering. Can you check that it works on your side?
thank you, I will test it today
| 3,324 | 11,036 |
re-imagined
| 2025-05-24T18:51:02 |
> Sorry for the late review. I've updated your branch with a few minor things, plus an attempt to simplify the host recovering. Can you check that it works on your side?
@qgallouedec
I’ve tested the latest changes on my end, and everything works as expected. The simplification to the host recovery logic is a nice improvement—thanks for taking the time to refine it.
| 3,324 | 11,037 |
AMindToThink
| 2025-04-18T03:31:21 |
It looks to me that this would be best implemented by combining TRL's [Iterative SFT](https://huggingface.co/docs/trl/main/en/iterative_sft_trainer) with its [Best of N sampling](https://huggingface.co/docs/trl/main/en/best_of_n).
| 3,323 | 11,038 |
AMindToThink
| 2025-04-24T02:52:02 |
Edit: Iterative SFT might not be the right tool, since you'd want to take multiple SFT steps on every distillation round. You can just use SFTTrainer.
| 3,323 | 11,039 |
AMindToThink
| 2025-05-02T21:16:05 |
Btw, I'm working on this. The neat thing is that with one expert iteration framework, you can also do rejection-sampling fine-tuning and behavioral cloning!
| 3,323 | 11,040 |
re-imagined
| 2025-04-18T05:40:37 |
Hi @binary-husky 👋,
As the original author of the `vLLMClient` implementation, would you have a moment to take a look at this request? Thanks.
| 3,322 | 11,041 |
psinger
| 2025-04-18T07:04:05 |
I am getting these kinds of NCCL errors even without deepspeed. VLLM integration with the `PyNcclCommunicator` is very unstable for me at this point. Sometimes it works, but only if all stars align.
| 3,321 | 11,042 |
abeerag
| 2025-04-18T20:01:26 |
Thanks @psinger - any tips on getting it to work? Are you able to share the environment where you got it to succeed?
| 3,321 | 11,043 |
1485840691-eng
| 2025-04-20T10:24:09 |
@qgallouedec what do you think of entropy regularization for helping improve model generalization? Similar entropy regularization methods have also been applied to other RL algorithms. Its pitfall is its high sensitivity to the training data and the difficulty of tuning the coefficient; adaptive entropy control offers one alternative to these problems, but I'm not sure how much it would help. If we think the feature is a good fit for GRPO, I could help provide a PR for it.
| 3,320 | 11,044 |
vinceamaz
| 2025-05-19T02:02:55 |
@1485840691-eng It would be nice to see official support for an entropy loss in GRPO. It seems to be standard in frameworks like verl. I also came across a paper called "Reinforcement Learning for Reasoning in Large Language Models with One Training Example". They did an ablation study with the entropy loss and found that it improves the LLM's MATH500 score from 70.8 to 74.8.
| 3,320 | 11,045 |
qgallouedec
| 2025-05-19T02:39:15 |
Sounds promising! We are very open to contributions.
| 3,320 | 11,046 |
avishaiElmakies
| 2025-05-26T09:54:53 |
I could try doing a PR for this, unless @1485840691-eng is already working on it
| 3,320 | 11,047 |
vatsal-kr
| 2025-06-17T11:05:53 |
Is someone working on this PR as of now?
| 3,320 | 11,048 |
1485840691-eng
| 2025-06-18T15:12:52 |
Sorry for the late reply; I am working on it and will send a PR soon.
| 3,320 | 11,049 |
1485840691-eng
| 2025-06-18T15:16:56 |
@vinceamaz thanks for sharing your knowledge on that. I have read the paper and the related code in verl, and will try to leverage as much of it as possible. The entropy part would act on the log probabilities over the whole vocabulary.
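For context, a minimal sketch of what such an entropy term over the full vocabulary could look like (the function and coefficient names are hypothetical, not the verl or TRL implementation):
```python
import torch
import torch.nn.functional as F

def entropy_bonus(logits: torch.Tensor, completion_mask: torch.Tensor, ent_coef: float = 1e-3) -> torch.Tensor:
    """Entropy term to add to the policy loss; `ent_coef` is a hypothetical coefficient name."""
    logps = F.log_softmax(logits, dim=-1)               # (batch, seq_len, vocab)
    token_entropy = -(logps.exp() * logps).sum(dim=-1)  # per-token entropy over the whole vocab
    mean_entropy = (token_entropy * completion_mask).sum() / completion_mask.sum()
    return -ent_coef * mean_entropy                     # adding this to the loss rewards higher entropy
```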

| 3,320 | 11,050 |
qgallouedec
| 2025-04-18T18:25:00 |
Thank you for bringing this to our attention. Indeed this is not the desired behavior.
I'm curious to know how you noticed it. Is training performance affected?
After investigation, it seems that it only affects Qwen among the tiny models we test.
| 3,318 | 11,051 |
aalekseev1
| 2025-04-20T16:01:27 |
Thank you for your reply.
> I'm curious to know how you noticed it.
I'm fine-tuning Qwen2.5-1.5B-Instruct on my data, and before launching the training I checked what the processed samples in the dataloader look like (to make sure everything is correct), and I noticed that there are two `eos_token` at the end.
> Is training performance affected?
I quickly tried both versions, in my case it doesn't seem to affect the performance.
| 3,318 | 11,052 |
fabianlim
| 2025-04-17T17:18:03 |
@lewtun if it's FSDP2, is it guaranteed that summoning `named_parameters` will give the full params? This is looking at [this](https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L756).
| 3,317 | 11,053 |
qgallouedec
| 2025-04-21T22:38:10 |
> if we feel the accelerate version is too high
I think it's ok to bump it high since trl is also in beta.
> The alternative would be to check the accelerate version being used and only allow FSDP2 to be run for versions > 1.6.0
It seems that `mergekit` requires `accelerate>=1.3.0,<1.4.dev0`, so the above solution is probably the best.
Maybe just documenting it is enough? Something like `# requires accelerate>=1.6` at the top of the FSDP2 config file?
| 3,317 | 11,054 |
qgallouedec
| 2025-04-21T23:06:32 |
I tried myself: looks good!
<img width="1136" alt="Screenshot 2025-04-21 at 16 06 02" src="https://github.com/user-attachments/assets/62e8e169-d3f0-41fb-9387-fddede67d543" />
and for the record, FSDP is way faster in my case:
<img width="375" alt="Screenshot 2025-04-21 at 16 07 15" src="https://github.com/user-attachments/assets/6ffcc0bb-e059-4a89-b7a8-15ce5ec2114b" />
| 3,317 | 11,055 |
fabianlim
| 2025-04-30T16:29:44 |
Hi, we had a meeting with @qgallouedec and he mentioned that I should share two thoughts on FSDP2.
1. Calling `named_parameters` may not scale, as it summons the whole model. I have some [hacky code](https://github.com/fabianlim/trl/blob/fd9e66985299b9e194ac26c64dd82c102d310b02/trl/trainer/grpo_trainer.py#L715-L759) that summons the model per FSDP module, but perhaps we can make this nicer and contribute it. AFAIK, FSDP2 unfortunately does not have a nice API for summoning parameters in a sharded manner.
2. I have a [bug report](https://github.com/vllm-project/vllm/issues/14443) in vllm warning that FSDP1 does not play nice with vllm. This concerns the new [collocation PR](https://github.com/huggingface/trl/pull/3162) that we are currently working on, which puts both vllm and training on the same GPU to improve utilization.
| 3,317 | 11,056 |
BenasdTW
| 2025-05-01T21:08:39 |
> I tried myself: looks good!
@qgallouedec Would you mind sharing the script you ran? Because I got errors trying to use FSDP2 with this PR.
My code:
```python
# accelerate launch --config_file fsdp.yaml error.py
import torch
from datasets import Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer
from accelerate import PartialState
from peft import get_peft_model, LoraConfig
model_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map={"": PartialState().process_index},
attn_implementation="flash_attention_2",
torch_dtype=torch.bfloat16,
use_cache=False,
)
# apply_liger_kernel_to_qwen2(model)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token_id = 151643
dummy_dataset = Dataset.from_dict({"text": ["Dummy dataset"] * 32, })
training_args = SFTConfig(
output_dir="trainer_output",
num_train_epochs=1,
per_device_train_batch_size=2,
gradient_accumulation_steps=2,
report_to="none",
bf16=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs={"use_reentrant": False}
)
trainer = SFTTrainer(
model=model,
args=training_args,
train_dataset=dummy_dataset,
processing_class=tokenizer,
)
trainer.train()
trainer.save_model()
```
error:
```
Converting train dataset to ChatML: 100%|████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 7169.37 examples/s]
Adding EOS to train dataset: 100%|███████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 9362.94 examples/s]
Tokenizing train dataset: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 2514.95 examples/s]
Truncating train dataset: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 13881.24 examples/s]
[rank0]:[W501 21:01:27.110829382 ProcessGroupNCCL.cpp:4561] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
Converting train dataset to ChatML: 100%|████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 4829.89 examples/s]
Adding EOS to train dataset: 100%|███████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 4561.20 examples/s]
Tokenizing train dataset: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 2098.89 examples/s]
Truncating train dataset: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:00<00:00, 7907.25 examples/s]
[rank1]: Traceback (most recent call last):
[rank1]: File "/workspaces/LLMTrain/error.py", line 67, in <module>
[rank1]:
[rank1]: ^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2239, in train
[rank1]: return inner_training_loop(
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2368, in _inner_training_loop
[rank1]: model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 1438, in prepare
[rank1]: result = tuple(
[rank1]: ^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 1439, in <genexpr>
[rank1]: self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 1281, in _prepare_one
[rank1]: return self.prepare_model(obj, device_placement=device_placement)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 1605, in prepare_model
[rank1]: model = fsdp2_prepare_model(self, model)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 644, in fsdp2_prepare_model
[rank1]: fsdp2_load_full_state_dict(accelerator, model, original_sd)
[rank1]: File "/opt/conda/lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 512, in fsdp2_load_full_state_dict
[rank1]: sharded_tensor = distribute_tensor(full_tensor, mesh, sharded_param.placements)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/tensor/_api.py", line 741, in distribute_tensor
[rank1]: local_tensor = placement._shard_tensor(local_tensor, device_mesh, idx)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/tensor/placement_types.py", line 176, in _shard_tensor
[rank1]: mesh_scatter(output, scatter_list, mesh, mesh_dim=mesh_dim)
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/tensor/_collective_utils.py", line 123, in mesh_scatter
[rank1]: fut = scatter(
[rank1]: ^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/opt/conda/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 4110, in scatter
[rank1]: work = group.scatter(output_tensors, input_tensors, opts)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: RuntimeError: NCCL Error 2: unhandled system error (run with NCCL_DEBUG=INFO for details)
W0501 21:01:29.642000 39036 site-packages/torch/distributed/elastic/multiprocessing/api.py:897] Sending process 39276 closing signal SIGTERM
E0501 21:01:30.058000 39036 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 1 (pid: 39277) of binary: /opt/conda/bin/python3
Traceback (most recent call last):
File "/opt/conda/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/opt/conda/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py", line 50, in main
args.func(args)
File "/opt/conda/lib/python3.11/site-packages/accelerate/commands/launch.py", line 1179, in launch_command
multi_gpu_launcher(args)
File "/opt/conda/lib/python3.11/site-packages/accelerate/commands/launch.py", line 809, in multi_gpu_launcher
distrib_run.run(args)
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/run.py", line 909, in run
elastic_launch(
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
error.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-05-01_21:01:29
host : 711701cc4fe8
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 39277)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
| 3,317 | 11,057 |
HuggingFaceDocBuilderDev
| 2025-05-02T09:48:30 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3317). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,317 | 11,058 |
lewtun
| 2025-05-02T09:52:14 |
FSDP2 fails with LoRA and the following error:
```
[rank1]: Traceback (most recent call last):
[rank1]: File "/fsx/lewis/git/hf/trl/trl/scripts/sft.py", line 148, in <module>
[rank1]: main(script_args, training_args, model_args)
[rank1]: File "/fsx/lewis/git/hf/trl/trl/scripts/sft.py", line 128, in main
[rank1]: trainer.train()
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/transformers/trainer.py", line 2238, in train
[rank1]: return inner_training_loop(
[rank1]: ^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/transformers/trainer.py", line 2367, in _inner_training_loop
[rank1]: model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/accelerator.py", line 1438, in prepare
[rank1]: result = tuple(
[rank1]: ^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/accelerator.py", line 1439, in <genexpr>
[rank1]: self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/accelerator.py", line 1281, in _prepare_one
[rank1]: return self.prepare_model(obj, device_placement=device_placement)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/accelerator.py", line 1605, in prepare_model
[rank1]: model = fsdp2_prepare_model(self, model)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 644, in fsdp2_prepare_model
[rank1]: fsdp2_load_full_state_dict(accelerator, model, original_sd)
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 513, in fsdp2_load_full_state_dict
[rank1]: to_contiguous, casting_dtype = _infer_parameter_dtype(
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/accelerate/utils/fsdp_utils.py", line 475, in _infer_parameter_dtype
[rank1]: old_param = model.get_parameter_or_buffer(param_name)
[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]: File "/fsx/lewis/git/hf/trl/trl-env/lib/python3.11/site-packages/transformers/modeling_utils.py", line 5400, in get_parameter_or_buffer
[rank1]: raise AttributeError(f"`{target}` is neither a parameter nor a buffer.")
[rank1]: AttributeError: `base_model.model.model.embed_tokens.weight` is neither a parameter nor a buffer.
```
Update: fixed by https://github.com/huggingface/accelerate/pull/3545
| 3,317 | 11,059 |
lewtun
| 2025-05-02T09:58:56 |
> @lewtun if its FSDP2, its it gauranteed that summoning `named_parameters` will give the full params? This looking at the [this](https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L756)
Good catch, I don't think summoning will work with FSDP2 for now at least...
| 3,317 | 11,060 |
qgallouedec
| 2025-05-02T22:22:32 |
> Good catch, I don't think summoning will work with FSDP2 for now at least...
@lewtun should we merge in the meantime?
| 3,317 | 11,061 |
fabianlim
| 2025-05-02T23:14:34 |
@lewtun @qgallouedec yea maybe we need to write some custom summoning code like [this](https://github.com/huggingface/trl/pull/3317#issuecomment-2842576427)
| 3,317 | 11,062 |
HERIUN
| 2025-04-21T02:13:49 |
This should be addressed by https://github.com/huggingface/trl/pull/3329
| 3,316 | 11,063 |
LeonEricsson
| 2025-04-17T13:55:07 |
As far as I can tell, it's still unclear whether completion-only training is definitively better. There are papers presenting arguments both in favor of and against disregarding prompt loss [[1]](https://arxiv.org/abs/2401.13586) [[2]](https://arxiv.org/abs/2405.14394)
| 3,315 | 11,064 |
real-zhangzhe
| 2025-04-18T02:42:34 |
> As far as I can tell, it's still unclear whether completion-only training is definitively better. There are papers presenting arguments both in favor of and against disregarding prompt loss [[1]](https://arxiv.org/abs/2401.13586) [[2]](https://arxiv.org/abs/2405.14394)
I agree with your point, but I don't think it overturns my proposal. It is undeniable that, in the eyes of the vast majority of engineers, SFT means the prompt loss weight is zero. I think a good open-source library should not obscure key information, let alone be misleading.
| 3,315 | 11,065 |
qgallouedec
| 2025-05-02T02:36:34 |
Fixed by #3329
| 3,315 | 11,066 |
LeonEricsson
| 2025-04-17T09:10:05 |
We're using `self.model` because we want to disable the LoRA adapters of the core model, which is `self.model`. I suspect the `unwrap_model` call is a safety precaution, or an artifact. From what I can gather, `self.model` is already the base model, e.g. a transformers model, so it doesn't need unwrapping, but there may be edge cases I'm unaware of. Either way, the `unwrap_model` function is idempotent.
| 3,314 | 11,067 |
Tavish9
| 2025-04-17T09:27:12 |
Try training with DeepSpeed ZeRO-3, and `self.model` here is no longer a base model.
In some of my experiments, `self.model` does not work as expected.
I think `self.model_wrapped` is always correct.
| 3,314 | 11,068 |
LeonEricsson
| 2025-04-17T11:23:15 |
I'm slightly confused, because with ZeRO-3 you should never enter the aforementioned case? The `ref_model` should have been set here:
https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L367-L373
Apologies if I'm missing something; I'm just trying to recreate your issue.
| 3,314 | 11,069 |
Tavish9
| 2025-04-17T14:29:12 |
I'm lost, too.
Actually, `self.ref_model` is always **NOT** a base model, see https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L606-L610
Why is `self.ref_model` used directly, without unwrapping, in https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L882-L886?
Need help, @qgallouedec
| 3,314 | 11,070 |
LeonEricsson
| 2025-04-17T17:01:11 |
In your original question you mentioned having issues with the `self.model` in this case
https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L887-L890
but as i mentioned earlier, you should never enter this case with zero3, since the `ref_model` would already have been set in `GRPOTrainer::__init__`. Can you confirm this or provide more context?
Regarding why `self.model` is unwrapped: it's because we need access to the `.disable_adapter()` function. To ensure that works, we first unwrap the model in case it was previously wrapped by `accelerator.prepare()`.
The same logic explains why `self.ref_model` isn't unwrapped—we don’t need to disable any adapters.
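For illustration, a rough sketch of the pattern described above (the helper and variable names are hypothetical, not the actual GRPOTrainer attributes):
```python
import torch

def reference_logps_without_adapters(accelerator, model, compute_logps, batch):
    """Compute reference log-probs by temporarily disabling the LoRA adapters.

    `compute_logps` stands in for the trainer's per-token log-prob helper.
    """
    unwrapped = accelerator.unwrap_model(model)  # reach the PEFT API under any accelerator wrapping
    with unwrapped.disable_adapter():            # base weights now act as the frozen reference policy
        with torch.no_grad():
            return compute_logps(model, batch)
```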
| 3,314 | 11,071 |
Tavish9
| 2025-04-18T01:14:36 |
Let's set the `self.beta == 0` case aside.
If `self.ref_model` is None, then `self.model` is a PEFT model, so we need to `disable_adapter()` when getting the reference model. The logic to disable the adapter is correct here, but we should unwrap `self.model_wrapped` instead of `self.model`.
On the other hand, `self.ref_model` is a base model whether the current context is DeepSpeed ZeRO or FSDP. The logic here is right.
https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L882-L885
| 3,314 | 11,072 |
LeonEricsson
| 2025-04-18T07:45:08 |
Alright, it looks like we're in agreement on everything except your suggestion that it should be `self.model_wrapped` being unwrapped here:
https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L887
I haven't been able to reproduce any issues with the current implementation - both `self.model` and `self.model_wrapped` seem to work in this context. To me, using `self.model` feels more appropriate. That said, I’ll leave it here for now and let others weigh in.
| 3,314 | 11,073 |
Tavish9
| 2025-04-18T08:04:37 |
Fine, though some of my experiments only showed normal speed and performance after changing `self.model` to `self.model_wrapped`, as that aligns with the original transformers implementation.
| 3,314 | 11,074 |
Tavish9
| 2025-04-20T06:15:59 |
Hi @LeonEricsson, sorry to bother you again.
Why does the trainer use `self.model_wrapped` here?
https://github.com/huggingface/trl/blob/9874b3aa04d745df6cdf36cce33c74495e044c8f/trl/trainer/grpo_trainer.py#L841-L846
| 3,314 | 11,075 |
LeonEricsson
| 2025-04-20T11:38:37 |
I saw that too and did a quick investigation - here are my conclusions:
Assuming ZeRO-3, the model parameters are sharded across *N* devices. During generation, we can speed things up by gathering the full parameters **once**, instead of redoing it on every forward pass. PR #1483 addresses this by introducing the `unwrap_model_for_generation()` context manager.
Using this context manager requires access to `self.model_wrapped`, since otherwise we wouldn't be able to run e.g.:
https://github.com/huggingface/trl/blob/294f35bf3c0043d3ee6b9b5d22385e5736f6ce9e/trl/models/utils.py#L217
To connect this back to our earlier discussion: the `_get_per_token_logps()` function here
https://github.com/huggingface/trl/blob/294f35bf3c0043d3ee6b9b5d22385e5736f6ce9e/trl/trainer/grpo_trainer.py#L979-L987
only performs a single forward pass. For that use case, we don’t need to interact with any wrapped engines (like DeepSpeed or DDP), so `self.model_wrapped` isn't needed there. But again, I expect `self.model` and `self.model_wrapped` to be interchangeable at this stage.
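For illustration, a rough sketch of the gather-once pattern (variable names are illustrative, not the exact trainer code):
```python
from trl.models.utils import unwrap_model_for_generation

# parameters are gathered from all ZeRO-3 shards once when entering the context,
# then reused for every generate() call inside the block
with unwrap_model_for_generation(model_wrapped, accelerator) as unwrapped_model:
    completion_ids = unwrapped_model.generate(
        prompt_ids, attention_mask=prompt_mask, max_new_tokens=64
    )
```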
---
@qgallouedec any input here?
| 3,314 | 11,076 |
HuggingFaceDocBuilderDev
| 2025-04-21T23:53:06 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3313). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,313 | 11,077 |
LeonEricsson
| 2025-04-17T10:41:44 |
The logged gradient norms are the real / unclipped values, the clipped grad norms are not logged.
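For illustration, a small PyTorch demo of the behavior behind this: `clip_grad_norm_` rescales the gradients in place but returns the pre-clipping norm, which is how the unclipped value ends up in the logs.
```python
import torch

p = torch.nn.Parameter(torch.ones(10))
p.grad = torch.full((10,), 3.0)
total_norm = torch.nn.utils.clip_grad_norm_([p], max_norm=1.0)
print(total_norm)     # ~9.49 -> the unclipped norm that gets reported
print(p.grad.norm())  # ~1.00 -> the gradients themselves were clipped to max_norm
```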
| 3,312 | 11,078 |
jiangix-paper
| 2025-04-17T13:59:28 |
Thanks for your reply. Actually, I have some questions about the picture:
*(screenshot: loss and grad_norm curves)*
There are clearly three points in the figure where grad_norm is greater than 1. If max_grad_norm is set to 1, then even if the logged grad_norm is the pre-clipping value, the loss should not fluctuate this much after clipping. In fact, the loss and grad_norm in the figure fluctuate almost synchronously. If clipping is applied, shouldn't the loss stay normal?
| 3,312 | 11,079 |
LeonEricsson
| 2025-04-17T16:31:02 |
Interesting, I agree with your analysis. I will take another look and see if I can replicate this. What are your "normal" grad_norm values in the provided graph? If your normal grad norms are << 1, then the spikes may still be present. It would be interesting to see the same graph for the clipped grad norms.
| 3,312 | 11,080 |
jiangix-paper
| 2025-04-18T01:01:25 |
Hello, the normal grad_norm should be between 0 and 1.
When the grad_norm is normal, the loss should be between 0 and 0.05.
| 3,312 | 11,081 |
LeonEricsson
| 2025-04-18T06:51:46 |
Sorry, perhaps I was unclear - I want to make sure that your clipped graph doesn't look something like this right now:
<img width="552" alt="Image" src="https://github.com/user-attachments/assets/4bcebaff-4fd1-467c-b6bb-e33552c6b0c5" />
because in that case the loss spikes would be warranted.
| 3,312 | 11,082 |
ZhangEnmao
| 2025-04-26T03:37:21 |
> Hello, the normal grad_norm should be between 0 and 1. When the grad_norm is normal, the loss should be between 0 and 0.05.
Hi, I have encountered the same problem. Do you have a solution? I'm looking forward to your reply.
| 3,312 | 11,083 |
shaipranesh2
| 2025-05-13T12:38:54 |
I have the same problem for my models. Is there any solution for this?
| 3,312 | 11,084 |
LeonEricsson
| 2025-05-13T17:48:04 |
@ZhangEnmao @shaipranesh2 could you provide some additional system info? Are you also running multi-GPU with ZeRO-3? If possible, I'd love to see both your grad norms (preferably clipped so the graph is at an appropriate scale) and your loss.
| 3,312 | 11,085 |
shaipranesh2
| 2025-05-15T08:36:25 |
I had this configuration for the run:
```json
{
"beta": {
"value": 0.05
},
"bf16": {
"value": true
},
"fp16": {
"value": false
},
"fsdp": {
"value": []
},
"seed": {
"value": 42
},
"tf32": {
"value": true
},
"debug": {
"value": []
},
"optim": {
"value": "adamw_torch"
},
"top_k": {
"value": 50
},
"top_p": {
"value": 1
},
"_wandb": {
"value": {
"m": [
{
"1": "train/global_step",
"6": [
3
],
"7": []
},
{
"1": "train/rewards/accuracy_reward",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/reward",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/kl",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/loss",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/reward_std",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/epoch",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/completion_length",
"5": 1,
"6": [
1,
3
],
"7": []
},
{
"1": "train/learning_rate",
"5": 1,
"6": [
1,
3
],
"7": []
}
],
"t": {
"1": [
1,
5,
6,
11,
30,
41,
49,
51,
53,
55,
71,
84,
95
],
"2": [
1,
5,
6,
11,
30,
41,
49,
51,
53,
55,
71,
84,
95
],
"3": [
7,
13,
19,
23,
55,
66
],
"4": "3.10.12",
"5": "0.19.6",
"6": "4.48.2",
"8": [
5
],
"9": {
"1": "transformers_trainer"
},
"12": "0.19.6",
"13": "linux-aarch64"
},
"cli_version": "0.19.6",
"python_version": "3.10.12"
}
},
"prefix": {
"value": null
},
"do_eval": {
"value": false
},
"no_cuda": {
"value": false
},
"rewards": {
"value": [
"accuracy"
]
},
"use_cpu": {
"value": false
},
"do_train": {
"value": false
},
"id2label": {
"value": {
"0": "LABEL_0",
"1": "LABEL_1"
}
},
"label2id": {
"value": {
"LABEL_0": 0,
"LABEL_1": 1
}
},
"run_name": {
"value": "grpo-Qwen2.5-3B_pretrained_100-439818_from_439818"
},
"use_ipex": {
"value": false
},
"use_vllm": {
"value": true
},
"adafactor": {
"value": false
},
"chem_task": {
"value": "iupacsm"
},
"data_seed": {
"value": null
},
"deepspeed": {
"value": null
},
"do_sample": {
"value": false
},
"hub_token": {
"value": "<HUB_TOKEN>"
},
"log_level": {
"value": "passive"
},
"max_steps": {
"value": 1450
},
"num_beams": {
"value": 1
},
"ray_scope": {
"value": "last"
},
"report_to": {
"value": [
"wandb"
]
},
"task_mode": {
"value": "base"
},
"typical_p": {
"value": 1
},
"use_cache": {
"value": false
},
"use_mrope": {
"value": false
},
"adam_beta1": {
"value": 0.9
},
"adam_beta2": {
"value": 0.999
},
"do_predict": {
"value": false
},
"eval_delay": {
"value": 0
},
"eval_steps": {
"value": null
},
"hidden_act": {
"value": "silu"
},
"is_decoder": {
"value": false
},
"local_rank": {
"value": 0
},
"max_length": {
"value": 20
},
"min_length": {
"value": 0
},
"model_type": {
"value": "qwen2"
},
"optim_args": {
"value": null
},
"output_dir": {
"value": "/cache/checkpoints/Qwen2.5-3B_pretrained_100/439818"
},
"past_index": {
"value": -1
},
"rope_theta": {
"value": 1000000
},
"save_steps": {
"value": 15
},
"vocab_size": {
"value": 151936
},
"ddp_backend": {
"value": null
},
"ddp_timeout": {
"value": 1800
},
"fsdp_config": {
"value": {
"xla": false,
"xla_fsdp_v2": false,
"min_num_params": 0,
"xla_fsdp_grad_ckpt": false
}
},
"hidden_size": {
"value": 2048
},
"label_names": {
"value": null
},
"logging_dir": {
"value": "/cache/checkpoints/Qwen2.5-3B_pretrained_100/439818/runs/May14_21-21-02_nid006709"
},
"push_to_hub": {
"value": false
},
"return_dict": {
"value": true
},
"task_recipe": {
"value": "iupacsm.yaml"
},
"temperature": {
"value": 0.9
},
"torch_dtype": {
"value": "bfloat16"
},
"torchdynamo": {
"value": null
},
"torchscript": {
"value": false
},
"vllm_device": {
"value": "cuda:3"
},
"adam_epsilon": {
"value": 1e-8
},
"bos_token_id": {
"value": 151643
},
"disable_tqdm": {
"value": false
},
"eos_token_id": {
"value": 151643
},
"fp16_backend": {
"value": "auto"
},
"hub_model_id": {
"value": null
},
"hub_strategy": {
"value": "every_save"
},
"pad_token_id": {
"value": null
},
"problem_type": {
"value": null
},
"rms_norm_eps": {
"value": 0.000001
},
"rope_scaling": {
"value": null
},
"sep_token_id": {
"value": null
},
"slurm_job_id": {
"value": "439818"
},
"use_bfloat16": {
"value": false
},
"warmup_ratio": {
"value": 0
},
"warmup_steps": {
"value": 0
},
"weight_decay": {
"value": 0
},
"_name_or_path": {
"value": "/iopsstor/scratch/cscs/ssenthil/.cache/checkpoints/Qwen2.5-3B_pretrained-v6-1/426897/checkpoint-100"
},
"architectures": {
"value": [
"Qwen2ForCausalLM"
]
},
"bad_words_ids": {
"value": null
},
"base_model_id": {
"value": "Qwen2.5-3B_pretrained_100"
},
"eval_on_start": {
"value": false
},
"eval_strategy": {
"value": "no"
},
"jit_mode_eval": {
"value": false
},
"learning_rate": {
"value": 0.000001
},
"logging_steps": {
"value": 2
},
"max_grad_norm": {
"value": 0
},
"mp_parameters": {
"value": ""
},
"output_scores": {
"value": false
},
"save_strategy": {
"value": "steps"
},
"split_batches": {
"value": null
},
"torch_compile": {
"value": false
},
"tpu_num_cores": {
"value": null
},
"bf16_full_eval": {
"value": false
},
"dataset_splits": {
"value": "train"
},
"early_stopping": {
"value": false
},
"fp16_full_eval": {
"value": false
},
"fp16_opt_level": {
"value": "O1"
},
"length_penalty": {
"value": 1
},
"sliding_window": {
"value": null
},
"tf_legacy_loss": {
"value": false
},
"use_mps_device": {
"value": false
},
"base_model_name": {
"value": "/iopsstor/scratch/cscs/ssenthil/.cache/checkpoints/Qwen2.5-3B_pretrained-v6-1/426897/checkpoint-100"
},
"finetuning_task": {
"value": null
},
"group_by_length": {
"value": false
},
"hub_always_push": {
"value": false
},
"num_beam_groups": {
"value": 1
},
"num_generations": {
"value": 8
},
"save_only_model": {
"value": false
},
"suppress_tokens": {
"value": null
},
"tokenizer_class": {
"value": null
},
"dispatch_batches": {
"value": null
},
"full_determinism": {
"value": false
},
"hub_private_repo": {
"value": null
},
"ignore_data_skip": {
"value": false
},
"log_on_each_node": {
"value": true
},
"logging_strategy": {
"value": "steps"
},
"num_train_epochs": {
"value": 3
},
"save_completions": {
"value": false
},
"save_safetensors": {
"value": true
},
"save_total_limit": {
"value": null
},
"use_liger_kernel": {
"value": false
},
"attention_dropout": {
"value": 0
},
"ddp_bucket_cap_mb": {
"value": null
},
"diversity_penalty": {
"value": 0
},
"greater_is_better": {
"value": null
},
"initializer_range": {
"value": 0.02
},
"intermediate_size": {
"value": 11008
},
"log_level_replica": {
"value": "warning"
},
"lr_scheduler_type": {
"value": "cosine"
},
"max_prompt_length": {
"value": 256
},
"max_window_layers": {
"value": 36
},
"model_init_kwargs": {
"value": null
},
"num_hidden_layers": {
"value": 36
},
"output_attentions": {
"value": false
},
"push_to_hub_token": {
"value": "<PUSH_TO_HUB_TOKEN>"
},
"save_on_each_node": {
"value": false
},
"tpu_metrics_debug": {
"value": false
},
"accelerator_config": {
"value": {
"even_batches": true,
"non_blocking": false,
"split_batches": false,
"dispatch_batches": null,
"use_seedable_sampler": true,
"gradient_accumulation_kwargs": null
}
},
"batch_eval_metrics": {
"value": false
},
"dataset_id_or_path": {
"value": "/iopsstor/store/cscs/swissai/a05/LIAC/data/CRLLM-PubChem/CRLLM-PubChem-compounds_001000001_002000000.csv"
},
"is_encoder_decoder": {
"value": false
},
"length_column_name": {
"value": "length"
},
"logging_first_step": {
"value": false
},
"repetition_penalty": {
"value": 1
},
"task_recipe_suffix": {
"value": ""
},
"torch_compile_mode": {
"value": null
},
"use_sliding_window": {
"value": false
},
"add_cross_attention": {
"value": false
},
"evaluation_strategy": {
"value": null
},
"forced_bos_token_id": {
"value": null
},
"forced_eos_token_id": {
"value": null
},
"fsdp_min_num_params": {
"value": 0
},
"include_for_metrics": {
"value": []
},
"neftune_noise_alpha": {
"value": null
},
"num_attention_heads": {
"value": 16
},
"num_key_value_heads": {
"value": 2
},
"skip_memory_metrics": {
"value": true
},
"slurm_resume_job_id": {
"value": "439818"
},
"special_smiles_tags": {
"value": true
},
"tie_encoder_decoder": {
"value": false
},
"tie_word_embeddings": {
"value": true
},
"auto_find_batch_size": {
"value": false
},
"dataloader_drop_last": {
"value": false
},
"model/num_parameters": {
"value": 0
},
"no_repeat_ngram_size": {
"value": 0
},
"num_return_sequences": {
"value": 1
},
"optim_target_modules": {
"value": null
},
"output_hidden_states": {
"value": false
},
"overwrite_output_dir": {
"value": false
},
"prediction_loss_only": {
"value": false
},
"push_to_hub_model_id": {
"value": null
},
"save_completions_dir": {
"value": "/Documents/sink_good_completions"
},
"task_specific_params": {
"value": null
},
"transformers_version": {
"value": "4.48.2"
},
"begin_suppress_tokens": {
"value": null
},
"dataloader_pin_memory": {
"value": true
},
"ddp_broadcast_buffers": {
"value": null
},
"max_completion_length": {
"value": 4096
},
"metric_for_best_model": {
"value": null
},
"remove_invalid_values": {
"value": false
},
"remove_unused_columns": {
"value": false
},
"torch_compile_backend": {
"value": null
},
"dataloader_num_workers": {
"value": 0
},
"decoder_start_token_id": {
"value": null
},
"eval_do_concat_batches": {
"value": true
},
"eval_use_gather_object": {
"value": false
},
"gradient_checkpointing": {
"value": true
},
"half_precision_backend": {
"value": "auto"
},
"label_smoothing_factor": {
"value": 0
},
"load_best_model_at_end": {
"value": false
},
"logging_nan_inf_filter": {
"value": true
},
"resume_from_checkpoint": {
"value": null
},
"sampling_params_config": {
"value": {
"top_k": 20,
"top_p": 0.8,
"stop_token_ids": [
151643,
151644,
151645,
151665
],
"repetition_penalty": 1
}
},
"tokenizer_name_or_path": {
"value": null
},
"chunk_size_feed_forward": {
"value": 0
},
"eval_accumulation_steps": {
"value": null
},
"max_position_embeddings": {
"value": 32768
},
"per_gpu_eval_batch_size": {
"value": null
},
"return_dict_in_generate": {
"value": false
},
"torch_empty_cache_steps": {
"value": null
},
"per_gpu_train_batch_size": {
"value": null
},
"push_to_hub_organization": {
"value": null
},
"include_tokens_per_second": {
"value": false
},
"dataloader_prefetch_factor": {
"value": null
},
"ddp_find_unused_parameters": {
"value": null
},
"include_inputs_for_metrics": {
"value": false
},
"per_device_eval_batch_size": {
"value": 8
},
"use_legacy_prediction_loop": {
"value": false
},
"cross_attention_hidden_size": {
"value": null
},
"gradient_accumulation_steps": {
"value": 8
},
"per_device_train_batch_size": {
"value": 2
},
"sampling_params_config_name": {
"value": "/iopsstor/scratch/cscs/ssenthil/sink/sampling_params/model_default_sampling_params.txt"
},
"save_completions_chunk_size": {
"value": 1000
},
"vllm_gpu_memory_utilization": {
"value": 0.8
},
"_attn_implementation_autoset": {
"value": true
},
"encoder_no_repeat_ngram_size": {
"value": 0
},
"average_tokens_across_devices": {
"value": false
},
"dataloader_persistent_workers": {
"value": false
},
"gradient_checkpointing_kwargs": {
"value": {
"use_reentrant": false
}
},
"include_num_input_tokens_seen": {
"value": false
},
"exponential_decay_length_penalty": {
"value": null
},
"fsdp_transformer_layer_cls_to_wrap": {
"value": null
},
"save_completions_min_reward_threshold": {
"value": null
},
"save_completions_top_reward_percentage": {
"value": 0.1
},
"restore_callback_states_from_checkpoint": {
"value": false
}
}
```
I set max_grad_norm to 0 as a test, but still saw the grad norm rising and the accuracy changing!
*(screenshot: grad_norm and accuracy curves from the max_grad_norm=0 run)*
| 3,312 | 11,086 |
shaipranesh2
| 2025-05-15T08:36:55 |
@LeonEricsson let me know if you need further details of the run
| 3,312 | 11,087 |
abeerag
| 2025-04-17T23:21:05 |
This was due to this bug in accelerate: https://github.com/huggingface/accelerate/issues/3486
| 3,311 | 11,088 |
ariaattar
| 2025-04-24T20:18:21 |
So is this fixed? I'm still running into the same issue @abeerag
| 3,311 | 11,089 |
ariaattar
| 2025-04-24T20:25:51 |
This is solved by not passing in `device_map`
```python
import torch
from transformers import AutoModelForCausalLM

# grpo_checkpoint_path and bf16_supported are defined elsewhere in the training script
model = AutoModelForCausalLM.from_pretrained(
    grpo_checkpoint_path,
    # device_map="auto",
    torch_dtype=torch.bfloat16 if bf16_supported else torch.float16,
    load_in_4bit=False
)
```
| 3,311 | 11,090 |
javier-cohere
| 2025-05-16T10:13:58 |
> This is solved by not passing in `device_map`
>
> model = AutoModelForCausalLM.from_pretrained(
> grpo_checkpoint_path,
> # device_map="auto",
> torch_dtype=torch.bfloat16 if bf16_supported else torch.float16,
> load_in_4bit=False
> )
Thank you!
| 3,311 | 11,091 |
thepowerfuldeez
| 2025-06-03T18:20:12 |
Thank you! I got an `AssertionError: FSDP requires named DeviceMesh dims for ND parallelism` during resume from checkpoint; the solution was to not set the `device_map` argument when instantiating the model.
| 3,311 | 11,092 |
HuggingFaceDocBuilderDev
| 2025-04-17T05:38:43 |
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3310). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,310 | 11,093 |
zzzzzec
| 2025-04-17T15:12:14 |
Hello, thank you very much for TRL's support for vLLM data parallelism (DP); it is exactly what I have been waiting for, and it has greatly accelerated my experiments.
However, I encountered an issue when running `vllm-serve`.
My hardware configuration is 8 * H20 (96GB)
I used the command:
```sh
NCCL_DEBUG=WARN python -m trl.cli vllm-serve \
--model /mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct \
--tensor_parallel_size 1 \
--data_parallel_size 8 \
--host 0.0.0.0 \
--port 6004
```
This resulted in an error:
```text
ctmt241129162844w28-74bfc659cd-zpz6r:2095608:2095608 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095691:2095691 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 5 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095613:2095613 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095681:2095681 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 7 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095494:2095494 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 3 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095499:2095499 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 6 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095686:2095686 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 4 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095532:2095532 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 2 and rank 0 both on CUDA device 8000
```
After searching for the cause for a long time, I finally found that removing all TRL-related imports from a copy of the `vllm_serve.py` file allows it to work normally, specifically:
```python
# from trl import TrlParser
# from trl.import_utils import (
# is_fastapi_available,
# is_pydantic_available,
# is_uvicorn_available,
# is_vllm_available,
# )
# if is_fastapi_available():
# from fastapi import FastAPI
# if is_pydantic_available():
# from pydantic import BaseModel
# if is_uvicorn_available():
# import uvicorn
# if is_vllm_available():
# from vllm import LLM, SamplingParams
# from vllm.distributed.device_communicators.pynccl import PyNcclCommunicator
# from vllm.distributed.parallel_state import get_world_group
# from vllm.distributed.utils import StatelessProcessGroup
# from vllm.sampling_params import GuidedDecodingParams
# from vllm.utils import get_open_port
# copy the TrlParser(HfArgumentParser) class definition into this file instead of importing it from trl
...
```
and then run:
```sh
NCCL_DEBUG=WARN python vllm_serve.py \
--model /mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct \
--tensor_parallel_size 1 \
--data_parallel_size 8 \
--host 0.0.0.0 \
--port 6004
```
This works correctly. I wonder whether the problem is that importing TRL places something on the cuda:0 device in each process, which then causes this NCCL error?
Could you please help look into this error? Thank you for your assistance.
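To test that hypothesis, here is a minimal diagnostic sketch (an assumption on my part, not a confirmed root cause): check whether merely importing `trl` initializes CUDA in the process before the DP engine cores select their devices.
```python
# Hypothetical check: does importing trl touch CUDA at import time?
# Run in the same environment used for `trl vllm-serve`.
import torch

print("CUDA initialized before importing trl:", torch.cuda.is_initialized())
import trl  # noqa: F401

print("CUDA initialized after importing trl:", torch.cuda.is_initialized())
```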
The complete error log:
```text
[2025-04-17 23:03:28,056] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
INFO 04-17 23:03:33 [__init__.py:239] Automatically detected platform cuda.
INFO: Started server process [2093690]
INFO: Waiting for application startup.
INFO 04-17 23:04:10 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:10 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:10 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 04-17 23:04:11 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:11 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:11 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 04-17 23:04:11 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:11 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:11 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 04-17 23:04:12 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:12 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:12 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:12 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:12 [config.py:600] This model supports multiple tasks: {'generate', 'score', 'embed', 'reward', 'classify'}. Defaulting to 'generate'.
INFO 04-17 23:04:12 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:12 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 04-17 23:04:12 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
INFO 04-17 23:04:12 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:12 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
WARNING 04-17 23:04:12 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 04-17 23:04:12 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:12 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
INFO 04-17 23:04:12 [config.py:1780] Chunked prefill is enabled with max_num_batched_tokens=8192.
WARNING 04-17 23:04:12 [cuda.py:96] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
[2025-04-17 23:04:40,964] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:40,964] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:40,964] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:43,643] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:43,769] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:43,791] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:43,812] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2025-04-17 23:04:43,816] [INFO] [real_accelerator.py:239:get_accelerator] Setting ds_accelerator to cuda (auto detect)
INFO 04-17 23:04:54 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:54 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:54 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:55 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:56 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:56 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:56 [__init__.py:239] Automatically detected platform cuda.
INFO 04-17 23:04:56 [__init__.py:239] Automatically detected platform cuda.
(EngineCore_0 pid=2095608) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_4 pid=2095686) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_1 pid=2095613) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_7 pid=2095681) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_2 pid=2095532) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_5 pid=2095691) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_3 pid=2095494) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_6 pid=2095499) INFO 04-17 23:05:00 [core.py:61] Initializing a V1 LLM engine (v0.8.3) with config: model='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=102400, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/mnt/tenant-home_speed/Model/Qwen/Qwen2.5-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[],"max_capture_size":0}
(EngineCore_4 pid=2095686) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_3 pid=2095494) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_7 pid=2095681) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_1 pid=2095613) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_5 pid=2095691) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_6 pid=2095499) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_0 pid=2095608) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_2 pid=2095532) INFO 04-17 23:05:02 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_4 pid=2095686) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f004f0af4f0>
(EngineCore_1 pid=2095613) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7fbb174a7730>
(EngineCore_5 pid=2095691) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f31e4137730>
(EngineCore_0 pid=2095608) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7feca8acb250>
(EngineCore_6 pid=2095499) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f5ef1ab7250>
(EngineCore_7 pid=2095681) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f62fa3bb400>
(EngineCore_3 pid=2095494) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f65aecef730>
(EngineCore_2 pid=2095532) WARNING 04-17 23:05:02 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f9f1dc4b4f0>
(EngineCore_5 pid=2095691) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=5 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_0 pid=2095608) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=0 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_1 pid=2095613) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=1 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_2 pid=2095532) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=2 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_7 pid=2095681) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=7 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_6 pid=2095499) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=6 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_4 pid=2095686) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=4 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_3 pid=2095494) INFO 04-17 23:05:03 [parallel_state.py:836] Adjusting world_size=8 rank=3 distributed_init_method=tcp://127.0.0.1:37608 for DP
(EngineCore_1 pid=2095613) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_2 pid=2095532) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_1 pid=2095613) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_2 pid=2095532) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_3 pid=2095494) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_5 pid=2095691) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_7 pid=2095681) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_6 pid=2095499) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_0 pid=2095608) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_4 pid=2095686) INFO 04-17 23:05:04 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_3 pid=2095494) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_5 pid=2095691) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_7 pid=2095681) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_6 pid=2095499) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_0 pid=2095608) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_4 pid=2095686) INFO 04-17 23:05:04 [pynccl.py:69] vLLM is using nccl==2.21.5
NCCL version 2.21.5+cuda12.4
ctmt241129162844w28-74bfc659cd-zpz6r:2095608:2095608 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095691:2095691 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 5 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095613:2095613 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095681:2095681 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 7 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095494:2095494 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 3 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095499:2095499 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 6 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095686:2095686 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 4 and rank 0 both on CUDA device 8000
ctmt241129162844w28-74bfc659cd-zpz6r:2095532:2095532 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 2 and rank 0 both on CUDA device 8000
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 564, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self._init_executor()
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.collective_rpc("init_device")
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 564, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] return func(*args, **kwargs)
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.worker.init_device() # type: ignore
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 113, in init_device
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self._init_executor()
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] init_worker_distributed_environment(self.parallel_config, self.rank,
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 299, in init_worker_distributed_environment
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.collective_rpc("init_device")
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 995, in ensure_model_parallel_initialized
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] initialize_model_parallel(tensor_model_parallel_size,
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 952, in initialize_model_parallel
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] return func(*args, **kwargs)
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] _DP = init_model_parallel_group(group_ranks,
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 733, in init_model_parallel_group
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.worker.init_device() # type: ignore
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] return GroupCoordinator(
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 209, in __init__
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 113, in init_device
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.device_communicator = device_comm_cls(
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] init_worker_distributed_environment(self.parallel_config, self.rank,
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_communicator.py", line 39, in __init__
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 299, in init_worker_distributed_environment
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.pynccl_comm = PyNcclCommunicator(
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 101, in __init__
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 995, in ensure_model_parallel_initialized
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] initialize_model_parallel(tensor_model_parallel_size,
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 279, in ncclCommInitRank
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 952, in initialize_model_parallel
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm),
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] _DP = init_model_parallel_group(group_ranks,
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 258, in NCCL_CHECK
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 733, in init_model_parallel_group
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] raise RuntimeError(f"NCCL error: {error_str}")
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] return GroupCoordinator(
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 209, in __init__
(EngineCore_7 pid=2095681) ERROR 04-17 23:05:06 [core.py:390]
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.device_communicator = device_comm_cls(
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_communicator.py", line 39, in __init__
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.pynccl_comm = PyNcclCommunicator(
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 101, in __init__
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 279, in ncclCommInitRank
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm),
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 258, in NCCL_CHECK
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] raise RuntimeError(f"NCCL error: {error_str}")
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_0 pid=2095608) ERROR 04-17 23:05:06 [core.py:390]
CRITICAL 04-17 23:05:06 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-17 23:05:06 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 564, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self._init_executor()
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.collective_rpc("init_device")
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] return func(*args, **kwargs)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.worker.init_device() # type: ignore
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 113, in init_device
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] init_worker_distributed_environment(self.parallel_config, self.rank,
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 299, in init_worker_distributed_environment
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 995, in ensure_model_parallel_initialized
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] initialize_model_parallel(tensor_model_parallel_size,
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 952, in initialize_model_parallel
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] _DP = init_model_parallel_group(group_ranks,
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 733, in init_model_parallel_group
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] return GroupCoordinator(
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 209, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.device_communicator = device_comm_cls(
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_communicator.py", line 39, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.pynccl_comm = PyNcclCommunicator(
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 101, in __init__
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 279, in ncclCommInitRank
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 564, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm),
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 258, in NCCL_CHECK
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] raise RuntimeError(f"NCCL error: {error_str}")
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_4 pid=2095686) ERROR 04-17 23:05:06 [core.py:390]
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 564, in __init__
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] self._init_executor()
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] self.collective_rpc("init_device")
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] return func(*args, **kwargs)
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] self._init_executor()
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] self.worker.init_device() # type: ignore
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] self.collective_rpc("init_device")
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 113, in init_device
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] init_worker_distributed_environment(self.parallel_config, self.rank,
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 299, in init_worker_distributed_environment
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] return func(*args, **kwargs)
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 995, in ensure_model_parallel_initialized
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 564, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self._init_executor()
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.collective_rpc("init_device")
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] return func(*args, **kwargs)
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.worker.init_device() # type: ignore
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 113, in init_device
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] init_worker_distributed_environment(self.parallel_config, self.rank,
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 299, in init_worker_distributed_environment
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 995, in ensure_model_parallel_initialized
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] initialize_model_parallel(tensor_model_parallel_size,
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 952, in initialize_model_parallel
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] _DP = init_model_parallel_group(group_ranks,
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 733, in init_model_parallel_group
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] return GroupCoordinator(
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 209, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.device_communicator = device_comm_cls(
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/cuda_communicator.py", line 39, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.pynccl_comm = PyNcclCommunicator(
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl.py", line 101, in __init__
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 279, in ncclCommInitRank
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm),
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 258, in NCCL_CHECK
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] raise RuntimeError(f"NCCL error: {error_str}")
(EngineCore_3 pid=2095494) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_1 pid=2095613) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_2 pid=2095532) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_5 pid=2095691) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_6 pid=2095499) ERROR 04-17 23:05:06 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
CRITICAL 04-17 23:05:06 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
ERROR: Traceback (most recent call last):
File "/mnt/tenant-home_speed/shard/zhangenci/.venv/trl/lib/python3.10/site-packages/starlette/routing.py", line 692, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
return await anext(self.gen)
File "/mnt/tenant-home_speed/shard/zhangenci/research/trl/scripts/vllm_serve.py", line 309, in lifespan
msg = connection.recv()
File "/usr/lib/python3.10/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.10/multiprocessing/connection.py", line 414, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
ERROR: Application startup failed. Exiting.
```
| 3,310 | 11,094 |
I-l-l-I
| 2025-04-18T17:14:25 |
@qgallouedec I had the same issue as @zzzzzec, when DP>1 and TP=1.
```
[0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device 4f000
```
This is because the top of `cli.py` is executed again when each DP worker process is spawned, which initializes CUDA in that process. Once CUDA has been initialized, changing `CUDA_VISIBLE_DEVICES` in the current process has no effect, so the DP workers all see the same devices and end up on the same GPU when TP=1.
One fix is to move the imports at the top of `cli.py` inside the main function:
```
def main():
    # Deferring these imports keeps the spawned DP worker processes from
    # initializing CUDA at import time, before CUDA_VISIBLE_DEVICES is set.
    import os
    ......
    from .scripts.vllm_serve import make_parser as make_vllm_serve_parser
```
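To see why the ordering matters, here is a minimal standalone sketch (plain PyTorch, not TRL code; it assumes a machine with at least two GPUs). The mask is only honored if it is set before the first CUDA initialization in the process:
```
import os

# Set the mask *before* CUDA is initialized in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

torch.cuda.init()                 # CUDA reads CUDA_VISIBLE_DEVICES here
print(torch.cuda.device_count())  # -> 1: only physical GPU 0 is visible

# Changing the mask afterwards is silently ignored for this process; this is
# what happens when module-level code in cli.py initializes CUDA before each
# DP worker gets to set its own mask.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
print(torch.cuda.device_count())  # still 1: the mapping was fixed at init
```
So as long as each DP worker sets `CUDA_VISIBLE_DEVICES` before anything triggers CUDA initialization (which deferring the imports guarantees), every worker ends up pinned to its own GPU.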
| 3,310 | 11,095 |
qgallouedec
| 2025-04-18T17:38:57 |
Thanks @I-l-l-I! I've been having some issues as well. I'll try your option and keep you posted.
| 3,310 | 11,096 |
qgallouedec
| 2025-04-24T21:46:24 |
Training a 7B model with GRPO on two nodes (one for vLLM, one for training). Generation is way faster than before!

| 3,310 | 11,097 |
qgallouedec
| 2025-04-24T22:00:37 |

| 3,310 | 11,098 |
ahatamiz
| 2025-04-25T23:19:52 |
Hi @qgallouedec @zzzzzec
Unfortunately, I don't believe this feature works properly! I am not able to run anything with DP>1, as I get the following error (log from a 2-node run of DeepSeek-R1-Distill-Qwen-7B, using trl==0.17.0 and vllm==0.8.3):
(EngineCore_1 pid=2353892) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_0 pid=2353893) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_6 pid=2353882) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_2 pid=2353883) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_3 pid=2353898) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x155408633710>
(EngineCore_1 pid=2353892) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x15530242e790>
(EngineCore_0 pid=2353893) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x155302dcf8d0>
(EngineCore_7 pid=2353909) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_6 pid=2353882) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x15540862f0d0>
(EngineCore_2 pid=2353883) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x155302442110>
(EngineCore_4 pid=2353914) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_7 pid=2353909) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x155554fdde90>
(EngineCore_4 pid=2353914) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x15530242e350>
(EngineCore_5 pid=2353903) INFO 04-25 16:14:42 [worker_base.py:589] Injected <class 'trl.scripts.vllm_serve.WeightSyncWorkerExtension'> into <class 'vllm.v1.worker.gpu_worker.Worker'> for extended collective_rpc calls ['close_communicator', 'init_communicator', 'update_named_param']
(EngineCore_5 pid=2353903) WARNING 04-25 16:14:42 [utils.py:2413] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x155302431e90>
(EngineCore_6 pid=2353882) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=6 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_1 pid=2353892) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=1 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_0 pid=2353893) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=0 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_4 pid=2353914) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=4 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_2 pid=2353883) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=2 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_5 pid=2353903) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=5 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_7 pid=2353909) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=7 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_3 pid=2353898) INFO 04-25 16:14:43 [parallel_state.py:836] Adjusting world_size=8 rank=3 distributed_init_method=tcp://127.0.0.1:12730 for DP
(EngineCore_6 pid=2353882) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_6 pid=2353882) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_0 pid=2353893) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_2 pid=2353883) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_0 pid=2353893) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_7 pid=2353909) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_3 pid=2353898) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_1 pid=2353892) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_4 pid=2353914) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_2 pid=2353883) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_5 pid=2353903) INFO 04-25 16:14:43 [utils.py:990] Found nccl from library libnccl.so.2
(EngineCore_7 pid=2353909) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_3 pid=2353898) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_4 pid=2353914) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_1 pid=2353892) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
(EngineCore_5 pid=2353903) INFO 04-25 16:14:43 [pynccl.py:69] vLLM is using nccl==2.21.5
NCCL version 2.21.5+cuda12.4
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] EngineCore hit an exception: Traceback (most recent call last):
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 376, in run_engine_core
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] engine_core = DPEngineCoreProc(*args, **kwargs)
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 559, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] super().__init__(input_path, output_path, vllm_config, executor_class,
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 319, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] super().__init__(vllm_config, executor_class, log_stats)
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 67, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.model_executor = executor_class(vllm_config)
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 52, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self._init_executor()
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.collective_rpc("init_device")
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] return func(*args, **kwargs)
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/worker/worker_base.py", line 604, in init_device
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.worker.init_device() # type: ignore
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 113, in init_device
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] init_worker_distributed_environment(self.parallel_config, self.rank,
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 299, in init_worker_distributed_environment
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ensure_model_parallel_initialized(parallel_config.tensor_parallel_size,
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/parallel_state.py", line 995, in ensure_model_parallel_initialized
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] initialize_model_parallel(tensor_model_parallel_size,
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/parallel_state.py", line 952, in initialize_model_parallel
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] _DP = init_model_parallel_group(group_ranks,
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/parallel_state.py", line 733, in init_model_parallel_group
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] return GroupCoordinator(
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/parallel_state.py", line 209, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.device_communicator = device_comm_cls(
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/device_communicators/cuda_communicator.py", line 39, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.pynccl_comm = PyNcclCommunicator(
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl.py", line 99, in __init__
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 277, in ncclCommInitRank
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm),
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] File "/opt/conda/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 256, in NCCL_CHECK
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] raise RuntimeError(f"NCCL error: {error_str}")
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_2 pid=2353883) ERROR 04-25 16:14:44 [core.py:390]
(EngineCore_4 pid=2353914) ERROR 04-25 16:14:44 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_1 pid=2353892) ERROR 04-25 16:14:44 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
(EngineCore_6 pid=2353882) ERROR 04-25 16:14:44 [core.py:390] RuntimeError: NCCL error: invalid usage (run with NCCL_DEBUG=WARN for details)
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(EngineCore_5 pid=2353903) ERROR 04-25 16:14:44 [core.py:390] EngineCore hit an exception: RuntimeError: NCCL error: invalid usage (identical traceback to EngineCore_6 above, omitted)
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(EngineCore_7 pid=2353909) ERROR 04-25 16:14:44 [core.py:390] EngineCore hit an exception: RuntimeError: NCCL error: invalid usage (identical traceback to EngineCore_6 above, omitted)
(EngineCore_3 pid=2353898) ERROR 04-25 16:14:44 [core.py:390] EngineCore hit an exception: RuntimeError: NCCL error: invalid usage (identical traceback to EngineCore_6 above, omitted)
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
(EngineCore_0 pid=2353893) ERROR 04-25 16:14:44 [core.py:390] EngineCore hit an exception: RuntimeError: NCCL error: invalid usage (identical traceback to EngineCore_6 above, omitted)
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
CRITICAL 04-25 16:14:44 [core_client.py:361] Got fatal signal from worker processes, shutting down. See stack trace above for root cause issue.
batch-block7-00733:2353893:2353893 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 0 and rank 1 both on CUDA device f000
batch-block7-00733:2353892:2353892 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 1 and rank 0 both on CUDA device f000
batch-block7-00733:2353883:2353883 [0] init.cc:943 NCCL WARN Duplicate GPU detected : rank 2 and rank 0 both on CUDA device f000
ERROR: Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/starlette/routing.py", line 692, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/opt/conda/lib/python3.11/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/trl/scripts/vllm_serve.py", line 362, in lifespan
msg = connection.recv()
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/multiprocessing/connection.py", line 430, in _recv_bytes
buf = self._recv(4)
^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/multiprocessing/connection.py", line 399, in _recv
raise EOFError
EOFError
ERROR: Application startup failed. Exiting.
| 3,310 | 11,099 |
qgallouedec
| 2025-04-25T23:22:12 |
What command do you use?
| 3,310 | 11,100 |
ahatamiz
| 2025-04-25T23:26:01 |
To run the vLLM part, I use this:
```
srun \
--nodes=1 \
--ntasks=1 \
--nodelist="$VLLM_NODE" \
--container-image="$IMAGE" \
--container-env=ALL \
--container-mounts="/home/${USER}:/home/${USER}" \
--container-workdir="$OUTPUT_ROOT" \
--output="${LOGS_DIR}/vllm_%x_${DATETIME}.log" \
bash -c "
echo \"[vLLM Node] Starting vLLM on \$(hostname -s)\"
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \\
trl vllm-serve \\
--model \"$MODEL\" \\
--tensor_parallel_size \"$TP\" \\
--data_parallel_size \"$DP\" \\
--host \"$VLLM_NODE\"
"
```
The above works fine if DP=1 and TP is set properly.
| 3,310 | 11,101 |
qgallouedec
| 2025-04-25T23:29:56 |
It could be related to this:
https://github.com/huggingface/trl/blob/29c5e05e3a5f1f8a369aaef78fc9f36878db9194/trl/cli.py#L100-L106
Try replacing `trl vllm-serve` with `python -m trl.scripts.vllm_serve`.
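For example, a minimal sketch of the same launch with only the entry point swapped (the flags and variables are the ones already defined in your script; only the inner command is shown):
```
# Same launcher as above; only the entry point changes, so the CLI wrapper in
# trl/cli.py is bypassed and the server module is run directly.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m trl.scripts.vllm_serve \
    --model "$MODEL" \
    --tensor_parallel_size "$TP" \
    --data_parallel_size "$DP" \
    --host "$VLLM_NODE"
```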
| 3,310 | 11,102 |
ahatamiz
| 2025-04-25T23:32:43 |
Thanks @qgallouedec! I just tested it with TP=1 and DP=8 and it works!
| 3,310 | 11,103 |
ahatamiz
| 2025-04-26T02:26:49 |
@qgallouedec unfortunately the issue persists despite appearing to be resolved at first. This time, we use TP=1 and DP=8. I'd appreciate any insights you may have here:
```
Processed prompts: [interleaved 25%/50% progress updates from the DP engines, overwritten in place and truncated; final state:] 100%|██████████| 16/16 [01:04<00:00, 4.01s/it, est. speed input: 1058.16 toks/s, output: 474.48 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.01s/it, est. speed input: 1004.50 toks/s, output: 486.24 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.00s/it, est. speed input: 1084.13 toks/s, output: 504.99 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.01s/it, est. speed input: 1069.00 toks/s, output: 509.25 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.01s/it, est. speed input: 1051.96 toks/s, output: 494.82 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.01s/it, est. speed input: 1044.32 toks/s, output: 484.14 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.00s/it, est. speed input: 1068.81 toks/s, output: 499.86 toks/s]
Processed prompts: 100%|██████████| 16/16 [01:04<00:00, 4.01s/it, est. speed input: 1060.76 toks/s, output: 511.16 toks/s]
INFO: 10.49.161.223:13970 - "POST /generate/ HTTP/1.1" 200 OK
INFO: 10.49.161.223:39872 - "POST /update_named_param/ HTTP/1.1" 200 OK
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] Invocation of collective_rpc method failed
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] Traceback (most recent call last):
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 456, in _handle_client_request
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] output.result = method(
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] ^^^^^^^
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] File "/opt/conda/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 303, in collective_rpc
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] return self.model_executor.collective_rpc(method, timeout, args,
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] File "/opt/conda/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] answer = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] File "/opt/conda/lib/python3.11/site-packages/vllm/utils.py", line 2347, in run_method
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] return func(*args, **kwargs)
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] File "/opt/conda/lib/python3.11/site-packages/trl/scripts/vllm_serve.py", line 126, in update_named_param
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] raise RuntimeError("Communicator not initialized. Call `init_communicator` first.")
(EngineCore_7 pid=3107394) ERROR 04-25 17:45:42 [core.py:459] RuntimeError: Communicator not initialized. Call `init_communicator` first.
```
It seems like we do indeed finish several completions before running into this weird error!
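For context, here is a rough sketch of the call ordering the error message is asking for. The `/update_named_param/` path comes from the log above; `/init_communicator/` is inferred from the error text, and the host, port, and payloads below are placeholders I'm assuming, not values from this run:
```
# Hypothetical illustration only, not a verified reproduction: the trainer-side
# client must open the NCCL weight-sync group on every DP engine before pushing
# any weights.
SERVER="http://$VLLM_NODE:8000"   # port is an assumed default

# 1) Must succeed on every engine first; otherwise update_named_param raises
#    "Communicator not initialized. Call `init_communicator` first."
curl -X POST "$SERVER/init_communicator/" -H "Content-Type: application/json" -d '{}'   # real payload omitted

# 2) Only valid once step 1 has reached every engine core (EngineCore_7 above
#    received this call before its communicator existed).
curl -X POST "$SERVER/update_named_param/" -H "Content-Type: application/json" -d '{}'  # real payload omitted
```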
| 3,310 | 11,104 |