
Conversation

@wujinyuan1 (Contributor) commented Nov 22, 2025

What this PR does / why we need it?

When cudagraph_mode is set to FULL_DECODE_ONLY and dp > 1, the dummy-run path is triggered. The call to update_attn_params needs a num_tokens argument, which was previously derived from positions.shape[0]. However, multimodal models use mRoPE (multi-dimensional rotary position embeddings), which makes the positions tensor two-dimensional (one row per rope dimension), so positions.shape[0] returns the rope-dimension count instead of the token count. We fix this by passing num_tokens directly in place of positions.shape[0].
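
For illustration, a minimal, self-contained sketch of why the shape lookup breaks; the (3, num_tokens) mRoPE layout below is an assumption based on vLLM's convention of stacking temporal/height/width position rows:

```python
import torch

num_tokens = 8

# Standard 1-D RoPE: positions is a flat tensor of token indices.
positions_1d = torch.arange(num_tokens)
assert positions_1d.shape[0] == num_tokens  # 8 -- the token count

# mRoPE: one row per rope dimension is stacked over the token axis,
# e.g. (3, num_tokens) for temporal/height/width (assumed layout).
positions_mrope = torch.arange(num_tokens).repeat(3, 1)
assert positions_mrope.shape == (3, num_tokens)
assert positions_mrope.shape[0] == 3  # rope-dimension count, NOT num_tokens
```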

Does this PR introduce any user-facing change?

NO

How was this patch tested?

vLLM version: v0.11.0rc3
vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

@github-actions (bot)

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description, so reviewers and future developers can understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

@gemini-code-assist (bot) left a comment


Code Review

This pull request addresses a hang issue for multimodal models when data parallelism is used. The fix replaces positions.shape[0] with the correct num_tokens variable; the former is wrong for models with multi-dimensional rotary positional embeddings. The change is correct, but as noted in a review comment below, the fix is incomplete: a similar issue exists in another code path within the same function that was not addressed.

Comment on lines 2933 to 2939:

```diff
 if self.pcp_size * self.dcp_size > 1:
     update_attn_dcp_pcp_params(self.update_stream,
                                forward_context,
-                               positions.shape[0])
+                               num_tokens)
 else:
     update_attn_params(self.update_stream, forward_context,
-                       positions.shape[0])
+                       num_tokens)
```
@gemini-code-assist (bot):

critical

This change correctly replaces positions.shape[0] with num_tokens to fix the hang issue for multimodal models. However, the same logic error exists in the if self.vllm_config.model_config.use_mla: block on lines 2920-2931. To fully resolve the bug, positions.shape[0] should also be replaced with num_tokens in that block.
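
For illustration, a minimal, self-contained sketch of the dispatch pattern the reviewer describes, with all three update functions stubbed out; the branch structure and the MLA-path function name are assumptions, since lines 2920-2931 are not shown in this diff:

```python
# Sketch only (not vllm-ascend code): the three update functions below are
# stubs, and update_mla_path is a placeholder for whatever the real
# use_mla block calls on lines 2920-2931.
def update_mla_path(num_tokens: int) -> None:
    print(f"MLA path updated for {num_tokens} tokens")

def update_attn_dcp_pcp_params(num_tokens: int) -> None:
    print(f"DCP/PCP path updated for {num_tokens} tokens")

def update_attn_params(num_tokens: int) -> None:
    print(f"default path updated for {num_tokens} tokens")

def dispatch(use_mla: bool, pcp_size: int, dcp_size: int,
             num_tokens: int) -> None:
    # Every branch receives num_tokens directly; no branch should
    # re-derive it from positions.shape[0], which is wrong under mRoPE.
    if use_mla:
        update_mla_path(num_tokens)
    elif pcp_size * dcp_size > 1:
        update_attn_dcp_pcp_params(num_tokens)
    else:
        update_attn_params(num_tokens)

dispatch(use_mla=True, pcp_size=1, dcp_size=1, num_tokens=8)
```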

@wujinyuan1 closed this by deleting the head repository on Nov 24, 2025.