[Bugfix] Fix assertion error in flashmla backend with fullgraph enabled#33496

Open
Kurumi5210 wants to merge 1 commit into vllm-project:main from Kurumi5210:bugfix-flashmla

Conversation


@Kurumi5210 Kurumi5210 commented Feb 1, 2026

Purpose

Fix an assertion error when using the flashmla backend with fullgraph enabled.

Previously, _build_attention_metadata was called with num_tokens=num_tokens_unpadded. When pad_attn=True (required by fullgraph + flashmla), this led to an inconsistency between num_tokens and the padded attention-related metadata, triggering an assertion failure inside the attention backend.
This PR fixes the issue by passing num_tokens_padded when full cudagraph is enabled, ensuring consistency between num_tokens and the padded attention metadata.
Related to #33384
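
For illustration, a minimal self-contained sketch of the invariant being fixed (the helpers below are hypothetical stand-ins, not the actual vLLM code; only pad_attn, num_tokens_unpadded, and num_tokens_padded mirror the description above):

```python
# Hypothetical sketch of the padding invariant behind this fix.
def round_up(x: int, multiple: int) -> int:
    """Pad x up to the next multiple (e.g. a cudagraph capture size)."""
    return ((x + multiple - 1) // multiple) * multiple

def build_attention_metadata(num_tokens: int, num_tokens_padded: int) -> dict:
    # The real backend asserts that the token count it receives is
    # consistent with the padded metadata built for the cudagraph.
    assert num_tokens == num_tokens_padded, (
        f"num_tokens={num_tokens} != padded={num_tokens_padded}")
    return {"num_tokens": num_tokens}

num_tokens_unpadded = 13
num_tokens_padded = round_up(num_tokens_unpadded, 16)  # -> 16
pad_attn = True  # fullgraph + flashmla forces attention padding

# Before: the unpadded count was always passed, so pad_attn=True
# tripped the assertion. After: the padded count is used instead.
num_tokens = num_tokens_padded if pad_attn else num_tokens_unpadded
print(build_attention_metadata(num_tokens, num_tokens_padded))
```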

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@mergify bot added the v1 and bug (Something isn't working) labels on Feb 1, 2026
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request addresses an assertion error that occurs when using the flashmla backend with fullgraph enabled. The issue stemmed from an inconsistency in _dummy_run where _build_attention_metadata was called with an unpadded token count (num_tokens_unpadded) while other parameters were padded when pad_attn was true. This mismatch in padding led to the failure.

The proposed change corrects this by conditionally passing num_tokens_padded when pad_attn is true, ensuring all arguments to _build_attention_metadata are consistently padded. This is a direct and effective fix for the bug. The change is correct and well-contained.


github-actions bot commented Feb 1, 2026

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

