
Conversation

@chelsea0x3b (Contributor) commented Oct 3, 2025

Purpose

Compiles the drafter model used for speculative decoding. Also cleans up the compilation logic a bit.

Benchmarked speculative decoding with 4 speculative tokens using vllm bench serve on a B200:

vllm serve Qwen/Qwen3-Next-80B-A3B-Instruct -tp 4 --tokenizer-mode auto --speculative-config '{"method": "qwen3_next_mtp", "num_speculative_tokens": 4}' --no-enable-chunked-prefill
vllm bench serve --backend vllm --model Qwen/Qwen3-Next-80B-A3B-Instruct   --endpoint /v1/completions --dataset-name random --random-input 2048  --random-output 1024 --max-concurrency 256 --num-prompt 1000

Ran the benchmarks 4 separate times and took averages:
Main branch: 25912.5375 tok/s
This PR: 26184.05 tok/s

That works out to (26184.05 − 25912.54) / 25912.54 ≈ 1.05%, a consistent ~1% throughput boost.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Corey Lowman <clowman1993@gmail.com>
@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a performance optimization by compiling the drafter model used for speculative decoding. This is achieved by refactoring the model compilation logic from load_model into a new _compile_model helper method, which is then applied to both the main model and the drafter model. The refactoring improves code organization and readability. The changes appear correct and well-implemented, and the provided benchmarks indicate a consistent performance boost.
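
For readers following along, here is a minimal sketch of what such a refactor could look like. The helper name _compile_model comes from the review comment above, but everything else (the class name, _load_weights, self.drafter, the use_compile flag) is an illustrative assumption, not the actual vLLM code:

import torch

class ModelRunnerSketch:
    # Hypothetical names throughout: _load_weights, self.drafter, and
    # use_compile are assumptions for illustration only.
    def _compile_model(self, model: torch.nn.Module) -> torch.nn.Module:
        # Shared helper: apply the configured compilation in one place,
        # so the main model and the drafter go through identical logic.
        if not getattr(self.compilation_config, "use_compile", True):
            return model
        return torch.compile(model, backend="inductor")

    def load_model(self) -> None:
        self.model = self._load_weights()             # assumed loader
        self.model = self._compile_model(self.model)
        # New in this PR: also compile the speculative-decoding drafter.
        if self.drafter is not None:
            self.drafter.model = self._compile_model(self.drafter.model)

The point of the refactor is that the drafter picks up the exact same compilation path as the main model, rather than duplicating (or skipping) that logic.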

@mergify (bot) commented Oct 8, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @coreylowman.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Oct 8, 2025
else:
    self.model = UBatchWrapper(self.model, self.vllm_config,
                               CUDAGraphMode.NONE, self.device)
full = self.compilation_config.cudagraph_mode.has_full_cudagraphs()
Collaborator

Would this happen to enable full CUDA graphs for the draft model? If so, how would that work?

I believe we currently force piecewise for the drafter, but I still don't fully understand how many obstacles remain before full graphs can be unblocked for the draft model. See #23679 for more context.
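
For context, a minimal sketch of what "forcing piecewise for the drafter" could mean in practice. CUDAGraphMode and has_full_cudagraphs() appear in the diff context above; the surrounding downgrade logic here is an assumption, not the actual vLLM code:

# Hypothetical illustration: downgrade a full-cudagraph mode to
# piecewise capture before compiling the draft model, since full
# graphs are not yet validated for the drafter (see #23679).
drafter_cudagraph_mode = self.compilation_config.cudagraph_mode
if drafter_cudagraph_mode.has_full_cudagraphs():
    # Fall back to piecewise capture for the drafter only; the main
    # model keeps whatever mode the user configured.
    drafter_cudagraph_mode = CUDAGraphMode.PIECEWISE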

