Conversation

@kyuyeunk kyuyeunk commented Nov 12, 2025

Description

Adds support for loading mxfp4 weights in the torchax path, dequantizing them into bf16 for now.

Verified that it returns correct numeric outputs with openai/gpt-oss-120b.

Also verified that performance matches the bf16 version, unsloth/gpt-oss-120b-BF16.

Referenced the mxfp4 PR in the JAX path for some of the functionality: #992
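For reviewers unfamiliar with the format: MXFP4 (per the OCP Microscaling spec) stores 4-bit E2M1 elements that share one E8M0 exponent scale per block of 32 values, so dequantizing to a wider float is a table lookup plus a per-block power-of-two multiply. The sketch below is illustrative only, with hypothetical names and a low-nibble-first packing assumption; it is not this PR's actual implementation, and it returns float32 where the PR would cast to bf16.

```python
import numpy as np

# The 16 FP4 E2M1 code points (1 sign bit, 2 exponent bits, 1 mantissa bit),
# indexed by the raw 4-bit code. Codes 8..15 are the negated values of 0..7.
E2M1_VALUES = np.array(
    [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
     -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0],
    dtype=np.float32)

def dequantize_mxfp4(packed: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Hypothetical helper, not the PR's code.

    packed: uint8 array, two 4-bit codes per byte (low nibble first, an
            assumption here; real layouts vary by checkpoint format).
    scales: uint8 E8M0 biased exponents, one per block of 32 elements.
    """
    # Unpack each byte into its two 4-bit codes, then look up element values.
    codes = np.stack([packed & 0xF, packed >> 4], axis=-1).reshape(-1)
    values = E2M1_VALUES[codes]
    # E8M0 scale decodes as 2 ** (stored_exponent - 127).
    block_scales = np.exp2(scales.astype(np.float32) - 127.0)
    # Apply one shared scale per 32-element block.
    return (values.reshape(-1, 32) * block_scales[:, None]).reshape(-1)
```

A checkpoint loader along these lines would run this once per quantized tensor at load time and hand the resulting bf16 tensor to the unquantized matmul path, which is consistent with the "dequantize for now" approach this PR describes.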

Tests

pytest -v -s tests/layers/vllm/test_mxfp4.py
MODEL_IMPL_TYPE=vllm examples/offline_inference.py --task=generate --model=openai/gpt-oss-120b --max-model-len=1024 --max-num-batched-tokens=1024 --max-num-seqs=128 --no-enable-prefix-caching --tensor-parallel-size=4 --gpu-memory-utilization=0.8

https://buildkite.com/tpu-commons/tpu-inference-ci/builds/5302

Checklist

Before submitting this PR, please make sure:

  • I have performed a self-review of my code.
  • I have necessary comments in my code, particularly in hard-to-understand areas.
  • I have made or will make corresponding changes to any relevant documentation.

@kyuyeunk kyuyeunk requested review from qihqi and yaochengji November 12, 2025 12:54

@kyuyeunk
Collaborator Author

cc @amishacorns @bzgoogle

@kyuyeunk kyuyeunk force-pushed the mxfp4_support branch 5 times, most recently from 2215a21 to da67acb, on November 13, 2025 08:00
Signed-off-by: Kyuyeun Kim <kyuyeunk@google.com>
@kyuyeunk kyuyeunk merged commit 005b50a into main Nov 14, 2025
3 checks passed
@kyuyeunk kyuyeunk deleted the mxfp4_support branch November 14, 2025 03:04
AahilA pushed a commit to amishacorns/tpu-inference that referenced this pull request Nov 14, 2025
Signed-off-by: Kyuyeun Kim <kyuyeunk@google.com>


3 participants