Commit 811df41

Update Flashinfer from v0.4.1 to v0.5.2 (#27952)
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
1 parent 67a2da8 commit 811df41

File tree

4 files changed (+11, -13 lines changed)


docker/Dockerfile

Lines changed: 4 additions & 8 deletions
@@ -132,9 +132,7 @@ WORKDIR /workspace
 COPY requirements/common.txt requirements/common.txt
 COPY requirements/cuda.txt requirements/cuda.txt
 RUN --mount=type=cache,target=/root/.cache/uv \
-    # TODO: remove apache-tvm-ffi once FlashInfer is fixed https://github.com/flashinfer-ai/flashinfer/issues/1962
-    uv pip install --python /opt/venv/bin/python3 --pre apache-tvm-ffi==0.1.0b15 \
-    && uv pip install --python /opt/venv/bin/python3 -r requirements/cuda.txt \
+    uv pip install --python /opt/venv/bin/python3 -r requirements/cuda.txt \
     --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')
 
 # cuda arch list used by torch
@@ -356,16 +354,14 @@ RUN --mount=type=cache,target=/root/.cache/uv \
 # Install vllm wheel first, so that torch etc will be installed.
 RUN --mount=type=bind,from=build,src=/workspace/dist,target=/vllm-workspace/dist \
     --mount=type=cache,target=/root/.cache/uv \
-    # TODO: remove apache-tvm-ffi once FlashInfer is fixed https://github.com/flashinfer-ai/flashinfer/issues/1962
-    uv pip install --system --pre apache-tvm-ffi==0.1.0b15 \
-    && uv pip install --system dist/*.whl --verbose \
+    uv pip install --system dist/*.whl --verbose \
     --extra-index-url ${PYTORCH_CUDA_INDEX_BASE_URL}/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.')
 
 # Install FlashInfer pre-compiled kernel cache and binaries
 # https://docs.flashinfer.ai/installation.html
 RUN --mount=type=cache,target=/root/.cache/uv \
-    uv pip install --system flashinfer-cubin==0.4.1 \
-    && uv pip install --system flashinfer-jit-cache==0.4.1 \
+    uv pip install --system flashinfer-cubin==0.5.2 \
+    && uv pip install --system flashinfer-jit-cache==0.5.2 \
     --extra-index-url https://flashinfer.ai/whl/cu$(echo $CUDA_VERSION | cut -d. -f1,2 | tr -d '.') \
     && flashinfer show-config

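The last RUN above ends with flashinfer show-config, so a bad pin should already fail the image build. As an extra check, the installed version can be printed from a built image; a minimal sketch, assuming a locally built image tagged vllm-local (the tag is illustrative, not part of this commit):

# Print the FlashInfer version baked into the image; expect 0.5.2 after this change
docker run --rm vllm-local python3 -c "import flashinfer; print(flashinfer.__version__)"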
docker/Dockerfile.nightly_torch

Lines changed: 2 additions & 2 deletions
@@ -246,15 +246,15 @@ RUN pip install setuptools==75.6.0 packaging==23.2 ninja==1.11.1.3 build==1.2.2.
 
 
 # build flashinfer for torch nightly from source around 10 mins
-# release version: v0.4.1
+# release version: v0.5.2
 # todo(elainewy): cache flashinfer build result for faster build
 ENV CCACHE_DIR=/root/.cache/ccache
 RUN --mount=type=cache,target=/root/.cache/ccache \
     --mount=type=cache,target=/root/.cache/uv \
     echo "git clone flashinfer..." \
     && git clone --recursive https://github.com/flashinfer-ai/flashinfer.git \
     && cd flashinfer \
-    && git checkout v0.4.1\
+    && git checkout v0.5.2 \
     && git submodule update --init --recursive \
     && echo "finish git clone flashinfer..." \
     && rm -rf build \

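Dockerfile.nightly_torch builds FlashInfer from source at the pinned tag rather than installing a wheel, so the tag in the checkout line has to exist upstream. A quick pre-bump sanity check, sketched here and not part of this commit:

# List the v0.5.2 tag on the FlashInfer remote; empty output means the tag does not exist
git ls-remote --tags https://github.com/flashinfer-ai/flashinfer.git refs/tags/v0.5.2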
requirements/cuda.txt

Lines changed: 1 addition & 1 deletion
@@ -12,4 +12,4 @@ torchvision==0.24.0 # Required for phi3v processor. See https://github.com/pytor
 # Build from https://github.com/facebookresearch/xformers/releases/tag/v0.0.32.post1
 xformers==0.0.33+5d4b92a5.d20251029; platform_system == 'Linux' and platform_machine == 'x86_64' # Requires PyTorch >= 2.9
 # FlashInfer should be updated together with the Dockerfile
-flashinfer-python==0.4.1
+flashinfer-python==0.5.2

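The comment in requirements/cuda.txt says FlashInfer should be updated together with the Dockerfile, and this commit touches both. One way to catch a missed pin in future bumps is to grep all of the places touched here; a sketch, run from the repository root:

# Every version printed should match after a bump (0.5.2 for this commit)
grep -nE 'flashinfer.*==|git checkout v' requirements/cuda.txt docker/Dockerfile docker/Dockerfile.nightly_torch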
tests/kernels/attention/test_flashinfer_trtllm_attention.py

Lines changed: 4 additions & 2 deletions
@@ -238,9 +238,11 @@ def test_flashinfer_trtllm_decode_with_baseline(
     if q_quant_dtype == FP8_DTYPE and o_quant_dtype == FP4_DTYPE:
         rtol, atol = 7e-2, 9e-2
     elif q_quant_dtype == FP8_DTYPE and o_quant_dtype == FP8_DTYPE:
-        rtol, atol = 2e-2, 4e-2
+        rtol, atol = 3e-2, 4e-2
     elif q_quant_dtype == FP8_DTYPE and o_quant_dtype == dtype:
-        rtol, atol = 1e-2, 2e-2
+        rtol, atol = 2e-2, 2e-2
+    elif kv_quant_dtype == FP8_DTYPE:
+        rtol, atol = 4e-2, 6e-2
     else:
         rtol, atol = 1e-2, 1e-2

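The changes above loosen the tolerances for the FP8-quantized cases and add a branch for an FP8-quantized KV cache, presumably to account for numerical differences in the FlashInfer 0.5.2 kernels. To exercise just this test after upgrading, something like the following should work on a CUDA machine with the new wheels installed (pytest -k narrows the run to the decode-with-baseline cases):

# Run only the TRT-LLM attention test retuned by this commit
pytest -q tests/kernels/attention/test_flashinfer_trtllm_attention.py -k decode_with_baseline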