Commit 285eaa4

[Bugfix] Safeguard against missing backend in AttentionBackendEnum (vllm-project#28846)
Signed-off-by: jesse <szxfml@gmail.com>
Signed-off-by: Song Zhixin <szxfml@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
1 parent 4393684 commit 285eaa4

File tree

1 file changed: +2 −1 lines changed


vllm/attention/layer.py

Lines changed: 2 additions & 1 deletion

@@ -310,7 +310,8 @@ def __init__(
             kv_sharing_target_layer_name,
             **extra_impl_args,
         )
-        self.backend = AttentionBackendEnum[self.attn_backend.get_name()]
+        backend_name = self.attn_backend.get_name()
+        self.backend = AttentionBackendEnum.__members__.get(backend_name)
         self.dtype = dtype

         # For cuda-alike (CUDA and ROCM) and cpu platforms, we control how
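The safeguard works because `Enum.__members__` is a plain name-to-member mapping, so `.get()` returns `None` for an unregistered backend name instead of the `KeyError` that subscript lookup (`AttentionBackendEnum[name]`) raises. A minimal sketch of the before/after behavior, using a simplified stand-in enum rather than vLLM's real `AttentionBackendEnum`:

```python
from enum import Enum


# Hypothetical stand-in for vLLM's AttentionBackendEnum, for illustration only.
class AttentionBackendEnum(Enum):
    FLASH_ATTN = "flash_attn"
    XFORMERS = "xformers"


# Before the patch: subscript lookup raises KeyError for an unknown name.
try:
    backend = AttentionBackendEnum["CUSTOM_BACKEND"]
except KeyError:
    backend = None  # the crash this commit guards against

# After the patch: __members__.get returns None instead of raising.
backend = AttentionBackendEnum.__members__.get("CUSTOM_BACKEND")
print(backend)  # None

# Known names still resolve to the enum member as before.
backend = AttentionBackendEnum.__members__.get("FLASH_ATTN")
print(backend)  # AttentionBackendEnum.FLASH_ATTN
```

The trade-off is that `self.backend` may now be `None`, so downstream code must tolerate that rather than assume a valid enum member.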
