Commit 334d10b
* Fix Float16 segfault with Metal algorithms
Add compatibility check to prevent MetalLUFactorization and MetalOffload32MixedLUFactorization
from being used with Float16 element types. Metal Performance Shaders only supports Float32,
and attempting to use Float16 causes a segfault in MPSMatrixDecompositionLU.
The fix adds an early check in test_algorithm_compatibility() to filter out Metal algorithms
for Float16 before they're attempted, allowing LinearSolveAutotune to gracefully skip them
rather than crash.
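As a rough illustration, the early check amounts to something like the sketch below; the helper name and signature are hypothetical, since the actual logic lives inside test_algorithm_compatibility():

```julia
# Hypothetical sketch of the Float16 guard; not LinearSolveAutotune's actual internals.
function metal_float16_compatible(alg_name::AbstractString, T::Type)
    metal_algs = ("MetalLUFactorization", "MetalOffload32MixedLUFactorization")
    # MPSMatrixDecompositionLU only handles Float32, so Float16 would segfault.
    if alg_name in metal_algs && T === Float16
        return false   # autotune skips the algorithm instead of crashing
    end
    return true
end
```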
Fixes: #743
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Expand Float16 compatibility rules for GPU algorithms
Add comprehensive compatibility checks to prevent Float16 usage with GPU algorithms
that don't support it:
- CUDA algorithms: CudaOffloadLUFactorization, CudaOffloadQRFactorization, and
CudaOffloadFactorization don't support Float16 as cuSOLVER factorization routines
require Float32/Float64
- AMD GPU algorithms: AMDGPUOffloadLUFactorization and AMDGPUOffloadQRFactorization
have limited/unclear Float16 support in rocSOLVER
- Metal algorithms: Keep existing MetalLUFactorization rule but allow mixed precision
MetalOffload32MixedLUFactorization as it converts inputs to Float32
Mixed precision algorithms (*32Mixed*) are allowed as they internally convert inputs
to Float32, making them compatible with Float16 inputs.
This prevents potential segfaults, errors, or undefined behavior when attempting to
use Float16 with GPU libraries that don't support it.
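A hedged sketch of how this rule set could be expressed is below; the constant and helper names are invented for illustration, and only the algorithm names come from the commit:

```julia
# Illustrative only: the rule table and helper are hypothetical stand-ins.
const FLOAT16_INCOMPATIBLE_GPU_ALGS = (
    "CudaOffloadLUFactorization", "CudaOffloadQRFactorization",
    "CudaOffloadFactorization",          # cuSOLVER needs Float32/Float64
    "AMDGPUOffloadLUFactorization",      # rocSOLVER Float16 support unclear
    "AMDGPUOffloadQRFactorization",
    "MetalLUFactorization",              # MPS is Float32-only
)

function gpu_float16_compatible(alg_name::AbstractString, T::Type)
    T === Float16 || return true
    # *32Mixed* variants convert their inputs to Float32 internally,
    # so they stay usable with Float16 matrices.
    occursin("32Mixed", alg_name) && return true
    return !(alg_name in FLOAT16_INCOMPATIBLE_GPU_ALGS)
end
```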
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add comprehensive Float16 compatibility rules for sparse and specialized solvers
Add compatibility checks for additional solver categories that don't support Float16:
- Sparse factorization: UMFPACKFactorization and KLUFactorization from SuiteSparse
don't support Float16 (they currently handle only double precision, with single-precision
support in development)
- PARDISO solvers: All PARDISO variants (MKL/Panua) only support single/double precision
- CUSOLVERRF: Specifically requires Float64/Int32 types for sparse LU refactorization
This comprehensive set of compatibility rules prevents attempting to use Float16 with:
- All major GPU algorithms (CUDA, Metal, AMD)
- Sparse direct solvers (UMFPACK, KLU, PARDISO, CUSOLVERRF)
- BLAS-dependent dense algorithms (already covered by existing BlasFloat check)
Iterative/Krylov methods are allowed as they're type-generic and only need matrix-vector
products, which should work with Float16.
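For instance, a half-precision Krylov solve is expected to go through the generic path; the snippet below is a hedged example using LinearSolve's exported KrylovJL_GMRES, and says nothing about the accuracy you can expect at Float16:

```julia
using LinearSolve, LinearAlgebra

# Krylov methods only need A*v products, so Float16 operands should be accepted.
A = rand(Float16, 50, 50) + Float16(50) * I   # shift to keep the system well conditioned
b = rand(Float16, 50)
prob = LinearProblem(A, b)
sol = solve(prob, KrylovJL_GMRES())
sol.u   # solution vector
```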
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Fix manual BLAS wrapper compatibility for non-standard types
Corrects a critical issue with manual BLAS wrapper algorithms that Chris mentioned:
BLISLUFactorization, MKLLUFactorization, and AppleAccelerateLUFactorization have
explicit method signatures for only [Float32, Float64, ComplexF32, ComplexF64].
Key fixes:
1. Fixed algorithm names in compatibility rules (was "BLISFactorization",
now "BLISLUFactorization")
2. Added separate check for manual BLAS wrappers that bypass Julia's BLAS interface
3. These algorithms use direct ccall() with hardcoded type signatures, so unsupported
types like Float16 fail with a MethodError rather than going through the generic
BlasFloat conversion path
4. Updated to catch all non-BLAS types, not just Float16
This prevents MethodError crashes when autotune attempts to use these algorithms
with unsupported numeric types, addressing the "manual BLAS wrappers" issue
Chris identified in the original issue comment.
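A minimal sketch of the separate check, assuming a simple name-plus-eltype filter (the names below are hypothetical; the point is that only the four element types with hardcoded ccall signatures should reach these wrappers):

```julia
# Hypothetical helper; the real check is part of LinearSolveAutotune.
const MANUAL_BLAS_WRAPPERS = (
    "BLISLUFactorization", "MKLLUFactorization", "AppleAccelerateLUFactorization",
)
const BLAS_WRAPPER_ELTYPES = Union{Float32, Float64, ComplexF32, ComplexF64}

# Anything outside the hardcoded ccall signatures raises a MethodError,
# so filter those element types out before benchmarking.
manual_blas_compatible(alg_name::AbstractString, T::Type) =
    !(alg_name in MANUAL_BLAS_WRAPPERS) || T <: BLAS_WRAPPER_ELTYPES
```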
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add OpenBLASLUFactorization to autotune and fix its compatibility
Two important fixes for OpenBLAS direct wrapper:
1. **Added to autotune algorithm detection**: OpenBLASLUFactorization was missing
from get_available_algorithms() despite being a manual BLAS wrapper like MKL,
BLIS, and AppleAccelerate. Now included when OpenBLAS_jll.is_available(); see the
sketch after this list.
2. **Added to manual BLAS wrapper compatibility rules**: OpenBLASLUFactorization
has the same explicit method signatures as other manual BLAS wrappers
(Float32/64, ComplexF32/64 only) and would fail with MethodError for Float16.
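Roughly, the detection step amounts to the sketch below; the function shown is a simplified stand-in for get_available_algorithms(), not its real body:

```julia
using LinearSolve, OpenBLAS_jll

# Simplified stand-in for the detection logic in get_available_algorithms().
function available_manual_blas_wrappers()
    algs = Any[]
    if OpenBLAS_jll.is_available()
        push!(algs, OpenBLASLUFactorization())
    end
    # ... MKL, BLIS, and AppleAccelerate wrappers are detected similarly
    return algs
end
```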
This ensures OpenBLAS direct wrapper is:
- Benchmarked alongside other manual BLAS wrappers for performance comparison
- Protected from crashes when used with unsupported types like Float16
- Consistent with the treatment of other manual BLAS wrapper algorithms
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add OpenBLAS_jll dependency to LinearSolveAutotune
Fixes the missing OpenBLAS_jll dependency that was preventing OpenBLASLUFactorization
from being properly detected and included in autotune benchmarks.
Changes:
- Added OpenBLAS_jll to LinearSolveAutotune/Project.toml dependencies
- Added OpenBLAS_jll import to LinearSolveAutotune.jl
- Set compat entry for OpenBLAS_jll = "0.3"
This resolves the undefined variable error when checking OpenBLAS_jll.is_available()
in get_available_algorithms(), ensuring OpenBLAS direct wrapper is properly included
in autotune benchmarks alongside other manual BLAS wrappers.
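One hedged way to reproduce the dependency change from the Julia REPL (the path matches this commit's file tree; the exact workflow used may have differed):

```julia
using Pkg

# Activate the subpackage environment, add the JLL, and set its compat bound.
Pkg.activate("lib/LinearSolveAutotune")
Pkg.add("OpenBLAS_jll")
Pkg.compat("OpenBLAS_jll", "0.3")
```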
Fixes: #764 (comment)
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: ChrisRackauckas <accounts@chrisrackauckas.com>
Co-authored-by: Claude <noreply@anthropic.com>
1 parent eebab6f
File tree
- lib/LinearSolveAutotune
- src
4 files changed: +126 −57 lines