 
 Julia's built in `lu`. Equivalent to calling `lu!(A)`
 
-  * On dense matrices, this uses the current BLAS implementation of the user's computer,
-    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
-    system.
-  * On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
-    cache the symbolic factorization.
-  * On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
-  * On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
+* On dense matrices, this uses the current BLAS implementation of the user's computer,
+which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
+system.
+* On sparse matrices, this will use UMFPACK from SuiteSparse. Note that this will not
+cache the symbolic factorization.
+* On CuMatrix, it will use a CUDA-accelerated LU from CuSolver.
+* On BandedMatrix and BlockBandedMatrix, it will use a banded LU.
 
 ## Positional Arguments
 
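As a rough illustration of the LU docstring in the hunk above, here is a minimal sketch of how such a factorization algorithm is typically passed to the `LinearProblem`/`solve` interface (assuming this file belongs to LinearSolve.jl; the matrices and sizes below are placeholders, not from the diff):

```julia
using LinearSolve, LinearAlgebra, SparseArrays

# Dense system: the LU is computed by the active BLAS
# (OpenBLAS by default, MKL after `using MKL`).
A = rand(100, 100) + 100.0 * I     # diagonally dominant, well conditioned
b = rand(100)
sol = solve(LinearProblem(A, b), LUFactorization())
@show norm(A * sol.u - b)

# Sparse system: the same algorithm choice dispatches to UMFPACK from SuiteSparse.
As = sprand(100, 100, 0.05) + 100.0 * I
sols = solve(LinearProblem(As, b), LUFactorization())
@show norm(As * sols.u - b)
```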
@@ -136,12 +136,12 @@ end |
 
 Julia's built in `qr`. Equivalent to calling `qr!(A)`.
 
-  * On dense matrices, this uses the current BLAS implementation of the user's computer
-    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
-    system.
-  * On sparse matrices, this will use SPQR from SuiteSparse
-  * On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
-  * On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
+* On dense matrices, this uses the current BLAS implementation of the user's computer
+which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
+system.
+* On sparse matrices, this will use SPQR from SuiteSparse
+* On CuMatrix, it will use a CUDA-accelerated QR from CuSolver.
+* On BandedMatrix and BlockBandedMatrix, it will use a banded QR.
 """
 struct QRFactorization{P} <: AbstractFactorization
     pivot::P
|
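For the QR docstring above, a usage sketch along the same lines (again assuming the LinearSolve.jl `LinearProblem`/`solve` interface; the default `QRFactorization()` constructor is used and the test matrix is a placeholder):

```julia
using LinearSolve, LinearAlgebra

# QR is typically slower than LU but more robust when the matrix is
# nearly singular; the `pivot` field of the struct controls column pivoting.
A = rand(50, 50)
b = rand(50)
sol = solve(LinearProblem(A, b), QRFactorization())
@show norm(A * sol.u - b)
```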
@@ -324,9 +324,9 @@
 
 Julia's built in `svd`. Equivalent to `svd!(A)`.
 
-  * On dense matrices, this uses the current BLAS implementation of the user's computer
-    which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
-    system.
+* On dense matrices, this uses the current BLAS implementation of the user's computer
+which by default is OpenBLAS but will use MKL if the user does `using MKL` in their
+system.
 """
 struct SVDFactorization{A} <: AbstractFactorization
     full::Bool

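And for the SVD docstring, a final sketch under the same assumptions (LinearSolve.jl interface, default `SVDFactorization()` constructor, hypothetical near-singular example matrix):

```julia
using LinearSolve

# SVD is the slowest of the three factorizations shown in this diff,
# but the most reliable on ill-conditioned systems.
A = [1.0 1.0; 1.0 1.0 + 1e-12]   # nearly singular
b = [2.0, 2.0]
sol = solve(LinearProblem(A, b), SVDFactorization())
@show sol.u
```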