Add ops needed for new hybrid models: SOFTPLUS, EXPM1, TRI, SOLVE_TRI, CUMSUM #17063
Conversation
@gabe-l-hart guess you'll be interested in this one as well :)
gabe-l-hart left a comment:
A few comments on the op signatures and how they relate to the versions I have on the SSD branch
| Operation | BLAS | CANN | CPU | CUDA | Metal | OpenCL | SYCL | Vulkan | zDNN |
|-----------|------|------|-----|------|-------|--------|------|--------|------|
| ABS       | ❌   | ✅   | ✅  | 🟡   | 🟡    | ❌     | 🟡   | ❌     | ❌   |
| ACC       | ❌   | ✅   | ✅  | ✅   | ✅    | ❌     | ✅   | ✅     | ❌   |
Whelp, glad I know about this file now
```c
GGML_API struct ggml_tensor * ggml_softplus(
        struct ggml_context * ctx,
        struct ggml_tensor  * a);
```
I also have this one on my SSD branch #16982
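For context, softplus is the elementwise function log(1 + exp(x)). A minimal sketch of the math, assuming a plain float helper (the function name and the cutoff constant are illustrative, not the actual ggml kernel):

```c
#include <math.h>

// softplus(x) = log(1 + exp(x)); above roughly x = 20 the result equals x
// to float precision, so we branch and also avoid overflow in expf for
// very large inputs
static float softplus_f32(float x) {
    return x > 20.0f ? x : log1pf(expf(x));
}
```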
```c
GGML_API struct ggml_tensor * ggml_cumsum(
        struct ggml_context * ctx,
        struct ggml_tensor  * a);
```
In an effort to remove permutations and conts, I updated this to also allow an additional dim argument and then added a ggml_cumsum_0 convenience wrapper for dim 0 https://github.com/ggml-org/llama.cpp/pull/16982/files#diff-7dea3e94fe52f756a218321acc77042d0a333fd3d7e4c35160920ce6e86cb400R997
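Roughly, that variant would look like the following sketch, reconstructed from the description above (the exact declarations on that branch may differ):

```c
// cumulative sum along an arbitrary dimension (sketch, not the merged API)
GGML_API struct ggml_tensor * ggml_cumsum(
        struct ggml_context * ctx,
        struct ggml_tensor  * a,
        int                   dim);

// convenience wrapper for the common dim-0 case
static struct ggml_tensor * ggml_cumsum_0(
        struct ggml_context * ctx,
        struct ggml_tensor  * a) {
    return ggml_cumsum(ctx, a, 0);
}
```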
I have no strong opinion, but if we want the dim version, we could bring it in here; alternatively, I could rename the one on my branch to ggml_cumsum_dim so that the function signature here stays stable after this is merged.
To be honest, I just made it mirror the behavior of ggml_sum exactly. I do agree the PyTorch approach (specifying the dimension) is better, and I think minimizing permutes / conts is good; my perfectionist self just thinks that should entail refactoring OP_SUM as well :>
This should be OK. If we need to sum along dim 1, we can transpose the input and extend ggml_cumsum to work with transposed data. As an initial pass, assert that the data is contiguous.
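As a sketch of that workaround, assuming the (ctx, a) signature above and the existing ggml_transpose / ggml_cont helpers (the wrapper name is made up for illustration):

```c
#include "ggml.h"

// cumulative sum of `a` along dim 1: transpose so the target dim becomes
// dim 0, make the data contiguous (the initial kernel asserts contiguity),
// run cumsum, then transpose back to the original layout
static struct ggml_tensor * cumsum_dim1(struct ggml_context * ctx,
                                        struct ggml_tensor  * a) {
    struct ggml_tensor * t = ggml_cont(ctx, ggml_transpose(ctx, a));
    t = ggml_cumsum(ctx, t);
    return ggml_cont(ctx, ggml_transpose(ctx, t));
}
```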
```c
// Make matrix into a triangular one (upper, upper + diagonal, lower or lower + diagonal) with constant value
GGML_API struct ggml_tensor * ggml_tri(
```
Similar to cumsum, I added a version of this with the ability to specify dimensions https://github.com/ggml-org/llama.cpp/pull/16982/files#diff-7dea3e94fe52f756a218321acc77042d0a333fd3d7e4c35160920ce6e86cb400R2219
Here I'd be a bit skeptical. The convention is that the first two dimensions are the matrix dimensions. All the code is written under that assumption. PyTorch also defines .tril() without the dimension parameters. I don't think extending that is worth it tbh and it makes it much harder to write optimized kernels, so I'd drop it, but I guess @ggerganov should have a say.
Yep, totally fair. This was very much a "let's see if I can engineer any improvements" experiment. I'm honestly not sure whether it makes any significant difference in the places where I currently use it.
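For reference, under the first-two-dims convention the op reduces to a simple 2D mask applied per matrix. A sketch of the lower-triangular case, under one plausible reading of "with constant value" (the helper name and parameters are illustrative; the real op's signature may differ):

```c
#include <stdint.h>

// keep the lower-triangular part (i0 <= i1) of an ne0 x ne1 row-major
// matrix and overwrite everything above the diagonal with a constant
static void tri_lower_f32(float * dst, const float * src,
                          int64_t ne0, int64_t ne1, float constant) {
    for (int64_t i1 = 0; i1 < ne1; i1++) {
        for (int64_t i0 = 0; i0 < ne0; i0++) {
            dst[i1*ne0 + i0] = (i0 <= i1) ? src[i1*ne0 + i0] : constant;
        }
    }
}
```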
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Diego Devesa <slarengh@gmail.com>
@slaren @ggerganov Should be ready for final review.
Adds the ops needed for the new hybrid models, including Qwen3 Next and Kimi Linear.
Prerequisite to merging #16095.