ggml : fix quantized cpy op #12310
Conversation
I no longer have garbled output with the quantized cache, only repetitions when reaching the context size, depending on the batch size and the number of slots.
Is there any chance we could add the copy operations for BF16?
@jukofyork bc25236 should cover BF16 <-> F32 copies.
This change does not look right to me. If … (referenced snippet: `llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c`, lines 4166 to 4187 at commit `938c779`)
Force-pushed from `5da8ae3` to `3384f36`.
Good catch. This code wasn't exercised by the tests; it is used when … I used the …
Force-pushed from `3384f36` to `d266584`.
* ggml : fix quantized cpy op
* tests : add cpy tests for all types
* tests : add BF16 copy tests
* tests : fix loop for same-type copy
* tests : add option to permute the dst tensor
Fixed #12253
ref #12253
This should fix `CPY(Q8_0, Q8_0)`.