Name and Version

.\llama-cli.exe --version
version: 4942 (fbdfefe)
built with MSVC 19.43.34808.0 for x64

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

Test code

Command line

> .\test-backend-ops.exe grad -o CPY
or
> .\test-backend-ops.exe grad

Problem description & steps to reproduce

Commit #12310 crashes test-backend-ops in grad mode. It doesn't seem to matter which backend.

Steps to reproduce: run test-backend-ops in grad mode.

First Bad Commit

Commit #12310 : SHA ba932df

Relevant log output

[3/23 08:24:26] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-vulkan-x64 > .\test-backend-ops.exe grad -o CPY
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon(TM) Graphics (AMD proprietary driver) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 32768 | matrix cores: none
Testing 2 devices
Backend 1/2: Vulkan0
  Device description: AMD Radeon(TM) Graphics
  Device memory: 256 MB (256 MB free)
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,0,0,0],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,2,1,3],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,3,1,2],permute_dst=[0,2,1,3]): D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:5816: GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0])) failed
[3/23 08:24:39] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-vulkan-x64 > cd ..\llama-b4942-bin-win-avx2-x64\
[3/23 08:24:55] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-avx2-x64 > .\test-backend-ops.exe grad -o CPY
Testing 1 devices
Backend 1/1: CPU
  Device description: AMD Ryzen 7 5700U with Radeon Graphics
  Device memory: 0 MB (0 MB free)
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,0,0,0],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,2,1,3],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,3,1,2],permute_dst=[0,2,1,3]): D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:5816: GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0])) failed
The failing GGML_ASSERT is here:
llama.cpp/ggml/src/ggml.c, lines 5814 to 5818 at fbdfefe