Misc. bug: test-backend-ops grad crashes with GGML_ASSERT error #12520

Open
masamaru-san opened this issue Mar 22, 2025 · 1 comment

Comments

@masamaru-san

Name and Version

.\llama-cli.exe --version
version: 4942 (fbdfefe)
built with MSVC 19.43.34808.0 for x64

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

Test code

Command line

> .\test-backend-ops.exe grad -o CPY
or
> .\test-backend-ops.exe grad

Problem description & steps to reproduce

description

Commit #12310 causes test-backend-ops to crash in grad mode.
The crash does not seem to depend on the backend.

steps to reproduce

Run test-backend-ops in grad mode.

First Bad Commit

Commit #12310 : SHA ba932df

Relevant log output

[3/23 08:24:26] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-vulkan-x64
> .\test-backend-ops.exe grad -o CPY
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon(TM) Graphics (AMD proprietary driver) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 32768 | matrix cores: none
Testing 2 devices

Backend 1/2: Vulkan0
  Device description: AMD Radeon(TM) Graphics
  Device memory: 256 MB (256 MB free)

  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,0,0,0],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,2,1,3],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,3,1,2],permute_dst=[0,2,1,3]): D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:5816: GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0])) failed
[3/23 08:24:39] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-vulkan-x64
> cd ..\llama-b4942-bin-win-avx2-x64\
[3/23 08:24:55] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-avx2-x64
> .\test-backend-ops.exe grad -o CPY
Testing 1 devices

Backend 1/1: CPU
  Device description: AMD Ryzen 7 5700U with Radeon Graphics
  Device memory: 0 MB (0 MB free)

  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,0,0,0],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,2,1,3],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,3,1,2],permute_dst=[0,2,1,3]): D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:5816: GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0])) failed
@masamaru-san
Author

The failing GGML_ASSERT is here:

llama.cpp/ggml/src/ggml.c, lines 5814 to 5818 in fbdfefe:

}
GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0]));
GGML_ASSERT(!src1_needs_grads || ggml_are_same_shape(src1, cgraph->grads[isrc1]));
GGML_ASSERT(!src2_needs_grads || ggml_are_same_shape(src2, cgraph->grads[isrc2]));
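
For context, ggml_are_same_shape compares the logical dimensions (ne[]) of two tensors, and ggml_permute returns a view with those dimensions rearranged. Below is a minimal sketch (my own illustration, not code from the report) of the shape mismatch for the failing case's permute_src=[0,3,1,2]; it assumes only the public ggml API (ggml_new_tensor_4d, ggml_permute, ggml_are_same_shape).

#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true, // metadata only; no tensor data is needed
    };
    struct ggml_context * ctx = ggml_init(params);

    // the failing test case: ne=[1,2,3,4], permute_src=[0,3,1,2]
    struct ggml_tensor * a = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, 1, 2, 3, 4);
    struct ggml_tensor * p = ggml_permute(ctx, a, 0, 3, 1, 2);

    // a->ne = [1,2,3,4] but p->ne = [1,3,4,2], so this prints "same shape: 0";
    // the assert at ggml.c:5816 fails when this same comparison between a
    // source tensor and its gradient comes out false
    printf("same shape: %d\n", ggml_are_same_shape(a, p));

    ggml_free(ctx);
    return 0;
}

If the backward pass registers the gradient for the permuted source with the view's shape rather than the original's, the comparison would fail exactly as in the log above, though I have not confirmed that is the mechanism.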

github-actions bot added the stale label Apr 22, 2025