Conversation

sirus20x6
Contributor

Add a SIMD path to ggml_vec_set_f32, broadcasting the fill value with the existing GGML_F32_VEC helpers

Keep the scalar tail for leftover elements and non-SIMD builds

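The pattern the PR describes (broadcast the fill value once, store it in vector-sized chunks, keep a scalar tail for the remainder) can be sketched in plain C. This is a portable illustration, not the PR's code: `STEP` is a hypothetical stand-in for `GGML_F32_STEP`, and the inner loop stands in for the broadcast register stored by the `GGML_F32_VEC` intrinsics.

```c
#include <assert.h>

/* Hypothetical stand-in for GGML_F32_STEP: floats handled per unrolled
 * iteration of the main loop on a real SIMD build. */
#define STEP 16

/* Fill z[0..n) with v: chunked main loop, then a scalar tail. */
static void vec_set_f32(const int n, float * z, const float v) {
    const int np = (n & ~(STEP - 1)); // largest multiple of STEP <= n

    for (int i = 0; i < np; i += STEP) {
        // in the real code this is a single GGML_F32_VEC_STORE of a
        // register produced once by GGML_F32_VEC_SET1(v)
        for (int j = 0; j < STEP; ++j) {
            z[i + j] = v;
        }
    }

    for (int i = np; i < n; ++i) { // scalar tail: leftover elements
        z[i] = v;
    }
}
```

The `n & ~(STEP - 1)` mask is the same trick the real helpers use: it rounds `n` down to a multiple of the vector step, so the main loop never reads or writes past the buffer.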
@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Oct 11, 2025
@sirus20x6 sirus20x6 marked this pull request as draft October 11, 2025 17:18
@sirus20x6 sirus20x6 marked this pull request as ready for review October 11, 2025 17:24
@sirus20x6
Contributor Author

Microbenchmarks sometimes show very little change, and sometimes a nice bump:

Baseline (pre-SIMD helpers)

add1
  n=128     throughput=90.09 GB/s
  n=1024    throughput=117.64 GB/s
  n=8192    throughput=55.15 GB/s
  n=65536   throughput=57.87 GB/s
  n=524288  throughput=51.29 GB/s
acc
  n=128     throughput=78.97 GB/s
  n=1024    throughput=118.10 GB/s
  n=8192    throughput=57.20 GB/s
  n=65536   throughput=57.96 GB/s
  n=524288  throughput=48.29 GB/s
acc1
  n=128     throughput=88.28 GB/s
  n=1024    throughput=114.89 GB/s
  n=8192    throughput=131.41 GB/s
  n=65536   throughput=114.31 GB/s
  n=524288  throughput=87.01 GB/s
mul
  n=128     throughput=87.03 GB/s
  n=1024    throughput=61.81 GB/s
  n=8192    throughput=40.49 GB/s
  n=65536   throughput=34.54 GB/s
  n=524288  throughput=31.66 GB/s

Current branch (with SIMD helpers)

add1
  n=128     throughput=100.72 GB/s
  n=1024    throughput=141.98 GB/s
  n=8192    throughput=55.42 GB/s
  n=65536   throughput=59.20 GB/s
  n=524288  throughput=51.91 GB/s
acc
  n=128     throughput=80.21 GB/s
  n=1024    throughput=134.74 GB/s
  n=8192    throughput=68.63 GB/s
  n=65536   throughput=56.20 GB/s
  n=524288  throughput=48.49 GB/s
acc1
  n=128     throughput=89.30 GB/s
  n=1024    throughput=142.30 GB/s
  n=8192    throughput=142.24 GB/s
  n=65536   throughput=118.68 GB/s
  n=524288  throughput=90.02 GB/s
mul
  n=128     throughput=86.29 GB/s
  n=1024    throughput=95.78 GB/s
  n=8192    throughput=42.58 GB/s
  n=65536   throughput=32.26 GB/s
  n=524288  throughput=31.22 GB/s

Comment on lines 80 to 102
inline static void ggml_vec_add1_f32(const int n, float * z, const float * x, const float v) {
#if defined(GGML_SIMD)
    const int np = (n & ~(GGML_F32_STEP - 1));

    GGML_F32_VEC vv = GGML_F32_VEC_SET1(v);

    for (int i = 0; i < np; i += GGML_F32_STEP) {
        for (int j = 0; j < GGML_F32_ARR; ++j) {
            GGML_F32_VEC ax = GGML_F32_VEC_LOAD(x + i + j*GGML_F32_EPR);
            GGML_F32_VEC az = GGML_F32_VEC_ADD(ax, vv);
            GGML_F32_VEC_STORE(z + i + j*GGML_F32_EPR, az);
        }
    }

    for (int i = np; i < n; ++i) {
        z[i] = x[i] + v;
    }
#else
    for (int i = 0; i < n; ++i) {
        z[i] = x[i] + v;
    }
#endif
}
Member


We should make the code consistent about how it handles the leftovers. Here we duplicate the scalar code, while in ggml_vec_add_f32 above we use a common loop iterator. I think we should do the same as in ggml_vec_add_f32.
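For illustration, the common-iterator style the reviewer is referring to declares the loop index once and lets the scalar tail continue from wherever the vector loop stopped, so the leftover handling is written only once. This is a portable sketch, not the ggml source: `STEP` is a hypothetical placeholder for `GGML_F32_STEP`, and the chunked inner loop stands in for the `GGML_F32_VEC` intrinsics.

```c
#include <assert.h>

#define STEP 16 // hypothetical stand-in for GGML_F32_STEP

static void vec_add1_f32(const int n, float * z, const float * x, const float v) {
    int i = 0;
    const int np = (n & ~(STEP - 1));

    // vector-sized chunks (GGML_F32_VEC_LOAD/ADD/STORE in the real code)
    for (; i < np; i += STEP) {
        for (int j = 0; j < STEP; ++j) {
            z[i + j] = x[i + j] + v;
        }
    }

    // leftovers: the same iterator continues, so the scalar
    // code appears exactly once
    for (; i < n; ++i) {
        z[i] = x[i] + v;
    }
}
```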

Contributor Author


Sure thing. The latest push brings the SIMD/scalar functions in line with each other.
