
Conversation

ggerganov (Member)

In some cases, we might want to fuse ops that are not sequential in the graph. For example, when there are intermediate view ops in-between the fusable ops.

Sample usage: #16102

The github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Sep 20, 2025.
jeffbolznv (Collaborator) left a comment:


Fwiw, with the sorting I have in the vulkan backend, I try to separate empty nodes from non-empty nodes, which makes it less important to be able to skip them.

    for (int i = 0; i < num_ops; ++i) {
        struct ggml_tensor * node = cgraph->nodes[node_idx + i];

    if (node_idxs[i] + num_ops > cgraph->n_nodes) {
jeffbolznv (Collaborator) commented:


Why add num_ops here? Also, it should be >=, I think.

    @@ -615,6 +627,11 @@ inline bool ggml_can_fuse(const struct ggml_cgraph * cgraph, int node_idx, std::
        return ggml_can_fuse(cgraph, node_idx, ops.begin(), (int)ops.size());
    }

    inline bool ggml_can_fuse(const struct ggml_cgraph * cgraph, std::initializer_list<int> node_idx, std::initializer_list<enum ggml_op> ops) {
jeffbolznv (Collaborator) commented:


I'm not totally clear on how you foresee this function being used. I assume the caller needs to look through the nodes and find the ops it's interested in (skipping empty nodes), so I don't think it'll be able to use an initializer list for the node indices.

ggerganov (Member, author) replied:


The idea is to create a list idxs of indices that excludes the empty nodes, and then use it like so:

    // non-empty node indices
    std::vector<int> idxs;

    bool can_fuse(int i0, int i1, std::initializer_list<enum ggml_op> ops) const {
        assert(use_fusion);
        assert(i0 >= 0 && i0 < n_nodes());
        assert(i1 >= 0 && i1 < n_nodes());
        assert(ops.size() == 2);

        return ggml_can_fuse(gf, { idxs[i0], idxs[i1] }, ops);
    }

Btw, I really need an API to tell whether 2 ops are fusable; I don't think there is any benefit to supporting more than a pair of ops. The assumption is that if we can fuse N ops, then we can certainly fuse N-1 ops. So at step N we only need to check whether ops N and N+1 are fusable, with no need to iterate again over all the previous ops.

jeffbolznv (Collaborator) replied:


If you have to rebuild the initializer_list from the vector, why not just pass an int* instead?

ggerganov (Member, author) replied:


Right, I can do that. Removed the extra overload.

ggerganov (Member, author) commented on Sep 20, 2025:

> Fwiw, with the sorting I have in the vulkan backend, I try to separate empty nodes from non-empty nodes, which makes it less important to be able to skip them.

Yes, I noticed that this works, but I'm not really confident that it won't break any assumptions in the future. Specifically, this case:

    c0 = ADD(a, b)
    f0 = VIEW(...)
    c1 = ADD(c0, f0)
    f1 = VIEW(...)
    c2 = ADD(c1, f1)
    ... etc

If I just move all the adds together and put the views after them, everything seems to work. But I have also seen that, depending on the order of the views, the allocr can do different things. For example, at some point I considered making graph_optimize simply put all empty nodes at the end of the graph first and then work with a truncated graph with fewer nodes. This also runs correctly, but I did notice a different allocation pattern for the nodes. Hence I have some doubts about the implications of changing the order of the views (or of any other empty node).

ggerganov merged commit 4f324a5 into master on Sep 22, 2025 (60 of 68 checks passed) and deleted the gg/ggml-fuse-overload branch on September 22, 2025 at 08:12.