Optimisation of per-token CPU activities for GPU inference #7456

Closed
Description

@agray3

When using a GPU backend, each token evaluation involves not only computation on the GPU but also significant CPU computation, which can potentially be optimized.

Here are some timing measurements of the per-token critical path for llama2 Q4_K_M 7B and 13B models on A100 and H100 GPUs.

Firstly, here are absolute times:

[chart: absolute per-token critical-path times for each activity and configuration]

and here are the same data presented as a percentage breakdown in each case:

[chart: the same timings shown as a percentage breakdown per configuration]

CUDA Graph Execution is the time spent executing the compute graph on the GPU; it accounts for around 85-90% of the time taken to evaluate each token.

The remaining 10-15% of the time is taken by CPU activities, the most significant of which are discussed below.

GGML Graph Preparation: llama_build_graph and ggml_backend_sched_split_graph build and prepare the compute graph in GGML format for each token, which is ultimately translated into a CUDA graph for execution. However, we know from the CUDA graph implementation (#6763) that only very minor adjustments are required for the majority of tokens. Therefore, most of this work seems unnecessary: we should be able to cache and reuse components of the GGML graph across tokens, in a similar way to how we reuse each CUDA graph with only minor adjustments. For example, in build_llama() we could save state across tokens rather than performing a full rebuild for every token, as sketched below.
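As a rough illustration of the caching idea, here is a hypothetical, standalone sketch (not the real ggml/llama.cpp API; compute_graph, build_full_graph and patch_graph are stand-ins): build the graph once, then on subsequent tokens only patch the per-token inputs, falling back to a full rebuild when the graph topology actually changes.

```cpp
#include <cstdint>
#include <memory>

// Stand-in for a built ggml_cgraph: in reality this would hold the graph nodes.
struct compute_graph {
    int32_t n_past; // example of a per-token value baked into the graph
};

struct graph_cache {
    std::unique_ptr<compute_graph> graph;
    bool valid = false;
};

// Expensive path: wire up every node (roughly what build_llama() does today).
static compute_graph * build_full_graph(int32_t n_past) {
    return new compute_graph{n_past};
}

// Cheap path: update only the inputs that change from token to token.
static void patch_graph(compute_graph & g, int32_t n_past) {
    g.n_past = n_past;
}

// Per-token entry point: rebuild only when the graph topology changes,
// otherwise reuse the cached graph and patch the per-token values.
compute_graph * get_graph_for_token(graph_cache & cache, int32_t n_past, bool topology_changed) {
    if (!cache.valid || topology_changed) {
        cache.graph.reset(build_full_graph(n_past));
        cache.valid = true;
    } else {
        patch_graph(*cache.graph, n_past);
    }
    return cache.graph.get();
}
```

The real change would of course need to track which properties can actually differ between tokens (batch size, prompt vs. decode, etc.), analogous to the checks already done when reusing CUDA graphs with minor adjustments in #6763.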

Sampling: llama_sampling_sample performs sampling on the CPU, for every token, over logits that have been evaluated on the GPU. In principle this sampling could be ported to the GPU; a rough sketch of the simplest (greedy) case is given below.
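As a minimal sketch of what GPU-side sampling could look like (plain CUDA, not llama.cpp code; the kernel and host-side names are illustrative), greedy/argmax sampling can be done with a single reduction over the logits so that only the chosen token id has to be copied back to the host:

```cpp
#include <cuda_runtime.h>
#include <cfloat>

// Greedy (argmax) sampling over the logits, entirely on the GPU.
// Launch with a single block of 256 threads; each thread scans a strided
// slice of the vocabulary, then the block reduces to a single winner.
__global__ void argmax_logits(const float * logits, int n_vocab, int * out_id) {
    __shared__ float s_val[256];
    __shared__ int   s_idx[256];

    float best_val = -FLT_MAX;
    int   best_idx = 0;
    for (int i = threadIdx.x; i < n_vocab; i += blockDim.x) {
        if (logits[i] > best_val) {
            best_val = logits[i];
            best_idx = i;
        }
    }
    s_val[threadIdx.x] = best_val;
    s_idx[threadIdx.x] = best_idx;
    __syncthreads();

    // Tree reduction within the block to find the global maximum.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride && s_val[threadIdx.x + stride] > s_val[threadIdx.x]) {
            s_val[threadIdx.x] = s_val[threadIdx.x + stride];
            s_idx[threadIdx.x] = s_idx[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        *out_id = s_idx[0];
    }
}

// Host side: only a single int comes back over PCIe instead of the full logits.
// int id;
// argmax_logits<<<1, 256>>>(d_logits, n_vocab, d_id);
// cudaMemcpy(&id, d_id, sizeof(int), cudaMemcpyDeviceToHost);
```

Full temperature/top-k/top-p sampling would need more than this, but the same principle (keep the logits on the device and return only the sampled token id) applies.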

I will continue to investigate these optimization possibilities.
