Feature request: inference parallelisation #51

@hippalectryon-0

Description

As per the README, "The framework supports parallel evaluation of candidates". However, it seems that this only applies to the testing evaluation, not to the LLM querying itself.

Many problems, such as circle packing, have testing times that are very small compared to the querying times. I believe it would be highly beneficial to have some parallelization of the querying as well.

I expect this is not entirely trivial, since the current sequential approach ensures that the (n+1)-th iteration can benefit from the n-th, but it should still be fairly easy to batch this logic.
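For illustration, here is a minimal sketch of the batching idea with `asyncio`: queries within a batch run concurrently, while batches themselves stay sequential so each batch can still build on the previous one. All names below (`query_llm`, `evolve`, the prompt format) are hypothetical stand-ins, not the framework's actual API.

```python
import asyncio

async def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real async LLM call; simulates latency
    # so the sketch is runnable.
    await asyncio.sleep(0.01)
    return f"candidate for: {prompt}"

async def generate_batch(prompts: list[str]) -> list[str]:
    # All queries in a batch run concurrently, so wall time is roughly
    # one query's latency rather than the sum over the batch.
    return await asyncio.gather(*(query_llm(p) for p in prompts))

def evolve(n_iterations: int, batch_size: int) -> list[str]:
    best = "seed program"
    history: list[str] = []
    for _ in range(n_iterations):
        # Prompts for batch n+1 are built from the result of batch n,
        # preserving the sequential feedback loop between batches.
        prompts = [f"improve: {best}" for _ in range(batch_size)]
        candidates = asyncio.run(generate_batch(prompts))
        best = candidates[0]  # stand-in for "pick the best by test score"
        history.extend(candidates)
    return history

results = evolve(n_iterations=2, batch_size=3)
print(len(results))  # 2 batches of 3 candidates -> 6
```

The trade-off is that candidates within a batch cannot learn from each other, only from earlier batches; a batch size of 1 recovers the current fully sequential behaviour.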

NB: I noticed some unused code, e.g. AsyncLLMClient. Was that an attempt in this direction?

Best,
