
Reproducing results with local models on circle packing #44

@rahulmoorthy19


Thank you for the excellent repository and the detailed setup instructions.

I’m trying to set up ShinkaEvolve with local models using Ollama. I ran it with qwen3:8b as the main LLM (and meta LLM), and qwen3-embedding:8b as the embedding model. However, even after ~150 generations with the default large_budget configuration (only changing the LLM and embedding models as mentioned), the best score I’ve been able to achieve is about 1.6.
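For anyone trying to reproduce this, the Ollama side can be sanity-checked independently of ShinkaEvolve with a small script like the one below (a minimal sketch; it assumes Ollama's default OpenAI-compatible endpoint at http://localhost:11434/v1 and the model tags from my setup):

```python
# Minimal sanity check that the local Ollama server answers with the models
# used in this run. Assumes Ollama's default OpenAI-compatible endpoint at
# http://localhost:11434/v1; adjust the base_url if the server runs elsewhere.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Chat model used as the main (and meta) LLM in this run.
chat = client.chat.completions.create(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Reply with the single word: ok"}],
)
print(chat.choices[0].message.content)

# Embedding model used for the embedding step.
emb = client.embeddings.create(model="qwen3-embedding:8b", input="circle packing")
print(len(emb.data[0].embedding))
```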

I wanted to check whether this might be a limitation of using an 8B-parameter model for this task, or whether you suspect something else is going on.

Thank you!
