Thank you for the excellent repository and the detailed setup instructions.
I’m trying to set up ShinkaEvolve with local models using Ollama. I ran it with qwen3:8b as both the main LLM and the meta LLM, and qwen3-embedding:8b as the embedding model. However, even after ~150 generations with the default large_budget configuration (changing only the LLM and embedding models mentioned above), the best score I’ve been able to achieve is about 1.6.
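For reference, here is a minimal sketch of how I sanity-checked that both models respond through Ollama's OpenAI-compatible endpoint (the default `http://localhost:11434/v1`; the `api_key` is a placeholder that Ollama ignores), so the low score shouldn't be a connectivity issue:

```python
# Sanity check: confirm both models are reachable via Ollama's
# OpenAI-compatible endpoint before pointing ShinkaEvolve at them.
from openai import OpenAI

# Ollama's documented default local endpoint; the key is a dummy value.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Chat completion with the main/meta LLM.
chat = client.chat.completions.create(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Reply with OK."}],
)
print(chat.choices[0].message.content)

# Embedding request with the embedding model.
emb = client.embeddings.create(model="qwen3-embedding:8b", input="hello")
print(len(emb.data[0].embedding))  # embedding dimensionality
```

Both calls succeed in my setup, so the models themselves are being served correctly.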
I wanted to check whether this might be a limitation of using an 8B-parameter model for this task, or whether you suspect something else.
Thank you!