Replies: 2 comments
The timeout warnings you're seeing may be caused by OpenEvolve's default 60-second timeout being too aggressive for providers like OpenRouter and NovitaAI. Add this to your config.yaml (a sketch based on the llm section of the default config; check the key names against your version):
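```yaml
llm:
  timeout: 600     # seconds to wait per LLM request (the default is 60)
  retries: 3       # retry calls that fail or time out
  retry_delay: 5   # seconds to wait between retries
```

Let me know if this resolves it!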
Thanks for the reply. The function_minimization example already has a timeout of 120s. I tried 240, to no avail. Most calls, though not all, still time out.
I've been trying various models, various providers, and two hosts (OpenRouter & NovitaAI).
For all of them, the LLM calls end up taking many minutes and often time out.
I don't understand why; models like Gemini 2.0 Flash have an e2e latency of maybe 2-3 seconds on OpenRouter and work fine in other programs.
But here it somehow gets massively overloaded and the calls time out. This is all with the basic function_minimization example.
Anyone else having this issue?
EDIT: It seems no errors show up when using the OpenAI API. I guess something OpenEvolve is doing doesn't play well with other API providers somehow?
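For context, a sketch of the kind of provider override involved (the llm keys follow OpenEvolve's default config, and the values here are placeholders, not actual settings):

```yaml
llm:
  api_base: "https://openrouter.ai/api/v1"      # OpenRouter's OpenAI-compatible endpoint
  api_key: "sk-or-..."                          # placeholder key
  primary_model: "google/gemini-2.0-flash-001"  # example model id (newer versions use a models list)
```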