Description
Hi, I have been looking through the various examples in the LLM prompt optimisation. I am trying to figure out how to optimise my own prompt by testing with the tutorial code. I noticed that LLM feedback is enabled in the config (`use_llm_feedback: true`), which supposedly improves performance. However, unlike the circle-with-artifacts example, the `evaluator.py` in the LLM prompt optimisation example does not seem to have the artifact code option available.
Its evaluator only defines `def evaluate(program_path):` with no artifacts in the return value. So if artifacts are not implemented, this option can't be used? If I understand correctly, the feedback provides incorrect examples to the LLM optimiser based on the current prompt.
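For reference, here is a minimal sketch of what an artifact-aware evaluator could look like, following the pattern from the artifacts examples. Note the assumptions: `EvaluationResult` is redefined here as a stand-in rather than imported from the library (the real import path may differ), and the scoring logic and `failure_examples` key are purely hypothetical placeholders, not the tutorial's actual metric.

```python
from dataclasses import dataclass, field

# Stand-in for the library's EvaluationResult; in a real evaluator you would
# import it from the package instead -- the exact import path and field names
# are assumptions here, not confirmed against the tutorial code.
@dataclass
class EvaluationResult:
    metrics: dict = field(default_factory=dict)
    artifacts: dict = field(default_factory=dict)

def evaluate(program_path):
    """Score the evolved prompt and attach textual feedback as artifacts."""
    with open(program_path) as f:
        prompt = f.read()

    # Hypothetical scoring: replace with the tutorial's real metric.
    score = min(1.0, len(prompt.split()) / 100)

    # Artifacts are free-form strings surfaced to the LLM on the next
    # iteration, e.g. failing examples produced by the current prompt.
    failures = "Example input X -> wrong output Y (expected Z)"

    return EvaluationResult(
        metrics={"combined_score": score},
        artifacts={"failure_examples": failures},
    )
```

If the tutorial's evaluator returns a plain metrics dict instead of a result object like this, that would explain why the artifact channel is unused even with `use_llm_feedback: true`.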