After many attempts, I keep getting errors when trying to optimize an agent workflow built with LangGraph, or the workflow simply does not optimize:
- I could only optimize the main function that generates the graph (e.g. `generate_report` in the example), and even that would rarely optimize once I created a longer graph.
- I cannot make a graph node's function trainable/bundled directly (`ValueError: no signature found for builtin...`). The workaround I found is to move that function's body into a separate function, mark it trainable, and call it from the node (e.g. `plan_node_train` and `plan_node` in the example). This no longer raises an error, but the function does not appear to be optimized.
- Only the main function generating the graph (not a node) seems to be optimizable; no node, and no function called from a graph node, is ever optimized.
- Optimizing a trace node value does not seem to work either.
```python
from langgraph.graph import StateGraph, START, END
from opto.trace import node, bundle
from opto.optimizers import OptoPrime

state_plan = node(
    "Initial plan: Execute Task A, Task B, and Task C.",
    trainable=True,
    description="This represents the current plan of the agent.",
)

@bundle(trainable=True)
def plan_node_train(state: dict):
    """Creates an initial plan."""
    global state_plan
    state['plan'] = state_plan
    return state

# @bundle(trainable=True)
def plan_node(state: dict):
    return plan_node_train(state)
    # """Creates an initial plan."""
    # state['plan'] = "Initial plan: Execute Task A, Task B, and Task C."
    # return state

@bundle(trainable=True)
def self_critique_node_train(state: dict):
    """Improves the plan based on self-critique."""
    # For illustration, we simply append an improvement comment.
    state['plan'] += " -- Improved after self critique."
    return state

def self_critique_node(state: dict):
    return self_critique_node_train(state)

@bundle(trainable=True)
def finalize_node_train(state: dict):
    """Finalizes the report using the current plan."""
    state['final_report'] = state['plan'] + " -- Final Report."
    return state['final_report']

def finalize_node(state: dict):
    return finalize_node_train(state)

@bundle(trainable=True)
def generate_report():
    # Initialize an empty state
    initial_state = {}

    # Create a simple LangGraph with 3 nodes:
    # START -> plan_node -> self_critique_node -> finalize_node -> END
    graph_builder = StateGraph(dict)
    graph_builder.add_node("plan", plan_node)
    graph_builder.add_node("self_critique", self_critique_node)
    graph_builder.add_node("finalize", finalize_node)
    graph_builder.add_edge(START, "plan")
    graph_builder.add_edge("plan", "self_critique")
    graph_builder.add_edge("self_critique", "finalize")
    graph_builder.add_edge("finalize", END)

    # Compile and invoke the graph
    graph = graph_builder.compile()
    final_state = graph.invoke(initial_state)

    final_report_str = f"Final Report: {final_state}"
    print(final_report_str)
    return final_report_str

parameters = generate_report.parameters()
# [state_plan] + generate_report.parameters() + plan_node_train.parameters()
#     + self_critique_node_train.parameters() + finalize_node_train.parameters()
# parameters = [state_plan]                   # NO OPTIMIZATION
# parameters = plan_node_train.parameters()   # NO OPTIMIZATION
optimizer = OptoPrime(parameters)  # , memory_size=2)
optimizer.zero_feedback()

report = generate_report()

# Run a dummy backward pass and step to update parameters.
optimizer.backward(report, "The report quality is low; the plan and content are far too short and do not align with research practice.")
optimizer.step()

# Print out optimized parameters (for illustration)
for param in optimizer.parameters:
    print("Optimized parameter:", param.name, param.data)
```
I went through the example -- this is actually a known issue in Trace's design. We made a choice not to trace nodes that are operated on inside a function decorated with bundle. What that means is:
```python
a = node(3)

@bundle(trainable=False)
def add1(a):
    """add function"""
    return a + 3  # let's refer to this output as b

def add2(a):
    """add function"""
    return a + 3  # let's refer to this output as b
```
How would add1 and add2 differ?
They actually create two different graphs!
If you use add1, then the graph is a -> add1 ("add function") -> b
If you use add2, then the graph is a, 3 -> + -> b
bundle hides the operations applied inside the function and only connects the inputs to the output.
We made this decision to prevent graph blow-up and allow users to summarize operations inside a function.
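To make the distinction concrete, here is a toy, self-contained sketch in plain Python (not the Trace library's API -- `Node` and `bundled` below are purely illustrative names) of the two recording behaviors: a "bundled" decorator that hides inner operations behind a single named op, versus direct recording of every primitive operation.

```python
class Node:
    """Toy traced value: records the op and parent nodes that produced it."""
    def __init__(self, value, op=None, parents=()):
        self.value = value
        self.op = op            # name of the operation that produced this node
        self.parents = parents  # input nodes of that operation

    def __add__(self, other):
        other = other if isinstance(other, Node) else Node(other)
        # Direct recording: the primitive "+" becomes a graph operator.
        return Node(self.value + other.value, op="+", parents=(self, other))

def bundled(name):
    """Hide inner ops: the output points straight back at the inputs."""
    def deco(fn):
        def wrapper(*args):
            out = fn(*args)
            return Node(out.value, op=name, parents=args)
        return wrapper
    return deco

@bundled("add function")
def add1(a):
    return a + 3  # the inner "+" is hidden behind the "add function" op

def add2(a):
    return a + 3  # the inner "+" is recorded directly

a = Node(3)
b1 = add1(a)
b2 = add2(a)
print(b1.op, [p.value for p in b1.parents])  # add function [3]
print(b2.op, [p.value for p in b2.parents])  # + [3, 3]
```

The bundled graph is `a -> "add function" -> b1`; the unbundled graph is `a, 3 -> + -> b2`, mirroring the add1/add2 example above.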
I suspect this is why generate_report can be updated but the nodes used inside it cannot -- they are simply not part of the computational graph.
We had a mini project with an intern last summer about tracing inside bundle -- but that project may have been put on hold -- we'll figure out a decision on this.
For now, you should write your trainable functions in a modular fashion.
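As an untested sketch of what "modular" could look like here (reusing `state_plan`, `plan_node_train`, `self_critique_node_train`, and `finalize_node_train` from the issue's example): call each bundled step at the top level instead of hiding the whole pipeline inside a single bundled `generate_report`, so that each bundle shows up as an operator in the computational graph.

```python
# Sketch (untested): chain the bundled functions directly so each one
# is a visible operator in the trace graph, rather than being hidden
# inside a bundled generate_report().
state = plan_node_train({})
state = self_critique_node_train(state)
report = finalize_node_train(state)

params = ([state_plan]
          + plan_node_train.parameters()
          + self_critique_node_train.parameters()
          + finalize_node_train.parameters())
optimizer = OptoPrime(params)
optimizer.zero_feedback()
optimizer.backward(report, "The plan and report are far too short.")
optimizer.step()
```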