How to use llm outputs in the on_handoff function #567

Closed as not planned
@Salaudev

Description

Hi everyone!
I'm trying to create an agent that extracts load details and detects warnings. After that, it should hand off to another agent.

Once the details are extracted, we need to send a request to another service to visualize them.
I want to do this using `on_handoff`. Is there any way to get the LLM output in this function?

import os

from agents import Agent, OpenAIChatCompletionsModel, handoff
from agents.extensions.handoff_prompt import prompt_with_handoff_instructions

def update_details() -> str:
    # send request to update details with A structure
    # send request to upsert warnings with B structure
    return details

# Define the agents
extractor_agent = Agent[OrchestratorContext](
    name="Triage Agent",
    instructions=prompt_with_handoff_instructions(
        "You are responsible for extracting the load details ...."
    ),
    model=OpenAIChatCompletionsModel(
        model=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"), openai_client=azure_client
    ),
    handoffs=[handoff(agent=intent_detector_agent, on_handoff=update_details)],
)

Metadata

Assignees: none
Labels: question (Question about using the SDK), stale
Milestone: none