[Enhancement] Improve the mechanism of asking for arguments for tool caller execution #300
Comments
@ElisonSherton The more I think about it, glossary terms seem like the way to go here. The reason is that a customer could ask about X even without any direct relation to a tool. I think that if an agent is to properly explain a concept, it has to be a glossary term. But perhaps we can make this more ergonomic. Maybe you could dynamically define glossary terms in tools:

```python
from typing import Annotated

from parlant.core.glossary import Term

# ToolContext, ToolParameterOptions, ToolResult, and the @tool
# decorator come from Parlant's tool SDK.


@tool
async def my_tool(
    context: ToolContext,
    x: Annotated[str, ToolParameterOptions(
        glossary_term=Term.dynamic(
            name="X",
            description="Description...",
            tags=["my_tag"],
        ),
    )],
) -> ToolResult:
    ...
```

When writing a tool like this and connecting the tool service, the terms would be automatically loaded into the engine under the specified tags. And if the agent is tagged accordingly, the terms would be added to its glossary dynamically. WDYT about this direction?
@kichanyurd I think this is a really good option. I appreciate that the addition of glossary terms would be automatic this way. Otherwise, if we have, say, 10 tools, each with 5 parameters, and on average 2 of them are worthy of being glossary terms, we would have to write 20 glossary terms manually, which seems like substantial effort that could be automated.

@yannemcie, I tried using an enum but it did not work. That is, I specified the type of the parameter (p1) as an enum which had only 2 values. When the user asked what p1 is, the agent's response did not restrict itself to the enum's options but gave additional options which it fabricated on its own.
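For reference, the enum attempt described above might have looked something like the sketch below. This is plain Python to illustrate the annotation pattern, not Parlant's actual tool machinery; `P1Option` and its two values are illustrative names, not from the original report. The point is that the allowed values are fully recoverable from the annotation, yet the agent's natural-language answer may still stray beyond them:

```python
from enum import Enum
from typing import Annotated, get_args, get_type_hints


class P1Option(Enum):
    """Hypothetical enum: the only two valid values for p1."""
    OPTION_A = "option_a"
    OPTION_B = "option_b"


async def my_tool(p1: Annotated[P1Option, "Must be one of the enum values"]) -> str:
    return p1.value


# The engine can mechanically recover the allowed values from the
# annotation, but the agent's reply is generated by the LLM, which
# may still fabricate options beyond these two.
hints = get_type_hints(my_tool, include_extras=True)
enum_type = get_args(hints["p1"])[0]
allowed = [member.value for member in enum_type]
```

So the constraint exists in the type system; the gap is in how the agent is instructed to surface it when clarifying.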
Motivation
Imagine a tool, let's call it Tx, which has several required parameters (a_1, a_2, a_3, ..., a_n) that are needed for its successful execution. Each of the a_i's has been provided as follows.
My observation has been that when a guideline with which this tool is associated is triggered, the tool should run, and a parameter, say a_x, has not been provided, the agent generates a message asking the user to provide the value of a_x.
Then, let's say the user asks a clarifying question like:
- I don't know what's a_x
- What do you mean by a_x?
- I don't understand what you want from me

In such cases, the agent seems to make things up when clarifying what a_x is. Even with type annotations containing Enums, it is not able to answer the user's question accurately.
Solution Proposal
Can we have a field called `ask_schema` or `how_to_ask` in `ToolParameterOptions` which is meant to contain an instruction, or a set of rules, for how the agent should ask the user for that parameter?
As a workaround, I tried adding the parameter name as a glossary term whose value is the exhaustive set of possible values for that parameter, and this approach worked. But we can't have a glossary term for every parameter name, or it will become difficult to manage...
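To make the proposal concrete, the field might look something like the sketch below. This is hypothetical: `how_to_ask` does not exist in Parlant today, and the dataclass here is a simplified stand-in for the library's `ToolParameterOptions`, not its actual definition. The parameter name and option strings are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional, Sequence


@dataclass(frozen=True)
class ToolParameterOptions:
    """Simplified stand-in for Parlant's ToolParameterOptions,
    extended with the proposed (hypothetical) how_to_ask field."""
    description: Optional[str] = None
    examples: Sequence[str] = field(default_factory=list)
    # Proposed: instructions the agent must follow when asking the
    # user to supply this parameter, e.g. listing only valid options.
    how_to_ask: Optional[str] = None


opts = ToolParameterOptions(
    description="Shipping method for the order",
    how_to_ask=(
        "Ask the user to choose exactly one of: 'standard', 'express'. "
        "Do not suggest any other options."
    ),
)
```

The idea is that the message-generation prompt would include `how_to_ask` verbatim whenever the agent needs to request or clarify this parameter, so the agent cannot fabricate options the way it does today.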