'`get_make` is not strict. Only `strict` function tools can be auto-parsed' #4447
Comments
Could you please post a complete code snippet, especially showing how you are creating the tools array and then passing that array to the agent you created.
Note: I did find one issue in the OpenAI Python repo that might help in understanding this issue.
Thanks. It does look like it has something to do with the OpenAI client we are using.
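For context, here is a minimal sketch (my own illustration, not code from the thread) of the OpenAI Python SDK behaviour that appears to produce this error: `client.beta.chat.completions.parse()` checks, client-side, that every supplied function tool is marked `strict` whenever a Pydantic `response_format` is used, and rejects non-strict tools.

```python
from openai import OpenAI
from pydantic import BaseModel


class CandidatesGeneratorFormat(BaseModel):
    candidates: list[str]


# A non-strict function tool, similar to what a plain Python function gets
# converted into when no strict flag is set on the generated schema.
word_len_tool = {
    "type": "function",
    "function": {
        "name": "word_len",
        "description": "Return the length of a word.",
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],
        },
        # Note: no "strict": True here.
    },
}

client = OpenAI()  # needs OPENAI_API_KEY set; the strictness check itself runs client-side

# With a Pydantic response_format, parse() refuses non-strict tools and raises an
# error like: "`word_len` is not strict. Only `strict` function tools can be auto-parsed"
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Crossword clue: 5 letters"}],
    tools=[word_len_tool],
    response_format=CandidatesGeneratorFormat,
)
```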
I am facing the same issue. Is there a fix for this?
What is your package version? And your code?
Here is my code:

```python
def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)


candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)
candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)
candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)
```
I see, this is a bug. We need to allow an option to pass `strict=True` to the function schema when the response format is a JSON schema.
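For illustration, a sketch (assumed, not from the thread) of what a strict function tool looks like in the OpenAI Chat Completions API: `"strict": True` on the function definition, with every property listed in `required` and `additionalProperties` set to `False`. This is roughly the shape the generated function schema would need when a JSON-schema response format is in use.

```python
# Hypothetical hand-written schema for the `word_len` tool, marked strict so the
# OpenAI client can auto-parse it alongside a structured-output response format.
word_len_strict_tool = {
    "type": "function",
    "function": {
        "name": "word_len",
        "description": "Return the length of a word.",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {"word": {"type": "string"}},
            "required": ["word"],            # strict mode: all properties must be required
            "additionalProperties": False,   # strict mode: extra properties must be disallowed
        },
    },
}
```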
Full repro here:

```python
import asyncio

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)


class CandidatesGeneratorFormat(BaseModel):
    candidates: list[str]


candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)
candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)
candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)


async def main() -> None:
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))


asyncio.run(main())
```
What happened?
I have a function `get_make`. When I added it to a client configured for JSON output, it throws the error above.
What did you expect to happen?
The function should be invoked if needed.
How can we reproduce it (as minimally and precisely as possible)?
Use a client with strict `json_output` together with a function call.
AutoGen version
0.4.0.dev8
Which package was this bug in
Core
Model used
No response
Python version
No response
Operating system
No response
Any additional info you think would be helpful for fixing this bug
No response