
get_make is not strict. Only strict function tools can be auto-parsed' #4447

Open
gilada-shubham opened this issue Dec 2, 2024 · 10 comments · May be fixed by #5507
Comments

@gilada-shubham

gilada-shubham commented Dec 2, 2024

What happened?

I have a function `get_make`:

import json

def get_make() -> str:
    return json.dumps(
        [
            {"make": "FORD", "make_code": "FRD"},
            {"make": "AUDI", "make_code": "AUD"},
        ],
        indent=4,
    )

When I added it to a client with JSON output:

response = await self._model_client.create(
    llm_messages,
    cancellation_token=ctx.cancellation_token,
    json_output=True,
    tools=self._tool_schema,
    extra_create_args={"response_format": EntityResponse},
)

it throws the error:

`get_make` is not strict. Only `strict` function tools can be auto-parsed

What did you expect to happen?

Invoke the function if needed.

How can we reproduce it (as minimally and precisely as possible)?

Use a client with strict `json_output` together with a function call.

AutoGen version

0.4.0.dev8

Which package was this bug in

Core

Model used

No response

Python version

No response

Operating system

No response

Any additional info you think would be helpful for fixing this bug

No response

@ekzhu
Collaborator

ekzhu commented Dec 2, 2024

Could you please post a complete code snippet, especially showing how `self._tool_schema` is generated?

@ekzhu ekzhu added awaiting-op-response Issue or pr has been triaged or responded to and is now awaiting a reply from the original poster needs-triage and removed needs-triage labels Dec 2, 2024
@gilada-shubham
Author

I am creating the tools array:

def get_tools(self):
    tools = []
    tools.append(FunctionTool(get_make, "function to get list of vehicle make."))
    return tools

and then pass that array to the agent I created. Inside the agent I do:

self._tool_schema = [tool.schema for tool in tools]
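The error itself comes from the OpenAI client's auto-parse path, which rejects any function tool whose definition does not carry `"strict": True`. Below is a simplified sketch of that validation (a hypothetical re-implementation for illustration, not the actual openai-python code), run against a hand-written tool schema shaped like the one `FunctionTool` would produce for `get_make`:

```python
# Simplified sketch (hypothetical, not the actual openai-python source) of the
# validation that the beta parse() path applies to input tools: any function
# tool without "strict": True is rejected before the request is even sent.

def validate_input_tools(tools: list[dict]) -> None:
    for tool in tools:
        if tool.get("type") != "function":
            continue
        fn = tool["function"]
        if fn.get("strict") is not True:
            raise ValueError(
                f"`{fn['name']}` is not strict. "
                "Only `strict` function tools can be auto-parsed"
            )

# A schema like the one generated above, with no "strict" key, fails:
get_make_tool = {
    "type": "function",
    "function": {
        "name": "get_make",
        "description": "function to get list of vehicle make.",
        "parameters": {"type": "object", "properties": {}},
    },
}

try:
    validate_input_tools([get_make_tool])
except ValueError as e:
    print(e)
```

This matches the traceback further down in this thread, where `openai/lib/_parsing/_completions.py` raises the same `ValueError`.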

@github-actions github-actions bot removed the awaiting-op-response Issue or pr has been triaged or responded to and is now awaiting a reply from the original poster label Dec 3, 2024
@gilada-shubham
Author

Note: I found an issue in the OpenAI Python repo that might help in understanding this one:

openai/openai-python#1733

@ekzhu
Collaborator

ekzhu commented Dec 3, 2024

Thanks. It does look like something to do with the OpenAI client we are using.

@ekzhu ekzhu closed this as completed Dec 7, 2024
@priyathamkat

I am facing the same issue. Is there a fix for this?

@ekzhu
Collaborator

ekzhu commented Feb 12, 2025

I am facing the same issue. Is there a fix for this?

What is your package version? And your code?

@priyathamkat

autogen-core version is 0.4.6, openai client is 1.61.1.

Here is my code:

def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)

candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)
candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)
candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)

@ekzhu ekzhu reopened this Feb 12, 2025
@ekzhu ekzhu added this to the python-v0.4.7 milestone Feb 12, 2025
@ekzhu
Collaborator

ekzhu commented Feb 12, 2025

I see, this is a bug. We need to allow an option to pass `"strict": True` to the function schema when the response format is a JSON schema.
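For reference, a hedged sketch of what a strict version of the `word_len` tool definition would need to look like under OpenAI's structured-outputs requirements: `"strict": True` on the function, `"additionalProperties": False` on the parameters object, and every property listed in `"required"`. This is a hand-built example of the target shape, not what AutoGen currently emits:

```python
# Hand-built sketch of a strict function tool definition, per OpenAI's
# structured-outputs requirements. AutoGen does not currently generate the
# "strict" key; this is the shape the proposed fix would need to produce.

word_len_tool = {
    "type": "function",
    "function": {
        "name": "word_len",
        "description": "Return the length of a word.",
        "strict": True,  # required for the auto-parse path
        "parameters": {
            "type": "object",
            "properties": {
                "word": {
                    "type": "string",
                    "description": "The word to return the length of.",
                },
            },
            "required": ["word"],  # strict mode: all properties must be required
            "additionalProperties": False,  # strict mode: no extra keys allowed
        },
    },
}
```

A dict of this shape can be passed directly in the `tools` argument of a raw `chat.completions` call; the question here is making AutoGen's `FunctionTool` schema generation produce it when a JSON-schema response format is in use.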

@ekzhu
Collaborator

ekzhu commented Feb 12, 2025

Full repro here:

import asyncio
from pydantic import BaseModel
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console

def word_len(word: str) -> int:
    """Return the length of a word.

    Args:
        word (str): The word to return the length of.

    Returns:
        int: The length of the word.
    """
    return len(word)

class CandidatesGeneratorFormat(BaseModel):
    candidates: list[str]


candidates_generator_model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini",
    response_format=CandidatesGeneratorFormat,
)
candidates_generator_system_prompt = (
    """Generate a list of candidate answers for the crossword clue of given length. Use the `word_len` tool to """
    """determine the length of a word."""
)
candidates_generator = AssistantAgent(
    name="candidates_generator",
    model_client=candidates_generator_model_client,
    tools=[word_len],
    system_message=candidates_generator_system_prompt,
    reflect_on_tool_use=True,
)

async def main() -> None:
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))

asyncio.run(main())
---------- user ----------
Crossword clue: 5 letters
Traceback (most recent call last):
  File "/Users/ekzhu/autogen/python/test.py", line 41, in <module>
    asyncio.run(main())
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.12.7_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/test.py", line 39, in main
    result = await Console(candidates_generator.run_stream(task="Crossword clue: 5 letters"))
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py", line 117, in Console
    async for message in stream:
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_base_chat_agent.py", line 176, in run_stream
    async for message in self.on_messages_stream(input_messages, cancellation_token):
  File "/Users/ekzhu/autogen/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py", line 415, in on_messages_stream
    model_result = await self._model_client.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/packages/autogen-ext/src/autogen_ext/models/openai/_openai_client.py", line 529, in create
    result: Union[ParsedChatCompletion[BaseModel], ChatCompletion] = await future
                                                                     ^^^^^^^^^^^^
  File "/Users/ekzhu/autogen/python/.venv/lib/python3.12/site-packages/openai/resources/beta/chat/completions.py", line 423, in parse
    _validate_input_tools(tools)
  File "/Users/ekzhu/autogen/python/.venv/lib/python3.12/site-packages/openai/lib/_parsing/_completions.py", line 53, in validate_input_tools
    raise ValueError(
ValueError: `word_len` is not strict. Only `strict` function tools can be auto-parsed

@ekzhu
Collaborator

ekzhu commented Feb 12, 2025

#5507

Successfully merging a pull request may close this issue.

3 participants