
feat: Add thought process handling in tool calls and expose ThoughtEvent through stream in AgentChat #5500

Merged (7 commits) on Feb 21, 2025

Conversation

ekzhu (Collaborator) commented Feb 11, 2025

Resolves #5192

Test

import asyncio
import os
from random import randint
from typing import List
from autogen_core.tools import BaseTool, FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console

async def get_current_time(city: str) -> str:
    return f"The current time in {city} is {randint(0, 23)}:{randint(0, 59)}."

tools: List[BaseTool] = [
    FunctionTool(
        get_current_time,
        name="get_current_time",
        description="Get current time for a city.",
    ),
]

model_client = OpenAIChatCompletionClient(
    model="anthropic/claude-3.5-haiku-20241022",
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
    model_info={
        "family": "claude-3.5-haiku",
        "function_calling": True,
        "vision": False,
        "json_output": False,
    }
)

agent = AssistantAgent(
    name="Agent",
    model_client=model_client,
    tools=tools,
    system_message="You are an assistant with some tools that can be used to answer some questions",
)

async def main() -> None:
    await Console(agent.run_stream(task="What is current time of Paris and Toronto?"))

asyncio.run(main())
---------- user ----------
What is current time of Paris and Toronto?
---------- Agent ----------
I'll help you find the current time for Paris and Toronto by using the get_current_time function for each city.
---------- Agent ----------
[FunctionCall(id='toolu_01NwP3fNAwcYKn1x656Dq9xW', arguments='{"city": "Paris"}', name='get_current_time'), FunctionCall(id='toolu_018d4cWSy3TxXhjgmLYFrfRt', arguments='{"city": "Toronto"}', name='get_current_time')]
---------- Agent ----------
[FunctionExecutionResult(content='The current time in Paris is 1:10.', call_id='toolu_01NwP3fNAwcYKn1x656Dq9xW', is_error=False), FunctionExecutionResult(content='The current time in Toronto is 7:28.', call_id='toolu_018d4cWSy3TxXhjgmLYFrfRt', is_error=False)]
---------- Agent ----------
The current time in Paris is 1:10.
The current time in Toronto is 7:28.
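With this change, the pre-tool-call text ("I'll help you find the current time...") is surfaced as a ThoughtEvent in the stream rather than being dropped. A minimal, self-contained sketch of how a stream consumer might dispatch on event types (the classes below are stand-ins defined locally for illustration; the real types live in autogen_agentchat.messages):

```python
from dataclasses import dataclass

# Stand-in event types; the real ones are defined in autogen_agentchat.messages.
@dataclass
class ThoughtEvent:
    content: str
    source: str

@dataclass
class TextMessage:
    content: str
    source: str

def render(event) -> str:
    """Dispatch on event type, roughly as a console UI might label stream items."""
    if isinstance(event, ThoughtEvent):
        return f"[thought from {event.source}] {event.content}"
    return f"[{event.source}] {event.content}"

stream = [
    ThoughtEvent(content="I'll call get_current_time for each city.", source="Agent"),
    TextMessage(content="The current time in Paris is 1:10.", source="Agent"),
]
for item in stream:
    print(render(item))
```

The point is only the dispatch pattern: thoughts arrive as their own event type, so a consumer can display, log, or ignore them independently of the final text.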

ekzhu changed the title from "feat: Add thought process handling in messages and update type annotations" to "feat: Add thought process handling in tool calls and expose ThoughtEvent through stream in AgentChat" on Feb 11, 2025
codecov bot commented Feb 12, 2025

Codecov Report

Attention: Patch coverage is 81.48148% with 5 lines in your changes missing coverage. Please review.

Project coverage is 75.56%. Comparing base (45c6d13) to head (e22ca16).
Report is 1 commits behind head on main.

Files with missing lines Patch % Lines
...xt/src/autogen_ext/models/openai/_openai_client.py 66.66% 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5500      +/-   ##
==========================================
+ Coverage   75.41%   75.56%   +0.14%     
==========================================
  Files         171      171              
  Lines       10467    10483      +16     
==========================================
+ Hits         7894     7921      +27     
+ Misses       2573     2562      -11     
Flag Coverage Δ
unittests 75.56% <81.48%> (+0.14%) ⬆️


ekzhu (Collaborator, Author) commented Feb 12, 2025

@osdemah if you comment here I can add you as a reviewer

ekzhu (Collaborator, Author) commented Feb 19, 2025

@peterychang @lspinheiro Please take a look. The changes I made to OpenAIChatCompletionClient will be needed for Ollama, SK, and AzureAI clients as well.
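The client-side change being ported is, in essence: when a model returns assistant text alongside tool calls, keep the text as a `thought` on the result instead of discarding it. A hedged, self-contained sketch of that rule (the class and field names below are simplified stand-ins, not the actual autogen_core signatures):

```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class FunctionCall:
    id: str
    name: str
    arguments: str

@dataclass
class CreateResult:
    # content is the tool-call list when tools are invoked, otherwise plain text.
    content: Union[str, List[FunctionCall], None]
    thought: Optional[str] = None

def build_result(text: Optional[str], tool_calls: List[FunctionCall]) -> CreateResult:
    """If the model emitted text alongside tool calls, preserve it as `thought`."""
    if tool_calls:
        return CreateResult(content=tool_calls, thought=text or None)
    return CreateResult(content=text)

r = build_result(
    "Checking both cities.",
    [FunctionCall("1", "get_current_time", '{"city": "Paris"}')],
)
print(r.thought)
```

Any client (Ollama, SK, AzureAI) that maps raw completions to a result object would apply the same branch.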

osdemah commented Feb 21, 2025

Thanks @ekzhu! I think this should work. I had a patch like this to catch the thoughts in log handlers, but yours should work better, since it also handles adding the thought to the history.

diff --git a/autogen_agentchat/agents/_assistant_agent.py b/autogen_agentchat/agents/_assistant_agent.py
index 3b109c1..bf01962 100644
--- a/autogen_agentchat/agents/_assistant_agent.py
+++ b/autogen_agentchat/agents/_assistant_agent.py
@@ -44,6 +44,7 @@ from ..messages import (
     MemoryQueryEvent,
     ModelClientStreamingChunkEvent,
     TextMessage,
+    ThoughtMessage,
     ToolCallExecutionEvent,
     ToolCallRequestEvent,
     ToolCallSummaryMessage,
@@ -430,6 +431,11 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
             )
             return
 
+        if model_result.thought:
+            event_logger.info(ThoughtMessage(
+                content=model_result.thought, source=self.name
+            ))
+
         # Process tool calls.
         assert isinstance(model_result.content, list) and all(
             isinstance(item, FunctionCall) for item in model_result.content
diff --git a/autogen_agentchat/messages.py b/autogen_agentchat/messages.py
index 17249e6..8c46fdf 100644
--- a/autogen_agentchat/messages.py
+++ b/autogen_agentchat/messages.py
@@ -47,6 +47,15 @@ class TextMessage(BaseChatMessage):
     type: Literal["TextMessage"] = "TextMessage"
 
 
+class ThoughtMessage(BaseChatMessage):
+    """A text message."""
+
+    content: str
+    """The content of the message."""
+
+    type: Literal["ThoughtMessage"] = "ThoughtMessage"
+
+
 class MultiModalMessage(BaseChatMessage):
     """A multimodal message."""
 
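The log-handler approach described in the patch above can be exercised end to end with plain logging. A sketch under stated assumptions: the logger name and the ThoughtMessage class below are stand-ins defined locally, not the actual autogen_agentchat symbols.

```python
import logging
from dataclasses import dataclass

@dataclass
class ThoughtMessage:  # stand-in for the class added in the patch
    content: str
    source: str

class CollectingHandler(logging.Handler):
    """Collects ThoughtMessage records, as a log handler catching thoughts might."""
    def __init__(self) -> None:
        super().__init__()
        self.thoughts: list[ThoughtMessage] = []

    def emit(self, record: logging.LogRecord) -> None:
        # event_logger.info(obj) stores the object itself on record.msg.
        if isinstance(record.msg, ThoughtMessage):
            self.thoughts.append(record.msg)

event_logger = logging.getLogger("autogen_agentchat.events")  # stand-in logger name
event_logger.setLevel(logging.INFO)
handler = CollectingHandler()
event_logger.addHandler(handler)

# Mimic the patched agent logging a thought before processing tool calls.
event_logger.info(ThoughtMessage(content="Calling the tool now.", source="Agent"))
```

This is the trade-off osdemah notes: a handler can observe thoughts, but unlike the merged change it does not put them into the agent's message history or the stream.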
