
[Bug]: No way to set or register custom LLM to GroupChat selector agents #2929

Closed · brycecf opened this issue Jun 12, 2024 · 12 comments
Labels: models (pertains to using alternate, non-GPT models, e.g., local models, llama, etc.)


brycecf commented Jun 12, 2024

Describe the bug

GroupChat's _auto_select_speaker method creates two internally defined agents: checking_agent and speaker_selection_agent.

When using a custom model client, there is no way to assign the custom LLM config to either of these agents or to call register_model_client on them. Registering the client on the GroupChatManager has no effect.
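
Until those internal agents are configurable, one workaround is to avoid them entirely: speaker_selection_method accepts a callable, and checking_agent and speaker_selection_agent are only created for the LLM-based "auto" method. A minimal sketch (the function name and selection rule are illustrative; user_proxy, analyst, and coder refer to the agents defined in the repro below):

from typing import Optional, Union

import autogen

def select_next_speaker(
    last_speaker: autogen.Agent, groupchat: autogen.GroupChat
) -> Optional[Union[autogen.Agent, str]]:
    # Deterministic rule in place of the LLM-based selector: hand control
    # back to the user proxy after any other agent speaks, otherwise fall
    # back to round-robin ordering.
    if last_speaker.name != "User_proxy":
        return groupchat.agent_by_name("User_proxy")
    return "round_robin"

groupchat = autogen.GroupChat(
    agents=[user_proxy, analyst, coder],  # the agents from the repro below
    messages=[],
    max_round=10,
    # A callable here bypasses _auto_select_speaker, so checking_agent and
    # speaker_selection_agent are never created.
    speaker_selection_method=select_next_speaker,
)

This sidesteps the bug rather than fixing it: you lose LLM-driven speaker selection, but no unregistered internal clients are ever instantiated.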

Steps to reproduce

  1. Use AnthropicClient defined here. Any other custom model client would also work.
  2. Run the following code:
import os

import autogen
# Import the custom AnthropicClient referenced in step 1.

llm_config={
    "config_list": [
        {
            # Choose your model name.
            "model": "claude-3-sonnet-20240229",
            # You need to provide your API key here.
            "api_key": os.getenv("ANTHROPIC_API_KEY"),
            "base_url": "https://api.anthropic.com",
            "api_type": "anthropic",
            "model_client_cls": "AnthropicClient",
        }
    ],
    "cache_seed": 42 # Turns off caching, useful for testing different models
}

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    code_execution_config={
        "last_n_messages": 2,
        "work_dir": "groupchat",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    human_input_mode="ALWAYS",
    is_termination_msg=lambda msg: not msg["content"]
)

# define two assistant agents
coder = autogen.AssistantAgent(
    name="Coder",
    llm_config=llm_config,
    system_message=autogen.AssistantAgent.DEFAULT_SYSTEM_MESSAGE
)

analyst = autogen.AssistantAgent(
    name="Data_analyst",
    system_message="You are a data analyst that offers insight into data.",
    llm_config=llm_config,
)
# define group chat
groupchat = autogen.GroupChat(
    agents=[user_proxy, analyst, coder], messages=[], max_round=10, 
    max_retries_for_selecting_speaker=2, select_speaker_auto_verbose=True,
    role_for_select_speaker_messages='user'
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
coder.register_model_client(model_client_cls= AnthropicClient)
analyst.register_model_client(model_client_cls= AnthropicClient)
manager.register_model_client(model_client_cls= AnthropicClient)
  3. See error:
RuntimeError: Model client(s) ['AnthropicClient'] are not activated. Please register the custom model clients using `register_model_client` or filter them out form the config list.

Model Used

claude-3-sonnet-20240229

Expected Behavior

No error: the LLM configuration, including the custom model client, should propagate to the internally created checking_agent and speaker_selection_agent.

Screenshots and logs

Stack trace:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[3], line 1
----> 1 user_proxy.initiate_chat(
      2     manager,
      3     message="Get the number of issues and pull requests for the repository 'microsoft/autogen' over the past three weeks and offer analysis to the data. You should print the data in csv format grouped by weeks.",
      4 )
      5 # type exit to terminate the chat

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:1018, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
   1016     else:
   1017         msg2send = self.generate_init_message(message, **kwargs)
-> 1018     self.send(msg2send, recipient, silent=silent)
   1019 summary = self._summarize_chat(
   1020     summary_method,
   1021     summary_args,
   1022     recipient,
   1023     cache=cache,
   1024 )
   1025 for agent in [self, recipient]:

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:655, in ConversableAgent.send(self, message, recipient, request_reply, silent)
    653 valid = self._append_oai_message(message, "assistant", recipient)
    654 if valid:
--> 655     recipient.receive(message, self, request_reply, silent)
    656 else:
    657     raise ValueError(
    658         "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
    659     )

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:818, in ConversableAgent.receive(self, message, sender, request_reply, silent)
    816 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    817     return
--> 818 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
    819 if reply is not None:
    820     self.send(reply, sender, silent=silent)

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:1972, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
   1970     continue
   1971 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 1972     final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
   1973     if logging_enabled():
   1974         log_event(
   1975             self,
   1976             "reply_func_executed",
   (...)
   1980             reply=reply,
   1981         )

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/groupchat.py:1053, in GroupChatManager.run_chat(self, messages, sender, config)
   1050     break
   1051 try:
   1052     # select the next speaker
-> 1053     speaker = groupchat.select_speaker(speaker, self)
   1054     if not silent:
   1055         iostream = IOStream.get_default()

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/groupchat.py:538, in GroupChat.select_speaker(self, last_speaker, selector)
    535     return self.next_agent(last_speaker)
    537 # auto speaker selection with 2-agent chat
--> 538 return self._auto_select_speaker(last_speaker, selector, messages, agents)

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/groupchat.py:659, in GroupChat._auto_select_speaker(self, last_speaker, selector, messages, agents)
    656     start_message = messages[-1]
    658 # Run the speaker selection chat
--> 659 result = checking_agent.initiate_chat(
    660     speaker_selection_agent,
    661     cache=None,  # don't use caching for the speaker selection chat
    662     message=start_message,
    663     max_turns=2
    664     * max(1, max_attempts),  # Limiting the chat to the number of attempts, including the initial one
    665     clear_history=False,
    666     silent=not self.select_speaker_auto_verbose,  # Base silence on the verbose attribute
    667 )
    669 return self._process_speaker_selection_result(result, last_speaker, agents)

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:1011, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
   1009         if msg2send is None:
   1010             break
-> 1011         self.send(msg2send, recipient, request_reply=True, silent=silent)
   1012 else:
   1013     self._prepare_chat(recipient, clear_history)

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:655, in ConversableAgent.send(self, message, recipient, request_reply, silent)
    653 valid = self._append_oai_message(message, "assistant", recipient)
    654 if valid:
--> 655     recipient.receive(message, self, request_reply, silent)
    656 else:
    657     raise ValueError(
    658         "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
    659     )

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:818, in ConversableAgent.receive(self, message, sender, request_reply, silent)
    816 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    817     return
--> 818 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
    819 if reply is not None:
    820     self.send(reply, sender, silent=silent)

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:1972, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
   1970     continue
   1971 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 1972     final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
   1973     if logging_enabled():
   1974         log_event(
   1975             self,
   1976             "reply_func_executed",
   (...)
   1980             reply=reply,
   1981         )

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:1340, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
   1338 if messages is None:
   1339     messages = self._oai_messages[sender]
-> 1340 extracted_response = self._generate_oai_reply_from_client(
   1341     client, self._oai_system_message + messages, self.client_cache
   1342 )
   1343 return (False, None) if extracted_response is None else (True, extracted_response)

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/agentchat/conversable_agent.py:1359, in ConversableAgent._generate_oai_reply_from_client(self, llm_client, messages, cache)
   1356         all_messages.append(message)
   1358 # TODO: #1143 handle token limit exceeded error
-> 1359 response = llm_client.create(
   1360     context=messages[-1].pop("context", None), messages=all_messages, cache=cache, agent=self
   1361 )
   1362 extracted_response = llm_client.extract_text_or_completion_object(response)[0]
   1364 if extracted_response is None:

File ~/Library/Caches/pypoetry/virtualenvs/-bS_C0Bui-py3.11/lib/python3.11/site-packages/autogen/oai/client.py:576, in OpenAIWrapper.create(self, **config)
    572 non_activated = [
    573     client.config["model_client_cls"] for client in self._clients if isinstance(client, PlaceHolderClient)
    574 ]
    575 if non_activated:
--> 576     raise RuntimeError(
    577         f"Model client(s) {non_activated} are not activated. Please register the custom model clients using `register_model_client` or filter them out form the config list."
    578     )
    579 for i, client in enumerate(self._clients):
    580     # merge the input config with the i-th config in the config list
    581     full_config = {**config, **self._config_list[i]}

RuntimeError: Model client(s) ['AnthropicClient'] are not activated. Please register the custom model clients using `register_model_client` or filter them out form the config list.

Additional Information

Python 3.11
pyautogen 0.2.28

brycecf added the bug label Jun 12, 2024
@scruffynerf

Good catch. Perhaps revising the custom client setup is the way to solve both this and #2930. I think 'custom' clients will be more common than expected.

Hk669 (Contributor) commented Jun 18, 2024

@scruffynerf @brycecf, thanks for the issue. We are currently working on the non-OpenAI model clients; this should be out soon. Feel free to check out the roadmap #2946 and let us know your thoughts. Thanks!

Hk669 added the models label Jun 18, 2024
microsoft deleted a comment Jun 20, 2024
Hk669 (Contributor) commented Jun 28, 2024

Closing the issue, considering it is addressed in the new release of AutoGen. Feel free to raise a new issue or reach out to us on Discord. Thanks @brycecf.

Hk669 closed this as completed Jun 28, 2024
@eh123-co-mines

I'm actually still seeing the same issue, even with the new version of AutoGen (0.2.32). GroupChat still has two internal agents that cannot be registered, so the same error is occurring. Can you please advise if there's a fix for this?


ruile commented Jul 12, 2024

Hi there, I am still facing the same issue after updating AutoGen (0.2.32), using a custom Azure OpenAI model client.

from typing import Any, Dict

from openai.types.chat import ChatCompletion

from autogen import AssistantAgent, ConversableAgent, GroupChat, GroupChatManager
from autogen.oai.client import OpenAIClient

class ProxyClient(OpenAIClient):
    def __init__(self, *args, **kwargs):
        # OpenAIProxy is our internal proxy wrapper around the OpenAI client.
        self._oai_client = OpenAIProxy(proxy_client=self)

    def create(self, params: Dict[str, Any]) -> ChatCompletion:
        # We want to reuse as much as possible from OpenAIClient, so we
        # remove all parameters that are not expected there.
        params.pop("model_client_cls", None)
        params.pop("model_purpose", None)
        return super().create(params)

config_list = [
    {
        "model": "gpt-4o",
        "model_client_cls": "ProxyClient",
        "model_purpose": "agent",
    }
]

manager_config_list = [
    {
        "model": "gpt-4",
        "model_client_cls": "ProxyClient",
        "model_purpose": "manager",
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
assistant.register_for_llm(
    name="weather_tool",
    description="This function fetches current state of weather",
)(weather_function)
assistant.register_model_client(
    model_client_cls=ProxyClient
)

user_proxy = ConversableAgent(
    "user_proxy",
    code_execution_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None
    and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
)

user_proxy.register_for_execution(name="weather_tool")(weather_function)

groupchat = GroupChat(
    agents=[user_proxy, assistant],
    messages=[],
    max_round=20,
    speaker_selection_method="auto",
)
manager = GroupChatManager(
    groupchat=groupchat, llm_config={"config_list": manager_config_list}
)
manager.register_model_client(
    model_client_cls=ProxyClient
)

This results in the following error
RuntimeError: Model client(s) ['ProxyClient'] are not activated. Please register the custom model clients using register_model_client or filter them out form the config list.

@zhangxina

I have the same problem, do you have a solution?


ruile commented Jul 22, 2024

> I have the same problem, do you have a solution?

Downgrading to 0.2.25 works for now.
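
For anyone using that stopgap, pinning the last release reported to work looks like:

pip install "pyautogen==0.2.25"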

@wireless90

Can we reopen this? I have the same issue.

Hk669 (Contributor) commented Jul 25, 2024

> Can we reopen this? I have the same issue.

Can you specify whether this is related to the Anthropic client or to a custom model client? The conversation is not clear about which part of the issue has been addressed.


wireless90 commented Jul 25, 2024

> Can we reopen this? I have the same issue.

> Can you specify whether this is related to the Anthropic client or to a custom model client? The conversation is not clear about which part of the issue has been addressed.

Sorry, this is related to #2956 (comment): the custom model client.

@Vikram-BM

Is this issue being worked on? With pyautogen 0.3.0, a custom model client class still does not work with GroupChatManager.


kaykumar commented Jan 31, 2025

Do we now have the ability to add a custom model?
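
Later 0.2.x releases of AutoGen reportedly added GroupChat parameters aimed at exactly this gap. A minimal sketch, assuming your installed version exposes select_speaker_auto_llm_config and select_speaker_auto_model_client_cls (verify against your GroupChat signature before relying on them; names and agents follow the original repro):

# Assumes pyautogen >= 0.2.35 exposes these GroupChat parameters; check
# your installed GroupChat signature before relying on them.
groupchat = autogen.GroupChat(
    agents=[user_proxy, analyst, coder],
    messages=[],
    max_round=10,
    # LLM config used by the internal speaker-selection agents.
    select_speaker_auto_llm_config=llm_config,
    # Custom model client class registered on those internal agents.
    select_speaker_auto_model_client_cls=AnthropicClient,
)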
