[Bug]: Exception "TypeError: sequence item 0: expected str instance, dict found" is thrown due to a different return format when running Gemini
#2860 · Closed · hemanoid opened this issue Jun 4, 2024 · 4 comments
I'm facing the same issue while trying to use Gemini APIs for the deeplearning.ai course. As a temporary workaround, I'm using this function instead of initiate_chats. It assumes a string-based carryover, which I've only tested with Gemini APIs.
from typing import Any

def initiate_chats_with_json_parsing(chat_queue: list[dict[str, Any]]) -> list:
    """Initiate chats with enhanced carryover processing to handle JSON."""
    finished_chats = []
    for chat_info in chat_queue:
        _chat_carryover = chat_info.get("carryover", [])
        if isinstance(_chat_carryover, str):
            _chat_carryover = [_chat_carryover]
        # Stringify everything in carryover
        processed_carryover = [str(item) for item in _chat_carryover]
        processed_carryover += [str(r.summary) for r in finished_chats]
        chat_info["carryover"] = processed_carryover
        # Initiate the chat
        chat_res = chat_info["sender"].initiate_chat(**chat_info)
        finished_chats.append(chat_res)
    return finished_chats

# use it like this
chat_results = initiate_chats_with_json_parsing(chats)
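For context, a hypothetical queue for that helper might look like the sketch below. The chats name, the critic sender, the SEO_reviewer recipient and the task message are borrowed from the reproduction further down, and giving every entry its own "sender" is an assumption based on the helper indexing chat_info["sender"], not something stated in the comment.

# Hypothetical chat queue for initiate_chats_with_json_parsing: each entry names
# its own sender, because the helper calls chat_info["sender"].initiate_chat(**chat_info).
chats = [
    {
        "sender": critic,
        "recipient": SEO_reviewer,
        "message": task,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt": "Return review into JSON object only: {'Reviewer': '', 'Review': ''}."},
        "max_turns": 1,
    },
    # ... further reviewer chats in the same shape ...
]
chat_results = initiate_chats_with_json_parsing(chats)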
Describe the bug
The function __post_carryover_processing(chat_info: Dict[str, Any]) in chat.py (agentchat folder) throws the above exception when running Google Gemini.
The cause is a difference in return format when using models other than OpenAI. In this case, Gemini returned {"Content": "{'Reviewer': 'SEO Reviewer', 'Review': ' .......'}", 'role': 'assistant', 'function_call': None, 'tool_calls': None}, whereas OpenAI returned {'Reviewer': 'SEO Reviewer', 'Review': ' .......'}.
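For illustration, the failure can be reproduced in isolation: str.join over a carryover list raises exactly this TypeError as soon as one item is a dict instead of a string. The snippet below is a minimal sketch of that mechanism plus a string-coercion guard of the kind used in the workaround comment above; it is not a copy of autogen's __post_carryover_processing, and normalize_carryover is a hypothetical helper name, not an autogen API.

from typing import Any, List

def normalize_carryover(carryover: Any) -> List[str]:
    """Coerce carryover into a list of strings so a later join cannot fail."""
    if isinstance(carryover, str):
        carryover = [carryover]
    return [item if isinstance(item, str) else str(item) for item in carryover]

# OpenAI path: the summary arrives as a plain string, so the join works.
openai_style = ["{'Reviewer': 'SEO Reviewer', 'Review': '...'}"]
print("\n".join(openai_style))

# Gemini path: the summary arrives as a whole message dict, so the join fails.
gemini_style = [{"content": "{'Reviewer': 'SEO Reviewer', 'Review': '...'}",
                 "role": "assistant", "function_call": None, "tool_calls": None}]
try:
    print("\n".join(gemini_style))
except TypeError as e:
    print(e)  # sequence item 0: expected str instance, dict found

# With the guard applied, both shapes join cleanly.
print("\n".join(normalize_carryover(gemini_style)))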
Steps to reproduce
#Examples from DeepLearning.ai - almost a direct copy, autogen 0.2.25, python 3.12.2
from myutils import get_openai_api_key, get_gemini_api_key
from autogen import ConversableAgent
import autogen
import pprint
GEMINI_API_KEY = get_gemini_api_key()
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gemini-pro", "api_key": GEMINI_API_KEY, "api_type": "google"}
#llm_config ={"model": "gpt-3.5-turbo", "api_key": OPENAI_API_KEY}
task = '''
Write a engaging blog post about why local deployed LLM
is important to AI's future. Make sure the blog post is
within 100 words.
'''

writer = autogen.AssistantAgent(
    name="Writer",
    system_message="You are a writer. You write engaging and intelligent "
    "blog post (with title) on given topics. You must polish your "
    "writing based on the feedback you receive and give a refined "
    "version. Only return your final work without additional comments.",
    llm_config=llm_config
)
reply = writer.generate_reply(messages=[{"content": task, "role": "user"}])
print(reply)
critic = autogen.AssistantAgent(
    name="Critic",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    llm_config=llm_config,
    system_message="You are a critic. you review the work of "
    "the writer and provide constructive "
    "feedback to help improve the quality of the content."
)

""" res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
) """

SEO_reviewer = autogen.AssistantAgent(
    name="SEO Reviewer",
    llm_config=llm_config,
    system_message="You are an SEO reviewer, known for "
    "your ability to optimize content for search engines, "
    "ensuring that it ranks well and attracts organic traffic. "
    "Make sure your suggestion is concise (within 3 bullet points), "
    "concrete and to the point. "
    "Begin the review by stating your role."
)

legal_reviewer = autogen.AssistantAgent(
    name="Legal Reviewer",
    llm_config=llm_config,
    system_message="You are a legal reviewer, known for "
    "your ability to ensure that content is legally compliant "
    "and free from any potential legal issues. "
    "Make sure your suggestion is concise (within 3 bullet points), "
    "concrete and to the point. "
    "Begin the review by stating your role."
)

ethics_reviewer = autogen.AssistantAgent(
    name="Ethics Reviewer",
    llm_config=llm_config,
    system_message="You are an ethics reviewer, known for "
    "your ability to ensure that content is ethically sound "
    "and free from any potential ethical issues. "
    "Make sure your suggestion is concise (within 3 bullet points), "
    "concrete and to the point. "
    "Begin the review by stating your role. "
)

meta_reviewer = autogen.AssistantAgent(
    name="Meta Reviewer",
    llm_config=llm_config,
    system_message="You are a meta reviewer, you aggragate and review "
    "the work of other reviewers and give a final suggestion on the content."
)
def reflection_message(recipient, messages, sender, config):
    return f'''Review the following content.
    \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}'''

review_chats = [
    {
        "recipient": SEO_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt": "Return review into JSON object only:"
            "{'Reviewer': '', 'Review': ''}. Here Reviewer should be your role",},
        "max_turns": 1
    },
    {
        "recipient": legal_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt":
            "Return review into as JSON object only:"
            "{'Reviewer': '', 'Review': ''}.",},
        "max_turns": 1
    },
    {
        "recipient": ethics_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {"summary_prompt":
            "Return review into as JSON object only:"
            "{'reviewer': '', 'review': ''}",},
        "max_turns": 1
    },
    {
        "recipient": meta_reviewer,
        "message": "Aggregrate feedback from all reviewers and give final suggestions on the writing, also suggesting to double the use of the verb in the writing.",
        "max_turns": 1
    }
]
critic.register_nested_chats(
    review_chats,
    trigger=writer
)

res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
)

print(res.summary)
Model Used
Gemini-pro
Expected Behavior
No response
Screenshots and logs
No response
Additional Information
No response