
[Bug]: Exception "TypeError: sequence item 0: expected str instance, dict found" is thrown due to a different return format when running Gemini #2860

Closed
hemanoid opened this issue Jun 4, 2024 · 4 comments
Labels
0.2 Issues which are related to the pre 0.4 codebase needs-triage

Comments

@hemanoid

hemanoid commented Jun 4, 2024

Describe the bug

The function __post_carryover_processing(chat_info: Dict[str, Any]) in chat.py (agentchat folder) throws the above exception when running Google Gemini.

The cause of the problem is a difference in return format when using models other than OpenAI. In this case, Gemini returned {"Content": "{'Reviewer': 'SEO Reviewer', 'Review': ' .......'}", 'role': 'assistant', 'function_call': None, 'tool_calls': None}, whereas OpenAI returned {'Reviewer': 'SEO Reviewer', 'Review': ' .......'}.
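The traceback comes from joining the carryover list into a single string while one of its items is still a dict. A minimal, self-contained sketch of the failure mode (an illustration of the error, not the actual chat.py code):

# Carryover as produced by a Gemini-backed nested chat: the summary is a
# whole message dict rather than a plain string.
carryover = [{"Content": "{'Reviewer': 'SEO Reviewer', 'Review': '...'}",
              'role': 'assistant', 'function_call': None, 'tool_calls': None}]

# str.join requires every item to be a str; the dict at index 0 raises
# "TypeError: sequence item 0: expected str instance, dict found".
print("\n".join(carryover))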

Steps to reproduce

#Examples from DeepLearning.ai - almost a direct copy, autogen 0.2.25, python 3.12.2

from myutils import get_openai_api_key, get_gemini_api_key
from autogen import ConversableAgent
import autogen
import pprint

GEMINI_API_KEY = get_gemini_api_key()
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gemini-pro", "api_key": GEMINI_API_KEY, "api_type": "google"}
#llm_config = {"model": "gpt-3.5-turbo", "api_key": OPENAI_API_KEY}

task = '''
Write an engaging blog post about why locally deployed LLMs
are important to AI's future. Make sure the blog post is
within 100 words.
'''

writer = autogen.AssistantAgent(
    name="Writer",
    system_message="You are a writer. You write engaging and intelligent "
        "blog posts (with title) on given topics. You must polish your "
        "writing based on the feedback you receive and give a refined "
        "version. Only return your final work without additional comments.",
    llm_config=llm_config,
)

reply = writer.generate_reply(messages=[{"content": task, "role": "user"}])
print(reply)

critic = autogen.AssistantAgent(
    name="Critic",
    is_termination_msg=lambda x: x.get("content", "").find("TERMINATE") >= 0,
    llm_config=llm_config,
    system_message="You are a critic. You review the work of "
        "the writer and provide constructive "
        "feedback to help improve the quality of the content.",
)

""" res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg"
) """

SEO_reviewer = autogen.AssistantAgent(
    name="SEO Reviewer",
    llm_config=llm_config,
    system_message="You are an SEO reviewer, known for "
        "your ability to optimize content for search engines, "
        "ensuring that it ranks well and attracts organic traffic. "
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role.",
)

legal_reviewer = autogen.AssistantAgent(
    name="Legal Reviewer",
    llm_config=llm_config,
    system_message="You are a legal reviewer, known for "
        "your ability to ensure that content is legally compliant "
        "and free from any potential legal issues. "
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role.",
)

ethics_reviewer = autogen.AssistantAgent(
    name="Ethics Reviewer",
    llm_config=llm_config,
    system_message="You are an ethics reviewer, known for "
        "your ability to ensure that content is ethically sound "
        "and free from any potential ethical issues. "
        "Make sure your suggestion is concise (within 3 bullet points), "
        "concrete and to the point. "
        "Begin the review by stating your role. ",
)

meta_reviewer = autogen.AssistantAgent(
    name="Meta Reviewer",
    llm_config=llm_config,
    system_message="You are a meta reviewer. You aggregate and review "
        "the work of other reviewers and give a final suggestion on the content.",
)

def reflection_message(recipient, messages, sender, config):
    return f'''Review the following content.
    \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}'''

review_chats = [
    {
        "recipient": SEO_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt": "Return review into JSON object only:"
                "{'Reviewer': '', 'Review': ''}. Here Reviewer should be your role",
        },
        "max_turns": 1,
    },
    {
        "recipient": legal_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt": "Return review as a JSON object only:"
                "{'Reviewer': '', 'Review': ''}.",
        },
        "max_turns": 1,
    },
    {
        "recipient": ethics_reviewer,
        "message": reflection_message,
        "summary_method": "reflection_with_llm",
        "summary_args": {
            "summary_prompt": "Return review as a JSON object only:"
                "{'reviewer': '', 'review': ''}",
        },
        "max_turns": 1,
    },
    {
        "recipient": meta_reviewer,
        "message": "Aggregate feedback from all reviewers and give final suggestions on the writing, also suggesting to double the use of the verb in the writing.",
        "max_turns": 1,
    },
]

critic.register_nested_chats(
    review_chats,
    trigger=writer,
)

res = critic.initiate_chat(
    recipient=writer,
    message=task,
    max_turns=2,
    summary_method="last_msg",
)

print(res.summary)

Model Used

Gemini-pro

Expected Behavior

No response

Screenshots and logs

No response

Additional Information

No response

@hemanoid hemanoid added the bug label Jun 4, 2024
@crispymcbacon

I'm facing the same issue while trying to use the Gemini API for the deeplearning.ai course. As a temporary workaround, I'm using this function instead of initiate_chats. It coerces everything in the carryover to strings, and I've only tested it with the Gemini API.

from typing import Any

def initiate_chats_with_json_parsing(chat_queue: list[dict[str, Any]]) -> list:
    """
    Initiate chats with enhanced carryover processing to handle JSON.
    """
    finished_chats = []
    for chat_info in chat_queue:
        _chat_carryover = chat_info.get("carryover", [])
        if isinstance(_chat_carryover, str):
            _chat_carryover = [_chat_carryover]

        # Stringify everything in carryover so the downstream join never sees a dict
        processed_carryover = [str(item) for item in _chat_carryover]
        processed_carryover += [str(r.summary) for r in finished_chats]
        chat_info["carryover"] = processed_carryover

        # Initiate the chat
        chat_res = chat_info["sender"].initiate_chat(**chat_info)
        finished_chats.append(chat_res)
    return finished_chats

# use it like this
chat_results = initiate_chats_with_json_parsing(chats)
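
If you'd rather keep the built-in initiate_chats, another angle is to make each nested chat's summary a plain string up front by passing a callable summary_method (autogen 0.2 accepts a callable taking sender, recipient, and summary_args and returning a string). A minimal sketch; string_summary is a hypothetical helper, and note it replaces the LLM reflection step with a deterministic last-message summary:

def string_summary(sender, recipient, summary_args):
    # Hypothetical helper: return the last message's content coerced to a
    # plain str, so the carryover list never contains a dict.
    return str(recipient.last_message(sender)["content"])

# e.g. in each chat_queue entry, instead of "reflection_with_llm":
#   "summary_method": string_summary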

@YvodeRooij

I am facing a similar issue using the Anthropic API. Any updates on this?

@whydatk

whydatk commented Aug 6, 2024

I'm also facing the same issue while accessing AutoGen through AWS Bedrock (Claude model).

@rysweet rysweet added 0.2 Issues which are related to the pre 0.4 codebase needs-triage labels Oct 2, 2024
@fniedtner fniedtner removed the bug label Oct 24, 2024
@jackgerrits
Member

Closing this: the model clients in 0.4 are separate, in contrast to the single client in 0.2, so these issues are largely resolved.

@jackgerrits closed this as not planned on Feb 3, 2025