@@ -8,6 +8,7 @@ A Python application that enables conversations between multiple LLM agents usin
 - Ollama (local models)
 - OpenAI (GPT-5, GPT-5-mini, GPT-5-nano, o4-high, etc.)
 - Anthropic (Claude)
+- Google (Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, etc.)
 - OpenRouter, Together, Groq, DeepSeek, and any other provider with an OpenAI compatible API.
 - Flexible configuration via JSON file or interactive setup
 - Multiple conversation turn orders (round-robin, random, chain, moderator, vote)
@@ -89,9 +90,11 @@ You can provide a JSON configuration file using the `-c` flag for reproducible c
 {
   "providers": {
     "openai": {
-      "api_key": "your-api-key-here",
-      "anthropic": "your-api-key-here"
+      "api_key": "your-api-key-here"
     },
+    "anthropic": {
+      "api_key": "your-api-key-here"
+    }
   },
   "agents": [
     {
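Applied in full, this hunk leaves each provider's credentials nested under its own key rather than mixed into a single object. The resulting `providers` block would look like the following sketch (the API-key values are placeholders, as in the diff):

```json
{
  "providers": {
    "openai": {
      "api_key": "your-api-key-here"
    },
    "anthropic": {
      "api_key": "your-api-key-here"
    }
  }
}
```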
@@ -137,9 +140,10 @@ The `providers` section defines API endpoints and credentials:
 
 Built-in providers (base_url automatically configured):
 
-- `openai`: OpenAI GPT models
 - `ollama`: Local Ollama models
+- `openai`: OpenAI GPT models
 - `anthropic`: Anthropic Claude models
+- `google`: Google Gemini models
 - `openrouter`: OpenRouter proxy service
 - `together`: Together AI models
 - `groq`: Groq inference service
@@ -204,12 +208,14 @@ You can take a look at the [JSON configuration schema](schema.json) for more det
 ### Conversation Controls
 
 The conversation will continue until:
+
 - An agent terminates the conversation (if termination is enabled)
 - The user interrupts with `Ctrl+C`
 
 ## Output Format
 
 When saving conversations, the output file includes:
+
 - Configuration details for both agents
 - Complete conversation history with agent names and messages