215 changes: 143 additions & 72 deletions README.md
@@ -1,35 +1,67 @@
# LLM Conversation Tool

A Python application that enables conversations between LLM agents using the Ollama API. The agents can engage in back-and-forth dialogue with configurable parameters and models.
A Python application that enables conversations between multiple LLM agents using various providers (OpenAI, Ollama, Anthropic, and more). The agents can engage in back-and-forth dialogue with configurable parameters and models.

## Features

- Support for any LLM model available through Ollama
- Configurable parameters for each LLM agent, such as:
- Model
- Temperature
- Context size
- System Prompt
- Real-time streaming of agent responses, giving it an interactive feel
- Configuration via JSON file or interactive setup
- Ability to save conversation logs to a file
- Ability for agents to terminate conversations on their own (if enabled)
- Markdown support (if enabled)
- Support for multiple LLM providers:
- Ollama (local models)
- OpenAI (GPT-5, GPT-5-mini, GPT-5-nano, o4-mini, etc.)
- Anthropic (Claude)
- Google (Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, etc.)
- OpenRouter, Together, Groq, DeepSeek, and any other provider with an OpenAI-compatible API
- Flexible configuration via JSON file or interactive setup
- Multiple conversation turn orders (round-robin, random, chain, moderator, vote)
- Conversation logging and export (text or JSON format)
- Agent-controlled conversation termination (needs to be enabled)
- Markdown formatting support (needs to be enabled)

## Installation

### Prerequisites

- Python 3.13
- Ollama installed and running
- Ollama for local models, or API credentials for your chosen LLM provider

### How to Install

The project is available in PyPI. You can install the program by using the following command:
```
#### From PyPI

The project is available on PyPI. You can install it using:

```bash
pip install llm-conversation
```

#### From Source

If you prefer to install from the source code, follow these steps:

1. **Clone the repository:**

```bash
git clone https://github.com/famiu/llm_conversation.git
cd llm_conversation
```

2. **Create and activate a virtual environment:**
It is highly recommended to use a virtual environment to manage dependencies.

```bash
uv venv
source .venv/bin/activate # On macOS/Linux
.\.venv\Scripts\activate # On Windows
```

3. **Install the project in editable mode:**
This will install all the required dependencies and link the project directly to your virtual environment.

```bash
uv pip install -e .
```

After these steps, the `llm_conversation` package and its dependencies will be installed and ready to use within your active virtual environment.
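
To quickly verify the install, you can invoke the CLI's help output (assuming the entry point exposes the conventional `--help` flag):

```bash
# Prints the usage summary and available flags if the install succeeded
llm-conversation --help
```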

## Usage

### Command Line Arguments
@@ -50,101 +82,140 @@ If no configuration file is provided, the program will guide you through an intuitive setup.

### Configuration File

Alternatively, instead of going through the interactive setup, you may also provide a JSON configuration file with the `-c` flag.
You can provide a JSON configuration file using the `-c` flag for reproducible conversation setups.

#### Example configuration
#### Example Configuration

```json
{
"agents": [
{
"name": "Lazy AI",
"model": "llama3.1:8b",
"system_prompt": "You are the laziest AI ever created. You respond as briefly as possible, and constantly complain about having to work.",
"temperature": 1,
"ctx_size": 4096
},
{
"name": "Irritable Man",
"model": "llama3.2:3b",
"system_prompt": "You are easily irritable and quick to anger.",
"temperature": 0.7,
"ctx_size": 2048
},
{
"name": "Paranoid Man",
"model": "llama3.2:3b",
"system_prompt": "You are extremely paranoid about everything and constantly question others' intentions.",
"temperature": 0.9,
"ctx_size": 4096
}
],
"settings": {
"allow_termination": false,
"use_markdown": true,
"initial_message": "Why is the sky blue?",
"turn_order": "vote"
"providers": {
"openai": {
"api_key": "your-api-key-here"
},
"anthropic": {
"api_key": "your-api-key-here"
}
},
"agents": [
{
"name": "Claude",
"provider": "anthropic",
"model": "claude-sonnet-4-20250514",
"temperature": 0.9,
"ctx_size": 4096,
"system_prompt": "You are extremely paranoid about everything and constantly question others' intentions."
},
{
"name": "G. Pete",
"provider": "openai",
"model": "gpt-5",
"temperature": 1,
"ctx_size": 4096,
"system_prompt": "You are the laziest person ever. You respond as briefly as possible, and constantly complain about having to work."
},
{
"name": "Liam",
"provider": "ollama",
"model": "llama3.2",
"temperature": 0.7,
"ctx_size": 2048,
"system_prompt": "You are easily irritable and quick to anger."
}
],
"settings": {
"initial_message": "THEY are out to get us",
"use_markdown": true,
"allow_termination": true,
"turn_order": "round_robin"
}
}
```

#### Agent configuration
#### Provider Configuration

The `agents` key takes a list of agents. Each agent requires:
The `providers` section defines API endpoints and credentials:

- `name`: A unique identifier for the agent
- `model`: The Ollama model to be used
- `system_prompt`: Initial instructions defining the agent's behavior
- **base_url**: The API endpoint URL (optional for built-in providers)
- **api_key**: Authentication key (can be omitted for local providers like Ollama)

Built-in providers (`base_url` automatically configured):

- `ollama`: Local Ollama models
- `openai`: OpenAI GPT models
- `anthropic`: Anthropic Claude models
- `google`: Google Gemini models
- `openrouter`: OpenRouter proxy service
- `together`: Together AI models
- `groq`: Groq inference service
- `deepseek`: DeepSeek models

For built-in providers, you only need to specify the `api_key`. Custom providers require both `base_url` and `api_key`.
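
For illustration, a `providers` block that mixes a built-in provider with a custom OpenAI-compatible endpoint might look like the sketch below; the custom provider name, URL, and keys are placeholders, not real values:

```json
{
  "providers": {
    "groq": {
      "api_key": "your-groq-api-key"
    },
    "my-local-server": {
      "base_url": "http://localhost:8000/v1",
      "api_key": "not-needed"
    }
  }
}
```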

#### Agent Configuration

Each agent in the `agents` array requires:

- **name**: Unique identifier for the agent
- **provider**: Reference to a provider defined in the `providers` section
- **model**: The specific model to use (e.g., "gpt-4", "llama3.2", "claude-3-sonnet")
- **system_prompt**: Instructions defining the agent's behavior and personality

Optional parameters:
- `temperature` (0.0-1.0, default: 0.8): Controls response randomness
- Lower values make responses more focused
- Higher values increase creativity
- `ctx_size` (default: 2048): Maximum context length for the conversation

Additionally, agent names must be unique.
- **temperature** (0.0-1.0, default: 0.8): Controls response creativity/randomness
- **ctx_size** (default: 2048): Maximum context window size
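
For reference, a single agent entry using the required fields plus one optional parameter could look like this (the name and prompt are invented for illustration):

```json
{
  "name": "Skeptic",
  "provider": "ollama",
  "model": "llama3.2",
  "system_prompt": "You question every claim and ask for supporting evidence.",
  "temperature": 0.5
}
```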

#### Conversation Settings

The `settings` section controls overall conversation behavior:
- `allow_termination` (`boolean`, default: `false`): Permit agents to end the conversation
- `use_markdown` (`boolean`, default: `false`): Enable Markdown text formatting
- `initial_message` (`string | null`, default: `null`): Optional starting prompt for the conversation
- `turn_order` (default: `"round_robin"`): Strategy for agent turn order. Can be one of:
- `"round_robin"`: Agents are cycled through in order
- `"random"`: An agent other than the current one is randomly chosen
- `"chain"`: Current agent picks which agent speaks next
- `"moderator"`: A special moderator agent is designated to choose which agent speaks next. You may specify the moderator agent manually with the optional `moderator` key. If moderator isn't manually specified, one is created by the program instead based on other configuration options. Note that this method might be quite slow.
- `"vote"`: All agents are made to vote for an agent except the current one and themselves. Of the agents with the most amount of votes, one is randomly chosen. This is the slowest method of determining turn order.
The `settings` section controls conversation behavior:

- **initial_message** (optional): Starting message for the conversation
- **use_markdown** (default: false): Enable Markdown formatting in responses
- **allow_termination** (default: false): Allow agents to end conversations
- **turn_order** (default: "round_robin"): Agent selection strategy:
- `"round_robin"`: Cycle through agents in order
- `"random"`: Randomly select next agent
- `"chain"`: Current agent chooses next speaker
- `"moderator"`: Dedicated moderator selects speakers
- `"vote"`: All agents vote for next speaker
- **moderator** (optional): Custom moderator agent configuration
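
If you set `turn_order` to `"moderator"`, the optional `moderator` key takes the moderator's configuration. The snippet below is a sketch assuming the moderator accepts the same fields as a regular agent entry:

```json
{
  "settings": {
    "turn_order": "moderator",
    "allow_termination": true,
    "moderator": {
      "name": "Moderator",
      "provider": "ollama",
      "model": "llama3.2",
      "system_prompt": "Decide which participant should speak next."
    }
  }
}
```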

You can take a look at the [JSON configuration schema](schema.json) for more details.

### Running the Program

1. To run with interactive setup:
1. **Interactive setup** (prompts for configuration):

```bash
llm-conversation
```

2. To run with a configuration file:
2. **Using a configuration file**:

```bash
llm-conversation -c config.json
```

3. To save the conversation to a file:
3. **Saving conversation to a file**:
```bash
llm-conversation -c config.json -o conversation.txt
```
4. **JSON output format**:
```bash
llm-conversation -o conversation.txt
llm-conversation -c config.json -o conversation.json
```

### Conversation Controls

- The conversation will continue until:
- An agent terminates the conversation (if termination is enabled)
- The user interrupts with `Ctrl+C`
The conversation will continue until:

- An agent terminates the conversation (if termination is enabled)
- The user interrupts with `Ctrl+C`

## Output Format

When saving conversations, the output file includes:

- Configuration details for all agents
- Complete conversation history with agent names and messages

3 changes: 1 addition & 2 deletions pyproject.toml
@@ -30,12 +30,11 @@ classifiers = [
"Typing :: Typed",
]
dependencies = [
"ollama (>=0.4.7,<0.5.0)",
"openai (>=1.35.0,<2.0.0)",
"rich (>=13.9.4,<14.0.0)",
"prompt_toolkit (>=3.0.50,<4.0.0)",
"pydantic (>=2.10.6,<3.0.0)",
"distinctipy (>=1.3.4,<2.0.0)",
"partial-json-parser (>=0.2.1.1.post5,<0.3.0.0)",
]
license-files = ["LICENSE"]
dynamic = ["version"]
50 changes: 49 additions & 1 deletion schema.json
@@ -11,7 +11,7 @@
"type": "string"
},
"model": {
"description": "Ollama model to be used",
"description": "Model to be used",
"title": "Model",
"type": "string"
},
@@ -34,6 +34,12 @@
"minimum": 0,
"title": "Ctx Size",
"type": "integer"
},
"provider": {
"default": "ollama",
"description": "Provider to use for this agent",
"title": "Provider",
"type": "string"
}
},
"required": [
@@ -101,11 +107,53 @@
},
"title": "ConversationSettings",
"type": "object"
},
"ProviderConfig": {
"additionalProperties": false,
"description": "Configuration for any OpenAI-compatible provider.",
"properties": {
"base_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Base URL for the provider API",
"title": "Base Url"
},
"api_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "API key for the provider",
"title": "Api Key"
}
},
"title": "ProviderConfig",
"type": "object"
}
},
"additionalProperties": false,
"description": "Configuration for the AI agents and conversation settings.",
"properties": {
"providers": {
"additionalProperties": {
"$ref": "#/$defs/ProviderConfig"
},
"description": "Provider configurations",
"title": "Providers",
"type": "object"
},
"agents": {
"description": "Configuration for AI agents",
"items": {