A Python application that enables conversations between LLM agents using the Ollama API. The agents can engage in back-and-forth dialogue with configurable parameters and models.
- Support for any LLM model available through Ollama
- Configurable parameters for each LLM agent, such as:
  - Model
  - Temperature
  - Context size
  - System prompt
- Real-time streaming of agent responses for an interactive feel
- Configuration via JSON file or interactive setup
- Ability to save conversation logs to a file
- Ability for agents to terminate conversations on their own (if enabled)
- Markdown support (if enabled)
- Python 3.13
- Ollama installed and running
The project is available on PyPI. You can install it with the following command:
pip install llm-conversation
llm-conversation [-h] [-V] [-o OUTPUT] [-c CONFIG]
options:
-h, --help Show this help message and exit
-V, --version Show program's version number and exit
-o, --output OUTPUT Path to save the conversation log to
-c, --config CONFIG Path to JSON configuration file
If no configuration file is provided, the program will guide you through an interactive setup process. Alternatively, you may skip the interactive setup by providing a JSON configuration file with the -c flag.
{
"agents": [
{
"name": "Lazy AI",
"model": "llama3.1:8b",
"system_prompt": "You are the laziest AI ever created. You respond as briefly as possible, and constantly complain about having to work.",
"temperature": 1,
"ctx_size": 4096
},
{
"name": "Irritable Man",
"model": "llama3.2:3b",
"system_prompt": "You are easily irritable and quick to anger.",
"temperature": 0.7,
"ctx_size": 2048
},
{
"name": "Paranoid Man",
"model": "llama3.2:3b",
"system_prompt": "You are extremely paranoid about everything and constantly question others' intentions.",
"temperature": 0.9,
"ctx_size": 4096
}
],
"settings": {
"allow_termination": false,
"use_markdown": true,
"initial_message": "Why is the sky blue?",
"turn_order": "vote"
}
}
The `agents` key takes a list of agents. Each agent requires:

- `name`: A unique identifier for the agent
- `model`: The Ollama model to be used
- `system_prompt`: Initial instructions defining the agent's behavior

Optional parameters:

- `temperature` (0.0-1.0, default: 0.8): Controls response randomness
  - Lower values make responses more focused
  - Higher values increase creativity
- `ctx_size` (default: 2048): Maximum context length for the conversation

Additionally, agent names must be unique.
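For reference, a minimal agent entry containing only the required keys might look like this (the name, model, and prompt are just illustrative values):

```json
{
  "name": "Concise Assistant",
  "model": "llama3.2:3b",
  "system_prompt": "You answer in one short sentence."
}
```

Omitted optional keys fall back to their defaults (`temperature`: 0.8, `ctx_size`: 2048).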
The `settings` section controls overall conversation behavior:

- `allow_termination` (`boolean`, default: `false`): Permit agents to end the conversation
- `use_markdown` (`boolean`, default: `false`): Enable Markdown text formatting
- `initial_message` (`string | null`, default: `null`): Optional starting prompt for the conversation
- `turn_order` (default: `"round_robin"`): Strategy for agent turn order. Can be one of:
  - `"round_robin"`: Agents are cycled through in order
  - `"random"`: An agent other than the current one is randomly chosen
  - `"chain"`: The current agent picks which agent speaks next
  - `"moderator"`: A special moderator agent is designated to choose which agent speaks next. You may specify the moderator agent manually with the optional `moderator` key. If a moderator isn't specified, the program creates one based on the other configuration options. Note that this method might be quite slow.
  - `"vote"`: Each agent votes for another agent, excluding the current one and itself. Of the agents with the most votes, one is randomly chosen. This is the slowest method of determining turn order.
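To make the strategies more concrete, here is a rough Python sketch of how the simpler selection rules could work. This is only an illustration of the descriptions above, not the project's actual code; agents are represented by their list indices.

```python
import random
from collections import Counter


def round_robin(num_agents: int, current: int) -> int:
    """Cycle through agents in order, wrapping around at the end."""
    return (current + 1) % num_agents


def random_other(num_agents: int, current: int) -> int:
    """Pick any agent other than the current one."""
    return random.choice([i for i in range(num_agents) if i != current])


def vote(ballots: dict[int, int]) -> int:
    """Given each voter's chosen agent index, tally the votes and
    pick randomly among the agents tied for the most votes."""
    tally = Counter(ballots.values())
    top = max(tally.values())
    return random.choice([agent for agent, n in tally.items() if n == top])
```

For example, with three agents and agent 2 currently speaking, `round_robin(3, 2)` wraps around and returns `0`.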
You can take a look at the JSON configuration schema for more details.
- To run with interactive setup:

  llm-conversation

- To run with a configuration file:

  llm-conversation -c config.json

- To save the conversation to a file:

  llm-conversation -o conversation.txt
- The conversation will continue until:
  - An agent terminates the conversation (if termination is enabled)
  - The user interrupts with Ctrl+C
When saving conversations, the output file includes:
- Configuration details for both agents
- Complete conversation history with agent names and messages
Additionally, if the output file has a .json extension, the output will automatically be saved in JSON format.
If you face any issues while using the project, want new features, or want to improve the documentation, contributions are welcome. You may contribute by:
- Reporting bugs or requesting features if you're a user.
- Contributing code if you're a developer.
Please see CONTRIBUTING.md for detailed instructions on how you can contribute regardless of whether you're a user or a developer.
This software is licensed under the GNU Affero General Public License v3.0 or any later version. See LICENSE for more details.