# Murmur


A multi-agent system that orchestrates specialized AI agents, backed by local LLM models, to process and respond to user queries. Each query passes through a pipeline of Interpreter, Reasoner, Generator, and Critic agents to produce a well-reasoned, refined response.

## 🌟 Features

- **Multi-Agent Architecture**: four specialized agents working in concert:
  - **Interpreter**: analyzes user intent and context
  - **Reasoner**: develops a logical approach to the problem
  - **Generator**: creates the initial response
  - **Critic**: reviews and refines the generated content
- **Local LLM Integration**: works with locally hosted language models through a REST API
- **Asynchronous Processing**: built with asyncio for efficient concurrent operations
- **Robust Error Handling**: comprehensive error management with retries and graceful fallbacks
- **Conversation Memory**: maintains context through conversation history
- **Confidence Scoring**: evaluates response quality with multiple metrics
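As a rough illustration of the confidence-scoring idea, several 0–1 quality metrics can be combined into a single score. The metric names and the simple average below are hypothetical; the project's actual metrics and weighting live in the Critic agent and may differ.

```python
# Hypothetical sketch of multi-metric confidence scoring; the metric names
# and the plain average are illustrative, not the project's actual formula.
def confidence_score(metrics):
    """Combine several 0-1 quality metrics into a single confidence value."""
    return round(sum(metrics.values()) / len(metrics), 2)

score = confidence_score({"accuracy": 0.95, "completeness": 0.90, "clarity": 1.00})
print(score)  # 0.95
```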

## 🔧 Prerequisites

- Python 3.7+
- A local LLM server compatible with the Ollama API
- Required Python packages:
  - `aiohttp` (`asyncio` is part of the Python standard library and needs no installation)

## 🚀 Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/KazKozDev/murmur.git
   cd murmur
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Ensure your local LLM server is running (default: `http://localhost:11434`).

## 💻 Usage

1. Navigate to the project directory and run:

   ```bash
   cd murmur
   python src/main.py
   ```

   Or navigate directly to the source directory:

   ```bash
   cd murmur/src
   python main.py
   ```

2. Enter your queries when prompted. Type `quit` to exit.

Example interaction:

```
Enter your message: What is the capital of France?

Response: The capital of France is Paris.
Confidence: 0.95
```

## 🏗️ Architecture

The system follows a pipeline architecture:

1. **User Input → Interpreter Agent**
   - Analyzes the core intent and context
   - Identifies implicit requirements
2. **Interpreted Message → Reasoner Agent**
   - Breaks down the problem
   - Develops a logical approach
3. **Reasoning → Generator Agent**
   - Creates the initial response
   - Structures content clearly
4. **Generated Content → Critic Agent**
   - Reviews for accuracy and completeness
   - Suggests improvements
   - Produces the final version
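The four stages above can be sketched as sequential async calls. This is a minimal illustration of the data flow only; the actual agent classes, prompts, and HTTP calls live in `src/main.py` and differ from these stubs.

```python
import asyncio

# Minimal sketch of the Interpreter -> Reasoner -> Generator -> Critic
# pipeline. Each stage would normally call the local LLM server over HTTP;
# here a stub just tags the text so the data flow is visible.
async def run_stage(agent, text):
    await asyncio.sleep(0)  # stands in for the aiohttp request to the server
    return f"[{agent}] {text}"

async def pipeline(user_input):
    interpreted = await run_stage("interpreter", user_input)
    reasoning = await run_stage("reasoner", interpreted)
    draft = await run_stage("generator", reasoning)
    final = await run_stage("critic", draft)
    return final

result = asyncio.run(pipeline("What is the capital of France?"))
print(result)
```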

## ⚙️ Configuration

The system uses the following default models:

- **Interpreter**: `mistral-nemo:latest`
- **Reasoner**: `llama3.2-vision:11b`
- **Generator**: `gemma2:9b`
- **Critic**: `llama3.2-vision:11b`

Models can be configured by modifying the `AgentOrchestrator` initialization.
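For example, the per-agent model mapping might be passed in at construction time. The `AgentOrchestrator` keyword argument shown here is an assumption; check `src/main.py` for the real parameter names.

```python
# Per-agent model mapping mirroring the defaults listed above.
models = {
    "interpreter": "mistral-nemo:latest",
    "reasoner": "llama3.2-vision:11b",
    "generator": "gemma2:9b",
    "critic": "llama3.2-vision:11b",
}

# Hypothetical call; verify the actual constructor signature in src/main.py:
# orchestrator = AgentOrchestrator(models=models)
```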

## 🔐 Error Handling

The system implements multiple layers of error handling:

- Connection retries (up to 3 attempts)
- Timeout management
- Graceful degradation
- Comprehensive error logging
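A retry layer along these lines might wrap each LLM request. This is an illustrative helper matching the "up to 3 attempts" behaviour described above, not the project's actual code.

```python
import asyncio

# Illustrative retry helper: retries an async call up to 3 times with
# exponential backoff before letting the error propagate to the caller,
# where graceful degradation would take over.
async def with_retries(coro_factory, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_factory()
        except (asyncio.TimeoutError, ConnectionError):
            if attempt == max_attempts:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

# Example: a flaky call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("LLM server unreachable")
    return "ok"

result = asyncio.run(with_retries(flaky))
print(result, calls["n"])  # ok 3
```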

## 🤝 Contributing

1. Fork the repository (<https://github.com/KazKozDev/murmur/fork>)
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## 📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🪲 Known Issues

- High CPU usage with multiple concurrent requests
- Memory consumption may increase with long conversations
- Some LLM models may require significant local resources

## 🔜 Future Improvements

- Add support for streaming responses
- Implement agent personality customization
- Add WebSocket support for real-time communication
- Enhance conversation memory management
- Add support for more LLM providers
- Implement response caching

## 📞 Support

For support, please open an issue in the GitHub repository or contact the maintainers.

## 🙏 Acknowledgments

- Thanks to the Ollama team for their local LLM server
- Inspired by multi-agent architectures in AI systems