A practical guide and reference implementation of architecture patterns for building with Large Language Models (LLMs). This repository focuses on native implementations without relying on frameworks, demonstrating how to build both simple workflows and complex autonomous agents.
This repository provides:
- 🏗️ Reference architectures for common LLM patterns
- 💻 Example implementations using native LLM APIs
- 📚 Practical guides and best practices
- 🛠️ Templates for quick starts
- ✅ Augmented LLM: Basic LLM integration with retrieval, tools, and memory
  - Web search, calculator, and weather tools
  - Memory management
  - Tool integration framework
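The idea behind the augmented LLM can be sketched in a few lines: a model call wrapped with a tool registry and conversation memory. Everything below is illustrative — `fake_llm`, the `TOOL:`/`FINAL:` protocol, and the calculator tool are hypothetical stand-ins, not this repository's actual API.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real provider call; inspects only the latest turn.
    last_turn = prompt.splitlines()[-1]
    if "calculate" in last_turn:
        return "TOOL:calculator:2+2"
    return "FINAL:Hello!"

def calculator(expression: str) -> str:
    # Deliberately restricted eval for the demo.
    if not set(expression) <= set("0123456789+-*/(). "):
        raise ValueError("unsupported expression")
    return str(eval(expression))

class AugmentedLLM:
    def __init__(self):
        self.tools = {"calculator": calculator}  # tool registry
        self.memory = []                         # naive conversation memory

    def run(self, user_input: str) -> str:
        self.memory.append("user: " + user_input)
        response = fake_llm("\n".join(self.memory))
        if response.startswith("TOOL:"):
            _, name, arg = response.split(":", 2)
            result = self.tools[name](arg)       # dispatch to a tool
            self.memory.append("tool(%s): %s" % (name, result))
            return result
        answer = response[len("FINAL:"):]
        self.memory.append("assistant: " + answer)
        return answer
```

The same loop generalizes to retrieval: a retriever is just another entry in the tool registry.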
- ✅ Prompt Chaining: Sequential LLM calls with intermediate processing
  - Text analysis example
  - Chain building and execution
  - Context management
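A prompt chain is just function composition with a trace of intermediate results. The two steps below are stubs standing in for real LLM calls (hypothetical, for illustration):

```python
def run_chain(text, steps):
    """Feed each step's output into the next; keep intermediates for debugging."""
    trace = []
    for step in steps:
        text = step(text)
        trace.append(text)
    return text, trace

# Stub "LLM" steps for a text-analysis chain.
summarize = lambda t: t.split(".")[0] + "."                 # keep first sentence
extract_tone = lambda t: "positive" if "good" in t else "neutral"

result, trace = run_chain("This is good. It is long.", [summarize, extract_tone])
```

Keeping the trace makes each intermediate step inspectable, which is the main debugging advantage of chaining over a single monolithic prompt.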
- ✅ Routing: Input classification and specialized handling
  - Support ticket routing example
  - Dynamic route selection
  - Fallback handling
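Routing in miniature: classify the input, dispatch to a specialized handler, and fall back when nothing matches. The keyword classifier below stands in for an LLM-based classifier (hypothetical):

```python
def classify(ticket: str) -> str:
    # A real implementation would ask an LLM for the label.
    if "refund" in ticket.lower():
        return "billing"
    if "crash" in ticket.lower():
        return "technical"
    return "general"

HANDLERS = {
    "billing": lambda t: "billing-team",
    "technical": lambda t: "tech-support",
}

def route(ticket: str) -> str:
    label = classify(ticket)
    handler = HANDLERS.get(label, lambda t: "human-review")  # fallback route
    return handler(ticket)
```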
- ✅ Parallelization: Concurrent LLM processing with sectioning and voting
  - Content moderation example
  - Multiple voting strategies
  - Result combination
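The voting variant can be sketched with `asyncio`: several concurrent "judgments" of the same input, combined by majority vote. The judges below are stubs for independent model calls (hypothetical lengths-as-strictness logic, purely for illustration):

```python
import asyncio
from collections import Counter

async def judge(text: str, strictness: int) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return "flag" if len(text) > strictness else "allow"

async def moderate(text: str) -> str:
    # Run three judges concurrently, then combine by majority vote.
    votes = await asyncio.gather(*(judge(text, s) for s in (5, 10, 50)))
    winner, _ = Counter(votes).most_common(1)[0]
    return winner
```

Sectioning is the same shape: `gather` over independent slices of the input instead of independent judges, then concatenate rather than vote.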
- ✅ Orchestrator-Workers: Dynamic task decomposition and delegation
  - Document processing example
  - Task distribution
  - Worker management
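The pattern's core is that the orchestrator decides the subtasks at runtime rather than following a fixed pipeline. A minimal sketch, with a paragraph split standing in for LLM-driven decomposition and a stub worker (both hypothetical):

```python
def worker(task_id: int, paragraph: str) -> str:
    # Stub worker; a real one would call an LLM on its slice.
    return "[%d] %s..." % (task_id, paragraph.split()[0])

def orchestrate(document: str) -> dict:
    # Dynamic decomposition: one subtask per non-empty paragraph.
    subtasks = [p for p in document.split("\n\n") if p.strip()]
    results = [worker(i, p) for i, p in enumerate(subtasks)]
    # The orchestrator also owns result synthesis.
    return {"sections": len(subtasks), "summaries": results}
```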
- ✅ Evaluator-Optimizer: Iterative improvement through feedback loops
  - Code optimization example
  - Multiple optimization strategies
  - Progress tracking
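The loop itself is simple: a generator proposes, an evaluator scores and produces feedback, and iteration stops when the evaluator is satisfied or a budget runs out. Both roles below are stubs for LLM calls (the `"!"`-counting score is a hypothetical placeholder):

```python
def generate(draft, feedback):
    # "Improve" the draft when the evaluator asked for changes.
    return draft + "!" if feedback else draft

def evaluate(draft):
    score = draft.count("!")
    feedback = None if score >= 2 else "add emphasis"
    return score, feedback

def optimize(seed, max_iters=5):
    draft, feedback, history = seed, None, []
    for _ in range(max_iters):
        draft = generate(draft, feedback)
        score, feedback = evaluate(draft)
        history.append(score)   # progress tracking across iterations
        if feedback is None:    # evaluator is satisfied
            break
    return draft, history
```

The `max_iters` cap matters in practice: feedback loops without a budget can oscillate instead of converging.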
- ✅ Autonomous Agents: Self-directed systems with planning and execution
  - Research assistant example
  - Dynamic planning
  - Tool utilization
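An autonomous agent adds a plan-act-observe loop on top of tool use: the plan is recomputed each step from what the agent has learned so far. Planner and tools below are illustrative stubs (hypothetical names), not the repository's research assistant:

```python
def plan(goal, notes):
    # A real planner would be an LLM; here: search first, then summarize.
    return "summarize" if notes else "search"

TOOLS = {
    "search": lambda goal: "facts about " + goal,
    "summarize": lambda goal: "summary of " + goal,
}

def research_agent(goal, max_steps=4):
    notes = []
    for _ in range(max_steps):
        action = plan(goal, notes)           # dynamic planning
        notes.append(TOOLS[action](goal))    # tool utilization
        if action == "summarize":            # goal reached
            break
    return notes
```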
- ✅ Domain-Specific Agents: Implementations for specialized domains
  - Medical diagnosis example
  - Domain knowledge integration
  - Constraint enforcement
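The distinguishing features are a domain knowledge source and hard output constraints checked outside the model. A minimal sketch, with a toy knowledge base and a phrase blocklist standing in for real constraint checking (all names hypothetical):

```python
KNOWLEDGE = {"fever": "common with infections", "cough": "common with colds"}
FORBIDDEN = ("you have", "diagnosis is")  # constraint: no definitive diagnoses

def medical_assistant(symptom):
    fact = KNOWLEDGE.get(symptom, "no information on file")  # domain knowledge
    answer = "%s: %s; please consult a clinician" % (symptom, fact)
    # Constraint enforcement happens in code, after generation, so a
    # misbehaving model cannot bypass it.
    if any(phrase in answer.lower() for phrase in FORBIDDEN):
        return "I can only share general information; please see a clinician."
    return answer
```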
- Python 3.8+
- API keys for the LLM provider(s) you want to use
- Git for version control
- Clone the repository:

  ```bash
  git clone https://github.com/coderplex-tech/building-with-llms.git
  cd building-with-llms
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  # Edit .env with your API keys and configuration
  ```
Each pattern includes standalone examples that demonstrate its usage. Here's how to run them:
- Building Blocks:

  ```bash
  # Augmented LLM example
  python -m building-block.augmented-llm.examples.basic_usage
  ```

- Workflows:

  ```bash
  # Prompt Chaining
  python -m workflows.prompt-chaining.examples.text_analysis

  # Routing
  python -m workflows.routing.examples.support_routing

  # Parallelization
  python -m workflows.parallelization.examples.content_moderation

  # Orchestrator-Workers
  python -m workflows.orchestrator-workers.examples.document_processing

  # Evaluator-Optimizer
  python -m workflows.evaluator-optimizer.examples.code_optimization
  ```

- Agents:

  ```bash
  # Autonomous Agent
  python -m agents.autonomous-agent.examples.research_assistant

  # Domain-Specific Agent
  python -m agents.domain-specific.examples.medical_assistant
  ```
```
├── building-block/
│   └── augmented-llm/         # Basic LLM integration
├── workflows/
│   ├── prompt-chaining/       # Sequential processing
│   ├── routing/               # Input classification
│   ├── parallelization/       # Concurrent processing
│   ├── orchestrator-workers/  # Task decomposition
│   └── evaluator-optimizer/   # Iterative improvement
├── agents/
│   ├── autonomous-agent/      # Self-directed systems
│   └── domain-specific/       # Specialized agents
├── requirements.txt           # Project dependencies
├── .env.example               # Example environment variables
└── README.md                  # This file
```
Each pattern is implemented with:
- Core classes and interfaces
- Example implementations
- Comprehensive documentation
- Unit tests (coming soon)
- Best practices and usage guidelines
- Multiple LLM provider support (Anthropic, OpenAI, etc.)
- Async/await for efficient processing
- Type hints and Pydantic models
- Error handling and retries
- Detailed logging
- Configuration management
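The retry behavior listed above can be sketched as exponential backoff around an async call. `flaky_call` below simulates transient provider errors (hypothetical; a real call would go through the provider SDK):

```python
import asyncio

async def with_retries(call, attempts=3, base_delay=0.01):
    """Retry an async call on transient errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return await call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the error
            await asyncio.sleep(base_delay * 2 ** attempt)

state = {"calls": 0}

async def flaky_call():
    # Fails twice, then succeeds, mimicking a transient outage.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient")
    return "ok"
```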
- Additional examples for each pattern
- Integration tests
- Performance benchmarks
- CI/CD pipeline
- Docker containerization
- API documentation
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
Please ensure your code follows our style guide and includes appropriate tests and documentation.
While frameworks like LangChain can be useful, we believe building directly with LLM APIs provides:
- Better understanding of core concepts
- More control over implementation details
- Easier debugging and maintenance
- Lower overhead and dependencies
- 📚 Pattern Documentation
- 🔧 Implementation Guides
- 📊 Architecture Decision Records
- 🧪 Example Collection
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to all contributors
- Inspired by software architecture patterns and LLM best practices
- Built with modern Python async/await patterns