diff --git a/README.md b/README.md
index 70dec26..12a9d6e 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,7 @@
-

+
@@ -359,6 +359,15 @@ OpenDeepSearch is built on the shoulders of great open-source projects:
- **[LiteLLM](https://www.litellm.ai/)** 🔥 – Used for efficient AI model integration.
- **Various Open-Source Libraries** 📚 – Enhancing search and retrieval functionalities.
+## 📚 Documentation
+
+For detailed setup instructions and advanced usage:
+
+- **[🔧 Detailed Setup Guide](docs/DETAILED_SETUP.md)** - Comprehensive installation and configuration
+- **[💡 Usage Examples](docs/EXAMPLES.md)** - Real-world examples and use cases
+- **[📖 API Reference](docs/API_REFERENCE.md)** - Complete API documentation
+- **[🛠️ Troubleshooting](docs/TROUBLESHOOTING.md)** - Common issues and solutions
+
## Citation
If you use `OpenDeepSearch` in your works, please cite it using the following BibTex entry:
diff --git a/docs/API_REFERENCE.md b/docs/API_REFERENCE.md
new file mode 100644
index 0000000..311aa70
--- /dev/null
+++ b/docs/API_REFERENCE.md
@@ -0,0 +1,451 @@
+# 📖 API Reference
+
+Complete API documentation for OpenDeepSearch components.
+
+## Table of Contents
+
+- [OpenDeepSearchTool](#opendeepsearchtool)
+- [Configuration Parameters](#configuration-parameters)
+- [Methods](#methods)
+- [Response Formats](#response-formats)
+- [Error Handling](#error-handling)
+
+## OpenDeepSearchTool
+
+The main class for performing searches with OpenDeepSearch.
+
+### Constructor
+
+```python
+class OpenDeepSearchTool:
+    def __init__(
+        self,
+        model_name: str = "gpt-3.5-turbo",
+        mode: str = "default",
+        reranker: Optional[str] = "jina",
+        reranker_model: Optional[str] = None,
+        max_results: int = 10,
+        search_provider: str = "serper",
+        **kwargs
+    )
+```
+
+#### Parameters
+
+| Parameter | Type | Default | Description |
+|-----------|------|---------|-------------|
+| `model_name` | `str` | `"gpt-3.5-turbo"` | LiteLLM-compatible model identifier |
+| `mode` | `str` | `"default"` | Search mode: `"default"` or `"pro"` |
+| `reranker` | `Optional[str]` | `"jina"` | Reranking service: `"jina"`, `"infinity"`, or `None` |
+| `reranker_model` | `Optional[str]` | `None` | Specific reranker model (for Infinity) |
+| `max_results` | `int` | `10` | Maximum number of search results to process |
+| `search_provider` | `str` | `"serper"` | Search provider: `"serper"` or `"searxng"` |
+
+#### Supported Models
+
+##### OpenAI
+- `gpt-4o` - Latest GPT-4 model
+- `gpt-4o-mini` - Cost-effective GPT-4 variant
+- `gpt-3.5-turbo` - Fast and economical
+- `gpt-4-turbo` - Previous generation GPT-4
+
+##### Anthropic
+- `claude-3-opus-20240229` - Highest capability
+- `claude-3-sonnet-20240229` - Balanced performance
+- `claude-3-haiku-20240307` - Fast and cost-effective
+
+##### Google
+- `gemini-1.5-pro` - Advanced reasoning
+- `gemini-1.5-flash` - Fast responses
+
+##### OpenRouter
+- `openrouter/google/gemini-2.0-flash-001`
+- `openrouter/anthropic/claude-3.5-sonnet`
+- `openrouter/meta-llama/llama-3.1-70b-instruct`
+
+##### Local Models (via Ollama)
+- Configure `OLLAMA_BASE_URL` environment variable
+- Use model names like `llama3.1:8b`, `qwen2:7b`
+
+### Methods
+
+#### forward()
+
+Main method for performing searches.
+
+```python
+def forward(
+ self,
+ query: str,
+ max_results: Optional[int] = None,
+ mode: Optional[str] = None
+) -> str
+```
+
+**Parameters:**
+- `query` (str): The search query string
+- `max_results` (Optional[int]): Override default max_results for this query
+- `mode` (Optional[str]): Override default mode for this query
+
+**Returns:**
+- `str`: Comprehensive answer based on search results
+
+**Example:**
+```python
+search_tool = OpenDeepSearchTool()
+result = search_tool.forward("What are the latest AI developments in 2024?")
+print(result)
+```
+
+#### search_and_rank()
+
+Lower-level method that returns structured search results.
+
+```python
+def search_and_rank(
+ self,
+ query: str,
+ max_results: Optional[int] = None
+) -> List[Dict[str, Any]]
+```
+
+**Parameters:**
+- `query` (str): The search query string
+- `max_results` (Optional[int]): Override default max_results
+
+**Returns:**
+- `List[Dict[str, Any]]`: List of ranked search results
+
+**Result Structure:**
+```python
+[
+ {
+ "title": "Article Title",
+ "url": "https://example.com",
+ "content": "Article content excerpt...",
+ "score": 0.95, # Relevance score (if reranking enabled)
+ "source": "example.com"
+ },
+ # ... more results
+]
+```
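The ranked results can be post-processed with plain Python. A minimal sketch, assuming the result structure documented above (the `results` literal stands in for a real `search_and_rank()` return value):

```python
# Stand-in for a search_and_rank() return value, using the documented structure
results = [
    {"title": "Article A", "url": "https://example.com/a",
     "content": "Excerpt A...", "score": 0.95, "source": "example.com"},
    {"title": "Article B", "url": "https://example.org/b",
     "content": "Excerpt B...", "score": 0.42, "source": "example.org"},
]

# Keep only results above a relevance threshold, then collect their sources
relevant = [r for r in results if r.get("score", 0.0) >= 0.5]
sources = [r["source"] for r in relevant]
print(sources)  # ['example.com']
```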
+
+#### update_config()
+
+Update configuration parameters after initialization.
+
+```python
+def update_config(self, **kwargs) -> None
+```
+
+**Parameters:**
+- `**kwargs`: Configuration parameters to update
+
+**Example:**
+```python
+search_tool = OpenDeepSearchTool()
+search_tool.update_config(model_name="gpt-4o", max_results=15)
+```
+
+## Configuration Parameters
+
+### Environment Variables
+
+#### Required Variables (choose one search provider)
+
+**Serper.dev:**
+```bash
+export SERPER_API_KEY='your-serper-api-key'
+```
+
+**SearXNG:**
+```bash
+export SEARXNG_INSTANCE_URL='https://your-searxng-instance.com'
+export SEARXNG_API_KEY='your-api-key' # Optional
+```
+
+#### LLM Provider Keys (choose one or more)
+
+```bash
+# OpenAI
+export OPENAI_API_KEY='your-openai-api-key'
+export OPENAI_BASE_URL='https://api.openai.com/v1' # Optional
+
+# Anthropic
+export ANTHROPIC_API_KEY='your-anthropic-api-key'
+
+# Google
+export GOOGLE_API_KEY='your-google-api-key'
+
+# OpenRouter
+export OPENROUTER_API_KEY='your-openrouter-api-key'
+
+# Local Ollama
+export OLLAMA_BASE_URL='http://localhost:11434'
+```
+
+#### Reranker Keys
+
+```bash
+# Jina AI
+export JINA_API_KEY='your-jina-api-key'
+
+# Infinity (self-hosted)
+export INFINITY_API_BASE='http://localhost:7997'
+```
+
+#### Optional Model Configuration
+
+```bash
+# Default models for different tasks
+export LITELLM_MODEL_ID='gpt-4o-mini'
+export LITELLM_SEARCH_MODEL_ID='gpt-4o-mini'
+export LITELLM_ORCHESTRATOR_MODEL_ID='claude-3-sonnet-20240229'
+export LITELLM_EVAL_MODEL_ID='gpt-4o'
+```
+
+### Search Modes
+
+#### Default Mode
+- **Speed**: ⚡⚡⚡ Fast
+- **Quality**: ⭐⭐⭐ Good
+- **Use Case**: Quick searches, simple queries
+- **Process**: Single search → Direct synthesis
+
+#### Pro Mode (Deep Search)
+- **Speed**: ⚡⚡ Moderate
+- **Quality**: ⭐⭐⭐⭐⭐ Excellent
+- **Use Case**: Research, complex multi-hop queries
+- **Process**: Initial search → Query refinement → Additional searches → Comprehensive synthesis
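Since `mode` can be set per tool (or per call), one option is to route queries between the two modes programmatically. A hypothetical helper, not part of the OpenDeepSearch API, using rough complexity cues:

```python
# Hypothetical helper (not part of the library): pick a search mode from
# rough query-complexity cues before passing mode= to the tool.
def choose_mode(query: str) -> str:
    # Cues that often indicate multi-hop or comparative questions
    cues = ("compare", " vs ", "why", "how did")
    hits = sum(query.lower().count(cue) for cue in cues)
    # Long or multi-hop-looking queries get the deeper "pro" mode
    return "pro" if hits >= 2 or len(query.split()) > 25 else "default"

print(choose_mode("What is the capital of France?"))  # default
print(choose_mode("Compare Tesla's market cap now vs two years ago, and explain why it changed."))  # pro
```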
+
+### Reranker Options
+
+#### Jina AI (Cloud)
+```python
+OpenDeepSearchTool(reranker="jina")
+```
+- **Pros**: Easy setup, high quality
+- **Cons**: Requires API key, usage costs
+- **Best for**: Production use, high-quality results
+
+#### Infinity (Self-hosted)
+```python
+OpenDeepSearchTool(
+ reranker="infinity",
+ reranker_model="Alibaba-NLP/gte-Qwen2-7B-instruct"
+)
+```
+- **Pros**: Private, customizable, no API costs
+- **Cons**: Requires setup, computational resources
+- **Best for**: Privacy-sensitive applications, high-volume usage
+
+#### No Reranking
+```python
+OpenDeepSearchTool(reranker=None)
+```
+- **Pros**: Fastest, no additional dependencies
+- **Cons**: Lower result quality
+- **Best for**: Speed-critical applications, resource-constrained environments
+
+## Response Formats
+
+### Text Response (default)
+
+```python
+result = search_tool.forward("query")
+# Returns: str - Comprehensive answer
+```
+
+### Structured Response
+
+```python
+results = search_tool.search_and_rank("query")
+# Returns: List[Dict] - Structured search results
+```
+
+Example structured response:
+```json
+[
+ {
+ "title": "Latest AI Developments in 2024",
+ "url": "https://example.com/ai-2024",
+ "content": "Recent breakthroughs in artificial intelligence include...",
+ "score": 0.95,
+ "source": "example.com",
+ "published_date": "2024-01-15",
+ "author": "AI Researcher"
+ }
+]
+```
+
+## Error Handling
+
+### Common Exceptions
+
+#### APIKeyError
+```python
+try:
+ search_tool = OpenDeepSearchTool()
+ result = search_tool.forward("test query")
+except APIKeyError as e:
+ print(f"API key missing or invalid: {e}")
+```
+
+#### ModelNotFoundError
+```python
+try:
+ search_tool = OpenDeepSearchTool(model_name="invalid-model")
+except ModelNotFoundError as e:
+ print(f"Model not available: {e}")
+```
+
+#### SearchProviderError
+```python
+try:
+ result = search_tool.forward("query")
+except SearchProviderError as e:
+ print(f"Search provider error: {e}")
+```
+
+#### RerankerError
+```python
+try:
+ search_tool = OpenDeepSearchTool(reranker="jina")
+ result = search_tool.forward("query")
+except RerankerError as e:
+ print(f"Reranker error: {e}")
+ # Fallback to no reranking
+ search_tool.reranker = None
+ result = search_tool.forward("query")
+```
+
+### Error Recovery Patterns
+
+#### Automatic Fallback
+```python
+class RobustSearchTool:
+ def __init__(self):
+ self.configs = [
+ {"model_name": "gpt-4o", "reranker": "jina"},
+ {"model_name": "gpt-4o-mini", "reranker": "jina"},
+ {"model_name": "gpt-3.5-turbo", "reranker": None},
+ ]
+
+ def search(self, query):
+ for config in self.configs:
+ try:
+ tool = OpenDeepSearchTool(**config)
+ return tool.forward(query)
+ except Exception as e:
+ print(f"Config failed: {config}, Error: {e}")
+ continue
+
+ raise Exception("All configurations failed")
+```
+
+#### Retry Logic
+```python
+import time
+from functools import wraps
+
+def retry(max_attempts=3, delay=1):
+ def decorator(func):
+ @wraps(func)
+ def wrapper(*args, **kwargs):
+ for attempt in range(max_attempts):
+ try:
+ return func(*args, **kwargs)
+ except Exception as e:
+ if attempt == max_attempts - 1:
+ raise e
+ time.sleep(delay * (2 ** attempt))
+ return None
+ return wrapper
+ return decorator
+
+@retry(max_attempts=3, delay=1)
+def robust_search(query):
+ search_tool = OpenDeepSearchTool()
+ return search_tool.forward(query)
+```
+
+## Advanced Usage
+
+### Custom Configuration Classes
+
+```python
+from dataclasses import dataclass
+from typing import Optional
+
+@dataclass
+class SearchConfig:
+ model_name: str = "gpt-4o-mini"
+ mode: str = "default"
+ reranker: Optional[str] = "jina"
+ max_results: int = 10
+ temperature: float = 0.1
+ timeout: int = 30
+
+class ConfiguredSearchTool:
+ def __init__(self, config: SearchConfig):
+ self.config = config
+ self.tool = OpenDeepSearchTool(
+ model_name=config.model_name,
+ mode=config.mode,
+ reranker=config.reranker,
+ max_results=config.max_results
+ )
+
+ def search(self, query: str) -> str:
+ return self.tool.forward(query)
+
+# Usage
+config = SearchConfig(model_name="gpt-4o", mode="pro")
+search_tool = ConfiguredSearchTool(config)
+result = search_tool.search("complex research query")
+```
+
+### Integration with Popular Frameworks
+
+#### FastAPI Integration
+```python
+from fastapi import FastAPI, HTTPException
+from pydantic import BaseModel
+
+app = FastAPI()
+search_tool = OpenDeepSearchTool()
+
+class SearchRequest(BaseModel):
+ query: str
+ mode: str = "default"
+ max_results: int = 10
+
+@app.post("/search")
+async def search_endpoint(request: SearchRequest):
+ try:
+ result = search_tool.forward(
+ request.query,
+ max_results=request.max_results,
+ mode=request.mode
+ )
+ return {"result": result}
+ except Exception as e:
+ raise HTTPException(status_code=500, detail=str(e))
+```
+
+#### Flask Integration
+```python
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+search_tool = OpenDeepSearchTool()
+
+@app.route('/search', methods=['POST'])
+def search():
+ data = request.get_json()
+ try:
+ result = search_tool.forward(data['query'])
+ return jsonify({"result": result})
+ except Exception as e:
+ return jsonify({"error": str(e)}), 500
+```
+
+This API reference provides comprehensive documentation for all OpenDeepSearch functionality. For additional examples and use cases, see the [Examples Guide](EXAMPLES.md).
\ No newline at end of file
diff --git a/docs/DETAILED_SETUP.md b/docs/DETAILED_SETUP.md
new file mode 100644
index 0000000..fb68a81
--- /dev/null
+++ b/docs/DETAILED_SETUP.md
@@ -0,0 +1,351 @@
+# 🔧 Detailed Setup Guide
+
+This guide provides comprehensive setup instructions for OpenDeepSearch, covering different environments and use cases.
+
+## Table of Contents
+
+- [Prerequisites](#prerequisites)
+- [Environment Setup](#environment-setup)
+- [Provider Configuration](#provider-configuration)
+- [Advanced Configuration](#advanced-configuration)
+- [Troubleshooting](#troubleshooting)
+
+## Prerequisites
+
+### System Requirements
+
+- **Python**: 3.8 or higher
+- **PyTorch**: Required for embedding models
+- **Memory**: At least 4GB RAM (8GB+ recommended for local models)
+- **Disk Space**: 2GB+ for model downloads
+
+### Platform Support
+
+- ✅ Linux (Ubuntu 20.04+, CentOS 8+)
+- ✅ macOS (10.15+)
+- ✅ Windows 10/11 (with WSL2 recommended)
+
+## Environment Setup
+
+### Option 1: Using pip (Recommended for most users)
+
+```bash
+# Create a virtual environment
+python -m venv ods-env
+source ods-env/bin/activate # On Windows: ods-env\Scripts\activate
+
+# Install OpenDeepSearch
+pip install -e .
+pip install -r requirements.txt
+
+# Verify installation
+python -c "from opendeepsearch import OpenDeepSearchTool; print('Installation successful!')"
+```
+
+### Option 2: Using uv (Fastest installation)
+
+```bash
+# Install uv if not already installed
+curl -LsSf https://astral.sh/uv/install.sh | sh
+
+# Create environment and install
+uv venv ods-env
+source ods-env/bin/activate # On Windows: ods-env\Scripts\activate
+uv pip install -e .
+uv pip install -r requirements.txt
+```
+
+### Option 3: Using PDM (For development)
+
+```bash
+# Install PDM
+curl -sSL https://raw.githubusercontent.com/pdm-project/pdm/main/install-pdm.py | python3 -
+
+# Initialize and install
+pdm install
+eval "$(pdm venv activate)"
+```
+
+## Provider Configuration
+
+### Search Providers
+
+#### Serper.dev (Recommended for beginners)
+
+1. **Sign up**: Visit [serper.dev](https://serper.dev)
+2. **Get API Key**: Copy your API key from the dashboard
+3. **Set Environment Variable**:
+
+```bash
+export SERPER_API_KEY='your-serper-api-key-here'
+```
+
+**Pros**: 2,500 free credits, easy setup, reliable
+**Cons**: Limited to free tier credits
+
+#### SearXNG (Self-hosted)
+
+1. **Deploy SearXNG**: Use Docker or manual installation
+2. **Configure Instance**:
+
+```bash
+export SEARXNG_INSTANCE_URL='https://your-searxng-instance.com'
+export SEARXNG_API_KEY='your-api-key-here' # Optional
+```
+
+**Example SearXNG Docker setup**:
+```bash
+docker run -d --name searxng -p 8080:8080 searxng/searxng
+export SEARXNG_INSTANCE_URL='http://localhost:8080'
+```
+
+**Pros**: Free, private, customizable
+**Cons**: Requires setup and maintenance
+
+### LLM Providers
+
+#### OpenAI
+
+```bash
+export OPENAI_API_KEY='your-openai-api-key-here'
+```
+
+**Recommended models**:
+- `gpt-4o-mini`: Cost-effective for most tasks
+- `gpt-4o`: Best performance for complex queries
+- `gpt-3.5-turbo`: Fastest, good for simple searches
+
+#### Anthropic Claude
+
+```bash
+export ANTHROPIC_API_KEY='your-anthropic-api-key-here'
+```
+
+**Recommended models**:
+- `claude-3-haiku-20240307`: Fast and cost-effective
+- `claude-3-sonnet-20240229`: Balanced performance
+- `claude-3-opus-20240229`: Best for complex reasoning
+
+#### Google Gemini
+
+```bash
+export GOOGLE_API_KEY='your-google-api-key-here'
+```
+
+**Recommended models**:
+- `gemini-1.5-flash`: Fast and efficient
+- `gemini-1.5-pro`: Best performance
+
+#### OpenRouter (Access to multiple models)
+
+```bash
+export OPENROUTER_API_KEY='your-openrouter-api-key-here'
+```
+
+**Popular models**:
+- `openrouter/google/gemini-2.0-flash-001`: Latest Gemini
+- `openrouter/anthropic/claude-3.5-sonnet`: Claude access
+- `openrouter/meta-llama/llama-3.1-70b-instruct`: Open source option
+
+#### Local Models (Ollama)
+
+```bash
+# Install Ollama
+curl -fsSL https://ollama.com/install.sh | sh
+
+# Pull a model
+ollama pull llama3.1:8b
+
+# Configure for OpenDeepSearch
+export OLLAMA_BASE_URL='http://localhost:11434'
+```
+
+### Reranking Models
+
+#### Jina AI (Cloud-based)
+
+```bash
+export JINA_API_KEY='your-jina-api-key-here'
+```
+
+**Usage**:
+```python
+search_agent = OpenDeepSearchTool(reranker="jina")
+```
+
+#### Infinity Embeddings (Self-hosted)
+
+```bash
+# Install Infinity
+pip install infinity-emb[all]
+
+# Start server with a model
+infinity_emb v2 --model-id Alibaba-NLP/gte-Qwen2-7B-instruct
+
+# Configure endpoint
+export INFINITY_API_BASE='http://localhost:7997'
+```
+
+**Usage**:
+```python
+search_agent = OpenDeepSearchTool(
+ reranker="infinity",
+ reranker_model="Alibaba-NLP/gte-Qwen2-7B-instruct"
+)
+```
+
+## Advanced Configuration
+
+### Model-Specific Settings
+
+Set different models for different tasks:
+
+```bash
+# General fallback model
+export LITELLM_MODEL_ID='openrouter/google/gemini-2.0-flash-001'
+
+# Task-specific models
+export LITELLM_SEARCH_MODEL_ID='gpt-4o-mini' # Fast for search queries
+export LITELLM_ORCHESTRATOR_MODEL_ID='claude-3-sonnet-20240229' # Good reasoning
+export LITELLM_EVAL_MODEL_ID='gpt-4o' # Best for evaluation
+```
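The fallback order these variables imply (task-specific variable first, then `LITELLM_MODEL_ID`, then a hard-coded default) can be sketched in a few lines. The `resolve_model` helper below is illustrative, not part of the library:

```python
import os

# Sketch: resolve a task-specific model with the fallback chain implied above
# (task-specific variable, then LITELLM_MODEL_ID, then a default).
def resolve_model(task: str, default: str = "gpt-4o-mini") -> str:
    task_var = f"LITELLM_{task.upper()}_MODEL_ID"
    return os.environ.get(task_var) or os.environ.get("LITELLM_MODEL_ID") or default

os.environ["LITELLM_MODEL_ID"] = "openrouter/google/gemini-2.0-flash-001"
os.environ["LITELLM_EVAL_MODEL_ID"] = "gpt-4o"

print(resolve_model("eval"))    # gpt-4o
print(resolve_model("search"))  # openrouter/google/gemini-2.0-flash-001 (general fallback)
```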
+
+### Performance Tuning
+
+#### Memory Optimization
+
+```python
+# For low-memory environments
+search_agent = OpenDeepSearchTool(
+ model_name="gpt-3.5-turbo", # Smaller model
+ max_results=5, # Fewer results
+ reranker=None # Disable reranking
+)
+```
+
+#### Speed Optimization
+
+```python
+# For fastest responses
+search_agent = OpenDeepSearchTool(
+ model_name="gpt-3.5-turbo",
+ mode="default", # Use default mode (faster than pro)
+ max_results=3
+)
+```
+
+#### Quality Optimization
+
+```python
+# For best results
+search_agent = OpenDeepSearchTool(
+ model_name="gpt-4o",
+ mode="pro", # Deep search mode
+ max_results=10,
+ reranker="jina"
+)
+```
+
+### Custom Configuration Files
+
+Create a `.env` file for persistent settings:
+
+```bash
+# .env file
+SERPER_API_KEY=your-serper-api-key
+OPENAI_API_KEY=your-openai-api-key
+JINA_API_KEY=your-jina-api-key
+LITELLM_MODEL_ID=gpt-4o-mini
+```
+
+Load in Python:
+```python
+from dotenv import load_dotenv
+load_dotenv()
+```
+
+## Troubleshooting
+
+### Common Issues
+
+#### 1. Import Error
+
+**Error**: `ModuleNotFoundError: No module named 'opendeepsearch'`
+
+**Solution**:
+```bash
+pip install -e .
+```
+
+#### 2. API Key Not Found
+
+**Error**: `API key not found for provider`
+
+**Solution**:
+```bash
+# Check if environment variable is set
+echo $OPENAI_API_KEY
+
+# Set it if missing
+export OPENAI_API_KEY='your-key-here'
+```
+
+#### 3. Model Not Found
+
+**Error**: `Model not found` or `Invalid model name`
+
+**Solution**: Use the correct model naming format:
+```python
+# Correct formats
+"gpt-4o-mini" # OpenAI
+"claude-3-sonnet-20240229" # Anthropic
+"openrouter/google/gemini-2.0-flash-001" # OpenRouter
+```
+
+#### 4. Reranker Issues
+
+**Error**: Reranking fails or returns poor results
+
+**Solutions**:
+```python
+# Disable reranking temporarily
+search_agent = OpenDeepSearchTool(reranker=None)
+
+# Or try a different reranker
+search_agent = OpenDeepSearchTool(reranker="infinity")
+```
+
+#### 5. Memory Issues
+
+**Error**: `CUDA out of memory` or system hangs
+
+**Solutions**:
+```python
+# Use smaller models
+search_agent = OpenDeepSearchTool(
+ model_name="gpt-3.5-turbo",
+ reranker=None
+)
+
+# Reduce batch size
+search_agent.max_results = 3
+```
+
+### Getting Help
+
+1. **Check the logs**: Enable debug mode for detailed error information
+2. **Community**: Join our [Discord](https://discord.gg/sentientfoundation)
+3. **Issues**: Create an issue on [GitHub](https://github.com/sentient-agi/OpenDeepSearch/issues)
+
+### Performance Benchmarks
+
+| Configuration | Speed | Quality | Cost | Use Case |
+|---------------|-------|---------|------|----------|
+| gpt-3.5-turbo + default | ⚡⚡⚡ | ⭐⭐ | 💰 | Quick searches |
+| gpt-4o-mini + default | ⚡⚡ | ⭐⭐⭐ | 💰💰 | Balanced |
+| gpt-4o + pro + jina | ⚡ | ⭐⭐⭐⭐⭐ | 💰💰💰 | Research tasks |
+| claude-3-sonnet + pro | ⚡ | ⭐⭐⭐⭐ | 💰💰💰 | Complex reasoning |
+
+Legend: ⚡ = Speed, ⭐ = Quality, 💰 = Cost per query
\ No newline at end of file
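The benchmark rows above can be distilled into named presets. A hypothetical lookup table (the preset names and the `preset_for` helper are illustrative; the config keys match the documented constructor parameters):

```python
# Hypothetical preset table distilled from the benchmark rows above;
# OpenDeepSearchTool(**preset_for("research")) would build the matching tool.
PRESETS = {
    "quick":    {"model_name": "gpt-3.5-turbo", "mode": "default", "reranker": None},
    "balanced": {"model_name": "gpt-4o-mini",   "mode": "default", "reranker": "jina"},
    "research": {"model_name": "gpt-4o",        "mode": "pro",     "reranker": "jina"},
}

def preset_for(use_case: str) -> dict:
    # Fall back to the balanced preset for unknown use cases
    return PRESETS.get(use_case, PRESETS["balanced"])

print(preset_for("research")["mode"])       # pro
print(preset_for("unknown")["model_name"])  # gpt-4o-mini
```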
diff --git a/docs/EXAMPLES.md b/docs/EXAMPLES.md
new file mode 100644
index 0000000..807513e
--- /dev/null
+++ b/docs/EXAMPLES.md
@@ -0,0 +1,533 @@
+# 💡 Usage Examples
+
+This document provides comprehensive examples of using OpenDeepSearch in various scenarios.
+
+## Table of Contents
+
+- [Basic Usage](#basic-usage)
+- [Research & Academic](#research--academic)
+- [Business Intelligence](#business-intelligence)
+- [Technical Documentation](#technical-documentation)
+- [Creative Applications](#creative-applications)
+- [Integration Examples](#integration-examples)
+
+## Basic Usage
+
+### Simple Search Query
+
+```python
+from opendeepsearch import OpenDeepSearchTool
+import os
+
+# Basic setup
+os.environ["SERPER_API_KEY"] = "your-api-key"
+os.environ["OPENAI_API_KEY"] = "your-api-key"
+
+search_tool = OpenDeepSearchTool()
+
+# Simple search
+result = search_tool.forward("What is the latest news about OpenAI?")
+print(result)
+```
+
+### Multi-hop Query
+
+```python
+# Complex query requiring multiple steps
+query = """
+Find the current market cap of Tesla and compare it with the
+market cap of Tesla 2 years ago. What factors contributed to the change?
+"""
+
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o",
+ mode="pro" # Use deep search for complex queries
+)
+
+result = search_tool.forward(query)
+print(result)
+```
+
+## Research & Academic
+
+### Literature Review
+
+```python
+# Academic research setup
+search_tool = OpenDeepSearchTool(
+ model_name="claude-3-sonnet-20240229",
+ mode="pro",
+ reranker="jina",
+ max_results=10
+)
+
+# Research query
+query = """
+Find recent papers (2023-2024) about transformer attention mechanisms
+and their computational efficiency improvements. Summarize the key findings
+and methodologies from at least 5 different papers.
+"""
+
+result = search_tool.forward(query)
+print(result)
+```
+
+### Fact Checking
+
+```python
+# Fact-checking configuration
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o",
+ mode="pro",
+ reranker="jina"
+)
+
+claims_to_verify = [
+ "The human brain has 86 billion neurons",
+ "Honey never spoils",
+ "The Great Wall of China is visible from space"
+]
+
+for claim in claims_to_verify:
+ query = f"Verify this claim with recent scientific evidence: {claim}"
+ result = search_tool.forward(query)
+ print(f"Claim: {claim}")
+ print(f"Verification: {result}\n")
+```
+
+### Comparative Analysis
+
+```python
+# Compare different research approaches
+query = """
+Compare the effectiveness of different COVID-19 vaccines (mRNA vs viral vector vs protein subunit)
+based on peer-reviewed studies from 2023-2024. Include data on efficacy rates,
+side effects, and real-world effectiveness.
+"""
+
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o",
+ mode="pro",
+ max_results=15
+)
+
+result = search_tool.forward(query)
+print(result)
+```
+
+## Business Intelligence
+
+### Market Research
+
+```python
+# Market analysis setup
+search_tool = OpenDeepSearchTool(
+ model_name="claude-3-sonnet-20240229",
+ mode="pro"
+)
+
+# Comprehensive market analysis
+query = """
+Analyze the electric vehicle charging infrastructure market in Europe for 2024.
+Include: market size, key players, growth projections, government policies,
+and challenges facing the industry.
+"""
+
+result = search_tool.forward(query)
+print(result)
+```
+
+### Competitor Analysis
+
+```python
+# Track competitors
+companies = ["Tesla", "BYD", "Volkswagen ID series"]
+
+for company in companies:
+ query = f"""
+ What are {company}'s latest strategic moves in the EV market?
+ Include recent partnerships, product launches, and market expansion plans.
+ """
+
+ result = search_tool.forward(query)
+ print(f"=== {company} Analysis ===")
+ print(result)
+ print("\n" + "="*50 + "\n")
+```
+
+### Industry Trends
+
+```python
+# Trend analysis
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode="default",
+ max_results=8
+)
+
+industries = [
+ "artificial intelligence in healthcare",
+ "renewable energy storage solutions",
+ "fintech innovations in 2024"
+]
+
+trend_analysis = {}
+for industry in industries:
+ query = f"""
+ What are the top 5 trends in {industry} for 2024?
+ Include market data, key innovations, and future outlook.
+ """
+
+ trend_analysis[industry] = search_tool.forward(query)
+
+# Print consolidated report
+for industry, analysis in trend_analysis.items():
+ print(f"\n{'='*60}")
+ print(f"TREND ANALYSIS: {industry.upper()}")
+ print(f"{'='*60}")
+ print(analysis)
+```
+
+## Technical Documentation
+
+### API Documentation Search
+
+```python
+# Find specific API information
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode="default"
+)
+
+# Search for specific technical information
+query = """
+How do I implement OAuth 2.0 authentication with Google APIs in Python?
+Provide code examples and explain the complete flow including token refresh.
+"""
+
+result = search_tool.forward(query)
+print(result)
+```
+
+### Technology Comparison
+
+```python
+# Compare technologies
+query = """
+Compare React, Vue.js, and Angular for building large-scale web applications in 2024.
+Consider: performance, learning curve, ecosystem, enterprise adoption, and recent updates.
+Provide specific examples and benchmarks where available.
+"""
+
+search_tool = OpenDeepSearchTool(
+ model_name="claude-3-sonnet-20240229",
+ mode="pro",
+ max_results=12
+)
+
+result = search_tool.forward(query)
+print(result)
+```
+
+### Best Practices Research
+
+```python
+# Find current best practices
+query = """
+What are the current best practices for Kubernetes security in 2024?
+Include RBAC configuration, network policies, secrets management,
+and container security scanning. Provide practical implementation examples.
+"""
+
+result = search_tool.forward(query)
+print(result)
+```
+
+## Creative Applications
+
+### Content Ideas Generation
+
+```python
+# Content creation assistant
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode="default"
+)
+
+topic = "sustainable living"
+query = f"""
+Find trending topics and recent developments related to {topic}.
+What are people currently discussing? What new innovations or
+studies have emerged recently? Suggest 10 engaging content ideas.
+"""
+
+result = search_tool.forward(query)
+print(result)
+```
+
+### Travel Planning
+
+```python
+# Comprehensive travel research
+destination = "Japan"
+travel_dates = "April 2024"
+
+query = f"""
+Plan a 2-week trip to {destination} in {travel_dates}.
+Find current travel requirements, weather conditions, cultural events,
+must-visit places, local transportation tips, and budget estimates.
+Include recent traveler reviews and recommendations.
+"""
+
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o",
+ mode="pro",
+ max_results=15
+)
+
+result = search_tool.forward(query)
+print(result)
+```
+
+## Integration Examples
+
+### With SmolAgents
+
+```python
+from smolagents import ReactCodeAgent
+from opendeepsearch import OpenDeepSearchTool
+
+# Create search tool
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ reranker="jina"
+)
+
+# Create agent with search capability
+agent = ReactCodeAgent(
+ tools=[search_tool],
+ model="gpt-4o-mini"
+)
+
+# Use the agent
+result = agent.run("""
+Search for the latest Python web frameworks released in 2024
+and create a comparison table with their key features.
+""")
+print(result)
+```
+
+### Custom Tool Integration
+
+```python
+from smolagents import tool
+
+@tool
+def enhanced_search(query: str, mode: str = "default") -> str:
+ """
+ Enhanced search with custom processing
+
+ Args:
+ query: The search query
+ mode: Search mode ('default' or 'pro')
+ """
+ search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode=mode,
+ reranker="jina"
+ )
+
+ # Custom preprocessing
+ enhanced_query = f"Latest 2024 information about: {query}"
+
+ # Get search results
+ result = search_tool.forward(enhanced_query)
+
+ # Custom postprocessing
+    processed_result = f"🔍 Search Results for '{query}':\n\n{result}"
+
+ return processed_result
+
+# Use the custom tool
+result = enhanced_search("machine learning interpretability", mode="pro")
+print(result)
+```
+
+### Batch Processing
+
+```python
+from concurrent.futures import ThreadPoolExecutor
+
+class BatchSearchProcessor:
+ def __init__(self, max_workers=3):
+ self.search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode="default"
+ )
+ self.max_workers = max_workers
+
+ def search_single(self, query):
+ """Process a single search query"""
+ try:
+ result = self.search_tool.forward(query)
+ return {"query": query, "result": result, "status": "success"}
+ except Exception as e:
+ return {"query": query, "error": str(e), "status": "error"}
+
+ def search_batch(self, queries):
+ """Process multiple queries in parallel"""
+ with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
+ futures = [executor.submit(self.search_single, query) for query in queries]
+ results = [future.result() for future in futures]
+
+ return results
+
+# Usage example
+processor = BatchSearchProcessor(max_workers=2)
+
+queries = [
+ "Latest AI breakthroughs in 2024",
+ "Climate change impact on agriculture",
+ "Quantum computing commercial applications",
+ "Space exploration missions planned for 2025"
+]
+
+results = processor.search_batch(queries)
+
+for result in results:
+ if result["status"] == "success":
+ print(f"Query: {result['query']}")
+ print(f"Result: {result['result'][:200]}...\n")
+ else:
+ print(f"Error with query '{result['query']}': {result['error']}\n")
+```
+
+### With LangChain
+
+```python
+from langchain.tools import BaseTool
+from langchain.agents import initialize_agent, AgentType
+from langchain.llms import OpenAI
+
+class OpenDeepSearchLangChainTool(BaseTool):
+    name: str = "OpenDeepSearch"
+    description: str = "Use this tool for comprehensive web search and information retrieval"
+
+ def __init__(self):
+ super().__init__()
+ self.search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode="pro"
+ )
+
+ def _run(self, query: str) -> str:
+ """Execute the search"""
+ return self.search_tool.forward(query)
+
+ async def _arun(self, query: str) -> str:
+ """Async version"""
+ return self._run(query)
+
+# Create LangChain agent with OpenDeepSearch
+llm = OpenAI(temperature=0)
+tools = [OpenDeepSearchLangChainTool()]
+
+agent = initialize_agent(
+ tools,
+ llm,
+ agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
+ verbose=True
+)
+
+# Use the agent
+result = agent.run("Find recent developments in renewable energy storage and summarize the key innovations")
+print(result)
+```
+
+## Configuration Templates
+
+### Research Configuration
+
+```python
+# Optimized for academic research
+RESEARCH_CONFIG = {
+ "model_name": "gpt-4o",
+ "mode": "pro",
+ "reranker": "jina",
+ "max_results": 12,
+ "search_provider": "serper" # Reliable for academic sources
+}
+
+search_tool = OpenDeepSearchTool(**RESEARCH_CONFIG)
+```
+
+### Business Intelligence Configuration
+
+```python
+# Optimized for business analysis
+BUSINESS_CONFIG = {
+ "model_name": "claude-3-sonnet-20240229",
+ "mode": "pro",
+ "reranker": "jina",
+ "max_results": 10,
+ "search_provider": "serper"
+}
+
+search_tool = OpenDeepSearchTool(**BUSINESS_CONFIG)
+```
+
+### Quick Search Configuration
+
+```python
+# Optimized for speed
+QUICK_CONFIG = {
+ "model_name": "gpt-3.5-turbo",
+ "mode": "default",
+ "reranker": None,
+ "max_results": 5,
+ "search_provider": "serper"
+}
+
+search_tool = OpenDeepSearchTool(**QUICK_CONFIG)
+```
+
+## Error Handling Examples
+
+```python
+import logging
+from opendeepsearch import OpenDeepSearchTool
+
+# Set up logging
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+def safe_search(query, max_retries=3):
+ """Search with error handling and retries"""
+ search_tool = OpenDeepSearchTool(
+ model_name="gpt-4o-mini",
+ mode="default"
+ )
+
+ for attempt in range(max_retries):
+ try:
+ result = search_tool.forward(query)
+ logger.info(f"Search successful on attempt {attempt + 1}")
+ return result
+
+ except Exception as e:
+ logger.warning(f"Attempt {attempt + 1} failed: {str(e)}")
+ if attempt == max_retries - 1:
+ logger.error(f"All {max_retries} attempts failed for query: {query}")
+ return f"Search failed after {max_retries} attempts: {str(e)}"
+
+ # Wait before retrying
+ import time
+ time.sleep(2 ** attempt) # Exponential backoff
+
+# Usage
+result = safe_search("Latest developments in quantum computing")
+print(result)
+```
+
+These examples cover the most common OpenDeepSearch use cases. Adapt the configurations and query patterns to your own requirements.
\ No newline at end of file
diff --git a/docs/TROUBLESHOOTING.md b/docs/TROUBLESHOOTING.md
new file mode 100644
index 0000000..05917ce
--- /dev/null
+++ b/docs/TROUBLESHOOTING.md
@@ -0,0 +1,701 @@
+# 🛠️ Troubleshooting Guide
+
+Common issues and solutions for OpenDeepSearch.
+
+## Table of Contents
+
+- [Installation Issues](#installation-issues)
+- [Configuration Problems](#configuration-problems)
+- [Runtime Errors](#runtime-errors)
+- [Performance Issues](#performance-issues)
+- [Integration Problems](#integration-problems)
+- [Getting Help](#getting-help)
+
+## Installation Issues
+
+### Problem: `ModuleNotFoundError: No module named 'opendeepsearch'`
+
+**Cause:** Package not installed or not in Python path
+
+**Solutions:**
+
+1. **Install in development mode:**
+```bash
+cd OpenDeepSearch
+pip install -e .
+```
+
+2. **Install requirements:**
+```bash
+pip install -r requirements.txt
+```
+
+3. **Verify Python environment:**
+```bash
+which python
+pip list | grep opendeepsearch
+```
+
+4. **Check virtual environment:**
+```bash
+# Create new virtual environment
+python -m venv ods-env
+source ods-env/bin/activate # On Windows: ods-env\Scripts\activate
+pip install -e .
+```
+
+### Problem: `torch` Installation Issues
+
+**Symptoms:** CUDA errors, slow performance, or import errors
+
+**Solutions:**
+
+1. **Install PyTorch with CUDA (recommended):**
+```bash
+# For CUDA 11.8
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
+
+# For CUDA 12.1
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
+
+# CPU only
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
+```
+
+2. **Verify installation:**
+```python
+import torch
+print(f"PyTorch version: {torch.__version__}")
+print(f"CUDA available: {torch.cuda.is_available()}")
+print(f"CUDA version: {torch.version.cuda}")
+```
+
+### Problem: `PDM` Installation Fails
+
+**Solutions:**
+
+1. **Alternative installation methods:**
+```bash
+# Using pip
+pip install pdm
+
+# Using homebrew (macOS)
+brew install pdm
+
+# Manual installation
+curl -sSL https://raw.githubusercontent.com/pdm-project/pdm/main/install-pdm.py | python3 -
+```
+
+2. **Skip PDM and use pip:**
+```bash
+pip install -e .
+pip install -r requirements.txt
+```
+
+## Configuration Problems
+
+### Problem: API Key Not Found
+
+**Error Messages:**
+- `API key not found for OpenAI`
+- `Invalid API key`
+- `Authentication failed`
+
+**Solutions:**
+
+1. **Set environment variables:**
+```bash
+# Check current environment variables
+env | grep -E "(OPENAI|ANTHROPIC|GOOGLE|SERPER|JINA)"
+
+# Set missing keys
+export OPENAI_API_KEY='your-key-here'
+export SERPER_API_KEY='your-key-here'
+```
+
+2. **Use .env file:**
+```bash
+# Create .env file
+cat << EOF > .env
+OPENAI_API_KEY=your-openai-key
+SERPER_API_KEY=your-serper-key
+JINA_API_KEY=your-jina-key
+EOF
+
+# Load in Python
+from dotenv import load_dotenv
+load_dotenv()
+```
+
+3. **Verify API keys:**
+```python
+import os
+required_keys = ['OPENAI_API_KEY', 'SERPER_API_KEY']
+for key in required_keys:
+ if key in os.environ:
+        print(f"✅ {key}: {'*' * (len(os.environ[key]) - 4) + os.environ[key][-4:]}")
+    else:
+        print(f"❌ {key}: Not set")
+```
+
+### Problem: Model Not Found
+
+**Error:** `Model 'xyz' not found` or `Invalid model name`
+
+**Solutions:**
+
+1. **Check model name format:**
+```python
+# Correct formats
+"gpt-4o-mini" # OpenAI
+"claude-3-sonnet-20240229" # Anthropic
+"openrouter/google/gemini-2.0-flash-001" # OpenRouter
+"models/gemini-1.5-pro" # Google
+
+# Common mistakes
+"gpt-4o-mini-preview"                     # ❌ Wrong name
+"claude-3-sonnet"                         # ❌ Missing date
+"gemini-2.0-flash"                        # ❌ Missing provider prefix
+```
+
+2. **Test model availability:**
+```python
+import litellm
+
+# Test model
+try:
+ response = litellm.completion(
+ model="gpt-4o-mini",
+ messages=[{"role": "user", "content": "test"}],
+ max_tokens=1
+ )
+    print("✅ Model available")
+except Exception as e:
+    print(f"❌ Model error: {e}")
+```
+
+3. **Use fallback models:**
+```python
+models = ["gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"]
+for model in models:
+ try:
+ search_tool = OpenDeepSearchTool(model_name=model)
+        print(f"✅ Using model: {model}")
+        break
+    except Exception as e:
+        print(f"❌ {model}: {e}")
+```
+
+### Problem: Reranker Configuration Issues
+
+**Error:** Reranking fails or returns poor results
+
+**Solutions:**
+
+1. **Test reranker separately:**
+```python
+# Test Jina
+import requests
+headers = {"Authorization": f"Bearer {os.environ['JINA_API_KEY']}"}
+response = requests.get("https://api.jina.ai/v1/models", headers=headers)
+print(response.json())
+
+# Test Infinity
+response = requests.get("http://localhost:7997/v1/models")
+print(response.json())
+```
+
+2. **Disable reranking temporarily:**
+```python
+search_tool = OpenDeepSearchTool(reranker=None)
+result = search_tool.forward("test query")
+```
+
+3. **Try alternative rerankers:**
+```python
+rerankers = ["jina", "infinity", None]
+for reranker in rerankers:
+ try:
+ tool = OpenDeepSearchTool(reranker=reranker)
+ result = tool.forward("test")
+        print(f"✅ {reranker}: Working")
+        break
+    except Exception as e:
+        print(f"❌ {reranker}: {e}")
+```
+
+## Runtime Errors
+
+### Problem: Search Provider Timeouts
+
+**Error:** `Request timed out` or `Connection error`
+
+**Solutions:**
+
+1. **Increase timeout:**
+```python
+import os
+os.environ["REQUESTS_TIMEOUT"] = "60" # 60 seconds
+
+# Or in code
+search_tool = OpenDeepSearchTool(timeout=60)
+```
+
+2. **Check provider status:**
+```bash
+# Test Serper API
+curl -H "X-API-KEY: your-key" "https://google.serper.dev/search?q=test"
+
+# Test SearXNG instance
+curl "https://your-searxng-instance.com/search?q=test&format=json"
+```
+
+3. **Switch providers:**
+```python
+# If Serper fails, try SearXNG
+search_tool = OpenDeepSearchTool(search_provider="searxng")
+```
+
+### Problem: Memory Issues
+
+**Error:** `CUDA out of memory` or system hangs
+
+**Solutions:**
+
+1. **Reduce model size:**
+```python
+# Use smaller models
+search_tool = OpenDeepSearchTool(
+ model_name="gpt-3.5-turbo", # Instead of gpt-4o
+ reranker=None # Disable reranking
+)
+```
+
+2. **Limit results:**
+```python
+search_tool = OpenDeepSearchTool(max_results=5) # Instead of 10+
+```
+
+3. **Monitor memory usage:**
+```python
+import psutil
+import gc
+
+def check_memory():
+ process = psutil.Process()
+ memory_mb = process.memory_info().rss / 1024 / 1024
+ print(f"Memory usage: {memory_mb:.1f} MB")
+
+check_memory()
+result = search_tool.forward("query")
+check_memory()
+gc.collect() # Force garbage collection
+```
+
+4. **Use CPU-only mode:**
+```bash
+export CUDA_VISIBLE_DEVICES="" # Disable CUDA
+```
+
+### Problem: Rate Limit Errors
+
+**Error:** `Rate limit exceeded` or `429 Too Many Requests`
+
+**Solutions:**
+
+1. **Implement retry with backoff:**
+```python
+import time
+import random
+
+def search_with_retry(search_tool, query, max_retries=3):
+ for attempt in range(max_retries):
+ try:
+ return search_tool.forward(query)
+ except Exception as e:
+ if "rate limit" in str(e).lower() or "429" in str(e):
+ wait_time = (2 ** attempt) + random.uniform(0, 1)
+ print(f"Rate limited, waiting {wait_time:.1f}s...")
+ time.sleep(wait_time)
+            else:
+                raise
+
+ raise Exception("Max retries exceeded")
+```
+
+2. **Use different API keys:**
+```python
+api_keys = [
+ "key1",
+ "key2",
+ "key3"
+]
+
+for key in api_keys:
+ os.environ["OPENAI_API_KEY"] = key
+ try:
+ result = search_tool.forward(query)
+ break
+ except Exception as e:
+ print(f"Key failed: {e}")
+```
+
+3. **Switch to different models/providers:**
+```python
+configs = [
+ {"model_name": "gpt-4o-mini", "provider": "openai"},
+ {"model_name": "claude-3-haiku-20240307", "provider": "anthropic"},
+ {"model_name": "gemini-1.5-flash", "provider": "google"},
+]
+```
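The list above can drive a simple fallback loop. The sketch below shows only the control flow: `first_working` is an invented name, and the tool factory is passed in as a callable so the pattern stays independent of the exact `OpenDeepSearchTool` signature.

```python
# Illustrative fallback loop (not part of OpenDeepSearch): try each
# config in order and return the first successful result.
def first_working(configs, make_tool, query):
    """Return (config, result) for the first config whose tool succeeds."""
    last_error = None
    for config in configs:
        try:
            tool = make_tool(**config)      # e.g. a wrapper around OpenDeepSearchTool
            return config, tool(query)
        except Exception as error:          # fall through to the next config
            last_error = error
    raise RuntimeError(f"All configs failed; last error: {last_error}")
```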
+
+## Performance Issues
+
+### Problem: Slow Response Times
+
+**Symptoms:** Searches taking more than 30 seconds
+
+**Diagnostics:**
+
+1. **Time components:**
+```python
+import time
+
+def timed_search(search_tool, query):
+ start_time = time.time()
+
+ # Time search
+ search_start = time.time()
+ results = search_tool.search_and_rank(query)
+ search_time = time.time() - search_start
+
+ # Time synthesis
+ synthesis_start = time.time()
+ result = search_tool.synthesize_results(results, query)
+ synthesis_time = time.time() - synthesis_start
+
+ total_time = time.time() - start_time
+
+ print(f"Search: {search_time:.1f}s")
+ print(f"Synthesis: {synthesis_time:.1f}s")
+ print(f"Total: {total_time:.1f}s")
+
+ return result
+```
+
+**Solutions:**
+
+1. **Use faster configurations:**
+```python
+# Fast config
+fast_config = {
+ "model_name": "gpt-3.5-turbo",
+ "mode": "default", # Instead of "pro"
+ "reranker": None, # Disable reranking
+ "max_results": 5 # Fewer results
+}
+
+search_tool = OpenDeepSearchTool(**fast_config)
+```
+
+2. **Optimize queries:**
+```python
+# Instead of complex queries
+"Find comprehensive analysis of renewable energy trends, market data, policy impacts..."
+
+# Use focused queries
+"Renewable energy market trends 2024"
+```
+
+3. **Parallel processing:**
+```python
+from concurrent.futures import ThreadPoolExecutor
+
+def parallel_search(queries):
+    def single_search(query):
+        tool = OpenDeepSearchTool()
+        return tool.forward(query)
+
+    # Run up to three searches concurrently in worker threads
+    with ThreadPoolExecutor(max_workers=3) as executor:
+        futures = [executor.submit(single_search, q) for q in queries]
+        results = [future.result() for future in futures]
+
+    return results
+
+### Problem: Poor Result Quality
+
+**Symptoms:** Irrelevant or incomplete answers
+
+**Solutions:**
+
+1. **Use Pro mode:**
+```python
+search_tool = OpenDeepSearchTool(mode="pro")
+```
+
+2. **Enable reranking:**
+```python
+search_tool = OpenDeepSearchTool(reranker="jina")
+```
+
+3. **Increase result count:**
+```python
+search_tool = OpenDeepSearchTool(max_results=15)
+```
+
+4. **Improve query formulation:**
+```python
+# Instead of vague queries
+"Tell me about AI"
+
+# Use specific, detailed queries
+"Latest developments in large language models for code generation in 2024"
+```
+
+5. **Use better models:**
+```python
+# Higher quality models
+quality_models = [
+ "gpt-4o",
+ "claude-3-opus-20240229",
+ "openrouter/google/gemini-2.0-flash-001"
+]
+```
+
+## Integration Problems
+
+### Problem: SmolAgents Integration Issues
+
+**Error:** Tool not recognized or import errors
+
+**Solutions:**
+
+1. **Check SmolAgents installation:**
+```bash
+pip install smolagents
+```
+
+2. **Verify tool registration:**
+```python
+from smolagents import CodeAgent
+from opendeepsearch import OpenDeepSearchTool
+
+# Create tool
+search_tool = OpenDeepSearchTool()
+
+# Check tool attributes
+print(f"Tool name: {search_tool.name}")
+print(f"Tool description: {search_tool.description}")
+
+# Create agent
+agent = CodeAgent(tools=[search_tool])
+print("✅ Integration successful")
+```
+
+3. **Test tool execution:**
+```python
+# Direct tool test
+result = search_tool.forward("test query")
+print("Direct tool:", "✅" if result else "❌")
+
+# Agent test
+try:
+ agent_result = agent.run("Search for information about Python")
+    print("Agent integration:", "✅" if agent_result else "❌")
+except Exception as e:
+ print(f"Agent error: {e}")
+```
+
+### Problem: LangChain Integration
+
+**Error:** Tool interface incompatibility
+
+**Solutions:**
+
+1. **Use proper LangChain wrapper:**
+```python
+from typing import Any, Type
+
+from langchain.tools import BaseTool
+from pydantic import BaseModel, Field
+
+from opendeepsearch import OpenDeepSearchTool
+
+class OpenDeepSearchInput(BaseModel):
+    query: str = Field(description="Search query")
+
+class OpenDeepSearchLangChainTool(BaseTool):
+    name: str = "opendeepsearch"
+    description: str = "Search the web for current information"
+    args_schema: Type[BaseModel] = OpenDeepSearchInput
+    search_tool: Any = None  # declared so pydantic's BaseTool allows the attribute
+
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        self.search_tool = OpenDeepSearchTool()
+
+    def _run(self, query: str) -> str:
+        return self.search_tool.forward(query)
+
+    async def _arun(self, query: str) -> str:
+        return self._run(query)
+
+## Getting Help
+
+### Debug Mode
+
+Enable detailed logging:
+
+```python
+import logging
+logging.basicConfig(level=logging.DEBUG)
+
+# Or set environment variable
+import os
+os.environ["OPENDEEPSEARCH_DEBUG"] = "true"
+```
+
+### Collect System Information
+
+```python
+import os
+import platform
+import sys
+
+import torch
+
+import opendeepsearch
+
+def system_info():
+ print("=== System Information ===")
+ print(f"Python version: {sys.version}")
+ print(f"Platform: {platform.platform()}")
+ print(f"OpenDeepSearch version: {opendeepsearch.__version__}")
+ print(f"PyTorch version: {torch.__version__}")
+ print(f"CUDA available: {torch.cuda.is_available()}")
+
+ print("\n=== Environment Variables ===")
+ env_vars = [
+ "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY",
+ "SERPER_API_KEY", "JINA_API_KEY", "SEARXNG_INSTANCE_URL"
+ ]
+
+ for var in env_vars:
+ value = os.environ.get(var)
+ if value:
+ # Show only last 4 characters for security
+ masked = "*" * (len(value) - 4) + value[-4:]
+ print(f"{var}: {masked}")
+ else:
+ print(f"{var}: Not set")
+
+system_info()
+```
+
+### Test Configuration
+
+```python
+def test_configuration():
+ """Test all components of OpenDeepSearch setup"""
+
+ tests = []
+
+ # Test 1: Import
+ try:
+ from opendeepsearch import OpenDeepSearchTool
+        tests.append(("Import", "✅", "Success"))
+    except Exception as e:
+        tests.append(("Import", "❌", str(e)))
+ return tests
+
+ # Test 2: Basic initialization
+ try:
+ tool = OpenDeepSearchTool()
+        tests.append(("Initialization", "✅", "Success"))
+    except Exception as e:
+        tests.append(("Initialization", "❌", str(e)))
+ return tests
+
+ # Test 3: Simple search
+ try:
+ result = tool.forward("test query")
+        tests.append(("Search", "✅", "Success"))
+    except Exception as e:
+        tests.append(("Search", "❌", str(e)))
+
+ # Print results
+ print("\n=== Configuration Test Results ===")
+ for test_name, status, message in tests:
+ print(f"{test_name}: {status} {message}")
+
+ return tests
+
+test_configuration()
+```
+
+### Community Support
+
+1. **GitHub Issues**: [Create an issue](https://github.com/sentient-agi/OpenDeepSearch/issues)
+2. **Discord**: [Join the community](https://discord.gg/sentientfoundation)
+3. **Documentation**: Check other guides in the `docs/` directory
+
+### Common Error Patterns
+
+| Error Pattern | Likely Cause | Quick Fix |
+|---------------|--------------|-----------|
+| `API key` errors | Missing/invalid keys | Check environment variables |
+| `Model not found` | Wrong model name | Use correct provider format |
+| `Connection` errors | Network/provider issues | Check internet, try different provider |
+| `Memory` errors | Large models/results | Use smaller models, fewer results |
+| `Import` errors | Installation issues | Reinstall with `pip install -e .` |
+| `Timeout` errors | Slow responses | Increase timeout, use faster config |
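The table can also be applied programmatically, for example when logging failures in a batch job. This is a sketch: `quick_fix` is an invented helper, and the case-insensitive substring patterns only approximate the table's rows.

```python
# Illustrative mapping from exception text to the quick fixes in the
# table above. Matching is by case-insensitive substring, first hit wins.
QUICK_FIXES = [
    ("api key", "Check environment variables"),
    ("model", "Use correct provider format"),
    ("connection", "Check internet, try a different provider"),
    ("memory", "Use smaller models, fewer results"),
    ("no module named", "Reinstall with `pip install -e .`"),
    ("timeout", "Increase timeout, use a faster config"),
]

def quick_fix(error: Exception) -> str:
    """Return the table's quick-fix advice for a matching error, if any."""
    message = str(error).lower()
    for pattern, advice in QUICK_FIXES:
        if pattern in message:
            return advice
    return "No known pattern; enable debug logging and collect system info"
```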
+
+### Performance Benchmarks
+
+Use these benchmarks to validate your setup:
+
+```python
+import time
+
+def benchmark_search(search_tool, queries):
+ """Benchmark search performance"""
+ results = []
+
+ for query in queries:
+ start_time = time.time()
+ try:
+ result = search_tool.forward(query)
+ duration = time.time() - start_time
+ results.append({
+ "query": query,
+ "duration": duration,
+ "success": True,
+ "length": len(result)
+ })
+ except Exception as e:
+ duration = time.time() - start_time
+ results.append({
+ "query": query,
+ "duration": duration,
+ "success": False,
+ "error": str(e)
+ })
+
+ # Print summary
+ successful = [r for r in results if r["success"]]
+ if successful:
+ avg_time = sum(r["duration"] for r in successful) / len(successful)
+ avg_length = sum(r["length"] for r in successful) / len(successful)
+ print(f"Average response time: {avg_time:.1f}s")
+ print(f"Average response length: {avg_length:.0f} chars")
+ print(f"Success rate: {len(successful)}/{len(results)}")
+
+ return results
+
+# Test queries
+test_queries = [
+ "What is Python?",
+ "Latest AI developments 2024",
+ "How to install Docker?"
+]
+
+search_tool = OpenDeepSearchTool()
+benchmark_search(search_tool, test_queries)
+```
+
+Remember to always test your configuration with simple queries before attempting complex searches!
\ No newline at end of file