31 changes: 29 additions & 2 deletions .env.example
@@ -46,10 +46,24 @@ MAX_WORKERS=30
# API Keys and External Services
# =============================================================================
# API Keys and External Services
# =============================================================================

# Web Search Providers (in order of quality/preference)
# The system will try each provider in order until one succeeds.
# You only need ONE provider configured, but configuring more than one provides fallback coverage.

# Exa.ai - Best semantic/neural search ($10 free credits)
# Get your key from: https://exa.ai/
EXA_API_KEY=your_key

# Tavily - Purpose-built for RAG/LLMs (1,000 free requests/month)
# Get your key from: https://tavily.com/
TAVILY_API_KEY=your_key

# Serper API for Google search results (2,500 free queries)
# Get your key from: https://serper.dev/
SERPER_KEY_ID=your_key

# DuckDuckGo is always available as final fallback (FREE, no API key needed)
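
# Example: if only TAVILY_API_KEY is set above, a search goes to Tavily
# first and falls back to DuckDuckGo on failure, following the order
# listed here (Exa -> Tavily -> Serper -> DuckDuckGo).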

# Jina API for web page reading
# Get your key from: https://jina.ai/
JINA_API_KEYS=your_key
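
# Optional sanity check for the key (a sketch assuming the project calls
# Jina's Reader endpoint at https://r.jina.ai/, which takes a Bearer token
# and the target URL appended to the path; adjust if the code uses a different API):
#   curl -H "Authorization: Bearer $JINA_API_KEYS" "https://r.jina.ai/https://example.com"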
@@ -95,4 +109,17 @@ IDP_KEY_SECRET=your_idp_key_secret

# These are typically set by distributed training frameworks
# WORLD_SIZE=1
# RANK=0

# =============================================================================
# llama.cpp Local Inference (Alternative for Mac/Local Users)
# =============================================================================
# If using the llama.cpp local inference option instead of vLLM:

# The llama.cpp server URL (default works if using start_llama_server.sh)
LLAMA_SERVER_URL=http://127.0.0.1:8080
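
# Quick way to confirm the server is up (recent llama.cpp builds of
# llama-server expose a /health endpoint):
#   curl http://127.0.0.1:8080/health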

# For llama.cpp mode:
# - Web search uses DuckDuckGo by default (FREE, no API key needed)
# - JINA_API_KEYS is optional but recommended for better page reading
# - See: python inference/interactive_llamacpp.py --help