New to AI-Q? This page walks you through the documentation in the order that will get you productive fastest.
Set up Python, install dependencies with uv, and configure your environment variables (primarily NVIDIA_API_KEY).
Read: Installation
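A quick sanity check before moving on: verify the required environment variable is actually set. The variable name comes from this page; the helper function below is illustrative, not part of AI-Q:

```python
import os

def check_env() -> bool:
    """Return True when NVIDIA_API_KEY is set and non-empty."""
    key = os.environ.get("NVIDIA_API_KEY", "")
    if not key:
        print("NVIDIA_API_KEY is not set; export it before launching AI-Q.")
        return False
    return True

if __name__ == "__main__":
    check_env()
```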
Launch the CLI and submit your first research query. This gives you a working mental model of what the system does before you look at how it works.
Read: Quick Start
Learn the two-path design — an intent classifier routes queries to either the fast shallow researcher or the multi-phase deep researcher — and how data flows through the system.
Read: Architecture Overview, then Data Flow
Each agent has its own page covering state models, configuration, prompt templates, and internal flow diagrams.
- Intent Classifier — Query routing
- Shallow Researcher — Fast, bounded tool-calling
- Deep Researcher — Multi-phase subagent workflow
- Clarifier — Human-in-the-loop before deep research
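To make the "state models" idea concrete before you read the agent pages, here is a minimal sketch using dataclasses. All field names and the phase ordering are invented for illustration; the actual schemas are documented on each agent's page.

```python
from dataclasses import dataclass, field

@dataclass
class ClarifierState:
    """Illustrative only: clarifier runs human-in-the-loop before deep research."""
    original_query: str
    questions: list[str] = field(default_factory=list)  # asked of the user
    answers: list[str] = field(default_factory=list)    # user's replies

@dataclass
class DeepResearchState:
    """Illustrative only: deep research moves through ordered phases."""
    query: str
    phase: str = "planning"
    subagent_findings: list[str] = field(default_factory=list)

    def advance(self) -> str:
        """Step to the next phase; stay at 'done' once finished."""
        order = ["planning", "researching", "writing", "done"]
        self.phase = order[min(order.index(self.phase) + 1, len(order) - 1)]
        return self.phase
```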
Once you understand the agents, learn how to tailor the system to your needs:
- Swap LLMs — Use different models for different roles
- Enable or disable tools — Configure which data sources agents can access
- Edit prompts — Modify agent behavior through Jinja2 templates
- Add a new tool — Integrate a new search API or data source
- Configuration reference — Full YAML config guide
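To give a feel for what these customizations look like in practice, here is a hedged sketch of a YAML config. Every key and model name below is invented to show the shape, not AI-Q's actual schema; the Configuration reference has the real keys.

```yaml
# Illustrative sketch only; see the Configuration reference for actual keys.
llms:
  intent_classifier: meta/llama-3.1-8b-instruct   # small, fast model for routing
  deep_researcher: meta/llama-3.1-70b-instruct    # larger model for synthesis
tools:
  web_search:
    enabled: true      # which data sources agents can access
  internal_docs:
    enabled: false
```

Swapping an LLM or toggling a tool is then a one-line config change rather than a code change.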
Move from local development to Docker Compose.
Read: Docker Compose, then Production
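The main thing that carries over from local development is the environment variable. A minimal Compose sketch, assuming the service and port are placeholders (not the project's actual docker-compose.yaml):

```yaml
# Placeholder service/image names; only the NVIDIA_API_KEY passthrough
# reflects what this page requires.
services:
  aiq:
    build: .
    environment:
      - NVIDIA_API_KEY=${NVIDIA_API_KEY}   # passed through from the host shell
    ports:
      - "8000:8000"
```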