Generate comprehensive sales briefs from company websites in under 60 seconds.
Auto-Brief analyzes company websites using AI to create detailed sales briefs with company overviews, value propositions, pricing, features, and tailored sales strategies.
- Fast Generation: Complete briefs in under 60 seconds
- Deep Analysis: Extracts pricing, features, tech stack, and more
- Multiple Formats: Export to PDF, Markdown, or JSON
- Real-time Progress: Stream generation progress via SSE
- Sales-Ready: Includes talk tracks and objection handling
- Full Citations: Every claim linked to source material
- CLI & API: Use via command line or RESTful API
- Docker Ready: Easy deployment with Docker Compose
# Clone the repository
git clone https://github.com/your-org/auto-brief.git
cd auto-brief
# Set up environment variables
cp backend/.env.example backend/.env
# Edit backend/.env with your API keys:
# - FIRECRAWL_API_KEY
# - OPENAI_API_KEY
# Start with Docker Compose
docker-compose up -d
# Access the application
open http://localhost:3000

cd backend
# Create virtual environment
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Set environment variables
cp .env.example .env
# Edit .env with your API keys
# Run the API server
uvicorn src.main:app --reload

cd frontend
# Install dependencies
npm install
# Set environment variables
cp .env.example .env
# Start development server
npm run dev

- Navigate to http://localhost:3000
- Enter a company domain (e.g., "stripe.com")
- Click "Generate Brief"
- Watch real-time progress
- Download as PDF or Markdown
# Generate a brief
python -m src.cli.brief generate stripe.com -o ./output
# List recent briefs
python -m src.cli.brief list-runs
# Export specific brief
python -m src.cli.brief render <brief_id> --format pdf

# Generate brief
curl -X POST http://localhost:8000/api/v1/briefs/generate \
-H "Content-Type: application/json" \
-d '{"domain": "stripe.com"}'
# Check status
curl http://localhost:8000/api/v1/briefs/{run_id}/status
# Download PDF
curl -O http://localhost:8000/api/v1/briefs/{run_id}/pdf
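The same flow can also be scripted in Python. Below is a rough sketch using the requests library against the endpoints above; the response field names (`run_id`, `status`) and status values are assumptions, not confirmed API details.

```python
# Sketch: generate a brief, poll until done, then download the PDF.
# Field names and status values are assumptions about the API response shape.
import time

import requests

BASE_URL = "http://localhost:8000"

# Start brief generation
resp = requests.post(
    f"{BASE_URL}/api/v1/briefs/generate",
    json={"domain": "stripe.com"},
    timeout=30,
)
resp.raise_for_status()
run_id = resp.json()["run_id"]  # assumed response field

# Poll the status endpoint until generation finishes
while True:
    status = requests.get(f"{BASE_URL}/api/v1/briefs/{run_id}/status", timeout=30).json()
    if status.get("status") in ("completed", "failed"):  # assumed status values
        break
    time.sleep(2)

# Download the finished brief as a PDF
pdf = requests.get(f"{BASE_URL}/api/v1/briefs/{run_id}/pdf", timeout=60)
pdf.raise_for_status()
with open(f"{run_id}.pdf", "wb") as f:
    f.write(pdf.content)
```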
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Frontend   │────▶│     API     │────▶│ Orchestrator│
│   (React)   │     │  (FastAPI)  │     │             │
└─────────────┘     └─────────────┘     └──────┬──────┘
                                               │
                   ┌───────────────────────────┼───────────────────────────┐
                   │                           │                           │
             ┌─────▼─────┐           ┌────────▼────────┐           ┌───────▼───────┐
             │  Crawler  │           │    Extractor    │           │   Renderer    │
             │(Firecrawl)│           │     (GPT-4)     │           │ (PDF/MD/HTML) │
             └───────────┘           └─────────────────┘           └───────────────┘
- Frontend: React + TypeScript + Vite
- Backend API: FastAPI + Pydantic
- Crawler: Firecrawl SDK for web scraping
- Extractor: OpenAI GPT-4 for information extraction
- Renderer: WeasyPrint for PDF, native Markdown/HTML
- Storage: File-based JSON (Redis optional)
- Queue: Background tasks with FastAPI
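To illustrate the last two items, here is a hedged sketch of how a FastAPI endpoint can hand generation off to a background task. The function and model names are hypothetical placeholders, not Auto-Brief's actual internals.

```python
# Illustrative sketch only: a FastAPI endpoint that queues brief generation
# as a background task. run_pipeline and its stages are hypothetical names.
import uuid

from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel

app = FastAPI()


class GenerateRequest(BaseModel):
    domain: str


def run_pipeline(run_id: str, domain: str) -> None:
    """Hypothetical pipeline: crawl -> extract -> render, persisted as JSON."""
    # Placeholder for the Crawler (Firecrawl), Extractor (GPT-4),
    # and Renderer (PDF/MD/HTML) stages shown in the diagram above.
    ...


@app.post("/api/v1/briefs/generate")
def generate_brief(req: GenerateRequest, background_tasks: BackgroundTasks):
    run_id = str(uuid.uuid4())
    # Return immediately; the client polls /status or streams /progress.
    background_tasks.add_task(run_pipeline, run_id, req.domain)
    return {"run_id": run_id, "status": "queued"}
```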
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/v1/briefs/generate | Start brief generation |
| GET | /api/v1/briefs/{run_id} | Get completed brief |
| GET | /api/v1/briefs/{run_id}/status | Check generation status |
| GET | /api/v1/briefs/{run_id}/progress | Stream progress (SSE) |
| GET | /api/v1/briefs/{run_id}/pdf | Download as PDF |
| GET | /api/v1/briefs/{run_id}/markdown | Download as Markdown |
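The progress endpoint streams Server-Sent Events. A minimal sketch of consuming it from Python with the requests library follows; the assumption that each event carries a JSON payload in its `data:` line is illustrative, not confirmed.

```python
# Sketch: follow the SSE progress stream for a run.
# The payload format (JSON per "data:" line) is an assumption.
import json

import requests

BASE_URL = "http://localhost:8000"


def follow_progress(run_id: str) -> None:
    url = f"{BASE_URL}/api/v1/briefs/{run_id}/progress"
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            # SSE data lines are prefixed with "data: "
            if line and line.startswith("data: "):
                event = json.loads(line[len("data: "):])
                print(event)


if __name__ == "__main__":
    follow_progress("your-run-id")
```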
- Firecrawl API Key: For web crawling
  - Get from: https://firecrawl.dev
  - Set: FIRECRAWL_API_KEY
- OpenAI API Key: For GPT-4 extraction
  - Get from: https://platform.openai.com
  - Set: OPENAI_API_KEY
See backend/.env.example for all configuration options:
# Core Settings
APP_ENV=development
LOG_LEVEL=INFO
# Performance
MAX_PAGES_PER_DOMAIN=10
PAGE_FETCH_TIMEOUT_SECONDS=30
LLM_TIMEOUT_SECONDS=60
# Storage
STORAGE_PATH=./data
BRIEF_RETENTION_HOURS=168
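Since the backend uses Pydantic, these variables would typically map onto a typed settings object. Here is a minimal sketch with pydantic-settings; the project's actual settings module may be organized differently.

```python
# Sketch: typed settings loaded from the environment or .env with
# pydantic-settings. Field names mirror the variables listed above.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Core settings
    app_env: str = "development"
    log_level: str = "INFO"

    # API keys (required)
    firecrawl_api_key: str
    openai_api_key: str

    # Performance
    max_pages_per_domain: int = 10
    page_fetch_timeout_seconds: int = 30
    llm_timeout_seconds: int = 60

    # Storage
    storage_path: str = "./data"
    brief_retention_hours: int = 168


settings = Settings()  # values come from the environment or .env
```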
# Backend tests
cd backend
pytest tests/ -v
# Frontend tests
cd frontend
npm test
# E2E tests
npm run test:e2e

See docs/deployment.md for detailed deployment instructions:
- Docker Compose
- Kubernetes
- AWS ECS
- Google Cloud Run
- Heroku