arman0z/auto-brief

# Auto-Brief

Generate comprehensive sales briefs from company websites in under 60 seconds.

Auto-Brief analyzes company websites using AI to create detailed sales briefs with company overviews, value propositions, pricing, features, and tailored sales strategies.

## Features

- Fast Generation: Complete briefs in under 60 seconds
- Deep Analysis: Extracts pricing, features, tech stack, and more
- Multiple Formats: Export to PDF, Markdown, or JSON
- Real-time Progress: Stream generation progress via SSE
- Sales-Ready: Includes talk tracks and objection handling
- Full Citations: Every claim linked to source material
- CLI & API: Use via command line or RESTful API
- Docker Ready: Easy deployment with Docker Compose

## Quick Start

### Using Docker (Recommended)

```bash
# Clone the repository
git clone https://github.com/your-org/auto-brief.git
cd auto-brief

# Set up environment variables
cp backend/.env.example backend/.env
# Edit backend/.env with your API keys:
# - FIRECRAWL_API_KEY
# - OPENAI_API_KEY

# Start with Docker Compose
docker-compose up -d

# Access the application
open http://localhost:3000
```

## Local Development

### Backend Setup

```bash
cd backend

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set environment variables
cp .env.example .env
# Edit .env with your API keys

# Run the API server
uvicorn src.main:app --reload
```

### Frontend Setup

```bash
cd frontend

# Install dependencies
npm install

# Set environment variables
cp .env.example .env

# Start development server
npm run dev
```

## Usage

### Web Interface

1. Navigate to http://localhost:3000
2. Enter a company domain (e.g., "stripe.com")
3. Click "Generate Brief"
4. Watch real-time progress
5. Download the result as PDF or Markdown

### Command Line

```bash
# Generate a brief
python -m src.cli.brief generate stripe.com -o ./output

# List recent briefs
python -m src.cli.brief list-runs

# Export a specific brief
python -m src.cli.brief render <brief_id> --format pdf
```

### API

```bash
# Generate brief
curl -X POST http://localhost:8000/api/v1/briefs/generate \
  -H "Content-Type: application/json" \
  -d '{"domain": "stripe.com"}'

# Check status
curl http://localhost:8000/api/v1/briefs/{run_id}/status

# Download PDF
curl -o brief.pdf http://localhost:8000/api/v1/briefs/{run_id}/pdf
```
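For scripting against the API, the three calls above can be wrapped in a small client. A minimal sketch using only Python's standard library (the `run_id` response field and the `completed`/`failed` status values are assumptions, not confirmed by the docs above):

```python
import json
import time
import urllib.request

BASE = "http://localhost:8000/api/v1/briefs"


def brief_url(run_id: str, resource: str = "") -> str:
    """Build the endpoint URL for a given run (and optional sub-resource)."""
    return f"{BASE}/{run_id}" + (f"/{resource}" if resource else "")


def generate_brief(domain: str, timeout_s: int = 120) -> dict:
    """Start a brief run and poll its status until it finishes or times out."""
    req = urllib.request.Request(
        f"{BASE}/generate",
        data=json.dumps({"domain": domain}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        run_id = json.load(resp)["run_id"]  # response field name assumed

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        with urllib.request.urlopen(brief_url(run_id, "status")) as resp:
            status = json.load(resp)
        if status.get("status") in ("completed", "failed"):  # values assumed
            return status
        time.sleep(2)  # generation targets <60s, so a 2s poll is plenty
    raise TimeoutError(f"brief {run_id} did not finish in {timeout_s}s")
```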

## Architecture

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Frontend  │────▶│   API       │────▶│ Orchestrator│
│   (React)   │     │  (FastAPI)  │     │             │
└─────────────┘     └─────────────┘     └──────┬──────┘
                                                │
                    ┌───────────────────────────┼───────────────────────────┐
                    │                           │                           │
              ┌─────▼─────┐           ┌────────▼────────┐          ┌───────▼───────┐
              │  Crawler  │           │    Extractor    │          │   Renderer    │
              │(Firecrawl)│           │   (GPT-4)       │          │ (PDF/MD/HTML) │
              └───────────┘           └─────────────────┘          └───────────────┘
```

### Components

- Frontend: React + TypeScript + Vite
- Backend API: FastAPI + Pydantic
- Crawler: Firecrawl SDK for web scraping
- Extractor: OpenAI GPT-4 for information extraction
- Renderer: WeasyPrint for PDF, native Markdown/HTML
- Storage: File-based JSON (Redis optional)
- Queue: Background tasks with FastAPI
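The crawl → extract → render flow in the diagram above amounts to a three-stage pipeline driven by the orchestrator. A toy sketch of that shape (the component interfaces here are hypothetical, not the project's actual classes):

```python
def run_pipeline(domain: str, crawler, extractor, renderer) -> str:
    """Chain the three stages: crawl pages, extract a brief, render output."""
    pages = crawler.crawl(domain)      # e.g. Firecrawl: page content per URL
    brief = extractor.extract(pages)   # e.g. GPT-4: structured brief data
    return renderer.render(brief)      # e.g. WeasyPrint: PDF/Markdown/HTML
```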

## API Endpoints

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v1/briefs/generate` | Start brief generation |
| GET | `/api/v1/briefs/{run_id}` | Get completed brief |
| GET | `/api/v1/briefs/{run_id}/status` | Check generation status |
| GET | `/api/v1/briefs/{run_id}/progress` | Stream progress (SSE) |
| GET | `/api/v1/briefs/{run_id}/pdf` | Download as PDF |
| GET | `/api/v1/briefs/{run_id}/markdown` | Download as Markdown |
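The `/progress` endpoint streams Server-Sent Events, where each event arrives on a `data:` line of a `text/event-stream` body. A minimal parser sketch for such a stream (the payload fields shown in the comment are hypothetical, not the API's documented schema):

```python
import json


def parse_sse_data(raw: str) -> list[dict]:
    """Extract the JSON payloads from the data: lines of an SSE stream."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            # e.g. data: {"step": "crawl", "pct": 20}
            events.append(json.loads(line[len("data:"):].strip()))
    return events
```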

## Configuration

### Required API Keys

1. Firecrawl API Key: for web crawling
2. OpenAI API Key: for GPT-4 extraction

### Environment Variables

See `backend/.env.example` for all configuration options:

```bash
# Core Settings
APP_ENV=development
LOG_LEVEL=INFO

# Performance
MAX_PAGES_PER_DOMAIN=10
PAGE_FETCH_TIMEOUT_SECONDS=30
LLM_TIMEOUT_SECONDS=60

# Storage
STORAGE_PATH=./data
BRIEF_RETENTION_HOURS=168
```
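A plain-Python sketch of how such settings might be read at startup, falling back to the documented defaults (the function name is illustrative; the backend's actual settings class may differ):

```python
import os


def load_settings() -> dict:
    """Read the variables above from the environment, with documented defaults."""
    return {
        "app_env": os.getenv("APP_ENV", "development"),
        "log_level": os.getenv("LOG_LEVEL", "INFO"),
        "max_pages_per_domain": int(os.getenv("MAX_PAGES_PER_DOMAIN", "10")),
        "page_fetch_timeout_seconds": int(os.getenv("PAGE_FETCH_TIMEOUT_SECONDS", "30")),
        "llm_timeout_seconds": int(os.getenv("LLM_TIMEOUT_SECONDS", "60")),
        "storage_path": os.getenv("STORAGE_PATH", "./data"),
        # 168 hours = briefs kept for 7 days
        "brief_retention_hours": int(os.getenv("BRIEF_RETENTION_HOURS", "168")),
    }
```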

## Testing

```bash
# Backend tests
cd backend
pytest tests/ -v

# Frontend tests
cd ../frontend
npm test

# E2E tests
npm run test:e2e
```

## Deployment

See `docs/deployment.md` for detailed deployment instructions covering:

- Docker Compose
- Kubernetes
- AWS ECS
- Google Cloud Run
- Heroku
