AeroScan is a web app for scanning commercial roof footprints, scoring rainwater-harvesting potential, and enriching leads with satellite imagery, ML (cooling-tower detection), and AI-generated briefs. It was built for a hackathon-style workflow: map-first “command center,” saved buildings, PDF reports, and optional voice playback.
- Interactive map — Leaflet-based view with Earth Engine raster layers where configured.
- Viewport scan — Pulls building roof footprints from Overture Maps (DuckDB + GeoParquet on S3) for CONUS, or uses Google Open Buildings via Earth Engine where applicable; merges nearby polygons into site-scale leads.
- Lead scoring — Heuristic tier, viability, and component scores using roof area, CHIRPS rainfall samples, and simple cooling-tower priors.
- ML — YOLOv5-seg + EfficientNet-B5 pipeline on NAIP (US) or Sentinel-2 tiles for cooling-tower-style detection (`/api/cv/detect-tower`).
- AI — Google Gemini for per-building insights; ElevenLabs TTS reads the on-screen insight (no duplicate Gemini call for playback).
- Reports — ReportLab PDF generation with charts and metrics.
- Auth — Auth0 (Next.js middleware) for sign-in.
- Persistence — Postgres/PostGIS for saved leads (Docker Compose).
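The rainwater arithmetic behind the lead scoring can be sketched in a few lines. The runoff coefficient and tier cutoffs below are illustrative assumptions, not the app's actual constants; the only physical fact used is that 1 mm of rain on 1 m² yields 1 L.

```python
def harvest_potential_liters(roof_area_m2: float, annual_rainfall_mm: float,
                             runoff_coeff: float = 0.9) -> float:
    """Annual capture estimate: 1 mm of rain on 1 m^2 yields 1 liter."""
    return roof_area_m2 * annual_rainfall_mm * runoff_coeff

def tier(potential_liters: float) -> str:
    """Bucket a lead into a tier (cutoffs are made up for illustration)."""
    if potential_liters >= 5_000_000:
        return "A"
    if potential_liters >= 1_000_000:
        return "B"
    return "C"

# 2,000 m^2 roof at 900 mm/yr rainfall -> 1,620,000 L/yr, tier "B"
print(tier(harvest_potential_liters(2000, 900)))
```

The real scorer also folds in CHIRPS rainfall samples and cooling-tower priors; this shows only the core volume-times-tier shape.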
| Layer | Technology |
|---|---|
| Frontend | Next.js 16, React 19, TypeScript, Tailwind CSS 4, Radix UI, Leaflet / react-leaflet, Recharts |
| Backend | Python 3.10+, FastAPI, Uvicorn |
| Geo / data | Google Earth Engine API, DuckDB + spatial/httpfs, Overture Maps release GeoParquet |
| ML / CV | PyTorch, torchvision, OpenCV (headless), EfficientNet (efficientnet-pytorch), YOLOv5 (via cached torch.hub ultralytics repo), seaborn (hub dependency) |
| AI / voice | google-generativeai (Gemini), ElevenLabs REST API |
| Reports | ReportLab |
| Database | PostgreSQL + PostGIS (optional local via Docker) |
| Auth | Auth0 Next.js SDK v4 |
```
SMUHackathon/
├── backend/            # FastAPI app (server.py, cv_service, EE helpers, PDF, Gemini, etc.)
├── frontend/           # Next.js UI (app router, components, Auth0)
├── cv/                 # Shared tile fetch / CV utilities used by the API
├── data/               # Stores, ingestion scripts, Overture shard index JSON
├── docker/             # Postgres init scripts for Compose
├── docker-compose.yml  # Local PostGIS
├── requirements.txt    # API + app Python deps (install torch/opencv for CV — see below)
├── requirements-cv.txt # Explicit CV stack (torch, opencv, ultralytics, …)
└── *.pt                # Trained weights (YOLO seg + B5) — keep in repo root or set YOLO_WEIGHTS_PATH
```
- Node.js 20+ (or current LTS compatible with Next 16)
- Python 3.10+ (3.11+ recommended for ML wheels)
- Docker Desktop (optional, for Postgres)
- Accounts / credentials (as needed for your workflow):
  - Google Cloud project with Earth Engine enabled + local gcloud/earthengine authentication
  - Gemini API key
  - ElevenLabs API key (TTS)
  - Auth0 application (Regular Web App)
- Weights in repo root (defaults expected by `backend/cv_service.py`): `cooling_tower_yolov5seg_best.pt`, `b5_unweighted_best.pt`
```bash
cp .env.example .env
# Edit .env: DATABASE_URL, GEE_PROJECT_ID / GOOGLE_CLOUD_PROJECT, GEMINI_API_KEY,
# ELEVENLABS_API_KEY, Auth0 vars, etc.
```

For the Next app, mirror the Auth0 vars and NEXT_PUBLIC_API_URL in frontend/.env.local if you do not load them from the repo root (the Auth0 SDK typically reads env from the Next directory).
```bash
docker compose up -d
# Default DATABASE_URL uses host port 5433 — see .env.example
```

```bash
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
# CV endpoints also need PyTorch + OpenCV; if not already pulled in:
pip install -r requirements-cv.txt
```

Start the API from the repository root with PYTHONPATH set:
```bash
set -a && source .env && set +a  # bash/zsh
PYTHONPATH=. uvicorn backend.server:app --reload --host 0.0.0.0 --port 8000
```

```bash
cd frontend
npm install
npm run dev
```

Open http://localhost:3000. The UI defaults to http://localhost:8000 for the API unless NEXT_PUBLIC_API_URL is set.
```bash
cd frontend
npm run build
npm run start
```

Full reference: .env.example. Notable variables:
| Variable | Purpose |
|---|---|
| `DATABASE_URL` | Postgres connection (saved buildings / scans) |
| `GEE_PROJECT_ID` / `GOOGLE_CLOUD_PROJECT` | Earth Engine project |
| `GEMINI_API_KEY` | Building insights |
| `ELEVENLABS_API_KEY` | Lead-brief TTS |
| `AUTH0_*`, `APP_BASE_URL` | Auth0 + callbacks |
| `NEXT_PUBLIC_API_URL` | Browser → API base URL |
| `YOLO_WEIGHTS_PATH` | Override default YOLO checkpoint path |
| `CV_DETECT_TIMEOUT_S` | Cap for long-running CV requests |
| `OVERTURE_NAMES_PRIMARY_ONLY` | DuckDB fallback if names struct differs on a shard |
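A defensive pattern for the numeric variables, sketched for `CV_DETECT_TIMEOUT_S`; the 120 s fallback here is an assumed default, not necessarily the app's:

```python
import os

def cv_detect_timeout(default: float = 120.0) -> float:
    """Parse CV_DETECT_TIMEOUT_S, falling back on missing/invalid values."""
    raw = os.environ.get("CV_DETECT_TIMEOUT_S", "")
    try:
        value = float(raw)
    except ValueError:
        return default  # unset or not a number
    return value if value > 0 else default
```

This keeps a typo in `.env` from crashing startup, at the cost of silently using the default.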
```mermaid
flowchart LR
  subgraph client [Browser]
    UI[Next.js + Leaflet]
  end
  subgraph api [FastAPI]
    EE[Earth Engine layers / CHIRPS sample]
    OV[Overture via DuckDB]
    CV[CV: NAIP/S2 + YOLO + B5]
    GM[Gemini]
    PDF[ReportLab PDF]
    TTS[ElevenLabs TTS]
  end
  DB[(Postgres)]
  UI --> api
  api --> DB
  OV --> S3[(Overture S3 Parquet)]
```
- Lint (frontend): `cd frontend && npm run lint`
- Overture shard index: If `data/overture_building_parts_<release>.json` is missing, large scans may fall back to a slower full S3 glob; see comments in `backend/overture_buildings.py` and any `scripts/` helpers for refreshing the index.
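The shard index exists so a scan only reads Parquet files whose extent overlaps the viewport. A minimal sketch of that pruning, assuming each index entry carries a `path` and a `bbox` as `[min_lon, min_lat, max_lon, max_lat]` (the real JSON schema may differ):

```python
def shards_for_viewport(index: list[dict], viewport: tuple) -> list[str]:
    """Keep only shards whose bbox intersects the viewport bbox."""
    vxmin, vymin, vxmax, vymax = viewport
    hits = []
    for shard in index:
        sxmin, symin, sxmax, symax = shard["bbox"]
        # Standard axis-aligned rectangle intersection test
        if sxmin <= vxmax and sxmax >= vxmin and symin <= vymax and symax >= vymin:
            hits.append(shard["path"])
    return hits

index = [
    {"path": "part-00.parquet", "bbox": [-125.0, 24.0, -100.0, 50.0]},
    {"path": "part-01.parquet", "bbox": [-100.0, 24.0, -66.0, 50.0]},
]
print(shards_for_viewport(index, (-97.0, 32.5, -96.5, 33.0)))  # -> ['part-01.parquet']
```

Without the index, DuckDB must glob every shard on S3 and rely on Parquet metadata alone, which is why scans get slower.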
- `No module named 'seaborn'` when running CV — install from `requirements.txt` (`pip install seaborn`).
- First CV request is slow — Earth Engine auth, model load, and tile fetch; later calls are faster.
- Auth0 errors locally — ensure callback/logout URLs match `APP_BASE_URL` and that secrets live where Next can read them (`frontend/.env.local`).
- Apple Silicon + Docker Postgres — `docker-compose.yml` pins the PostGIS image to `linux/amd64` for compatibility; expect emulation.
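When CV requests fail with import errors, a quick probe of the optional stack narrows things down; this only checks that the modules resolve in the active venv, and the module list is illustrative:

```python
import importlib.util

def missing_cv_deps(modules=("torch", "torchvision", "cv2", "seaborn")) -> list[str]:
    """Return the CV-stack modules that are not importable."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    gaps = missing_cv_deps()
    print("CV stack OK" if not gaps else f"missing: {', '.join(gaps)}")
```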
This project was developed for a hackathon. Add or replace this section with your chosen license before public distribution.