This document covers advanced configuration for self-hosted Multica deployments. For the quick start guide, see SELF_HOSTING.md.
All configuration is done via environment variables. Copy `.env.example` as a starting point.
| Variable | Description | Example |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | `postgres://multica:multica@localhost:5432/multica?sslmode=disable` |
| `JWT_SECRET` | **Must change from default.** Secret key for signing JWT tokens. Use a long random string. | `openssl rand -hex 32` |
| `FRONTEND_ORIGIN` | URL where the frontend is served (used for CORS) | `https://app.example.com` |
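As a quick sketch (assuming `openssl` is available, which it is on most Linux/macOS systems), a suitable `JWT_SECRET` value can be generated like this:

```shell
# Generate a 32-byte random secret, printed as 64 hex characters
JWT_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=$JWT_SECRET"
```

Paste the printed line into your `.env`, replacing the default value.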
These have sensible defaults and only need to be set when tuning a large or constrained deployment. Precedence (highest first): env var → `pool_*` query params on `DATABASE_URL` → built-in default.
| Variable | Description | Default |
|---|---|---|
| `DATABASE_MAX_CONNS` | pgxpool max connections per pod. `pod_count × DATABASE_MAX_CONNS` should stay well below the Postgres `max_connections` ceiling. With a connection pooler (PgBouncer / RDS Proxy / Supavisor) in front, this can be raised significantly. | `25` |
| `DATABASE_MIN_CONNS` | pgxpool warm baseline connections per pod. Auto-clamped to `DATABASE_MAX_CONNS`. | `5` |
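For reference, the same limits can be expressed as query params on `DATABASE_URL` itself, using pgxpool's `pool_max_conns` / `pool_min_conns` parameters (a sketch — the dedicated env vars above take precedence over these):

```shell
# .env — pool sizing embedded in the connection string
# (DATABASE_MAX_CONNS / DATABASE_MIN_CONNS, if set, override these query params)
DATABASE_URL="postgres://multica:multica@localhost:5432/multica?sslmode=disable&pool_max_conns=25&pool_min_conns=5"
```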
Multica uses email-based magic link authentication via Resend.
| Variable | Description |
|---|---|
| `RESEND_API_KEY` | Your Resend API key |
| `RESEND_FROM_EMAIL` | Sender email address (default: `noreply@multica.ai`) |
Note: The dev master verification code `888888` is gated by `APP_ENV != "production"`. The Docker self-host stack defaults to `APP_ENV=production` (so `888888` is disabled), which protects publicly reachable instances. For local development without email configured, set `APP_ENV=development` in your `.env` to enable `888888` — never do this on a public instance.
| Variable | Description |
|---|---|
| `GOOGLE_CLIENT_ID` | Google OAuth client ID |
| `GOOGLE_CLIENT_SECRET` | Google OAuth client secret |
| `GOOGLE_REDIRECT_URI` | OAuth callback URL (e.g. `https://app.example.com/auth/callback`) |
Changes take effect after restarting the backend / compose stack. The web UI reads `GOOGLE_CLIENT_ID` from `/api/config` at runtime, so no web rebuild is needed.
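Put together, a `.env` fragment for Google OAuth might look like this (the client ID and secret values below are placeholders — use the real values from your Google Cloud console):

```shell
# .env — Google OAuth (illustrative placeholder values)
GOOGLE_CLIENT_ID=1234567890-example.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-client-secret
GOOGLE_REDIRECT_URI=https://app.example.com/auth/callback
```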
| Variable | Description |
|---|---|
| `ALLOW_SIGNUP` | Set to `false` to disable new user signups on a private instance |
| `ALLOWED_EMAIL_DOMAINS` | Optional comma-separated allowlist of email domains |
| `ALLOWED_EMAILS` | Optional comma-separated allowlist of exact email addresses |
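For example, a private instance restricted to one company domain plus a single outside collaborator could combine these (illustrative values):

```shell
# .env — private instance: no open signup, domain allowlist plus one exact address
ALLOW_SIGNUP=false
ALLOWED_EMAIL_DOMAINS=example.com
ALLOWED_EMAILS=contractor@partner.example
```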
Changes take effect after restarting the backend / compose stack. The web UI reads `ALLOW_SIGNUP` from `/api/config` at runtime, so no web rebuild is needed.
For file uploads and attachments, configure S3 and CloudFront:
| Variable | Description |
|---|---|
| `S3_BUCKET` | S3 bucket name |
| `S3_REGION` | AWS region (default: `us-west-2`) |
| `CLOUDFRONT_DOMAIN` | CloudFront distribution domain |
| `CLOUDFRONT_KEY_PAIR_ID` | CloudFront key pair ID for signed URLs |
| `CLOUDFRONT_PRIVATE_KEY` | CloudFront private key (PEM format) |
| Variable | Description |
|---|---|
| `COOKIE_DOMAIN` | Optional `Domain` attribute for session + CloudFront cookies. Leave empty for single-host deployments (localhost, LAN IP, or a single hostname). Only set it when the frontend and backend sit on different subdomains of one registered domain (e.g. `.example.com`). Do not use an IP literal — RFC 6265 forbids IP addresses in the cookie `Domain` attribute and browsers will drop such `Set-Cookie` headers. |
The `Secure` flag on session cookies is derived automatically from the scheme of `FRONTEND_ORIGIN`: HTTPS origins get `Secure` cookies; plain-HTTP origins (LAN / private-network self-host) get non-secure cookies so the browser can actually store them.
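As a sketch, a split-subdomain deployment where the frontend and backend share one registered domain might set:

```shell
# .env — frontend on app.example.com, backend on api.example.com
FRONTEND_ORIGIN=https://app.example.com
# Leading dot: cookie is valid on all *.example.com hosts
COOKIE_DOMAIN=.example.com
```

Because `FRONTEND_ORIGIN` here uses HTTPS, session cookies automatically get the `Secure` flag.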
| Variable | Default | Description |
|---|---|---|
| `PORT` | `8080` | Backend server port |
| `FRONTEND_PORT` | `3000` | Frontend port |
| `CORS_ALLOWED_ORIGINS` | Value of `FRONTEND_ORIGIN` | Comma-separated list of allowed origins |
| `LOG_LEVEL` | `info` | Log level: `debug`, `info`, `warn`, `error` |
These are configured on each user's machine, not on the server:
| Variable | Default | Description |
|---|---|---|
| `MULTICA_SERVER_URL` | `ws://localhost:8080/ws` | WebSocket URL for daemon → server connection |
| `MULTICA_APP_URL` | `http://localhost:3000` | Frontend URL for CLI login flow |
| `MULTICA_DAEMON_POLL_INTERVAL` | `3s` | How often the daemon polls for tasks |
| `MULTICA_DAEMON_HEARTBEAT_INTERVAL` | `15s` | Heartbeat frequency |
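For instance, a user pointing their daemon at a self-hosted server behind TLS might export the following (hostnames are placeholders; note `wss://` rather than `ws://` when the API is served over HTTPS):

```shell
# Shell profile on the user's machine
export MULTICA_SERVER_URL=wss://api.example.com/ws
export MULTICA_APP_URL=https://app.example.com
export MULTICA_DAEMON_POLL_INTERVAL=5s
```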
Agent-specific overrides:
| Variable | Description |
|---|---|
| `MULTICA_CLAUDE_PATH` | Custom path to the `claude` binary |
| `MULTICA_CLAUDE_MODEL` | Override the Claude model used |
| `MULTICA_CODEX_PATH` | Custom path to the `codex` binary |
| `MULTICA_CODEX_MODEL` | Override the Codex model used |
| `MULTICA_OPENCODE_PATH` | Custom path to the `opencode` binary |
| `MULTICA_OPENCODE_MODEL` | Override the OpenCode model used |
| `MULTICA_OPENCLAW_PATH` | Custom path to the `openclaw` binary |
| `MULTICA_OPENCLAW_MODEL` | Override the OpenClaw model used |
| `MULTICA_HERMES_PATH` | Custom path to the `hermes` binary |
| `MULTICA_HERMES_MODEL` | Override the Hermes model used |
| `MULTICA_GEMINI_PATH` | Custom path to the `gemini` binary |
| `MULTICA_GEMINI_MODEL` | Override the Gemini model used |
| `MULTICA_PI_PATH` | Custom path to the `pi` binary |
| `MULTICA_PI_MODEL` | Override the Pi model used |
| `MULTICA_CURSOR_PATH` | Custom path to the `cursor-agent` binary |
| `MULTICA_CURSOR_MODEL` | Override the Cursor Agent model used |
Multica requires PostgreSQL 17 with the pgvector extension.
The `docker-compose.selfhost.yml` stack includes PostgreSQL; no separate setup is needed.
If you prefer to use an existing PostgreSQL instance, ensure the pgvector extension is available:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```

Set `DATABASE_URL` in your `.env` and remove the `postgres` service from the compose file.
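One way to confirm the extension is actually installed in the target database (a sketch, assuming `psql` is on your PATH and `DATABASE_URL` is set):

```shell
# Prints the installed pgvector version; empty output means the extension is missing
psql "$DATABASE_URL" -c "SELECT extversion FROM pg_extension WHERE extname = 'vector';"
```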
The Docker Compose setup runs migrations automatically. If you need to run them manually:
```shell
# Using the built binary
./server/bin/migrate up

# Or from source
cd server && go run ./cmd/migrate up
```

If you prefer to build and run services manually:
Prerequisites: Go 1.26+, Node.js 20+, pnpm 10.28+, PostgreSQL 17 with pgvector.
```shell
# Start your PostgreSQL (or use: docker compose up -d postgres)

# Build the backend
make build

# Run database migrations
DATABASE_URL="your-database-url" ./server/bin/migrate up

# Start the backend server
DATABASE_URL="your-database-url" PORT=8080 JWT_SECRET="your-secret" ./server/bin/server
```

For the frontend:
```shell
pnpm install
pnpm build

# Start the frontend (production mode)
cd apps/web
REMOTE_API_URL=http://localhost:8080 pnpm start
```

In production, put a reverse proxy in front of both the backend and frontend to handle TLS and routing.
Caddy:

```
app.example.com {
    reverse_proxy localhost:3000
}

api.example.com {
    reverse_proxy localhost:8080
}
```
Nginx:

```nginx
# Frontend
server {
    listen 443 ssl;
    server_name app.example.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Backend API
server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # WebSocket support
    location /ws {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400;
    }
}
```

When using separate domains for frontend and backend, set these environment variables accordingly:
```shell
# Backend
FRONTEND_ORIGIN=https://app.example.com
CORS_ALLOWED_ORIGINS=https://app.example.com

# Frontend (only if you are building the web image from source via docker-compose.selfhost.build.yml)
REMOTE_API_URL=https://api.example.com
NEXT_PUBLIC_API_URL=https://api.example.com
NEXT_PUBLIC_WS_URL=wss://api.example.com/ws
```

By default, Multica works on localhost. If you access it from another machine on the LAN (e.g. `http://192.168.1.100:3000`), you need to tell the backend to accept that origin:
```shell
# .env — replace with your server's LAN IP
FRONTEND_ORIGIN=http://192.168.1.100:3000
CORS_ALLOWED_ORIGINS=http://192.168.1.100:3000
```

Then restart the stack:

```shell
docker compose -f docker-compose.selfhost.yml up -d
```

HTTP requests (issues, comments, uploads) work on LAN out of the box — Next.js rewrites proxy `/api`, `/auth`, and `/uploads` to the backend. WebSockets do not: Next.js rewrites only forward HTTP requests, not the `Upgrade` handshake a WebSocket needs. If you open the app on `http://<lan-ip>:3000`, real-time features (chat streaming, live issue updates, notifications) will fail to connect until you do one of the following:
- **Put a reverse proxy in front of the stack (recommended).** Nginx or Caddy terminates the WebSocket upgrade and forwards it to the backend on port 8080. See the Reverse Proxy section above — the Nginx example already includes a `location /ws { ... }` block with the correct `Upgrade`/`Connection` headers. Once a proxy is in place the browser connects directly through it, so no frontend rebuild is needed.

- **Bake a WebSocket URL into the web image.** If you are not running a reverse proxy, rebuild the web image with `NEXT_PUBLIC_WS_URL` pointing straight at the backend (port 8080 must be reachable from the browser):

  ```shell
  # In .env
  NEXT_PUBLIC_WS_URL=ws://<lan-ip>:8080/ws

  # Rebuild the web image so the build-time value is baked in
  docker compose -f docker-compose.selfhost.yml -f docker-compose.selfhost.build.yml up -d --build
  ```

  `NEXT_PUBLIC_WS_URL` is a build-time variable (see `Dockerfile.web`), so setting it only in `environment:` on the pre-built image has no effect — you must use the `selfhost.build.yml` override that rebuilds the image.
Note: If you need to hard-code a different public API / WebSocket endpoint into the web image for any other reason, use the same source-build override: `docker compose -f docker-compose.selfhost.yml -f docker-compose.selfhost.build.yml up -d --build`.
The backend exposes a health check endpoint:
```
GET /health
→ {"status":"ok"}
```
Use this for load balancer health checks or monitoring.
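For example, a shell probe suitable for a monitoring script (a sketch, assuming the backend is reachable on `localhost:8080`):

```shell
# Exits non-zero unless the backend responds and reports healthy
curl -fsS http://localhost:8080/health | grep -q '"status":"ok"'
```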
```shell
docker compose -f docker-compose.selfhost.yml pull
docker compose -f docker-compose.selfhost.yml up -d
```

Pin `MULTICA_IMAGE_TAG` in `.env` to an exact release like `v0.2.4` if you want to stay on a specific version. Migrations run automatically on backend startup. They are idempotent — running them again has no additional effect.

If the selected GHCR tag has not been published yet, fall back to `docker compose -f docker-compose.selfhost.yml -f docker-compose.selfhost.build.yml up -d --build`.