# =============================================================================
# service-cloud-api — Environment Variables
# =============================================================================
# Copy to .env.local for local dev. Production values come from K8s secrets/config.
# For production secret deployment use: admin/cloud/secrets/deploy-secrets.sh
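# Quick start (assumes you are at the service-cloud-api repo root):
#   cp .env.example .env.local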
# =============================================================================

# =============================================================================
# Core runtime
# =============================================================================
PORT=1602
NODE_ENV=development
APP_URL=http://localhost:3000
COMMIT_HASH=dev
RUNTIME_PORT=3000

# =============================================================================
# Database
# =============================================================================
DATABASE_URL=postgresql://og@localhost:5432/alternatefutures_platform
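# Create the database locally if it doesn't exist yet (assumes a running
# local Postgres and an `og` role; adjust the URL above to match your setup):
#   createdb alternatefutures_platform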

# =============================================================================
# Auth integration
# =============================================================================
# MUST match service-auth JWT_SECRET
JWT_SECRET=
AUTH_SERVICE_URL=http://localhost:1601
AUTH_INTROSPECTION_SECRET=
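# Both secrets can be generated with the command below (assumption: any
# high-entropy random string works, as long as JWT_SECRET matches service-auth):
#   openssl rand -base64 32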
PAT_VALIDATION_TTL_MS=30000

# =============================================================================
# Akash provider orchestration
# =============================================================================
AKASH_MNEMONIC=
AKASH_KEY_NAME=default
RPC_ENDPOINT=https://rpc.akashnet.net:443
GRPC_ENDPOINT=https://akash-grpc.publicnode.com:443
AKASH_CHAIN_ID=akashnet-2
AKT_USD_PRICE=0.50
AKASH_SSL_PROXY_PROVIDER=akash1zlsep362zz46qlwzttm06t8lv9qtg8gtaya97u
AKASH_SSL_PROXY_PROVIDER_NAME=america.computer
# Phase 47 safety gates. NEVER enable locally with the production mnemonic.
# Production sets both to 1 via infra/k8s/api/configmap.yaml.
# AKASH_ALLOW_CHAIN_ORPHAN_SWEEP=1
# AKASH_ENABLE_BACKGROUND_WALLET_OPS=1

# =============================================================================
# Phala provider orchestration
# =============================================================================
PHALA_API_KEY=
# Path to SSH private key for Phala CVM shell access (public key at .pub is injected during deploy)
# Default: ~/.ssh/af_phala_ed25519
# PHALA_SSH_KEY_PATH=

# =============================================================================
# Subdomain proxy and DNS
# =============================================================================
PROXY_BASE_DOMAIN=local.alternatefutures.ai
# Domain baked into deployed containers' env vars (API/WS URLs).
# Local: local.alternatefutures.ai (Caddy on :443 → localhost:1602). Production: alternatefutures.ai
PROXY_DEPLOY_DOMAIN=local.alternatefutures.ai
PROXY_CACHE_TTL_MS=30000
PROXY_CACHE_MAX_SIZE=1000
PLATFORM_CNAME_TARGET=cname.alternatefutures.ai
PLATFORM_IP_ADDRESS=0.0.0.0

# =============================================================================
# Billing and payments
# =============================================================================
STRIPE_SECRET_KEY=
STRIPE_WEBHOOK_SECRET=

# =============================================================================
# Internal service-to-service auth
# =============================================================================
# Shared token for internal HTTP endpoints (provider-registry, flush-cache, telemetry, compute-resume).
# MUST match the value set in web-app's server env (INTERNAL_AUTH_TOKEN) for GPU availability to work.
# Generate with: openssl rand -base64 32
INTERNAL_AUTH_TOKEN=

# =============================================================================
# Discord Feedback Webhook
# =============================================================================
# Webhook URL for bug reports / feedback submitted through the app UI
DISCORD_FEEDBACK_WEBHOOK_URL=

# =============================================================================
# Ops
# =============================================================================
ADMIN_EMAIL=

# =============================================================================
# Open Provider
# =============================================================================
OPENPROVIDER_USERNAME=
OPENPROVIDER_PASSWORD=
# Registrant contact handle from OpenProvider (e.g. AT123456-US). Required for domain purchase.
# Find it at https://rcp.openprovider.eu → Contacts
OPENPROVIDER_OWNER_HANDLE=

# =============================================================================
# GitHub App — "Deploy from GitHub"
# =============================================================================
# Register the App via infra/github-app/register.html (one-time, manifest flow).
# All six values come from that flow. Until they're set the GitHub deploy
# tile in the UI shows the "not configured" state.
GITHUB_APP_ID=
GITHUB_APP_SLUG=
GITHUB_APP_CLIENT_ID=
GITHUB_APP_CLIENT_SECRET=
GITHUB_APP_WEBHOOK_SECRET=
# Base64 of the .pem private key GitHub gave you (so it survives K8s secrets).
# Generate with: base64 -i path/to/private-key.pem | tr -d '\n'
GITHUB_APP_PRIVATE_KEY_B64=
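# Sanity-check the round-trip before deploying — this should print a
# "-----BEGIN ... PRIVATE KEY-----" header line (older macOS: base64 -D):
#   echo "$GITHUB_APP_PRIVATE_KEY_B64" | base64 -d | head -1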
# PAT (or App-installation token) with packages:write that the af-builder Job
# uses to push the produced image to GHCR.
GHCR_PUSH_TOKEN=
# Optional: dedicated read-only PAT (scopes: `read:packages`) used solely
# to pull user-built images on Akash providers. Injected into the SDL as
# per-service `credentials:` so providers can pull private GHCR images
# without us fighting GitHub org visibility policies. Falls back to
# GHCR_PUSH_TOKEN if unset, but you REALLY want a read-only token in
# production because whatever value we put here is shipped to the
# winning provider with each lease manifest — a malicious provider
# could exfiltrate it. Read-only scope caps the blast radius to
# "pull from our GHCR" instead of "push to our GHCR".
# GHCR_PULL_TOKEN=
# Optional overrides — defaults are sane.
# GHCR_USER=alternatefutures-deploy
# GHCR_NAMESPACE=alternatefutures
# BUILDER_IMAGE=ghcr.io/alternatefutures/af-builder:latest
# BUILDER_NAMESPACE=alternatefutures-builds
# Local dev only: skip both K8s Job creation AND Fly machine spawn; only
# persist the BuildJob row. Useful for iterating on UI without the build
# pipeline running. Honoured by buildSpawner regardless of BUILD_EXECUTOR.
# BUILDER_DRY_RUN=1

# =============================================================================
# Build executor — where af-builder actually runs
# =============================================================================
# Selects the backend buildSpawner.ts dispatches new BuildJobs to:
# - `k8s` (default): spawn a Kubernetes Job in BUILDER_NAMESPACE on the
# cluster pointed to by KUBERNETES_SERVICE_HOST / ~/.kube/config.
# Pairs builder + dind sidecar via shareProcessNamespace.
# - `fly`: spawn a single ephemeral Fly Machine running the Fly variant
# of service-builder (Dockerfile.fly, dockerd embedded). Costs pennies
# per build, isolates blast radius, doesn't burn cluster CPU.
# Production currently runs `k8s`; staging is the canary on `fly`.
# Local dev: leave `k8s` until you've started the cloudflared tunnel +
# pointed API_BASE_URL at it (Fly machines need a publicly-reachable
# CALLBACK_URL or the build will succeed silently and never update the UI).
BUILD_EXECUTOR=k8s
# When BUILD_EXECUTOR=fly, by default we fall back to k8s if the Fly API
# call fails (token expired, app missing, region down). Set to 1 to fail
# closed instead — useful in staging when you want to detect Fly outages
# rather than mask them with a silent k8s rebid.
# BUILD_EXECUTOR_NO_FALLBACK=1

# =============================================================================
# Fly.io builder pool (only consulted when BUILD_EXECUTOR=fly)
# =============================================================================
# API token created with: flyctl tokens create deploy -a af-builders \
# --name "service-cloud-api builders" --expiry 0
# Production reads this from the `fly-credentials` K8s secret; locally it
# lives here. The same token works for both staging and prod since Fly
# tokens are app-scoped, not env-scoped.
FLY_API_TOKEN=
# Fly app the spawned machines live under. We pre-created `af-builders`
# in the angela-steffens org as a billing/ACL boundary. The app stays in
# "Pending" state forever in the Fly dashboard — that's fine, machines
# show up under the Machines tab as builds run.
FLY_BUILDER_APP=af-builders
# Region machines are placed in. `ord` (Chicago) is closest to AWS us-east-1
# (where the cluster lives) so callbacks have minimal latency.
FLY_BUILDER_REGION=ord
# The image Fly pulls into each machine. MUST be public on GHCR — Fly
# pulls without our org credentials. Built from service-builder/Dockerfile.fly.
FLY_BUILDER_IMAGE=ghcr.io/alternatefutures/af-builder:fly-latest
# Machine size. `performance` CPUs are 2-3x faster than `shared` for the
# CPU-bound nixpacks/docker build phase and worth the extra ~$0.001/min.
FLY_BUILDER_CPU_KIND=performance
FLY_BUILDER_CPUS=2
FLY_BUILDER_MEMORY_MB=4096
# Persistent buildkit/dockerd cache volume(s). Comma-separated list of
# Fly Volume ids — flyioBuilder.ts picks one at random per spawn and
# rotates through the pool on 409 volume-busy (enables concurrent builds).
# Create volumes with:
# flyctl volumes create af_build_cache_0 -a af-builders --region ord --size 20
# then prime each before it serves real traffic (cuts cold build ~3-4min → ~60-90s):
# ./admin/cloud/scripts/prime-fly-cache.sh --volume vol_xxx
# Leave unset to fall back to ephemeral per-machine state (slow but functional).
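# List the existing volume ids for the pool with:
#   flyctl volumes list -a af-builders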
# FLY_BUILDER_CACHE_VOLUME=vol_xxxxxxxxxxxxxxxx
# FLY_BUILDER_CACHE_MOUNT=/var/lib/af-cache
# Hard runtime cap per build. build-fly.sh enforces via `timeout(1)`;
# on expiry posts FAILED callback → Fly auto_destroy reaps the VM.
# Default 900s (15 min). Without this, a runaway build (infinite loop,
# OOM thrash) runs until you manually `flyctl machines destroy --force`.
# FLY_BUILDER_MAX_RUNTIME_SECONDS=900
# Advanced overrides — leave unset to use defaults baked into flyioBuilder.ts.
# FLY_API_BASE=https://api.machines.dev/v1
# FLY_API_TIMEOUT_MS=30000

# =============================================================================
# Build callback base URL (af-builder → service-cloud-api)
# =============================================================================
# Override the URL af-builder Jobs/Machines POST status callbacks to
# (defaults to https://api.alternatefutures.ai). Builders need a
# PUBLICLY-REACHABLE URL — `localhost`, `127.0.0.1`, and any
# `*.local.alternatefutures.ai` hostname will fail silently from a Fly
# machine because they only resolve on the operator's laptop.
#
# In-cluster (K8s executor): set to the in-cluster Service URL so the
# builder pod hits the api directly without leaving the namespace.
# API_BASE_URL=http://service-cloud-api:4000
#
# Local dev with BUILD_EXECUTOR=fly: start a tunnel and paste its
# public hostname here, then restart the api so buildSpawner picks
# up the new value:
# cloudflared tunnel --url http://localhost:1602
# # → https://<random>.trycloudflare.com
# API_BASE_URL=https://<random>.trycloudflare.com
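# Before triggering a build, confirm the tunnel answers from outside your
# machine (any HTTP status line proves reachability):
#   curl -sI https://<random>.trycloudflare.com | head -1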
#
# Local dev with BUILD_EXECUTOR=k8s + a real cluster: same as above,
# the K8s Job needs a publicly-reachable URL too unless you're running
# the api inside the same cluster.
# API_BASE_URL=

# ── CALLBACK_BASE_URL ───────────────────────────────────────────────
# Where the builder (Fly Machine OR K8s Job) POSTs the build-status
# callback. Takes precedence over API_BASE_URL when set. Needed
# primarily when API_BASE_URL is an internal URL the builder can't
# reach — specifically: BUILD_EXECUTOR=fly always needs a public URL
# here because Fly machines can't resolve *.svc.cluster.local.
# Production: https://api.alternatefutures.ai
# Staging: https://api.staging.alternatefutures.ai
# Local dev: your cloudflared tunnel URL (if BUILD_EXECUTOR=fly)
# CALLBACK_BASE_URL=