diff --git a/docs/01-getting-started.md b/docs/01-getting-started.md index 670763d1..958161a8 100644 --- a/docs/01-getting-started.md +++ b/docs/01-getting-started.md @@ -239,9 +239,7 @@ You'll land on the **"Publishable and secret API keys"** tab. Copy these into yo - 🔖 **Secret key** — Scroll down to the **"Secret keys"** section on the same page. You'll see a `default` key. Click the copy button to copy it. (You can also click **"+ New secret key"** to create a dedicated one named `open-brain` — this makes it easier to revoke later without affecting other services, but using the default is fine too.) > [!WARNING] -> Treat the Secret key like a password. Anyone with it has full access to your data. The "Publishable key" at the top of the page is safe to expose publicly — you don't need it for this setup. -> -> You may also see a **"Legacy anon, service_role API keys"** tab — those are the old-style JWT keys. You don't need them. Everything in this guide uses the new key format. +> Treat the Secret key like a password. Anyone with it has full access to your data. The "Publishable key" at the top of the page is safe to expose publicly — you don't need it for this setup. You may also see a **"Legacy anon, service_role API keys"** tab — those are the old-style JWT keys. You don't need them. Everything in this guide uses the new key format. ✅ **Done when:** Your credential tracker has both **Project URL** and **Secret key** filled in. @@ -289,6 +287,8 @@ Copy the output — it'll look something like `a3f8b2c1d4e5...` (64 characters). > [!WARNING] > Copy and paste the command for **your operating system only**. The Mac command won't work on Windows and vice versa. + + > [!IMPORTANT] > This is your **one access key for all of Open Brain** — core setup and every extension you add later. Save it somewhere permanent. Never generate a new one unless you want to replace it for ALL deployed functions. 
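The per-OS commands above are the canonical route. As a sanity check, or on a platform the guide's two commands don't cover, any tool that prints 32 random bytes as lowercase hex produces a key in the same format. A minimal sketch, assuming the `openssl` CLI is installed (not guaranteed on Windows):

```shell
# Generate a 64-character hex access key (32 random bytes).
# Same format as the output of the guide's Mac/Windows commands.
openssl rand -hex 32
```

Whichever command you use, record the output in your credential tracker before moving on.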
@@ -419,9 +419,12 @@ supabase secrets set OPENROUTER_API_KEY=your-openrouter-key-here > [!CAUTION] > Make sure the access key you set here **exactly matches** what you saved in your credential tracker. If they don't match, you'll get 401 errors when connecting your AI. + + > **If you ever rotate your OpenRouter key:** you must re-run the `supabase secrets set` command above with the new key, AND update any local `.env` files that reference it. The edge function reads from Supabase secrets at runtime — updating the key on openrouter.ai alone won't propagate here. See the [FAQ on key rotation](03-faq.md#api-key-rotation) for the full checklist. ### Create the Function + ![6.6](https://img.shields.io/badge/6.6-Download_the_Server_Files-555?style=for-the-badge&labelColor=1E88E5) Three commands, run them one at a time in order: diff --git a/docs/03-faq.md b/docs/03-faq.md index a376d9fc..dc75c395 100644 --- a/docs/03-faq.md +++ b/docs/03-faq.md @@ -173,6 +173,7 @@ When you generate a new key on openrouter.ai/keys, the old key is revoked immedi **Places your OpenRouter key lives (update ALL of them):** 1. **Supabase Edge Function secrets** — This is the most common one to miss. Your MCP server reads the key from here at runtime. + ```bash supabase secrets set OPENROUTER_API_KEY=sk-or-v1-your-new-key ``` diff --git a/docs/05-tool-audit.md b/docs/05-tool-audit.md index 6cf0d34c..f19df0e9 100644 --- a/docs/05-tool-audit.md +++ b/docs/05-tool-audit.md @@ -81,6 +81,7 @@ Once you've identified bloat, here are the patterns for consolidating. 
Instead of 5 separate tools per table, expose one tool with an `action` parameter: **Before (5 tools):** + ``` create_recipe get_recipe @@ -90,6 +91,7 @@ list_recipes ``` **After (1 tool):** + ``` manage_recipe action: "create" | "read" | "update" | "delete" | "list" @@ -107,6 +109,7 @@ manage_recipe A gentler consolidation that preserves clear intent: **Before (5 tools):** + ``` create_recipe get_recipe @@ -116,6 +119,7 @@ search_recipes ``` **After (2 tools):** + ``` save_recipe — creates or updates (upsert pattern) query_recipes — search, filter, get by ID, list all @@ -132,6 +136,7 @@ This maps to how people actually talk to their AI: "save this" or "find that." T For tables with similar schemas (all your Open Brain extension tables follow the same `user_id` + timestamps + domain fields pattern), you can go further: **Before (20+ tools across 4 extensions):** + ``` add_household_item, search_household_items, get_item_details, add_vendor, list_vendors, @@ -140,6 +145,7 @@ search_maintenance_history, add_family_member, ... ``` **After (2–3 tools):** + ``` save_entity entity_type: "household_item" | "vendor" | "maintenance_task" | ... @@ -177,6 +183,7 @@ Merging tools reduces count within a server. Scoping splits tools across servers Most Open Brain users' workflows fall into three modes: #### Capture server (write-heavy) + **When you use it:** Quick capture moments — jotting down a thought, logging a contact interaction, saving a recipe. **Tools to include:** @@ -189,6 +196,7 @@ Most Open Brain users' workflows fall into three modes: **Context cost:** ~5–8 tools, ~1,500–3,000 tokens. #### Query server (read-heavy) + **When you use it:** Research, recall, weekly reviews, planning sessions — any time you're pulling information out rather than putting it in. **Tools to include:** @@ -202,6 +210,7 @@ Most Open Brain users' workflows fall into three modes: **Context cost:** ~8–12 tools, ~3,000–5,000 tokens. 
#### Admin server (rarely used) + **When you use it:** Occasional maintenance — bulk updates, deletions, schema changes, data cleanup. **Tools to include:** diff --git a/integrations/slack-capture/README.md b/integrations/slack-capture/README.md index 0b212474..6c738eca 100644 --- a/integrations/slack-capture/README.md +++ b/integrations/slack-capture/README.md @@ -240,6 +240,8 @@ Replace the values with: > SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY are automatically available inside Edge Functions — you don't need to set them. + + > **If you ever rotate your OpenRouter key:** you must re-run `supabase secrets set OPENROUTER_API_KEY=...` with the new key. This Edge Function reads the key from Supabase secrets at runtime — updating it on openrouter.ai alone won't propagate here. See the [FAQ on key rotation](../../docs/03-faq.md#api-key-rotation) for the full checklist. ### Deploy @@ -342,6 +344,8 @@ You now have a Slack channel that acts as a direct write path into your Open Bra This is one of many possible capture interfaces. Your Open Brain MCP server also includes a `capture_thought` tool, which means any MCP-connected AI (Claude Desktop, ChatGPT, Claude Code, Cursor) can write directly to your brain without switching apps. Slack is just the dedicated inbox. +Before adding more MCP-powered capture paths, review the [MCP Tool Audit & Optimization Guide](../../docs/05-tool-audit.md) so your tool surface stays intentional and manageable. + --- *Built by Nate B. Jones — part of the [Open Brain project](https://github.com/NateBJones-Projects/OB1)* diff --git a/recipes/brain-backup/README.md b/recipes/brain-backup/README.md new file mode 100644 index 00000000..cf7e5061 --- /dev/null +++ b/recipes/brain-backup/README.md @@ -0,0 +1,95 @@ +# Brain Backup and Export + +> Export all Open Brain tables to local JSON files for safekeeping. 
+
+## What It Does
+
+Paginates through every Open Brain Supabase table (1,000 rows per request) and writes each one to a dated JSON file inside a local `backup/` directory. Prints live progress and a summary table so you know exactly what was saved.
+
+## Prerequisites
+
+- Working Open Brain setup ([guide](../../docs/01-getting-started.md))
+- Node.js 18+ installed
+- A `.env.local` file in the recipe directory (or its parent) containing `SUPABASE_URL` and `SUPABASE_SERVICE_ROLE_KEY`
+
+## Credential Tracker
+
+Copy this block into a text editor and fill it in as you go.
+
+```text
+BRAIN BACKUP -- CREDENTIAL TRACKER
+--------------------------------------
+
+FROM YOUR OPEN BRAIN SETUP
+  Project URL:       ____________
+  Service-role key:  ____________
+
+--------------------------------------
+```
+
+## Steps
+
+1. **Copy the script into your project.** Place `backup-brain.mjs` wherever it's convenient, or run it directly from this recipe folder.
+
+2. **Create a `.env.local` file** next to the script (or one directory above it) with your Supabase credentials:
+
+   ```text
+   SUPABASE_URL=https://your-project.supabase.co
+   SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
+   ```
+
+3. **Run the backup:**
+
+   ```bash
+   node backup-brain.mjs
+   ```
+
+   The script will read `.env.local` automatically. Alternatively, you can export the variables first:
+
+   ```bash
+   export SUPABASE_URL=https://your-project.supabase.co
+   export SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
+   node backup-brain.mjs
+   ```
+
+4. **Check the output.** A `backup/` directory will be created containing one JSON file per table, each named `<table>-YYYY-MM-DD.json`.
+
+## Expected Outcome
+
+After running the script you should see live row counts for each table, followed by a summary like this:
+
+```
+Open Brain Backup — 2026-04-06
+Target: /path/to/backup
+
+  thoughts: 1200 rows (1.4 MB)
+  entities: 340 rows (98.2 KB)
+  ...
+
+--- Backup Summary ---
+Date: 2026-04-06
+Dir: /path/to/backup
+
+Table                   Rows      Size
+--------------------------------------
+thoughts                1200    1.4 MB
+entities                 340   98.2 KB
+...
+--------------------------------------
+TOTAL                   1842    1.7 MB
+
+Done. 7/7 tables exported successfully.
+```
+
+The `backup/` directory will contain one JSON file per table. Each file is a valid JSON array that can be re-imported or queried with any JSON tool.
+
+## Troubleshooting
+
+**Issue: "SUPABASE_URL not found" error**
+Solution: Make sure `.env.local` exists next to the script (or one directory up) and contains a line starting with `SUPABASE_URL=`.
+
+**Issue: "SUPABASE_SERVICE_ROLE_KEY not found" error**
+Solution: Add your service-role key to `.env.local`. You can find it in your Supabase dashboard under Settings > API.
+
+**Issue: "PostgREST error 401" or "PostgREST error 403"**
+Solution: Your service-role key may be incorrect or may have been rotated. Copy the current key from the Supabase dashboard and update `.env.local`.
diff --git a/recipes/brain-backup/backup-brain.mjs b/recipes/brain-backup/backup-brain.mjs
new file mode 100644
index 00000000..6db6e541
--- /dev/null
+++ b/recipes/brain-backup/backup-brain.mjs
@@ -0,0 +1,268 @@
+#!/usr/bin/env node
+/**
+ * backup-brain.mjs — Export all Open Brain Supabase tables to local JSON files.
+ *
+ * Paginates through PostgREST (1000 rows per request) and writes each table
+ * to backup/<table>-YYYY-MM-DD.json. Shows progress and prints a summary.
+ *
+ * Usage:
+ *   export SUPABASE_URL=https://your-project.supabase.co
+ *   export SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
+ *   node backup-brain.mjs
+ *
+ * Or let the script read .env.local directly:
+ *   node backup-brain.mjs
+ */
+
+import fs from "node:fs";
+import path from "node:path";
+import { fileURLToPath } from "node:url";
+
+const __dirname = path.dirname(fileURLToPath(import.meta.url));
+const PROJECT_ROOT = path.resolve(__dirname, "..");
+
+// ---------------------------------------------------------------------------
+// Config
+// ---------------------------------------------------------------------------
+
+const PAGE_SIZE = 1000;
+
+const TABLES = [
+  { name: "thoughts", orderBy: "id" },
+  { name: "entities", orderBy: "id" },
+  { name: "edges", orderBy: "id" },
+  { name: "thought_entities", orderBy: "thought_id,entity_id" },
+  { name: "reflections", orderBy: "id" },
+  { name: "ingestion_jobs", orderBy: "id" },
+  { name: "ingestion_items", orderBy: "id" },
+];
+
+// ---------------------------------------------------------------------------
+// Env loading
+// ---------------------------------------------------------------------------
+
+function loadEnvFile() {
+  // Look for .env.local next to the script first, then one directory up.
+ const candidates = [ + path.join(__dirname, ".env.local"), + path.join(PROJECT_ROOT, ".env.local"), + ]; + const vars = {}; + for (const envPath of candidates) { + if (fs.existsSync(envPath)) { + for (const line of fs.readFileSync(envPath, "utf8").split("\n")) { + const trimmed = line.trim(); + if (!trimmed || trimmed.startsWith("#")) continue; + const eqIdx = trimmed.indexOf("="); + if (eqIdx > 0) { + const key = trimmed.slice(0, eqIdx); + if (!(key in vars)) { + vars[key] = trimmed.slice(eqIdx + 1).replace(/^['"]|['"]$/g, ""); + } + } + } + break; // use the first file found + } + } + return vars; +} + +const envVars = loadEnvFile(); + +const SUPABASE_URL = + process.env.SUPABASE_URL || + envVars.SUPABASE_URL || + ""; + +if (!SUPABASE_URL) { + console.error( + "ERROR: SUPABASE_URL not found.\n" + + "Either export it or ensure it exists in .env.local." + ); + process.exit(1); +} + +const REST_BASE = `${SUPABASE_URL}/rest/v1`; + +const SERVICE_KEY = + process.env.SUPABASE_SERVICE_ROLE_KEY || + envVars.SUPABASE_SERVICE_ROLE_KEY || + ""; + +if (!SERVICE_KEY) { + console.error( + "ERROR: SUPABASE_SERVICE_ROLE_KEY not found.\n" + + "Either export it or ensure it exists in .env.local." + ); + process.exit(1); +} + +const HEADERS = { + apikey: SERVICE_KEY, + Authorization: `Bearer ${SERVICE_KEY}`, + "Content-Type": "application/json", + Prefer: "count=exact", +}; + +// --------------------------------------------------------------------------- +// Helpers +// --------------------------------------------------------------------------- + +function today() { + const d = new Date(); + return d.toISOString().slice(0, 10); // YYYY-MM-DD +} + +function humanSize(bytes) { + if (bytes < 1024) return `${bytes} B`; + if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`; + return `${(bytes / (1024 * 1024)).toFixed(1)} MB`; +} + +/** Fetch a single page of rows from a table using Range header. 
*/ +async function fetchPage(table, orderBy, offset, limit) { + const url = `${REST_BASE}/${table}?order=${orderBy}&limit=${limit}&offset=${offset}`; + const rangeEnd = offset + limit - 1; + const res = await fetch(url, { + headers: { + ...HEADERS, + Range: `${offset}-${rangeEnd}`, + }, + }); + + if (!res.ok && res.status !== 206) { + const body = await res.text(); + throw new Error(`PostgREST error ${res.status} on ${table}: ${body}`); + } + + // Parse total from content-range header: "0-999/75000" + let total = null; + const cr = res.headers.get("content-range"); + if (cr) { + const match = cr.match(/\/(\d+|\*)/); + if (match && match[1] !== "*") total = parseInt(match[1], 10); + } + + const rows = await res.json(); + return { rows, total }; +} + +/** Export one table, streaming rows to disk to avoid V8 string length limits. */ +async function exportTable(tableName, orderBy, backupDir, dateStr) { + const filePath = path.join(backupDir, `${tableName}-${dateStr}.json`); + let offset = 0; + let total = null; + let rowCount = 0; + + // First page to get total count + const first = await fetchPage(tableName, orderBy, 0, PAGE_SIZE); + total = first.total; + + const label = ` ${tableName}`; + if (first.rows.length === 0) { + process.stdout.write(`${label}: 0 rows (empty table)\n`); + fs.writeFileSync(filePath, "[]"); + return { rowCount: 0, filePath, fileSize: 2 }; + } + + // Stream JSON array to file — write each row individually to avoid huge string + const fd = fs.openSync(filePath, "w"); + fs.writeSync(fd, "[\n"); + let firstRow = true; + + function writeRows(rows) { + for (const row of rows) { + if (!firstRow) fs.writeSync(fd, ",\n"); + fs.writeSync(fd, JSON.stringify(row)); + firstRow = false; + rowCount++; + } + } + + writeRows(first.rows); + process.stdout.write( + `${label}: ${rowCount}${total != null ? 
"/" + total : ""} rows\r` + ); + + // Remaining pages + offset = PAGE_SIZE; + while (first.rows.length === PAGE_SIZE && (total == null || offset < total)) { + const page = await fetchPage(tableName, orderBy, offset, PAGE_SIZE); + if (page.rows.length === 0) break; + writeRows(page.rows); + offset += page.rows.length; + + process.stdout.write( + `${label}: ${rowCount}${total != null ? "/" + total : ""} rows\r` + ); + } + + fs.writeSync(fd, "\n]"); + fs.closeSync(fd); + + const fileSize = fs.statSync(filePath).size; + + process.stdout.write( + `${label}: ${rowCount} rows (${humanSize(fileSize)}) \n` + ); + + return { rowCount, filePath, fileSize }; +} + +// --------------------------------------------------------------------------- +// Main +// --------------------------------------------------------------------------- + +async function main() { + const dateStr = today(); + const backupDir = path.join(__dirname, "backup"); + + if (!fs.existsSync(backupDir)) { + fs.mkdirSync(backupDir, { recursive: true }); + console.log(`Created ${backupDir}`); + } + + console.log(`\nOpen Brain Backup — ${dateStr}`); + console.log(`Target: ${backupDir}\n`); + + const results = []; + for (const table of TABLES) { + try { + const result = await exportTable(table.name, table.orderBy, backupDir, dateStr); + results.push({ table: table.name, ...result }); + } catch (err) { + console.error(`\n ERROR exporting ${table.name}: ${err.message}`); + results.push({ table: table.name, rowCount: 0, filePath: null, fileSize: 0, error: err.message }); + } + } + + // Summary + const totalRows = results.reduce((s, r) => s + r.rowCount, 0); + const totalSize = results.reduce((s, r) => s + r.fileSize, 0); + + console.log("\n--- Backup Summary ---"); + console.log(`Date: ${dateStr}`); + console.log(`Dir: ${backupDir}\n`); + + const colTable = "Table".padEnd(20); + const colRows = "Rows".padStart(8); + const colSize = "Size".padStart(10); + console.log(`${colTable}${colRows}${colSize}`); + 
console.log("-".repeat(38)); + + for (const r of results) { + const name = r.table.padEnd(20); + const rows = String(r.rowCount).padStart(8); + const size = (r.error ? "ERROR" : humanSize(r.fileSize)).padStart(10); + console.log(`${name}${rows}${size}`); + } + + console.log("-".repeat(38)); + console.log(`${"TOTAL".padEnd(20)}${String(totalRows).padStart(8)}${humanSize(totalSize).padStart(10)}`); + console.log(`\nDone. ${results.filter(r => !r.error).length}/${results.length} tables exported successfully.`); +} + +main().catch((err) => { + console.error("Fatal error:", err); + process.exit(1); +}); diff --git a/recipes/brain-backup/metadata.json b/recipes/brain-backup/metadata.json new file mode 100644 index 00000000..c2745bc3 --- /dev/null +++ b/recipes/brain-backup/metadata.json @@ -0,0 +1,20 @@ +{ + "name": "Brain Backup and Export", + "description": "Export all Open Brain Supabase tables to local JSON files with pagination and progress reporting.", + "category": "recipes", + "author": { + "name": "Alan Shurafa", + "github": "alanshurafa" + }, + "version": "1.0.0", + "requires": { + "open_brain": true, + "services": ["Supabase"], + "tools": ["Node.js 18+"] + }, + "tags": ["backup", "export", "data-safety"], + "difficulty": "beginner", + "estimated_time": "10 minutes", + "created": "2026-04-06", + "updated": "2026-04-06" +} diff --git a/recipes/grok-export-import/README.md b/recipes/grok-export-import/README.md index 0d015a26..b93339af 100644 --- a/recipes/grok-export-import/README.md +++ b/recipes/grok-export-import/README.md @@ -37,12 +37,14 @@ FROM OPENROUTER - Find the Grok JSON file in the export 2. **Copy this recipe folder** and install dependencies: + ```bash cd grok-export-import npm install ``` 3. **Create `.env`** with your credentials (see `.env.example`): + ```env SUPABASE_URL=https://your-project.supabase.co SUPABASE_SERVICE_ROLE_KEY=your-service-role-key @@ -50,11 +52,13 @@ FROM OPENROUTER ``` 4. 
**Preview what will be imported** (dry run): + ```bash node import-grok.mjs /path/to/grok-export.json --dry-run ``` 5. **Run the import:** + ```bash node import-grok.mjs /path/to/grok-export.json ``` diff --git a/recipes/instagram-import/README.md b/recipes/instagram-import/README.md index 20c77daa..f63768bc 100644 --- a/recipes/instagram-import/README.md +++ b/recipes/instagram-import/README.md @@ -53,12 +53,14 @@ FROM OPENROUTER - Look for the `your_instagram_activity/` folder 2. **Copy this recipe folder** and install dependencies: + ```bash cd instagram-import npm install ``` 3. **Create `.env`** with your credentials (see `.env.example`): + ```env SUPABASE_URL=https://your-project.supabase.co SUPABASE_SERVICE_ROLE_KEY=your-service-role-key @@ -66,17 +68,20 @@ FROM OPENROUTER ``` 4. **Preview what will be imported** (dry run): + ```bash node import-instagram.mjs /path/to/instagram-export --dry-run ``` 5. **Import specific types only** (optional): + ```bash node import-instagram.mjs /path/to/instagram-export --types messages node import-instagram.mjs /path/to/instagram-export --types comments,posts ``` 6. **Run the full import:** + ```bash node import-instagram.mjs /path/to/instagram-export ``` diff --git a/recipes/journals-blogger-import/README.md b/recipes/journals-blogger-import/README.md index 324768bd..f7563e4d 100644 --- a/recipes/journals-blogger-import/README.md +++ b/recipes/journals-blogger-import/README.md @@ -48,6 +48,7 @@ FROM OPENROUTER - If you have multiple blogs, export each one 2. **Place all `.atom` files in a folder:** + ``` blogger-exports/ ├── my-tech-blog.atom @@ -56,12 +57,14 @@ FROM OPENROUTER ``` 3. **Copy this recipe folder** and install dependencies: + ```bash cd journals-blogger-import npm install ``` 4. **Create `.env`** with your credentials (see `.env.example`): + ```env SUPABASE_URL=https://your-project.supabase.co SUPABASE_SERVICE_ROLE_KEY=your-service-role-key @@ -69,11 +72,13 @@ FROM OPENROUTER ``` 5. 
**Preview what will be imported** (dry run): + ```bash node import-blogger.mjs /path/to/blogger-exports --dry-run ``` 6. **Run the import:** + ```bash node import-blogger.mjs /path/to/blogger-exports ``` diff --git a/recipes/life-engine-video/README.md b/recipes/life-engine-video/README.md index 75e0192e..6a6a676b 100755 --- a/recipes/life-engine-video/README.md +++ b/recipes/life-engine-video/README.md @@ -7,6 +7,8 @@ An add-on for [Life Engine](../life-engine/) that replaces (or supplements) text > [!IMPORTANT] > **Built for [Claude Code](https://claude.ai/download), but not exclusive to it.** The Life Engine core requires Claude Code (it depends on `/loop` and skills), but this video add-on — the Remotion rendering, ElevenLabs TTS, and pipeline scripting — can be driven by any capable AI coding agent. ChatGPT handles Remotion well; other agents may work too. If you're adapting this to a different tool, the architecture and components in this guide give you everything you need. + + > [!NOTE] > **Expect iteration.** Your first rendered video will have timing issues, subtitle drift, or a voiceover script that sounds stilted. That's normal. Each render gives you feedback — adjust the VO script guidelines, tweak the subtitle chunking, tune the ElevenLabs voice settings. The structured data flowing from your Open Brain means the *content* improves automatically as your knowledge base grows. The *presentation* improves as you and your agent dial in the rendering pipeline together. @@ -320,6 +322,7 @@ export const SubtitleBar: React.FC<{ ); }; ``` +
@@ -341,6 +344,7 @@ export const ProgressBar: React.FC = () => { ); }; ``` +
@@ -390,6 +394,7 @@ export const TaskCard: React.FC<{ ); }; ``` +
@@ -435,6 +440,7 @@ export const SectionHeader: React.FC<{ ); }; ``` +
### 2.3 Scene Components @@ -486,6 +492,7 @@ export const TitleScene: React.FC<{ ); }; ``` + ### 2.4 Main Composition @@ -761,15 +768,19 @@ Place in `public/music.mp3`. The composition plays it at 12-15% volume under the ## Going Further ### Dynamic Scene Assembly + Instead of fixed scene types, let Claude decide which scenes to include based on the data. If there are no habits, skip the habits scene. If there's a lot of OB1 context for a meeting, add an extra context scene. The composition adapts to the data. ### Weekly Recap Videos + Every Sunday, render a 60-second recap of the week: meetings attended, habits completed, mood trends, and highlights. Use chart/graph animations for habit streaks. ### Voice Briefings (Audio Only) + Skip the video render entirely and just send the TTS audio as a voice message via Telegram. Much faster (seconds instead of minutes), still personal. Good for quick habit reminders. ### Screen Recording Integration + For meeting prep, capture a screenshot of the client's website or relevant dashboard and animate it into the prep briefing video. Use `@remotion/gif` to embed Chrome GIF captures. --- diff --git a/recipes/life-engine/README.md b/recipes/life-engine/README.md index 855f6c2b..07141db0 100755 --- a/recipes/life-engine/README.md +++ b/recipes/life-engine/README.md @@ -8,10 +8,8 @@ A self-improving, time-aware personal assistant that runs in the background via > [!IMPORTANT] > **This recipe requires [Claude Code](https://claude.ai/download).** It uses Claude Code-specific features — skills, the `/loop` command, and MCP server connections — that aren't available in other AI coding tools. If you're using a different agent, this one isn't for you (yet). - > [!TIP] > **You don't have to set this up manually.** This guide is detailed enough that Claude Code can do most of the setup for you. 
If you'd rather not walk through every step yourself, skip to [Quick Setup with Claude Code](#quick-setup-with-claude-code) — paste one prompt and Claude handles the plugin install, skill file creation, schema setup, and permissions configuration. Come back to the step-by-step sections if you want to understand what it built or customize further. - > [!NOTE] > **This will not be perfect on day one.** That's by design. Life Engine is built to iterate — your first morning briefing will be rough, your tenth will be dialed in, and by week four the system is suggesting its own improvements based on what you actually use. The value comes from the feedback loop between you and the agent, powered by the structured context your Open Brain provides. Treat the first run as a starting point, not a finished product. @@ -227,10 +225,13 @@ claude --channels plugin:telegram@claude-plugins-official 1. DM your bot on Telegram — send it any message (e.g., "hello") 2. The bot replies with a **6-character pairing code** 3. Back in Claude Code, approve the pairing: + ``` /telegram:access pair ``` + 4. Lock down access so only your account can reach the session: + ``` /telegram:access policy allowlist ``` @@ -280,6 +281,7 @@ claude --channels plugin:discord@claude-plugins-official 1. DM your bot on Discord — if it doesn't respond, make sure Claude Code is running with `--channels` from the previous step 2. The bot replies with a **pairing code** 3. Back in Claude Code: + ``` /discord:access pair /discord:access policy allowlist @@ -701,7 +703,7 @@ Or persist them in `.claude/settings.json`: > **Note:** The exact tool names depend on how you named your MCP servers. Run `/mcp` in Claude Code to see your server names, then match them here. The `__*` wildcard approves all tools from that server. -If you're using the [Dynamic Loop Timing](#dynamic-loop-timing) feature from the skill, also add `CronCreate` and `CronDelete`. 
+If you're using the Dynamic Loop Timing feature from the skill, also add `CronCreate` and `CronDelete`. ### 6.5 Test Before You Walk Away @@ -762,11 +764,13 @@ That's it. Claude will now check in every 30 minutes and decide if you need anyt This is where Life Engine becomes unique to you. Here's the progression: ### Week 1: Calendar + Telegram (Start Here) + - Morning briefing with today's events - Pre-meeting prep from Open Brain - That's it. Keep it simple. ### Week 2: Add Habits + Tell Claude: > "Add a morning jog habit to my Life Engine. Remind me at 7am and ask me to confirm when I'm done." @@ -776,6 +780,7 @@ Claude will: 3. Log completions when you reply ### Week 3: Add Check-ins + Tell Claude: > "Add a midday mood check-in. Just ask me how I'm feeling and log it." @@ -785,6 +790,7 @@ Claude will: 3. Include mood trends in evening summaries ### Week 4: First Self-Improvement Cycle + After 7 days of data, Claude reviews its own performance: - Which messages did you respond to? - Which ones did you ignore? @@ -793,6 +799,7 @@ After 7 days of data, Claude reviews its own performance: It sends you a suggestion via Telegram. You approve or reject. The skill evolves. ### Beyond: It's Yours + Over weeks and months, your Life Engine accumulates: - A log of every briefing it sent - Your habit completion streaks @@ -824,15 +831,19 @@ No two Life Engines look the same. Yours adapts to your schedule, your habits, y ## Going Further ### Video Briefings with Remotion + Instead of text, render a short video summary using [Remotion](https://www.remotion.dev/). Claude can generate a Remotion composition from the briefing data and send the rendered video via the Telegram channel's `reply` tool (which supports file attachments up to 50MB). ### Multi-Person Households + Combine with the [Family Calendar Extension](../../extensions/family-calendar/) to track multiple family members' schedules and send briefings relevant to the whole household. 
### Professional CRM Integration + Combine with the [Professional CRM Extension](../../extensions/professional-crm/) to automatically pull contact history and opportunity status into pre-meeting briefings. ### Voice Briefings + Use ElevenLabs or another TTS API to convert briefings to audio. Send voice messages via Telegram instead of text — perfect for when you're driving. --- diff --git a/recipes/life-engine/life-engine-skill.md b/recipes/life-engine/life-engine-skill.md index d40ab4d4..067829de 100755 --- a/recipes/life-engine/life-engine-skill.md +++ b/recipes/life-engine/life-engine-skill.md @@ -25,6 +25,7 @@ Messages arrive as ` 45 min away) - Send a mood/energy check-in prompt via `reply` - When the user replies (arrives as a `` event), `react` with 👍 and log to `life_engine_checkins` ### Afternoon (2:00 PM – 5:00 PM) + **Action:** Pre-meeting prep (same logic as above) OR afternoon update - If meetings coming up, do meeting prep - If afternoon is clear, surface any relevant Open Brain thoughts or pending follow-ups ### Evening (5:00 PM – 7:00 PM) + **Action:** Day summary (if not already sent today) - Count today's calendar events - Query `life_engine_habit_log` for today's completions @@ -60,6 +65,7 @@ Messages arrive as `=3.1 ``` @@ -230,9 +231,11 @@ pip install openpyxl>=3.1 Your export may use different sheet names. Open the file in a spreadsheet app and check the sheet tabs. The script looks for exact names "Conversations" and "Memory". **"OPENROUTER_API_KEY environment variable required"** + ```bash export OPENROUTER_API_KEY="sk-or-v1-your-key" ``` + Or use `--model ollama` for local summarization (embeddings still need OpenRouter). **Summarization returns empty thoughts** @@ -240,6 +243,7 @@ Some Q&A pairs are too simple (e.g., "what time is it?"). This is expected — t **"Failed to generate embedding"** Check your OpenRouter API key has credits and access to `text-embedding-3-small`. 
Test with: + ```bash curl https://openrouter.ai/api/v1/embeddings \ -H "Authorization: Bearer $OPENROUTER_API_KEY" \ diff --git a/recipes/schema-aware-routing/README.md b/recipes/schema-aware-routing/README.md index 0f1ded9c..8b5e20a6 100644 --- a/recipes/schema-aware-routing/README.md +++ b/recipes/schema-aware-routing/README.md @@ -10,7 +10,6 @@ - A pattern for using LLM-extracted metadata to route unstructured text into the correct database tables automatically. One input message becomes writes to four different tables — `thoughts`, `people`, `interactions`, and `action_items` — based entirely on what the LLM finds in the text. > [!NOTE] diff --git a/recipes/x-twitter-import/README.md b/recipes/x-twitter-import/README.md index 84d919d7..c96dc70a 100644 --- a/recipes/x-twitter-import/README.md +++ b/recipes/x-twitter-import/README.md @@ -51,12 +51,14 @@ FROM OPENROUTER - You should see a `data/` folder containing `tweets.js`, `direct-messages.js`, etc. 2. **Copy this recipe folder** and install dependencies: + ```bash cd x-twitter-import npm install ``` 3. **Create `.env`** with your credentials (see `.env.example`): + ```env SUPABASE_URL=https://your-project.supabase.co SUPABASE_SERVICE_ROLE_KEY=your-service-role-key @@ -64,17 +66,20 @@ FROM OPENROUTER ``` 4. **Preview what will be imported** (dry run): + ```bash node import-x-twitter.mjs /path/to/twitter-export --dry-run ``` 5. **Import specific types only** (optional): + ```bash node import-x-twitter.mjs /path/to/twitter-export --types tweets node import-x-twitter.mjs /path/to/twitter-export --types dms,grok ``` 6. **Run the full import:** + ```bash node import-x-twitter.mjs /path/to/twitter-export ```