9 changes: 6 additions & 3 deletions docs/01-getting-started.md
@@ -239,9 +239,7 @@ You'll land on the **"Publishable and secret API keys"** tab. Copy these into your credential tracker:
- 🔖 **Secret key** — Scroll down to the **"Secret keys"** section on the same page. You'll see a `default` key. Click the copy button to copy it. (You can also click **"+ New secret key"** to create a dedicated one named `open-brain` — this makes it easier to revoke later without affecting other services, but using the default is fine too.)

> [!WARNING]
> Treat the Secret key like a password. Anyone with it has full access to your data. The "Publishable key" at the top of the page is safe to expose publicly — you don't need it for this setup. You may also see a **"Legacy anon, service_role API keys"** tab — those are the old-style JWT keys. You don't need them. Everything in this guide uses the new key format.

✅ **Done when:** Your credential tracker has both **Project URL** and **Secret key** filled in.
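
If you keep the credential tracker as a local file, one possible shape is an `.env`-style file. This is a sketch under assumptions: the variable names and placeholder values are illustrative, not a format the guide mandates.

```shell
# Illustrative credential tracker. Placeholder values only; never commit real keys.
# Variable names here are assumptions, not required by the guide.
export SUPABASE_PROJECT_URL="https://your-project-ref.supabase.co"
export SUPABASE_SECRET_KEY="sb_secret_replace_me"

# Quick sanity check that both values are filled in:
test -n "$SUPABASE_PROJECT_URL" && test -n "$SUPABASE_SECRET_KEY" && echo "tracker complete"
```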

@@ -289,6 +287,8 @@ Copy the output — it'll look something like `a3f8b2c1d4e5...` (64 characters).
> [!WARNING]
> Copy and paste the command for **your operating system only**. The Mac command won't work on Windows and vice versa.

<!-- -->

> [!IMPORTANT]
> This is your **one access key for all of Open Brain** — core setup and every extension you add later. Save it somewhere permanent. Never generate a new one unless you want to replace it for ALL deployed functions.
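
The per-OS commands themselves are collapsed in this diff. For reference, a typical way to produce a 64-character hex key of this shape on macOS or Linux is shown below; this is a sketch, not necessarily the exact command the guide uses.

```shell
# Emits 32 random bytes as 64 hex characters, e.g. a3f8b2c1d4e5...
# Typical macOS/Linux form; the guide's own per-OS commands may differ.
openssl rand -hex 32
```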

@@ -419,9 +419,12 @@ supabase secrets set OPENROUTER_API_KEY=your-openrouter-key-here
> [!CAUTION]
> Make sure the access key you set here **exactly matches** what you saved in your credential tracker. If they don't match, you'll get 401 errors when connecting your AI.

<!-- -->

> **If you ever rotate your OpenRouter key:** you must re-run the `supabase secrets set` command above with the new key, AND update any local `.env` files that reference it. The edge function reads from Supabase secrets at runtime — updating the key on openrouter.ai alone won't propagate here. See the [FAQ on key rotation](03-faq.md#api-key-rotation) for the full checklist.
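
The local half of that rotation can be scripted. A sketch, with placeholder path and key values; the `supabase` CLI step is shown as a comment because it needs a linked project:

```shell
# Step 1 (needs the Supabase CLI and a linked project) -- shown, not run here:
#   supabase secrets set OPENROUTER_API_KEY="$NEW_KEY"

# Step 2: update a local .env that still references the old key.
# The path and key values below are placeholders for illustration.
NEW_KEY="sk-or-v1-new-key-placeholder"
printf 'OPENROUTER_API_KEY=sk-or-v1-old-key\n' > /tmp/openbrain.env
sed -i.bak "s|^OPENROUTER_API_KEY=.*|OPENROUTER_API_KEY=${NEW_KEY}|" /tmp/openbrain.env
grep '^OPENROUTER_API_KEY=' /tmp/openbrain.env   # now shows the new key
```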

### Create the Function

![6.6](https://img.shields.io/badge/6.6-Download_the_Server_Files-555?style=for-the-badge&labelColor=1E88E5)

Three commands. Run them one at a time, in order:
1 change: 1 addition & 0 deletions docs/03-faq.md
@@ -173,6 +173,7 @@ When you generate a new key on openrouter.ai/keys, the old key is revoked immediately.
**Places your OpenRouter key lives (update ALL of them):**

1. **Supabase Edge Function secrets** — This is the most common one to miss. Your MCP server reads the key from here at runtime.

```bash
supabase secrets set OPENROUTER_API_KEY=sk-or-v1-your-new-key
```
9 changes: 9 additions & 0 deletions docs/05-tool-audit.md
@@ -81,6 +81,7 @@ Once you've identified bloat, here are the patterns for consolidating.
Instead of 5 separate tools per table, expose one tool with an `action` parameter:

**Before (5 tools):**

```
create_recipe
get_recipe
@@ -90,6 +91,7 @@ list_recipes
```

**After (1 tool):**

```
manage_recipe
action: "create" | "read" | "update" | "delete" | "list"
@@ -107,6 +109,7 @@ manage_recipe
A gentler consolidation that preserves clear intent:

**Before (5 tools):**

```
create_recipe
get_recipe
@@ -116,6 +119,7 @@ search_recipes
```

**After (2 tools):**

```
save_recipe — creates or updates (upsert pattern)
query_recipes — search, filter, get by ID, list all
@@ -132,6 +136,7 @@ This maps to how people actually talk to their AI: "save this" or "find that." T
For tables with similar schemas (all your Open Brain extension tables follow the same `user_id` + timestamps + domain fields pattern), you can go further:

**Before (20+ tools across 4 extensions):**

```
add_household_item, search_household_items, get_item_details,
add_vendor, list_vendors,
@@ -140,6 +145,7 @@ search_maintenance_history, add_family_member, ...
```

**After (2–3 tools):**

```
save_entity
entity_type: "household_item" | "vendor" | "maintenance_task" | ...
@@ -177,6 +183,7 @@ Merging tools reduces count within a server. Scoping splits tools across servers
Most Open Brain users' workflows fall into three modes:

#### Capture server (write-heavy)

**When you use it:** Quick capture moments — jotting down a thought, logging a contact interaction, saving a recipe.

**Tools to include:**
@@ -189,6 +196,7 @@ Most Open Brain users' workflows fall into three modes:
**Context cost:** ~5–8 tools, ~1,500–3,000 tokens.

#### Query server (read-heavy)

**When you use it:** Research, recall, weekly reviews, planning sessions — any time you're pulling information out rather than putting it in.

**Tools to include:**
@@ -202,6 +210,7 @@ Most Open Brain users' workflows fall into three modes:
**Context cost:** ~8–12 tools, ~3,000–5,000 tokens.

#### Admin server (rarely used)

**When you use it:** Occasional maintenance — bulk updates, deletions, schema changes, data cleanup.

**Tools to include:**
4 changes: 4 additions & 0 deletions integrations/slack-capture/README.md
@@ -240,6 +240,8 @@ Replace the values with:

> SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY are automatically available inside Edge Functions — you don't need to set them.

<!-- -->

> **If you ever rotate your OpenRouter key:** you must re-run `supabase secrets set OPENROUTER_API_KEY=...` with the new key. This Edge Function reads the key from Supabase secrets at runtime — updating it on openrouter.ai alone won't propagate here. See the [FAQ on key rotation](../../docs/03-faq.md#api-key-rotation) for the full checklist.

### Deploy
@@ -342,6 +344,8 @@ You now have a Slack channel that acts as a direct write path into your Open Brain.

This is one of many possible capture interfaces. Your Open Brain MCP server also includes a `capture_thought` tool, which means any MCP-connected AI (Claude Desktop, ChatGPT, Claude Code, Cursor) can write directly to your brain without switching apps. Slack is just the dedicated inbox.

Before adding more MCP-powered capture paths, review the [MCP Tool Audit & Optimization Guide](../../docs/05-tool-audit.md) so your tool surface stays intentional and manageable.

---

*Built by Nate B. Jones — part of the [Open Brain project](https://github.com/NateBJones-Projects/OB1)*
126 changes: 126 additions & 0 deletions recipes/brain-health-monitoring/README.md
@@ -0,0 +1,126 @@
# Brain Health Monitoring

> SQL views and runbook for monitoring source volumes, enrichment gaps, ingestion pipeline health, stalled queues, and knowledge graph coverage.

## What It Does

Adds 8 monitoring views to your Open Brain database that answer the most common operational questions:

| View | What It Shows |
|------|---------------|
| `ops_source_volume_24h` | Thought counts per source in the last 24 hours |
| `ops_recent_thoughts` | Latest thoughts with type, source, enrichment status, and preview |
| `ops_enrichment_gaps` | Thoughts that haven't been enriched yet |
| `ops_type_distribution` | Type breakdown (all-time, 7-day, 24-hour windows) |
| `ops_sensitivity_distribution` | Sensitivity tier breakdown |
| `ops_ingestion_summary` | Ingestion job status and counts (requires smart-ingest-tables) |
| `ops_stalled_entity_queue` | Queue items stuck or permanently failed (requires knowledge-graph) |
| `ops_graph_coverage` | Entity extraction progress and coverage percentage (requires knowledge-graph) |

Views 1-5 work with the base enhanced thoughts schema. Views 6-8 require optional schemas and will error if those tables don't exist — run only the views that match your installed schemas.

## Prerequisites

- Working Open Brain setup ([guide](../../docs/01-getting-started.md))
- **Enhanced thoughts schema** applied — install `schemas/enhanced-thoughts` (required for all views)
- Optional: `schemas/smart-ingest-tables` for the ingestion summary view
- Optional: `schemas/knowledge-graph` for queue and graph coverage views

## Steps

1. Review which monitoring views apply to your installed schemas.
2. Run `ops-views.sql` in the Supabase SQL Editor.
3. Verify the `ops_*` views were created successfully.
4. Query the views to establish a baseline health check.

### 1. Review the SQL File

Open `ops-views.sql` and check which views apply to your setup:

- **Views 1-5** (source volume, recent thoughts, enrichment gaps, type/sensitivity distribution): Work with any Open Brain install that has the enhanced thoughts schema.
- **View 6** (ingestion summary): Requires the `ingestion_jobs` table from `schemas/smart-ingest-tables`.
- **Views 7-8** (stalled queue, graph coverage): Require the `entity_extraction_queue` table from `schemas/knowledge-graph`.

If you haven't installed the optional schemas, comment out views 6-8 before running.

### 2. Run the SQL

In the Supabase SQL Editor, paste the contents of `ops-views.sql` and execute. All statements use `CREATE OR REPLACE VIEW`, so running multiple times is safe.

```bash
# Or via psql:
psql "$DATABASE_URL" -f ops-views.sql
```

### 3. Verify Views Exist

```sql
SELECT table_name
FROM information_schema.views
WHERE table_schema = 'public'
AND table_name LIKE 'ops_%'
ORDER BY table_name;
```

You should see between 5 and 8 views depending on which schemas are installed.

### 4. Run Your First Health Check

```sql
-- How many thoughts arrived in the last 24 hours, by source?
SELECT * FROM ops_source_volume_24h;

-- How many thoughts are waiting for enrichment?
SELECT count(*) AS unenriched FROM ops_enrichment_gaps;

-- What's the type distribution?
SELECT * FROM ops_type_distribution;
```

## Runbook: What "Healthy" Looks Like

### Fresh Install (< 100 thoughts)

- `ops_source_volume_24h`: 0-10 thoughts, mostly from `mcp` or `rest_api`
- `ops_enrichment_gaps`: May show all thoughts if enrichment hasn't run yet — this is normal
- `ops_type_distribution`: Mostly `idea` (default type before enrichment)
- `ops_sensitivity_distribution`: All `standard` unless you've captured sensitive content

### Established Brain (1000+ thoughts)

- `ops_source_volume_24h`: Regular flow from expected sources. If a source drops to 0, check the capture pipeline.
- `ops_enrichment_gaps`: Should be near 0 if the enrichment pipeline is active. A growing backlog means enrichment is stalled.
- `ops_type_distribution`: Diverse types across `idea`, `decision`, `lesson`, `reference`, `person_note`, etc. If everything is `idea`, the classifier may not be running.
- `ops_sensitivity_distribution`: Mostly `standard` with some `personal`. A spike in `restricted` is worth investigating.
- `ops_ingestion_summary`: Mostly `complete` jobs. `failed` jobs need error investigation.
- `ops_graph_coverage`: `coverage_pct` should climb toward 100% over time. Stalled at a low percentage means the entity worker isn't running.
- `ops_stalled_entity_queue`: Should be empty. Items here need manual intervention (reset `processing` items, investigate `failed` items).

### Common Remediation Actions

| Symptom | Action |
|---------|--------|
| Source volume dropped to 0 | Check the capture integration (MCP server, REST API, webhook) |
| Large enrichment gap | Run the thought enrichment pipeline (`recipes/thought-enrichment`) |
| All types are "idea" | Verify the LLM classifier is configured (`OPENROUTER_API_KEY` set) |
| Stalled queue items | Reset with: `UPDATE entity_extraction_queue SET status = 'pending' WHERE status = 'processing' AND started_at < now() - interval '10 minutes'` |
| Failed queue items | Check `last_error` column. Common: LLM rate limits, empty content |
| Low graph coverage | Run the entity extraction worker (`integrations/entity-extraction-worker`) |

## Expected Outcome

After running the SQL, you should be able to query any `ops_*` view from the Supabase SQL Editor, your dashboard, or the REST API to get a real-time picture of your brain's health. These views are also available through PostgREST if you need to query them programmatically.
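
As a sketch of the programmatic path (the project URL, key variable, and chosen view are placeholders; the `/rest/v1/<name>` path follows Supabase's PostgREST convention):

```shell
# Build the REST endpoint for a monitoring view (placeholder project URL).
SUPABASE_URL="https://your-project-ref.supabase.co"
VIEW="ops_source_volume_24h"
ENDPOINT="$SUPABASE_URL/rest/v1/$VIEW"
echo "$ENDPOINT"

# Against a real project, fetch it with your secret key (not run here):
#   curl -s "$ENDPOINT" -H "apikey: $SUPABASE_SECRET_KEY" \
#        -H "Authorization: Bearer $SUPABASE_SECRET_KEY"
```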

## Troubleshooting

**"relation ops_ingestion_summary does not exist"**
The `ingestion_jobs` table hasn't been created. Install `schemas/smart-ingest-tables` first, or comment out view 6 in the SQL file.

**"relation entity_extraction_queue does not exist"**
The knowledge graph schema hasn't been applied. Install `schemas/knowledge-graph` first, or comment out views 7-8.

**Views return empty results**
This is normal for a fresh install with no thoughts. Capture a few thoughts first, then query the views.

**Permission denied on a view**
Ensure the GRANT statements at the end of the SQL file executed successfully. Re-run them if needed.
17 changes: 17 additions & 0 deletions recipes/brain-health-monitoring/metadata.json
@@ -0,0 +1,17 @@
{
"name": "Brain Health Monitoring",
"description": "SQL views and runbook for monitoring source volumes, enrichment gaps, ingestion pipeline health, stalled queues, and knowledge graph coverage.",
"category": "recipes",
"author": {
"name": "Alan Shurafa",
"github": "alanshurafa"
},
"version": "1.0.0",
"requires": {
"open_brain": true,
"tools": ["Supabase SQL Editor or psql"]
},
"tags": ["monitoring", "ops", "health", "observability", "views"],
"difficulty": "beginner",
"estimated_time": "15 minutes"
}