
AI Workflow Builder Template (KeeperHub Fork)

AI Agents Code Policy

  • No Emojis: NEVER use emojis in any code, documentation, README files, PR descriptions, commit messages, or any other text output. This rule applies to ALL generated content without exception.
  • No File Structure: Do not include file/folder structure diagrams in README files
  • No Random Documentation: Do not create markdown documentation files unless explicitly requested by the user. This includes integration guides, feature documentation, or any other .md files
  • docs/ is public-facing: The docs/ directory is published to docs.keeperhub.com. Never put internal specs, notes, or working documents there. Internal documentation and specs go in specs/
  • No internal references in public docs: In docs/ and docs-site/content/ (anything published to docs.keeperhub.com), NEVER mention phase numbers (e.g. Phase 33), internal version tags (e.g. v1.8, v0.1.4), Linear ticket IDs (KEEP-XXX), PR numbers (PR #917), or internal branch names. Write about capabilities in terms of what's supported today vs not yet supported. Internal tracking belongs in .planning/, specs/, commit messages, and Linear — not on the public docs site.
  • No co-authored with Claude in PR descriptions and git commits
  • Do not git push or create GitHub PRs without the user's confirmation
  • Do not leave code comments that summarize the user's prompt
  • PR titles must follow conventional commit format: <type>: <description> or <type>(scope): <description>. Allowed types: feat, fix, hotfix, chore, docs, refactor, test, ci, build, perf, style, breaking, release. This is enforced by the pr-title-check workflow on PRs targeting staging.
  • Use kh CLI and KeeperHub MCP tools for all KeeperHub API interactions: NEVER use raw curl or fetch against KeeperHub endpoints. Use MCP tools (mcp__keeperhub-dev__*, mcp__keeperhub-staging__*, mcp__keeperhub__*) for the target environment, or the kh CLI which handles auth and CF Access headers automatically via ~/.config/kh/hosts.yml.

Code Quality: Lint and Type Checking

Before writing or editing any code, review the lint configuration to write compliant code:

  1. Check biome.jsonc for project-specific lint rules and exclusions
  2. Check .cursor/rules/ultracite.mdc for detailed coding standards

Key Ultracite/Biome Rules

  • Use explicit types for function parameters and return values
  • Prefer unknown over any
  • Use for...of loops over .forEach() and indexed loops
  • Use optional chaining (?.) and nullish coalescing (??)
  • Use const by default, let only when reassignment is needed
  • Always await promises in async functions
  • Remove console.log, debugger, and alert from production code
  • Use Next.js <Image> component instead of <img> tags
  • Add rel="noopener" when using target="_blank"
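Several of these rules can be sketched together in one place. The helpers below are hypothetical, written only to illustrate the rules; they are not project code:

```typescript
// Illustrative only -- these helpers are hypothetical, not project code.
interface User {
  name: string;
  settings?: { theme?: string };
}

// Explicit parameter and return types; const by default.
function collectThemes(users: User[]): string[] {
  const themes: string[] = [];
  // for...of instead of .forEach() or an indexed loop.
  for (const user of users) {
    // Optional chaining + nullish coalescing instead of nested checks.
    themes.push(user.settings?.theme ?? "default");
  }
  return themes;
}

// Prefer unknown over any, then narrow explicitly.
function parseCount(input: unknown): number {
  return typeof input === "number" ? input : 0;
}
```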

Before Every Commit

Run these checks and fix any issues before committing:

pnpm check      # Lint check (Ultracite/Biome)
pnpm type-check # TypeScript validation
pnpm fix        # Auto-fix lint issues (run if check fails)

If pnpm check or pnpm type-check fails, fix the issues before committing. Do not commit code with lint or type errors.

Lint Output Caching

When lint/type-check commands run, their output is saved to gitignored files:

  • .claude/lint-output.txt - Output from pnpm check
  • .claude/typecheck-output.txt - Output from pnpm type-check

Workflow for fixing errors:

  1. Run pnpm check or pnpm type-check once
  2. Read .claude/lint-output.txt or .claude/typecheck-output.txt for errors
  3. Fix the errors in code
  4. Re-run the check command only when you need fresh output

Do NOT repeatedly run lint commands to check progress. Read the cached output file instead - this saves time and context.

Claude Hooks (Automatic Checks)

This project has Claude Code hooks configured in .claude/settings.json:

Pre-Edit Lint Context (.claude/hooks/pre-edit-lint-context.sh):

  • Fires before Edit/Write on .ts/.tsx/.js/.jsx files
  • Injects key Ultracite/Biome lint rules into context
  • Rationale: the upfront token cost is higher, but writing correct code the first time saves overall context compared with the expensive cycle: write code → run lint → see errors → fix partially → re-run lint → repeat

Pre-Commit Checks (.claude/hooks/pre-commit-checks.sh):

  • Detects git commit commands
  • Runs pnpm check (lint) and pnpm type-check (TypeScript)
  • Saves output to .claude/*.txt files for reading without re-running
  • Blocks the commit (exit code 2) if either fails

Lint Ignore Comments

Only use lint ignore comments when absolutely necessary. Valid reasons:

  • Third-party library types are incorrect and cannot be fixed
  • Generated code that cannot be modified
  • Rare edge cases where the rule genuinely does not apply

Invalid reasons (fix the code instead):

  • "It works fine"
  • "The rule is too strict"
  • "It's faster to ignore than fix"

When you must use an ignore comment:

  1. Use the most specific ignore possible (target the exact rule, not all rules)
  2. Add a brief comment explaining why the ignore is necessary
  3. Example:
    // biome-ignore lint/suspicious/noExplicitAny: third-party SDK types are incomplete
    const result = externalLib.call() as any;

Design System

Before writing or modifying any UI code, read the relevant spec file in specs/design-system/. Use only tokens from specs/design-system/tokens.css. Run node scripts/token-audit.js before committing UI changes; the audit must report zero errors.

Key Rules

  1. Read the spec first: Check specs/design-system/foundations/ for color, spacing, typography, radius, elevation, and motion tokens. Check specs/design-system/components/ for component-specific specs.
  2. Use tokens, not raw values: Never use hardcoded hex colors, rgb/rgba values, or arbitrary pixel values. Reference semantic tokens from tokens.css.
  3. Tailwind classes over arbitrary values: Use bg-primary, text-muted-foreground, border-border instead of bg-[#xxx], text-[#xxx].
  4. Hub-specific dark surfaces: Use --color-hub-card, --color-hub-icon-bg, etc. for protocol/hub pages.
  5. Layout constants: Use --header-height, --flyout-width, --sidebar-strip-width instead of top-[60px], w-[280px], w-[32px].
  6. Token reference: See specs/design-system/tokens/token-reference.md for the complete token map with usage guidance.
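As a minimal sketch of rules 2 and 3 in practice (the hex values below are made up for the example; neither string is project code):

```typescript
// Passes the token audit: semantic Tailwind classes backed by tokens.css.
const compliantClasses: string = "bg-primary text-muted-foreground border-border";

// Flagged by token-audit as errors: arbitrary hardcoded color values.
const flaggedClasses: string = "bg-[#1a1a2e] text-[#888888]";
```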

Audit Script

node scripts/token-audit.js         # Full scan (errors + warnings)
node scripts/token-audit.js --quiet # Errors only

Exits with code 1 if errors are found. Errors are hardcoded colors in CSS and arbitrary Tailwind color classes. Warnings are hardcoded spacing, font sizes, z-index, and shadows.

Exempt Files

  • app/api/og/generate-og.tsx -- server-rendered OG images, not interactive UI
  • lib/monaco-theme.ts -- editor syntax highlighting, uses Monaco's theming API
  • docs-site/ -- separate documentation site

Tech Stack

  • Framework: Next.js 16 (App Router)
  • Language: TypeScript 5
  • UI: React 19, shadcn/ui, Radix UI, Tailwind CSS 4
  • Database: PostgreSQL + Drizzle ORM
  • Testing: Vitest (unit/integration), Playwright (E2E)
  • AI: Vercel AI SDK with Anthropic/OpenAI
  • Workflow: Workflow DevKit 4.1.0-beta.51
  • Package Manager: pnpm

Project Structure

app/              - Next.js app directory (API routes, pages)
components/       - UI components
lib/              - Core utilities, DB schemas, middleware
plugins/          - Workflow plugins (web3, discord, sendgrid, etc.)
scripts/          - Build/migration scripts
tests/            - Test files
specs/            - Internal specs and design system
docs/             - Public-facing docs (published to docs.keeperhub.com)

Common Commands

pnpm dev                    # Start dev server
pnpm build                  # Production build
pnpm type-check             # TypeScript check
pnpm check / pnpm fix       # Lint

pnpm db:push                # Push schema changes (local dev only)
pnpm db:migrate             # Run file-based migrations
pnpm db:studio              # Open Drizzle Studio

pnpm drizzle-kit generate   # Generate migration file after schema changes

pnpm discover-plugins       # Scan and register plugins
pnpm create-plugin          # Create new plugin

pnpm test                   # All tests
pnpm test:e2e               # E2E tests

Database Migrations

The build script (scripts/migrate-prod.ts) runs pnpm db:migrate (file-based migrations), not db:push. Migration state is tracked in the drizzle.__drizzle_migrations table (schema drizzle, not public). When adding or modifying database tables:

  1. Update the Drizzle schema (e.g., lib/db/schema-oauth.ts)
  2. Run pnpm drizzle-kit generate to create a migration file in drizzle/
  3. Ensure the when timestamp in drizzle/meta/_journal.json is monotonically increasing (each entry must be greater than the previous) -- out-of-order timestamps cause db:migrate to fail silently
  4. Commit the migration file, snapshot, and journal together with the schema change
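The _journal.json referenced in step 3 looks roughly like this; the entry values are illustrative, not real migrations:

```json
{
  "version": "7",
  "dialect": "postgresql",
  "entries": [
    { "idx": 0, "version": "7", "when": 1735000000000, "tag": "0000_initial", "breakpoints": true },
    { "idx": 1, "version": "7", "when": 1735000100000, "tag": "0001_add_oauth", "breakpoints": true }
  ]
}
```

Each entry's when must be strictly greater than the previous entry's.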

Without the migration file, the table will not be created on deploy and you will get relation does not exist errors in staging/production.

If your local dev DB was bootstrapped via pnpm db:push (instead of file migrations), pnpm db:migrate will fail on relation already exists because the journal table drizzle.__drizzle_migrations is empty. Run pnpm tsx scripts/backfill-drizzle-migrations.ts once to mark the existing migrations as applied without re-running their SQL — subsequent pnpm db:migrate calls will then cleanly apply only the new files.

Note: a shell-set DATABASE_URL overrides the value in .env (drizzle.config.ts uses dotenv without override: true). If pnpm db:migrate connects to the wrong DB or port, run unset DATABASE_URL first or prefix the command with the right value.

Branch Strategy

  • Main branch: staging
  • PRs target: staging (always use staging as base branch when creating PRs)
  • Feature branches: feature/KEEP-XXXX-description

Plugin Development

Context: Building Web3 integrations for the workflow system. Plugins go in plugins/.

Current Plugins: web3, webhook, discord, sendgrid

When creating new plugins:

  1. Check existing plugins: ls plugins/
  2. Pick a recent, similar plugin as reference
  3. Copy its exact structure and pattern
  4. Keep it absolutely minimal - no extra features, no over-engineering

Structure: Each plugin has index.ts (definition), icon.tsx, steps/ (actions), optional credentials.ts and test.ts.
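As a hypothetical sketch of that layout -- the type and field names here are invented for illustration, and the real definition shape must be copied from an existing plugin in plugins/:

```typescript
// Hypothetical plugin index.ts layout. These types are NOT the actual
// KeeperHub plugin API; copy a real plugin in plugins/ for the true shape.
interface PluginStep {
  id: string;
  label: string;
  run: (input: Record<string, unknown>) => Promise<unknown>;
}

interface PluginDefinition {
  name: string;
  steps: PluginStep[];
}

// One step (action) with an explicit id, label, and async handler.
const sendMessage: PluginStep = {
  id: "send-message",
  label: "Send Message",
  run: async (input) => ({ sent: true, channel: input.channel ?? "general" }),
};

export const plugin: PluginDefinition = {
  name: "example",
  steps: [sendMessage],
};
```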


MCP Schemas Endpoint

Files:

  • app/api/mcp/schemas/route.ts

This endpoint serves workflow schemas to the KeeperHub MCP server. It's the source of truth for what actions, triggers, and capabilities are available.

What's Dynamic (no maintenance needed)

  • Plugin Actions: Pulled from getAllIntegrations() registry - add plugins normally and they appear automatically
  • Chains: Pulled from database chains table - add chains via DB and they appear automatically
  • Platform Capabilities: Derived by scanning plugin field types (e.g., abi-with-auto-fetch → proxy support)

What's Inline (update when changed)

These are defined directly in the file because they rarely change and aren't in a registry:

  • SYSTEM_ACTIONS -- update when adding a new system action (Condition, HTTP Request, Database Query)
  • TRIGGERS -- update when adding a new trigger type (Manual, Schedule, Webhook, Event)
  • TEMPLATE_SYNTAX -- update if the template syntax {{@nodeId:Label.field}} changes
  • tips array -- update when adding guidance for AI workflow generation

How to Update

  1. New System Action: Add entry to SYSTEM_ACTIONS object, implement step in lib/steps/
  2. New Trigger: Add entry to TRIGGERS object, implement UI in components/workflow/config/trigger-config.tsx
  3. New Plugin: Just create the plugin normally in plugins/ - it's picked up automatically
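As a hedged illustration of what an inline SYSTEM_ACTIONS entry might look like -- the real structure lives in app/api/mcp/schemas/route.ts and may differ from this sketch:

```typescript
// Hypothetical entry shape -- check the route file for the actual structure.
const SYSTEM_ACTIONS = {
  condition: {
    label: "Condition",
    description: "Branch the workflow based on a boolean expression",
    fields: [{ name: "expression", type: "string", required: true }],
  },
} as const;
```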

Testing the Endpoint

# Get all schemas
curl http://localhost:3000/api/mcp/schemas

# Filter by category
curl http://localhost:3000/api/mcp/schemas?category=web3

# Without chains
curl http://localhost:3000/api/mcp/schemas?includeChains=false

Writing Playwright Tests: Discovery-First Workflow

Writing E2E tests requires understanding page structure before writing selectors. This project provides four complementary tools for page discovery.

Tool 1: Discovery CLI (pnpm discover)

Quick recon of any page. Produces structured reports Claude can read.

# Unauthenticated page
pnpm discover /

# Authenticated (uses persistent test user)
pnpm discover / --auth

# With numbered element overlays on screenshot
pnpm discover / --auth --highlight

# Multi-step exploration
pnpm discover / --auth --steps "click:button:has-text('New Workflow')" "probe:after-click"

Output goes to tests/e2e/playwright/.probes/<label>-<timestamp>/:

  • screenshot.png - full page screenshot
  • screenshot-highlighted.png - elements with numbered overlays (if --highlight)
  • elements.md - interactive elements table grouped by region (optimized for Claude)
  • report.json - full structured data
  • summary.txt - compact overview

Tool 2: Probe Function (in-test)

Drop probe() calls into any test to capture state at specific points:

import { probe, highlightElements } from "./utils/discover";

test("my test", async ({ page }) => {
  await page.goto("/");
  await probe(page, "initial"); // captures screenshot + element map

  await page.click('button:has-text("Sign In")');
  await probe(page, "dialog-open"); // captures new state after click

  // Read .probes/ output to understand what's on screen
});

Tool 3: Playwright MCP (direct browser control)

The Playwright MCP server (.mcp.json) gives Claude direct browser access. Use it for interactive exploration when the CLI isn't enough:

  • Navigate pages, click elements, fill forms
  • Take screenshots and read them
  • Evaluate JavaScript in the page

Combine with the discovery utilities: use MCP to navigate, then call getInteractiveElements() or getPageStructure() via page.evaluate for structured data.

Tool 4: Exploration Test Harness

tests/e2e/playwright/explore.test.ts is a scratchpad test designed for iterative exploration:

  1. Edit the exploration steps
  2. Run: pnpm test:e2e --grep "explore"
  3. Read probe outputs from .probes/
  4. Edit steps again based on findings
  5. Once page structure is understood, write the real test in a new file

Recommended Workflow for Writing New Tests

  1. Recon: Run pnpm discover <path> --auth --highlight to understand the page
  2. Read: Read the elements.md output to see available selectors
  3. Explore: If you need to interact (open dialogs, expand menus), use the explore harness or Playwright MCP
  4. Write: Create the real test file using the selectors and interaction patterns discovered
  5. Verify: Run the test, use probe() at failure points if it breaks

Key Selectors Reference

  • Sign In button: button:has-text("Sign In") (first)
  • Auth dialog: [role="dialog"]
  • Signup email: #signup-email
  • Signup password: #signup-password
  • OTP input: #otp
  • User menu: [data-testid="user-menu"]
  • Workflow canvas: [data-testid="workflow-canvas"]
  • Trigger node: .react-flow__node-trigger
  • Action grid: [data-testid="action-grid"]
  • Add Step button: button[name="Add Step"]
  • Toasts: [data-sonner-toast]
  • Org switcher: button[role="combobox"]

Existing Test Utilities

  • signUpAndVerify(page) from ./utils/auth -- full signup + OTP verification flow
  • signIn(page, email, pw) from ./utils/auth -- sign in with credentials
  • createWorkflow(page) from ./utils/workflow -- navigate + create a new workflow
  • addActionNode(page, label) from ./utils/workflow -- add an action to the canvas
  • probe(page, label) from ./utils/discover -- capture page state for analysis
  • highlightElements(page) from ./utils/discover -- add numbered overlays
  • getInteractiveElements(page) from ./utils/discover -- get a structured element list
  • getPageStructure(page) from ./utils/discover -- get page headings, landmarks, and forms
  • createTestWorkflow(email) from ./utils/db -- inject a workflow directly into the DB