
fix(subagents): improve reliability by requiring persistence checks and style matching #468

Open

psklarkins wants to merge 93 commits into obra:main from psklarkins:main

Conversation


@psklarkins psklarkins commented Feb 13, 2026

Summary

This PR improves the reliability of the subagent-driven development workflow by addressing key issues identified during recent sessions.

Changes:

  • Commit Verification: Added mandatory commit-persistence checks for both implementers and reviewers. This prevents agents from hallucinating successful commits when file modifications were made but not staged/committed.

  • Efficiency: Added an "IMMEDIATE ACTION: DO NOT PLAN" directive to the implementer prompt to prevent token waste on redundant planning steps.

  • Style Consistency: Added a "Style Check" requirement for implementers to read related files and mimic existing project patterns (e.g., event handler syntax) before writing code.

These changes significantly reduce the failure rate of subagent tasks and improve code quality consistency.
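
To illustrate the kind of persistence check the prompts now require (the check itself lives in the implementer/reviewer prompt text, which asks the agent to confirm via `git log -1`), here is a minimal Node sketch; the helper name and subject-fragment matching are assumptions, not this PR's code:

```js
// Hypothetical sketch only: mirrors the "verify the commit actually exists" step programmatically.
import { execFileSync } from 'node:child_process';

function verifyCommitPersisted(expectedSubjectFragment) {
  // `git log -1 --pretty=%s` prints the subject of the most recent commit.
  const lastSubject = execFileSync('git', ['log', '-1', '--pretty=%s'], { encoding: 'utf8' }).trim();
  if (!lastSubject.includes(expectedSubjectFragment)) {
    throw new Error(
      `Persistence check failed: last commit is "${lastSubject}", expected it to mention "${expectedSubjectFragment}".`
    );
  }
  return lastSubject;
}

// Example: an implementer reporting DONE would first confirm its commit landed.
verifyCommitPersisted('fix(subagents)');
```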

Summary by CodeRabbit

  • New Features

    • Visual Brainstorming companion (browser UI + local server, helper, CLI)
    • In-repo memory tooling: memorize/recall/snapshot CLIs and memory APIs
    • Workflow dashboard CLI and visualization utilities
    • Per-task Amplifier agent assignment and skill indexing
  • Improvements

    • Agent-aware workflows: review loops, session-start refinements, agent allocation
    • Cross-platform session hook handling and Windows runner
  • Documentation

    • Extensive new docs, agent mappings, plans, and updated release notes

psklarkins and others added 30 commits February 6, 2026 10:48
- spec-reviewer-prompt.md: dispatch test-coverage instead of general-purpose
- code-quality-reviewer-prompt.md: dispatch zen-architect instead of code-reviewer
- Fix AMPLIFIER-AGENTS.md path references ("superpowers plugin directory" not "repo root")
- Expand agent mapping from 11 to 30 agents across 4 categories:
  Core Development (11), Design (7), Knowledge & Analysis (8), Meta (4)

🤖 Generated with [Amplifier](https://github.com/microsoft/amplifier)

Co-Authored-By: Amplifier <240397093+microsoft-amplifier@users.noreply.github.com>
Create the initial server for the visual brainstorming companion:
- Express server with WebSocket support for browser communication
- File watcher (chokidar) to detect screen.html changes
- Auto-injects helper.js into served HTML for event capture
- Binds to localhost only (127.0.0.1) for security
- Outputs JSON events to stdout for Claude consumption
The spread operator order was causing incoming event types to overwrite
the user-event type marker.
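
The bug class is easy to reproduce; a minimal sketch follows. The field names are illustrative, except that the review comments below note the server now tags events with `source: "user-event"`:

```js
// Minimal sketch of the spread-order bug described above.
const incoming = { type: 'click', id: 'submit-btn' };

// Broken: the incoming `type` overwrites the marker because the spread comes last.
const broken = { type: 'user-event', ...incoming };   // { type: 'click', id: 'submit-btn' }

// Fixed: spread first, then apply the marker on a separate field so nothing can clobber it.
const fixed = { ...incoming, source: 'user-event' };  // { type: 'click', id: 'submit-btn', source: 'user-event' }

console.log(broken, fixed);
```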
obra and others added 20 commits February 8, 2026 16:50
Testing showed the model skipped scope assessment when it was a
separate step after "Understanding the idea." Inlining it as the
first thing in understanding ensures it fires before detailed questions.
Add architecture guidance and capability-aware escalation
Add project-level scope assessment to brainstorming pipeline
Claude Code spawns hook commands with shell:true + windowsHide:true,
but on Windows the execution chain cmd.exe -> bash.exe causes Git
Bash (MSYS2) to allocate its own console window, bypassing the hide
flag. This creates visible terminal windows that steal focus on every
SessionStart event (startup, resume, clear, compact).

The fix:
- Rename session-start.sh to session-start (no extension) so Claude
  Code's .sh auto-detection regex doesn't fire and prepend "bash"
- Restore run-hook.cmd polyglot wrapper to control bash invocation
  on Windows (tries known Git Bash paths, then PATH, then exits
  silently if no bash found)
- On Unix, the polyglot's shell portion runs the script directly

This avoids Claude Code's broken .sh auto-prepend, gives us control
over how bash is invoked on Windows, and gracefully handles missing
bash instead of erroring.

Addresses: obra#440, obra#414, obra#354, obra#417, obra#293
Upstream: anthropics/claude-code#14828
Merged 44 commits from obra/superpowers dev branch into our fork.
Key upstream additions:
- Project scope assessment in brainstorming pipeline
- Architecture guidance and capability-aware escalation
- Plan review loop with checkbox syntax
- Document review system with spec/plan reviewer prompts
- Visual brainstorm companion (browser-based mockups)
- Polyglot wrapper fix for Windows hook spawning
- File size/growth checks in code quality review

Conflict resolution strategy: kept all Amplifier agent additions
(AMPLIFIER-AGENTS.md, Agent: fields, specialist dispatch, session hub)
while incorporating upstream improvements (scope assessment, design
isolation guidance, review loops, visual companion).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
4 refinements based on practice review:

1. Fix AMPLIFIER-AGENTS.md path references - use ${CLAUDE_PLUGIN_ROOT}
   variable instead of vague "in the superpowers plugin directory"

2. Strengthen dispatch instructions - explicitly show Task tool call
   with subagent_type parameter matching the plan's Agent: field.
   Makes it unambiguous that agents should be dispatched by name.

3. Contextual review levels - replace mandatory two-stage review with:
   - Level 1 (self-review): simple 1-2 file tasks
   - Level 2 (spec compliance): standard multi-file tasks
   - Level 3 (full two-stage + security): complex/security tasks

4. Sequential context gathering in brainstorming - parallel tool calls
   caused silent failures. Steps now run sequentially with graceful
   degradation if any step fails.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Dispatch announcements: before every Task dispatch, output a visible
  status line showing agent name, model tier, task description, review
  level, and complexity. Never dispatch silently.

- Concrete model mapping: map abstract heuristics to actual Task tool
  model parameter values (haiku/sonnet/opus) with per-agent-type defaults:
  - haiku: modular-builder (simple), test-coverage (review), post-task-cleanup
  - sonnet: bug-hunter, database-architect, zen-architect (review)
  - opus: security-guardian

- Model upgrade path: haiku BLOCKED -> retry sonnet -> retry opus

- Updated example workflow showing announcement format with review
  levels, model tiers, and task progress (Task N/M).

- Applied to both subagent-driven-development and dispatching-parallel-agents.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Create MODEL-PROVIDERS.md as central reference mapping model tiers
(Fast/Balanced/Deep) to provider-specific models:
- Claude Code: haiku/sonnet/opus (Task tool model param)
- OpenCode/Gemini: Flash/Pro (/model command)

Update Model Selection tables in subagent-driven-development and
dispatching-parallel-agents skills to show both providers side-by-side.
Neither provider's settings are overridden — each environment uses its
native model names while the tier mapping provides the cross-reference.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Create MEMORY-WORKFLOW.md documenting when to use each of the three
memory systems:
- Auto-memory (MEMORY.md): working memory, always loaded
- Git-notes (knowledge_base): project decisions, patterns, glossary
- Episodic Memory: conversation recall, passive archival

Wire memory triggers into skills:
- brainstorming: recall decisions + glossary during context gathering
- writing-plans: recall decisions + patterns before planning
- subagent-driven-development: memorize decisions after all tasks complete

Seeded git-notes with 3 ADRs (model tiers, agent dispatch, review
levels) and 3 glossary terms (FuseCP, Amplifier, Superpowers) to
bootstrap the knowledge base.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai bot commented Feb 13, 2026

📝 Walkthrough


Adds git-notes-backed state and memory tooling, a visual Brainstorm Companion (server + browser helper), Amplifier agent integration and skill discovery/indexing, many CLI commands and tests, extensive documentation, and Windows-aware hook handling; includes schema validation and memory snapshot generation.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Repository config & hooks**<br>`.gitattributes`, `.gitignore`, `hooks/hooks.json`, `hooks/run-hook.cmd` | Enforce LF for the hook, add ignore patterns, switch SessionStart to a Windows-aware `run-hook.cmd` with bash detection. Review cross-platform hook invocation behavior. |
| **State schema & persistence**<br>`lib/state-schema.js`, `lib/git-notes-state.js` | Introduce `SCHEMA`, `validate()`, `deepMerge()`, `getState()`, `updateState()` persisted via git notes; check validation, deep-merge logic, and shell escaping when writing notes. |
| **Memory operations & CLI**<br>`lib/memory-ops.js`, `commands/memorize.js`, `commands/recall.js`, `commands/snapshot-memory.js`, `docs/memory/SNAPSHOT.md` | Add `queryMemory`/`appendToMemory` with schema-driven init/merge; CLI wrappers for memorize/recall and snapshot generation producing Markdown; verify path handling and error paths. |
| **Skill discovery & OpenCode plugin**<br>`lib/skills-core.js`, `lib/index-skills.js`, `.opencode/plugins/superpowers.js` | Parse and surface `semantic_tags`, add `indexSkills(dir, namespace)`, integrate multi-namespace indexing into OpenCode bootstrap content; review frontmatter parsing, deduplication, and injection into the system prompt. |
| **Brainstorm Visual Companion**<br>`lib/brainstorm-server/*` (`index.js`, `helper.js`, `frame-template.html`, `package.json`, `start-server.sh`, `stop-server.sh`, `wait-for-feedback.sh`) and tests `tests/brainstorm-server/*` | New Express+WebSocket server, file watcher, injected `helper.js`, UI frame template, session start/stop helpers, and tests. Validate injection, port binding, filesystem watch behavior, security of served content, and the helper API. |
| **Coordination & record tooling**<br>`lib/record-finding.js`, `commands/record-finding.js`, `commands/sync-context.js`, `commands/push-context.js` | CLI for recording findings and syncing/pushing git-notes refs; review argument validation, JSON normalization, and merge/update semantics. |
| **Visualization utilities & CLI**<br>`lib/visualize-workflow.js`, `commands/visualize.js`, `tests/coordination/test-visualization.js` | Worktree parsing, Mermaid graph and status-table generation, CLI to print a dashboard; check git parsing robustness and Markdown/Mermaid output. |
| **Amplifier & workflow docs**<br>`AMPLIFIER-AGENTS.md`, `MODEL-PROVIDERS.md`, `MEMORY-WORKFLOW.md`, `RELEASE-NOTES.md`, `README.md`, `docs/COMPREHENSIVE-SYSTEM-DOCS.md` | Large set of new/updated docs defining agent mappings, model tiers, memory taxonomy, and release notes; documentation-only but impacts workflows and conventions. |
| **Skill content & metadata**<br>`skills/*` (notable: `skills/brainstorming/SKILL.md`, `skills/writing-plans/SKILL.md`, `skills/subagent-driven-development/*`, many `SKILL.md` updates) | Extensive skill workflow updates to be Amplifier-aware, add `semantic_tags` and `recommended_model` frontmatter, review loops, agent allocation, and Visual Companion references. Review for metadata consistency. |
| **Tests: discovery, coordination, memory, brainstorm server, opencode**<br>`tests/*` (many new tests across discovery, coordination, memory, brainstorm-server, opencode, claude-code scripts) | Extensive new test coverage; ensure CI can run git-notes- and worktree-dependent tests and that test stubs/mocks are isolated. |
| **Path migrations**<br>`docs/superpowers/plans/...`, updates in many `tests/*` and prompt fixtures | Moved plan storage to `docs/superpowers/plans/` and updated numerous test fixtures and prompts; verify all references updated and tests still point to correct locations. |

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant Browser as Browser UI
    participant Server as Brainstorm Server
    participant Brainstorm as Brainstorming Skill
    participant Memory as Git-Notes Memory

    User->>Browser: open visual session / interact
    Browser->>Server: WebSocket user-event (click/input)
    Server->>Brainstorm: forward event (stdout / Task)
    Brainstorm->>Memory: getState() / appendToMemory()
    Memory-->>Brainstorm: state / confirmation
    Brainstorm-->>Server: start session / write screen file
    Server->>Browser: serve frame + injected helper.js

sequenceDiagram
    participant Plan as Writing-Plans Skill
    participant State as Git-Notes State
    participant Agent as Amplifier Agent
    participant Reviewer as Plan Reviewer

    Plan->>State: getState()
    State-->>Plan: context
    Plan->>Reviewer: dispatch plan chunk (per-chunk loop)
    Reviewer-->>Plan: Issues or Approved
    alt Approved
      Plan->>Agent: dispatch tasks (Agent field)
      Agent->>Agent: implement & self-review
      Agent-->>Plan: status (DONE / BLOCKED / NEEDS_CONTEXT)
      Plan->>State: updateState(execution metadata)
    else Issues
      Plan->>Plan: fix chunk and re-dispatch
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes


Poem

🐇 I hopped through notes and spun a plan,
A frame appeared for a visual span,
Memory tucked in git's soft keep,
Agents woke from a thoughtful sleep,
Hooks now speak Windows — hop, hop, hooray!

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 41.94%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and specifically describes the main change: adding persistence checks and style matching to improve reliability in subagent-driven development. |
| Merge Conflict Detection | ✅ Passed | No merge conflicts detected when merging into main. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In `@commands/sync-context.js`:
- Around line 10-41: The function syncContext is declared async but never uses
await and the top-level syncContext() call discards its returned Promise; remove
the unnecessary async modifier from the syncContext declaration so it becomes a
regular synchronous function (affecting the function named syncContext) and keep
the existing direct call syncContext(); alternatively, if you prefer to keep it
async, ensure the top-level invocation handles rejections by appending
.catch(...) to syncContext() — but prefer removing async to avoid returning a
Promise for the execSync-based implementation.

In `@docs/plans/2026-02-06-amplifier-integration.md`:
- Around line 639-727: Update the Implementer Subagent Prompt template in the
plans doc to match the current
skills/subagent-driven-development/implementer-prompt.md: insert the IMMEDIATE
ACTION section (with "DO NOT PLAN" and "EXECUTE IMMEDIATELY" directives), add
the upfront Style Check requirement to read 1–2 related files and mimic project
style before starting, and include the post-commit Verify Commit persistence
step requiring a `git log -1` check; update the template text and headings in
the snippet (the prompt block and "Before You Begin"/"Before Reporting Back"
sections) so they mirror the implementer-prompt.md content exactly.

In `@hooks/hooks.json`:
- Line 9: The command string in hooks.json uses ${CLAUDE_PLUGIN_ROOT} unquoted
so expanded paths with spaces will be split; update the "command" value to wrap
the variable in single quotes around the path portion (i.e., quote
${CLAUDE_PLUGIN_ROOT} when building the command that invokes run-hook.cmd
session-start) so the shell treats the entire path as one token; modify the
entry that currently references "${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd
session-start" to include single quotes around the variable portion (preserving
the rest of the command) to prevent word-splitting.

In `@lib/brainstorm-server/index.js`:
- Around line 74-78: The message handler registered on ws via ws.on('message',
...) currently calls JSON.parse directly and will throw on malformed client
input; wrap the parse and subsequent use in a try/catch (or use a small
safeParse helper) inside that handler to catch JSON parsing errors, log the
parse error and the raw payload, and return early (or close the socket if you
want to treat it as malicious) instead of letting the exception bubble and crash
the process; update the handler around JSON.parse(data.toString()) and the
console.log(...) usage accordingly.
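
A minimal sketch of that guard, assuming the `ws` package's `WebSocketServer` and that the server should log and skip malformed frames rather than crash; the event-logging shape is illustrative:

```js
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 0 }); // sketch: bind an ephemeral port

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    let event;
    try {
      event = JSON.parse(data.toString());
    } catch (err) {
      // Malformed client input: log and drop instead of letting the exception bubble.
      console.error('Ignoring malformed WebSocket payload:', err.message);
      return;
    }
    // Spread first, then the marker, so a client-supplied field cannot clobber it.
    console.log(JSON.stringify({ ...event, source: 'user-event' }));
  });
});
```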

In `@lib/brainstorm-server/start-server.sh`:
- Around line 52-53: Ensure the script checks the result of cd "$SCRIPT_DIR"
before launching Node: after the cd "$SCRIPT_DIR" command validate it succeeded
and if not log an error and exit (or otherwise abort starting the background
process) so that the subsequent BRAINSTORM_DIR="$SCREEN_DIR" node index.js >
"$LOG_FILE" 2>&1 & line cannot run from the wrong directory; update the start-up
flow around the cd "$SCRIPT_DIR" and node invocation (the commands shown) to
perform the check and fail fast with a clear error message when cd fails.
- Around line 66-68: On timeout where the script currently echoes the JSON error
and exits, ensure the background Node process referenced by SERVER_PID is
terminated first to avoid orphaned processes: check that SERVER_PID is set and
running, send a polite SIGTERM (and fallback to SIGKILL if it doesn't stop) to
that PID before printing the error and exiting; update the timeout handling in
start-server.sh (the block that waits for "server-started") to perform this
shutdown sequence so the node process started earlier is cleaned up on failure.

In `@lib/git-notes-state.js`:
- Around line 15-24: updateState performs a read-modify-write by calling
getState(), deepMerge(...), then a git notes add which can cause lost updates
under concurrent callers; modify updateState to serialize updates by either
using git notes merge (invoke `git notes --ref ${REF} merge` with a temp note
containing the new JSON and let Git merge semantics reconcile) or acquire an
advisory lock before reading and writing (e.g., create/lock a lock file keyed by
REF, re-read state after acquiring lock, apply deepMerge and write via git notes
add, then release lock); ensure you reference updateState, getState, deepMerge
and REF when implementing and add a brief doc comment describing the concurrency
behavior if you choose not to implement locking.
- Around line 6-13: The REF value is interpolated into a shell command in
getState (and elsewhere where child_process.execSync is used), allowing command
injection; fix by rejecting or sanitizing unexpected REF values (e.g., allow
only a tight whitelist like alphanumerics, dots, dashes and underscores) and
replace shell interpolation with a safe exec variant that passes REF as an
argument (use child_process.execFileSync/child_process.spawnSync with
['notes','--ref', REF, 'show']) so the REF is never interpreted by a shell;
update getState (and the other execSync call) to validate REF first and/or
switch to execFileSync to eliminate shell metacharacter risks.
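
A minimal sketch of that hardening; the empty-state fallback and the env-var override are assumptions about the current implementation, not confirmed behavior:

```js
import { execFileSync } from 'node:child_process';

const REF = process.env.SUPERPOWERS_NOTES_REF || 'superpowers';
// Tight whitelist: alphanumerics, dots, dashes, underscores only.
if (!/^[\w.-]+$/.test(REF)) {
  throw new Error(`Refusing to use suspicious notes ref: "${REF}"`);
}

export function getState() {
  try {
    // execFileSync passes REF as a plain argv element, so no shell ever interprets it.
    const raw = execFileSync('git', ['notes', '--ref', REF, 'show'], { encoding: 'utf8' });
    return JSON.parse(raw);
  } catch {
    return {}; // assumed fallback: no note on HEAD yet, treat as empty state
  }
}
```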

In `@lib/state-schema.js`:
- Around line 17-60: The validate function currently uses truthy checks like if
(!SCHEMA[key]) and if (!SCHEMA.metadata[mKey]) / if
(!SCHEMA.knowledge_base[kKey]) which traverse the prototype chain and allow keys
like "hasOwnProperty"; replace those checks with own-property checks (e.g.,
Object.hasOwn(SCHEMA, key) and Object.hasOwn(SCHEMA.metadata, mKey) and
Object.hasOwn(SCHEMA.knowledge_base, kKey) or fallback to
Object.prototype.hasOwnProperty.call) so only actual schema keys are accepted
while leaving the rest of the validation logic (type checks, array handling for
expectedType === 'array', and object/null checks) unchanged.
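
A small self-contained sketch of the own-property check; the `SCHEMA` stub here is illustrative, not the project's real schema:

```js
const SCHEMA = { metadata: {}, knowledge_base: {}, architecture: {} };

function assertKnownTopLevelKeys(data) {
  for (const key of Object.keys(data)) {
    // Object.hasOwn ignores the prototype chain, so names like "hasOwnProperty" are rejected.
    if (!Object.hasOwn(SCHEMA, key)) {
      throw new Error(`Validation failed: unknown top-level key "${key}"`);
    }
  }
}

assertKnownTopLevelKeys({ metadata: {} });               // passes
try {
  assertKnownTopLevelKeys({ hasOwnProperty: 'oops' });   // inherited name, must be rejected
} catch (e) {
  console.log(e.message);
}
```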

In `@MEMORY-WORKFLOW.md`:
- Around line 69-74: The MEMORY-WORKFLOW.md guidance currently instructs storing
sensitive secrets directly in "Auto-memory (MEMORY.md)"; update this to remove
credentials, API keys, connection strings and server credentials from MEMORY.md
and instead recommend referencing secrets via environment variables or a
dedicated secret manager (e.g., "credentials stored in 1Password vault X" or
"use $DB_CONNECTION_STRING env var"); also add a note near the system-prompt
loading instruction (the line referencing "always loaded into system prompt") to
explicitly prohibit injecting secrets into the LLM context and document safe
placeholders or references only.

In `@skills/subagent-driven-development/implementer-prompt.md`:
- Around line 3-9: The template incorrectly hardcodes subagent_type: "general"
which contradicts the surrounding instructions; update the template so the Task
tool's subagent_type uses the task's Agent field value (e.g., replace the
literal "general" with a placeholder indicating "[value from task's Agent:
field]" or equivalent interpolation) and ensure the description remains
"Implement Task N: [task name]" so dispatch uses the specialized Amplifier agent
rather than a generic agent.

In `@tests/coordination/test-schema-validation.js`:
- Around line 9-17: The test mutates real git-notes making it order- and
environment-dependent; instead, modify the test to isolate state by either
creating a temporary git repo (run git init in a tmp dir and point the
coordination code to it) or explicitly reset/clear the architecture note before
running assertions; locate and update the test around the calls to updateState
and getState to ensure you clear the note ref (or use a temp repo) so that
updateState({ architecture: ... }) and subsequent getState() produce
deterministic results regardless of prior repository state.
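
A minimal sketch of the temp-repo approach; the helper name, identity flags, and the idea of pointing the coordination code at the repo via `cwd` are assumptions:

```js
import { execFileSync } from 'node:child_process';
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

function makeTempRepo() {
  const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'superpowers-test-'));
  execFileSync('git', ['init'], { cwd: dir });
  // git notes need at least one commit to attach to; inline identity keeps CI happy.
  execFileSync(
    'git',
    ['-c', 'user.email=test@example.com', '-c', 'user.name=test', 'commit', '--allow-empty', '-m', 'init'],
    { cwd: dir }
  );
  return dir;
}

// Usage idea: run the CLI under test with { cwd: makeTempRepo() } so updateState()/getState()
// read and write notes in an isolated repository instead of the developer's checkout.
```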

In `@tests/memory/test-memory-ops.js`:
- Around line 1-8: The test sets process.env.SUPERPOWERS_NOTES_REF too
late—imports for updateState/getState (from git-notes-state.js) and memory-ops
happen before the env var is read into the REF constant, causing tests to use
the production ref; fix by moving the process.env.SUPERPOWERS_NOTES_REF
assignment above the imports in tests/memory/test-memory-ops.js or,
alternatively, set the env then perform dynamic imports (await
import('../../lib/git-notes-state.js') and await
import('../../lib/memory-ops.js')) so git-notes-state.js captures the test REF
at module load time.
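
A minimal sketch of that ordering fix; the test ref name is illustrative, and the module paths are the ones this PR adds:

```js
// Set the ref before anything reads it at module load time...
process.env.SUPERPOWERS_NOTES_REF = 'superpowers-test';

// ...then load the modules dynamically so git-notes-state.js captures the test ref.
const { getState, updateState } = await import('../../lib/git-notes-state.js');
const { appendToMemory, queryMemory } = await import('../../lib/memory-ops.js');
```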
🟡 Minor comments (24)
skills/requesting-code-review/SKILL.md-62-62 (1)

62-62: ⚠️ Potential issue | 🟡 Minor

Use a real example path instead of the non-existent docs/superpowers/plans/deployment-plan.md.

The referenced path does not exist in the repository. Replace it with an actual example like docs/plans/2025-11-22-opencode-support-design.md or docs/superpowers/plans/2026-01-22-document-review-system.md to ensure documentation examples match the real project structure.

lib/brainstorm-server/start-server.sh-16-27 (1)

16-27: ⚠️ Potential issue | 🟡 Minor

Missing guard: --project-dir without a value will silently misbehave.

If the script is invoked as start-server.sh --project-dir (no value), $2 is empty, PROJECT_DIR becomes empty, and shift 2 may error or shift past end. Add a check.

Suggested fix
     --project-dir)
+      if [[ -z "${2:-}" ]]; then
+        echo '{"error": "--project-dir requires a value"}'
+        exit 1
+      fi
       PROJECT_DIR="$2"
       shift 2
       ;;
commands/push-context.js-6-10 (1)

6-10: ⚠️ Potential issue | 🟡 Minor

Minor: Log messages say "findings" but the command is push-context.

The user-facing messages reference "findings" while the script is named push-context.js and operates on a general refs/notes/superpowers ref. Consider aligning the terminology (e.g., "context" or "superpowers notes") for clarity.

Suggested diff
-    console.log(`Pushing findings to origin...`);
+    console.log(`Pushing superpowers context to origin...`);
     execSync(`git push origin ${REF}:${REF}`, { stdio: 'inherit' });
-    console.log(`✅ Findings pushed successfully.`);
+    console.log(`✅ Context pushed successfully.`);
 } catch (error) {
-    console.error(`❌ Failed to push findings: ${error.message}`);
+    console.error(`❌ Failed to push context: ${error.message}`);
RELEASE-NOTES.md-26-42 (1)

26-42: ⚠️ Potential issue | 🟡 Minor

Unreleased section duplicates v4.1.0 entries verbatim.

The "OpenCode: Switched to native skills system" (lines 26–30), "Fixed agent reset on session start" (lines 34–36), and "Fixed Windows installation" (lines 38–42) blocks are near-identical copies of the v4.1.0 entries at lines 182–198. If these are being carried forward intentionally (e.g., not yet released in a tagged version), that's fine — but if v4.1.0 is already released, this duplication will confuse readers. Consider referencing v4.1.0 instead of repeating the text, or removing the duplicates if they've already shipped.

RELEASE-NOTES.md-58-64 (1)

58-64: ⚠️ Potential issue | 🟡 Minor

Space inside code span on line 62, and inconsistent section naming.

Two issues in this block:

  1. Line 62: `bash ` has a trailing space inside the code span, which triggers the MD038 lint warning. Use `bash` instead (the trailing space isn't meaningful).

  2. The Unreleased section has both a "### Fixes" (line 32) and a "### Bug Fixes" (line 58) heading. Pick one for consistency — conventional changelogs typically use a single heading for all fixes.

docs/superpowers/plans/2026-01-22-document-review-system.md-144-194 (1)

144-194: ⚠️ Potential issue | 🟡 Minor

Same nested code block rendering issue in the plan reviewer template.

Same concern as the spec reviewer template above — inner fenced code blocks on Lines 153–191 will break the outer fence on Line 144.

docs/plans/2026-02-08-skill-discovery-and-subagent-coordination-implementation.md-110-131 (1)

110-131: ⚠️ Potential issue | 🟡 Minor

Plan's updateState example uses shallow merge; actual implementation uses deepMerge + validate.

The code example on Lines 124–130 shows { ...currentState, ...newData } (shallow spread), but the actual lib/git-notes-state.js uses deepMerge(currentState, newData) with validate(newData). If this plan is referenced by agents during implementation, the discrepancy could cause confusion. Consider updating the example to match the real implementation.

docs/superpowers/plans/2026-01-22-document-review-system.md-26-74 (1)

26-74: ⚠️ Potential issue | 🟡 Minor

Nested fenced code blocks will not render correctly in standard Markdown.

The reviewer prompt template (Lines 27–74) contains fenced code blocks inside an outer fenced code block. Since both use triple backticks, the inner blocks break the outer fence. Consider using different fence styles (e.g., ~~~ for the outer block or 4+ backticks) or moving the template content to a separate file and referencing it.

commands/memorize.js-8-18 (1)

8-18: ⚠️ Potential issue | 🟡 Minor

Missing bounds check on flag values — --value at end of args silently sets value to undefined.

While Line 25 catches undefined, if --value is immediately followed by another flag (e.g., memorize foo --value --section bar), value would be set to "--section". Consider validating that flag values don't start with --.

🛡️ Suggested guard
 for (let i = 0; i < args.length; i++) {
     if (args[i] === '--value') {
+        if (i + 1 >= args.length || args[i + 1].startsWith('--')) {
+            console.error('Error: --value requires a value argument.');
+            process.exit(1);
+        }
         value = args[i + 1];
         i++;
     } else if (args[i] === '--section') {
+        if (i + 1 >= args.length || args[i + 1].startsWith('--')) {
+            console.error('Error: --section requires a value argument.');
+            process.exit(1);
+        }
         section = args[i + 1];
         i++;
commands/snapshot-memory.js-74-76 (1)

74-76: ⚠️ Potential issue | 🟡 Minor

Glossary definition assumed to be a string.

If a glossary value is an object (e.g., { short: "...", long: "..." }), the template literal on Line 76 will render [object Object]. A defensive stringify would be safer.

Proposed fix
         const definition = kb.glossary[term];
-        content += `**${term}**:\n${definition}\n\n`;
+        const defText = typeof definition === 'string' ? definition : JSON.stringify(definition, null, 2);
+        content += `**${term}**:\n${defText}\n\n`;
commands/record-finding.js-9-10 (1)

9-10: ⚠️ Potential issue | 🟡 Minor

result.status is null when child is killed by signal.

If the spawned process is terminated by a signal, spawnSync returns { status: null, signal: 'SIGTERM' }. Calling process.exit(null) is equivalent to process.exit(0), masking the failure.

Proposed fix
-process.exit(result.status);
+process.exit(result.status ?? 1);
lib/visualize-workflow.js-37-49 (1)

37-49: ⚠️ Potential issue | 🟡 Minor

Unescaped content in Mermaid node labels can break diagram rendering.

task values from state are injected directly into "..." delimited Mermaid labels (Line 42). If a task description contains double quotes, brackets, or other Mermaid special characters, the graph will fail to render. The same applies to worktree names/branches on Line 55.

Proposed fix — escape quotes in labels
+function escapeMermaid(str) {
+    return str.replace(/"/g, '#quot;').replace(/[\[\](){}]/g, '#$&;');
+}
+
 export function generateMermaid(state, worktrees) {
     let mermaid = '```mermaid\ngraph TD\n';
     // ...
     agents.forEach(agent => {
         const data = state[agent] || {};
         const status = data.status || 'idle';
-        const task = data.task || 'no active task';
+        const task = escapeMermaid(data.task || 'no active task');
         const label = `${agent}<br/>(${status})<br/>${task}`;
docs/plans/2026-01-17-visual-brainstorming.md-37-122 (1)

37-122: ⚠️ Potential issue | 🟡 Minor

Plan's code examples are significantly out of sync with the actual implementation.

The plan references BRAINSTORM_SCREEN (single file) and PORT=3333, but the actual lib/brainstorm-server/index.js uses BRAINSTORM_DIR (directory-based), a random ephemeral port, serves the newest screen from the directory, and shows a waiting page when no screens exist. Consider updating the plan to match the implementation or adding a note that the plan is historical and the code is the source of truth.

tests/brainstorm-server/server.test.js-74-75 (1)

74-75: ⚠️ Potential issue | 🟡 Minor

WebSocket open promises have no timeout — test will hang indefinitely on failure.

If the server fails to accept the WebSocket connection, await new Promise(resolve => ws.on('open', resolve)) will never resolve. Add a timeout or race with a rejection.

🛡️ Proposed fix — add a helper with timeout
function waitForOpen(ws, timeoutMs = 3000) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('WebSocket open timeout')), timeoutMs);
    ws.on('open', () => { clearTimeout(timer); resolve(); });
    ws.on('error', (err) => { clearTimeout(timer); reject(err); });
  });
}

Then replace await new Promise(resolve => ws.on('open', resolve)) with await waitForOpen(ws).

Also applies to: 87-88

docs/plans/2026-01-17-visual-brainstorming.md-436-441 (1)

436-441: ⚠️ Potential issue | 🟡 Minor

Duplicate type key in JSON example and wrong field name.

Line 439 shows {"type":"user-event","type":"click",...} — duplicate keys are invalid JSON (second type overwrites first). The actual server uses "source":"user-event" (not "type"), so this example is misleading.

tests/brainstorm-server/server.test.js-159-162 (1)

159-162: ⚠️ Potential issue | 🟡 Minor

Server process may leak if error occurs before try/finally.

If startServer() succeeds but an error is thrown before entering the try block (e.g., during the startup polling loop), server.kill() in finally won't execute because it's scoped inside runTests. The .catch handler calls process.exit(1) without killing the child process.

🛡️ Proposed fix
-runTests().catch(err => {
+let serverProcess;
+runTests().catch(err => {
   console.error('Test failed:', err);
+  if (serverProcess) serverProcess.kill();
   process.exit(1);
 });

And assign serverProcess = server inside runTests before the polling loop, or restructure so the try/finally wraps the entire function body including startup wait.

lib/brainstorm-server/helper.js-14-18 (1)

14-18: ⚠️ Potential issue | 🟡 Minor

Unguarded JSON.parse on incoming WebSocket messages can crash the handler.

If the server sends malformed data, JSON.parse will throw and the onmessage handler will fail silently (or propagate as an unhandled error depending on the environment). Wrap in try-catch.

🛡️ Proposed fix
   ws.onmessage = (msg) => {
-    const data = JSON.parse(msg.data);
-    if (data.type === 'reload') {
-      window.location.reload();
+    try {
+      const data = JSON.parse(msg.data);
+      if (data.type === 'reload') {
+        window.location.reload();
+      }
+    } catch (e) {
+      // Ignore malformed messages
     }
   };
lib/brainstorm-server/wait-for-feedback.sh-20-27 (1)

20-27: ⚠️ Potential issue | 🟡 Minor

No timeout — script will poll indefinitely if feedback never arrives.

If the server crashes or the user closes the browser, this script loops forever. Consider adding a configurable timeout (e.g., via a second argument or environment variable).

🛡️ Proposed fix — add a timeout
+TIMEOUT="${2:-300}"  # Default 5 minutes
+ELAPSED=0
+
 while true; do
   RESULT=$(tail -n +$((LOG_POS + 1)) "$LOG_FILE" 2>/dev/null | grep -m 1 "send-to-claude")
   if [[ -n "$RESULT" ]]; then
     echo "$RESULT"
     exit 0
   fi
   sleep 0.2
+  ELAPSED=$(echo "$ELAPSED + 0.2" | bc)
+  if (( $(echo "$ELAPSED >= $TIMEOUT" | bc -l) )); then
+    echo '{"error": "Timeout waiting for feedback"}' >&2
+    exit 2
+  fi
 done
docs/memory/SNAPSHOT.md-1-32 (1)

1-32: ⚠️ Potential issue | 🟡 Minor

Consider whether this generated snapshot with placeholder data should be committed.

This file contains test placeholder values ("d1", "d2", "term1", "term2") and a hardcoded timestamp. If SNAPSHOT.md is meant to be regenerated by snapshot-memory, committing a stale placeholder version may cause confusion or merge conflicts. Consider either .gitignore-ing it or adding a note that it's an example.

Also, heading numbering is inconsistent: entries 1 and 2 use ### N: Decision N format while entry 3 uses ### Decision 3 (no numeric prefix).

tests/discovery/test-indexer.js-11-13 (1)

11-13: ⚠️ Potential issue | 🟡 Minor

Guard against empty index before accessing index[0].

If indexSkills returns an empty array (e.g., parsing failure), index[0].name throws a TypeError that obscures the actual problem.

Proposed fix
     const index = indexSkills(mockDir);
+    assert.ok(index.length > 0, 'Expected at least one indexed skill');
     assert.strictEqual(index[0].name, 'test');
tests/coordination/test-record-finding.js-14-19 (1)

14-19: ⚠️ Potential issue | 🟡 Minor

spawnSync exit status is not checked — test may silently pass on command failure.

If record-finding.js exits with a non-zero code (e.g., missing dependency, script error), the test proceeds to assert on stale/empty state, producing a confusing assertion error instead of clearly reporting the command failure.

Proposed fix
-    spawnSync('node', [
+    const result1 = spawnSync('node', [
         recordFindingScript,
         '--role', 'architecture',
         '--key', 'vision',
         '--value', 'modular-amplifiers'
     ], { stdio: 'inherit' });
+    assert.strictEqual(result1.status, 0, `record-finding exited with code ${result1.status}`);

Apply the same pattern for the second spawnSync call on lines 27-32.

.opencode/plugins/superpowers.js-12-14 (1)

12-14: ⚠️ Potential issue | 🟡 Minor

Remove unused imports getState and stripFrontmatter.

getState (line 12) and stripFrontmatter (line 14) are not referenced anywhere in the file. The file defines its own extractAndStripFrontmatter function (line 19) and uses it on line 65. These unused imports add unnecessary module loading at startup.

tests/memory/test-memory-ops.js-99-99 (1)

99-99: ⚠️ Potential issue | 🟡 Minor

Unhandled promise rejection on top-level call.

runTests() is async but the returned promise has no .catch(). If the promise rejects for a reason not caught inside (e.g., an error in clearTestNotes during setup at line 22), the process may exit with an unhandled rejection warning or code 0 instead of signalling failure.

-runTests();
+runTests().catch(e => { console.error('FAIL:', e); process.exit(1); });
tests/claude-code/test-document-review-system.sh-93-98 (1)

93-98: ⚠️ Potential issue | 🟡 Minor

$? captures the exit code of echo, not timeout/claude.

Inside the || { ... } block, the first echo on line 95 overwrites $?. By line 96, $? is always 0 (the echo's exit code), so the reported exit code is meaningless.

Proposed fix
-cd "$SCRIPT_DIR/../.." && timeout 120 claude -p "$PROMPT" --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE" || {
+cd "$SCRIPT_DIR/../.." && timeout 120 claude -p "$PROMPT" --permission-mode bypassPermissions 2>&1 | tee "$OUTPUT_FILE"
+CLAUDE_EXIT=$?
+if [ $CLAUDE_EXIT -ne 0 ]; then
     echo ""
     echo "================================================================================"
-    echo "EXECUTION FAILED (exit code: $?)"
+    echo "EXECUTION FAILED (exit code: $CLAUDE_EXIT)"
     exit 1
-}
+fi

Note: Also be aware that tee in the pipeline means the exit code captured is from tee, not claude. If you need the claude exit code, use set -o pipefail (already implied by set -euo pipefail on line 4 — so this should be fine).

🧹 Nitpick comments (35)
hooks/run-hook.cmd (1)

21-35: Argument forwarding is limited to 8 extra arguments (%2 through %9).

Batch % parameters only go up to %9. If a hook script ever needs more than 8 arguments, the extras will be silently dropped. This is fine for the current session-start use case (zero extra args), but worth noting if the hook infrastructure grows.

A more robust alternative would be to use %* to forward all arguments after shifting:

Suggested improvement (optional)

You could restructure to use shift and %* in the batch portion to forward all remaining arguments, though this adds complexity. For now, documenting the 8-arg limit in the header comment is sufficient.

.gitignore (1)

4-6: inspo pattern matches both files and directories — is that intentional?

Unlike the other entries (node_modules/, triage/), inspo lacks a trailing slash, so it will match any file or directory named inspo at any depth. If only a directory is intended, append / for consistency.

tests/brainstorm-server/package.json (1)

7-9: ws should be in devDependencies since this is a test-only package.

The package is named brainstorm-server-tests and its sole script is a test runner. The ws dependency is only used for testing, so it belongs in devDependencies.

Proposed fix
-  "dependencies": {
+  "devDependencies": {
     "ws": "^8.19.0"
   }
lib/state-schema.js (1)

62-75: deepMerge uses for...in and doesn't guard against __proto__ keys.

for...in iterates inherited enumerable properties, and __proto__ from a JSON-parsed source could alter the returned object's prototype via the __proto__ setter. While not a global prototype pollution vector here, it's a correctness hazard.

Switching to Object.keys() and adding a __proto__ guard is a straightforward hardening:

Suggested fix
 export function deepMerge(target, source) {
     const result = { ...target };
-    for (const key in source) {
+    for (const key of Object.keys(source)) {
+        if (key === '__proto__') continue;
         if (source[key] && typeof source[key] === 'object' && !Array.isArray(source[key])) {
lib/brainstorm-server/stop-server.sh (1)

18-28: No validation that pid is numeric before passing to kill.

If the .server.pid file is corrupted or contains unexpected content, kill "$pid" could behave unexpectedly. A quick numeric check would harden this:

🛡️ Proposed fix
 if [[ -f "$PID_FILE" ]]; then
   pid=$(cat "$PID_FILE")
-  kill "$pid" 2>/dev/null
+  if [[ "$pid" =~ ^[0-9]+$ ]]; then
+    kill "$pid" 2>/dev/null
+  fi
   rm -f "$PID_FILE" "${SCREEN_DIR}/.server.log"
lib/brainstorm-server/frame-template.html (1)

1-4: Missing <meta viewport> despite responsive CSS, and missing charset/lang.

The template includes a @media (max-width: 700px) rule (line 185) and @media (prefers-color-scheme: dark) but has no viewport meta tag, so mobile browsers will ignore the responsive breakpoint. Also missing are charset and lang for good HTML hygiene.

♻️ Proposed fix
 <!DOCTYPE html>
-<html>
+<html lang="en">
 <head>
+  <meta charset="utf-8">
+  <meta name="viewport" content="width=device-width, initial-scale=1">
   <title>Brainstorm Companion</title>
lib/record-finding.js (2)

7-14: Arg parser doesn't guard against a missing value after the last flag.

If a user passes --value as the final argument, args[i + 1] is undefined and gets stored as the value. The downstream !value check on line 18 catches undefined, so this doesn't crash — but silently consuming the next flag as a value (e.g., --role --key) could produce confusing errors. This is acceptable for an internal CLI tool, just noting it.


35-43: role is used as a top-level state key — must match the schema.

The role value from CLI args becomes a top-level key in the state object passed to updateState(), which runs it through validate(). If the user passes a role that isn't in SCHEMA (from lib/state-schema.js), it'll throw. This is correct behavior, but the error message from validate() ("Invalid top-level key") may be confusing to CLI users who think in terms of "roles."

Consider catching validation errors and re-wrapping with a hint about valid roles.
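
A sketch of that re-wrapping, assuming the exports this PR describes (`SCHEMA` from `lib/state-schema.js`, `updateState` from `lib/git-notes-state.js`) and a validation message containing "Invalid top-level key"; the function shape is illustrative:

```js
import { SCHEMA } from '../lib/state-schema.js';
import { updateState } from '../lib/git-notes-state.js';

function recordFinding(role, key, value) {
  try {
    updateState({ [role]: { [key]: value } });
  } catch (err) {
    if (/invalid top-level key/i.test(String(err.message))) {
      // Translate the schema error into CLI vocabulary.
      console.error(`Unknown role "${role}". Valid roles: ${Object.keys(SCHEMA).join(', ')}`);
      process.exit(1);
    }
    throw err;
  }
}
```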

tests/opencode/test-auto-activation.js (2)

14-69: Mock setup/teardown duplication across all three tests.

The mock setup (execSync, existsSync, readdirSync, readFileSync) and teardown pattern is nearly identical across all three subtests. Consider extracting a helper to reduce boilerplate and make adding new test scenarios easier.

♻️ Example: extract a test helper
function withMocks({ execSync: execFn, existsSync: existsFn, readdirSync: readdirFn, readFileSync: readFn }, fn) {
    const execMock = mock.method(child_process, 'execSync', execFn);
    const existsMock = mock.method(fs, 'existsSync', existsFn);
    const readdirMock = mock.method(fs, 'readdirSync', readdirFn);
    const readMock = mock.method(fs, 'readFileSync', readFn);
    try {
        return fn();
    } finally {
        execMock.mock.restore();
        existsMock.mock.restore();
        readdirMock.mock.restore();
        readMock.mock.restore();
    }
}

53-61: Tests only assert presence of injected content — consider asserting absence of non-activated skills too.

For example, in the architect session test (Step 1), you verify brainstorming content is injected but don't assert that other skills (e.g., using-superpowers) are or aren't present as expected. Adding a negative assertion would strengthen confidence in the activation logic.

lib/memory-ops.js (1)

40-49: Verbose inline comments explaining the merge strategy.

Lines 41–48 contain extended reasoning comments ("But wait!", "Yes, deepMerge...") that read like thinking-out-loud rather than code documentation. Consider condensing to a single brief comment.

♻️ Suggested cleanup
     } else if (typeof currentValue === 'object' && currentValue !== null) {
-        // Validation: item must be object for merging into object
         if (typeof item !== 'object' || item === null) {
              throw new Error(`Cannot merge non-object item into object section "${path}"`);
         }
-        // For object merge, we just pass the item to be merged via deepMerge in updateState.
-        // But wait! If we construct updateObject with JUST item, deepMerge will merge it into existing.
-        // Yes, deepMerge(target, source) merges source into target.
-        // So newValue = item is correct strategy for object merging IF we construct the full path object.
+        // Pass item directly; updateState's deepMerge will merge it into the existing object at this path.
         newValue = item;
commands/snapshot-memory.js (1)

58-62: Fallback in pattern code extraction includes already-rendered fields.

When a pattern object has title, description, and language but neither code nor pattern, Line 60 falls back to JSON.stringify(pattern, null, 2) — which re-serializes title, description, and language that were already rendered above. Consider extracting only the unhandled fields or requiring code/pattern to be present.

Proposed fix
-             const code = pattern.code || pattern.pattern || JSON.stringify(pattern, null, 2);
+             const code = pattern.code || pattern.pattern || '';
lib/visualize-workflow.js (1)

31-31: Duplicated agent list between generateMermaid and generateTable.

The agent keys are defined independently in both functions (Lines 31 and 70-75). Consider extracting a shared constant to keep them in sync.

Also applies to: 70-75

docs/plans/2026-02-08-opencode-workflow-dashboard.md (1)

1-52: Plan document is clear and well-structured.

The heading level jump (h1 → h3) flagged by markdownlint is consistent with the plan document format used across the project. Adding an ## Tasks heading before the ### Task N headings would fix the lint warning if desired.

lib/brainstorm-server/helper.js (1)

73-89: Single debounce timer shared across all input elements may drop events.

inputTimeout is a single variable. If the user edits input A then quickly edits input B (within 500ms), the timer for A is cleared and only B's change is sent. Consider keying the timeout per element (e.g., by target.id or target.name).

♻️ Proposed fix using a per-element debounce map
-  let inputTimeout = null;
+  const inputTimeouts = new Map();
   document.addEventListener('input', (e) => {
     const target = e.target;
     if (!target.matches('input, textarea, select')) return;
 
-    clearTimeout(inputTimeout);
-    inputTimeout = setTimeout(() => {
+    const key = target.id || target.name || target;
+    clearTimeout(inputTimeouts.get(key));
+    inputTimeouts.set(key, setTimeout(() => {
+      inputTimeouts.delete(key);
       sendEvent({
         type: 'input',
         name: target.name || null,
         id: target.id || null,
         value: target.value,
         inputType: target.type || target.tagName.toLowerCase()
       });
-    }, 500);
+    }, 500));
   });
lib/brainstorm-server/index.js (2)

103-123: Duplicate notification logic for add and change events.

The chokidar handlers for add and change are identical. Extract to a shared function for DRY.

♻️ Proposed refactor
+function notifyClients(eventType, filePath) {
+  if (!filePath.endsWith('.html')) return;
+  console.log(JSON.stringify({ type: eventType, file: filePath }));
+  clients.forEach(ws => {
+    if (ws.readyState === WebSocket.OPEN) {
+      ws.send(JSON.stringify({ type: 'reload' }));
+    }
+  });
+}
+
 chokidar.watch(SCREEN_DIR, { ignoreInitial: true })
-  .on('add', (filePath) => {
-    if (filePath.endsWith('.html')) {
-      console.log(JSON.stringify({ type: 'screen-added', file: filePath }));
-      clients.forEach(ws => {
-        if (ws.readyState === WebSocket.OPEN) {
-          ws.send(JSON.stringify({ type: 'reload' }));
-        }
-      });
-    }
-  })
-  .on('change', (filePath) => {
-    if (filePath.endsWith('.html')) {
-      console.log(JSON.stringify({ type: 'screen-updated', file: filePath }));
-      clients.forEach(ws => {
-        if (ws.readyState === WebSocket.OPEN) {
-          ws.send(JSON.stringify({ type: 'reload' }));
-        }
-      });
-    }
-  });
+  .on('add', (filePath) => notifyClients('screen-added', filePath))
+  .on('change', (filePath) => notifyClients('screen-updated', filePath));

81-100: Synchronous file reads on every HTTP request.

fs.readFileSync (line 88) and getNewestScreen (which calls readdirSync/statSync) block the event loop on each request. For a low-traffic dev tool this is acceptable, but worth noting if usage patterns grow.

tests/brainstorm-server/server.test.js (1)

9-9: Hardcoded test port may cause flaky tests.

TEST_PORT = 3334 will fail if another process (or parallel test run) occupies that port. Consider using port 0 and parsing the actual port from the server's startup JSON output.
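
A sketch of that approach, assuming the server can be asked to bind port 0 (for example via a PORT environment variable) and that it prints a JSON "server-started" line containing the bound port; both details are assumptions about this PR's server rather than confirmed behavior:

```js
import { spawn } from 'node:child_process';

function startServerOnFreePort() {
  const server = spawn('node', ['index.js'], {
    cwd: new URL('../../lib/brainstorm-server/', import.meta.url),
    env: { ...process.env, PORT: '0' }, // assumed: server binds an ephemeral port
  });

  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('server did not start')), 5000);
    server.stdout.on('data', (chunk) => {
      for (const line of chunk.toString().split('\n')) {
        try {
          const msg = JSON.parse(line);
          if (msg.type === 'server-started') {
            clearTimeout(timer);
            return resolve({ server, port: msg.port }); // assumed: startup line carries the port
          }
        } catch { /* not a complete JSON line yet; keep waiting */ }
      }
    });
  });
}
```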

lib/index-skills.js (1)

3-10: namespace parameter is undocumented in JSDoc.

The @param block documents dir but omits namespace (which defaults to 'superpowers'). Adding it would improve discoverability for callers.

tests/discovery/test-indexer.js (1)

6-8: Relative path ./tests/mock-skills is CWD-dependent and test artifacts are not cleaned up.

If this test is run from a directory other than the repo root, the mock directory will be created in an unexpected location. Consider using __dirname-relative paths (already available via the import.meta.url pattern used in sibling tests) and adding cleanup in a finally block.

Suggested approach
+import { fileURLToPath } from 'url';
+import { dirname, join } from 'path';
+
+const __dirname = dirname(fileURLToPath(import.meta.url));
-const mockDir = './tests/mock-skills';
+const mockDir = join(__dirname, 'mock-skills');

And wrap the test body in try/finally to remove the mock directory:

finally {
    fs.rmSync(mockDir, { recursive: true, force: true });
}
tests/coordination/test-schema-validation.js (1)

21-26: Validation error detection pattern is sound but fragile on message wording.

The catch block correctly distinguishes between assert.fail (no "Validation failed" substring) and a real validation error. However, if lib/state-schema.js changes its error message wording, this test silently breaks. Consider matching on error type or a more stable marker if one is available.

tests/coordination/test-git-notes.js (2)

4-13: Test leaves git-notes state behind — no cleanup after run.

updateState writes to refs/notes/superpowers in the current repo. If other tests (e.g., test-schema-validation.js, test-e2e-memory.js) also read/write this ref, execution order can cause flaky results. Consider adding a finally block that removes or resets the note.

Proposed fix
+import { execSync } from 'child_process';
+
 try {
     const testData = { architecture: { task: "T1", status: "done" } };
     updateState(testData);
     const state = getState();
     assert.deepStrictEqual(state.architecture.task, "T1");
     console.log("PASS");
 } catch (e) {
     console.error("FAIL", e.message);
     process.exit(1);
+} finally {
+    try { execSync('git notes --ref superpowers remove', { stdio: 'ignore' }); } catch (_) {}
 }

8-8: Assertion only checks task, ignoring the status field that was also stored.

testData includes status: "done", but there's no assertion for it. Adding a second assertion (or using deepStrictEqual on the whole architecture object) would give more confidence that the full object round-trips correctly.

tests/memory/test-e2e-memory.js (2)

17-29: Shell injection risk in the run helper — args are not escaped.

Arguments are interpolated directly into a shell command string without escaping. The current test value is benign, but if any arg contained shell metacharacters ($, `, ", etc.), the command would break or execute unintended code. Since this is a test file with a controlled value, the risk is low, but using execSync with an argv array (via execFileSync) would be safer and simpler.

Suggested alternative
+import { execSync, execFileSync } from 'child_process';
 ...
-const run = (cmd, args = []) => {
-  const command = `node "${cmd}" ${args.join(' ')}`;
-  console.log(`Running: ${command}`);
+const run = (cmd, args = []) => {
+  console.log(`Running: node ${cmd} ${args.join(' ')}`);
   try {
-    return execSync(command, { cwd: rootDir, encoding: 'utf8', stdio: 'pipe' });
+    return execFileSync('node', [cmd, ...args], { cwd: rootDir, encoding: 'utf8', stdio: 'pipe' });
   } catch (error) {

With execFileSync, each argument is passed directly to the process without shell interpretation, so the extra manual quoting on line 38 ("${decisionValue}") would also become unnecessary.


31-72: No cleanup of side effects (git-notes state + SNAPSHOT.md).

The test writes to git-notes and creates docs/memory/SNAPSHOT.md but never cleans up. This can cause test pollution across runs and unexpected diffs in the working tree. A finally block that removes the snapshot file and resets the git note would improve isolation.
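
A sketch of that teardown; the ref name and snapshot path come from this PR, while the helper shape is illustrative:

```js
import { execFileSync } from 'node:child_process';
import fs from 'node:fs';
import path from 'node:path';

function cleanupMemoryArtifacts(rootDir, ref = process.env.SUPERPOWERS_NOTES_REF || 'superpowers') {
  // Remove the generated snapshot so the working tree stays clean.
  fs.rmSync(path.join(rootDir, 'docs/memory/SNAPSHOT.md'), { force: true });
  try {
    // Remove the note attached to HEAD so the next run starts from a clean state.
    execFileSync('git', ['notes', '--ref', ref, 'remove'], { cwd: rootDir, stdio: 'ignore' });
  } catch {
    // no note to remove
  }
}

// In the test: try { /* run the e2e flow */ } finally { cleanupMemoryArtifacts(rootDir); }
```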

skills/subagent-driven-development/spec-reviewer-prompt.md (1)

9-9: Add a language specifier to the fenced code block.

Static analysis flags this as MD040. Since the content is YAML-like, adding yaml would satisfy the linter and improve rendering.

-```
+```yaml
MODEL-PROVIDERS.md (2)

7-11: Deep and Balanced tiers map to the same Gemini model.

Both tiers resolve to google/gemini-3-pro-preview on the Gemini side. If this is intentional (no higher-tier Gemini model available), a brief note would prevent confusion — readers may assume it's a copy-paste error.


19-21: Add language specifiers to fenced code blocks (lines 19 and 29).

Static analysis (MD040) flags both blocks. The first looks like a generic function call, the second like a shell command.

-```
+```text
 Task(subagent_type="bug-hunter", model="sonnet", description="...", prompt="...")
-```
+```bash
 /model google/gemini-3-flash-preview
.opencode/plugins/superpowers.js (1)

59-76: Skill indexing runs on every system prompt transform — potential performance concern.

getBootstrapContent() is called on every experimental.chat.system.transform invocation (line 114-118), which triggers filesystem I/O for skill indexing each time. Consider caching the result and invalidating on a timer or file-change signal.
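
A sketch of the time-based cache; the TTL and wrapper name are illustrative, and `getBootstrapContent` is the function named above:

```js
let cachedBootstrap = null;
let cachedAt = 0;
const BOOTSTRAP_TTL_MS = 30_000; // illustrative: refresh the skill index at most every 30s

function getBootstrapContentCached() {
  const now = Date.now();
  if (!cachedBootstrap || now - cachedAt > BOOTSTRAP_TTL_MS) {
    cachedBootstrap = getBootstrapContent(); // the expensive filesystem walk / skill indexing
    cachedAt = now;
  }
  return cachedBootstrap;
}
```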

skills/brainstorming/spec-document-reviewer-prompt.md (1)

9-9: Add a language specifier to the fenced code block.

Same MD040 issue as in spec-reviewer-prompt.md. Adding yaml would satisfy the linter.

-```
+```yaml
tests/claude-code/test-document-review-system.sh (1)

23-24: Consider single quotes in trap to avoid early expansion (ShellCheck SC2064).

While early expansion works correctly here since TEST_PROJECT is already set, using single quotes is the conventional, more robust pattern. This prevents subtle bugs if the trap line is ever moved before the assignment.

-trap "cleanup_test_project $TEST_PROJECT" EXIT
+trap 'cleanup_test_project '"$TEST_PROJECT" EXIT

Or more readably:

cleanup() { cleanup_test_project "$TEST_PROJECT"; }
trap cleanup EXIT
docs/plans/2026-02-08-automated-skill-activation.md (1)

13-13: Heading level skips from h1 to h3.

The ### Task headings jump directly from the h1 title without an h2 in between, which violates standard Markdown heading hierarchy (MD001). Consider using ## Task 1: ... or adding an intermediate ## Tasks heading.

skills/dispatching-parallel-agents/SKILL.md (1)

78-90: Add language identifiers to fenced code blocks.

The code blocks at lines 78 and 86 lack language specifiers (flagged by markdownlint MD040). Adding text or a more specific identifier improves rendering and accessibility.

-```
+```text
 >> Dispatching 3 parallel agents:
-```
+```text
 Task(subagent_type="bug-hunter", ...
skills/brainstorming/SKILL.md (1)

38-38: Optional: Consider hyphenating compound adjective.

When "multiple choice" modifies "questions" as a compound adjective, standard English style uses "multiple-choice questions."

✏️ Suggested style fix
-- Prefer multiple choice questions when possible, but open-ended is fine too
+- Prefer multiple-choice questions when possible, but open-ended is fine too
docs/plans/2026-02-06-amplifier-integration.md (1)

179-179: Add language identifiers to fenced code blocks.

Multiple fenced code blocks lack language identifiers, which affects syntax highlighting and tooling support. For example blocks that show output or generic text, use text or plaintext.

Example locations:

  • Line 179: Triple backticks after brainstorming SKILL.md content
  • Line 363: Triple backticks after writing-plans SKILL.md content
  • Line 503: Example workflow block
  • Line 633: Triple backticks after subagent-driven-development content
  • Line 726: Triple backticks after implementer-prompt.md content
  • Line 830: Task dispatch example
  • Line 914: Verification example
Example fix for line 503
-```
+```text
 You: I'm using Subagent-Driven Development to execute this plan.

Also applies to: 363-363, 503-503, 633-633, 726-726, 830-830, 914-914

Comment on lines +10 to +41
async function syncContext() {
  console.log(`Syncing context from origin...`);
  try {
    // 1. Fetch the remote notes to a tracking ref
    execSync(`git fetch origin ${REF}:refs/notes/origin/superpowers`, { stdio: 'pipe' });

    // 2. Merge remote notes into local notes
    execSync(`git notes --ref ${REF} merge refs/notes/origin/superpowers -s manual`, { stdio: 'pipe' });

    console.log('✅ Successfully synced and merged context.');
  } catch (e) {
    const errorText = e.stderr ? e.stderr.toString() : e.message;

    // If it's just a "no remote ref" error, it's fine for first-time use
    if (errorText.includes("couldn't find remote ref") || errorText.includes("fatal: couldn't find remote ref")) {
      console.log("ℹ️ No remote context found on 'origin'. Initializing local context.");
      return;
    }

    console.error(`❌ Failed to sync context: ${errorText}`);
    console.log("Attempting fallback to remote state...");
    try {
      execSync(`git fetch origin ${REF}:${REF} --force`, { stdio: 'pipe' });
      console.log("✅ Fallback successful (force-synced).");
    } catch (err) {
      console.error("Critical failure during sync: " + (err.stderr ? err.stderr.toString() : err.message));
      process.exit(1);
    }
  }
}

syncContext();

🛠️ Refactor suggestion | 🟠 Major

async is unnecessary — no await is used — and the top-level call has no rejection handler.

syncContext is declared async but only calls execSync. This makes the function return a Promise for no reason, and the bare syncContext() call on line 41 discards it without a .catch(), risking an unhandled-rejection crash on unexpected throws.

Either drop async (since nothing is awaited), or add error handling at the call site:

♻️ Option A: drop async (preferred)
-async function syncContext() {
+function syncContext() {
♻️ Option B: handle rejection at call site
-syncContext();
+syncContext().catch(err => {
+    console.error('Unexpected error:', err.message);
+    process.exit(1);
+});
🤖 Prompt for AI Agents
In `@commands/sync-context.js` around lines 10 - 41, The function syncContext is
declared async but never uses await and the top-level syncContext() call
discards its returned Promise; remove the unnecessary async modifier from the
syncContext declaration so it becomes a regular synchronous function (affecting
the function named syncContext) and keep the existing direct call syncContext();
alternatively, if you prefer to keep it async, ensure the top-level invocation
handles rejections by appending .catch(...) to syncContext() — but prefer
removing async to avoid returning a Promise for the execSync-based
implementation.

Comment on lines +639 to +727
```markdown
# Implementer Subagent Prompt Template

Use this template when dispatching an implementation agent. The agent type comes from the task's `Agent:` field in the plan.

```
Task tool ([agent-name from task]):
description: "Implement Task N: [task name]"
prompt: |
You are the [agent-name] agent implementing Task N: [task name]

## Your Strengths

[Brief description of what this agent specializes in, from AMPLIFIER-AGENTS.md.
Examples:
- modular-builder: "You build self-contained, regeneratable modules following the bricks-and-studs philosophy."
- database-architect: "You design clean schemas, optimize queries, and handle migrations."
- bug-hunter: "You use hypothesis-driven debugging to find root causes systematically."
- integration-specialist: "You handle external system integration with reliability and simplicity."]

## Task Description

[FULL TEXT of task from plan - paste it here, don't make subagent read file]

## Context

[Scene-setting: where this fits, dependencies, architectural context]

## Before You Begin

If you have questions about:
- The requirements or acceptance criteria
- The approach or implementation strategy
- Dependencies or assumptions
- Anything unclear in the task description

**Ask them now.** Raise any concerns before starting work.

## Your Job

Once you're clear on requirements:
1. Implement exactly what the task specifies
2. Write tests (following TDD if task says to)
3. Verify implementation works
4. Commit your work
5. Self-review (see below)
6. Report back

Work from: [directory]

**While you work:** If you encounter something unexpected or unclear, **ask questions**.
It's always OK to pause and clarify. Don't guess or make assumptions.

## Before Reporting Back: Self-Review

Review your work with fresh eyes. Ask yourself:

**Completeness:**
- Did I fully implement everything in the spec?
- Did I miss any requirements?
- Are there edge cases I didn't handle?

**Quality:**
- Is this my best work?
- Are names clear and accurate (match what things do, not how they work)?
- Is the code clean and maintainable?

**Discipline:**
- Did I avoid overbuilding (YAGNI)?
- Did I only build what was requested?
- Did I follow existing patterns in the codebase?

**Testing:**
- Do tests actually verify behavior (not just mock behavior)?
- Did I follow TDD if required?
- Are tests comprehensive?

If you find issues during self-review, fix them now before reporting.

## Report Format

When done, report:
- What you implemented
- What you tested and test results
- Files changed
- Self-review findings (if any)
- Any issues or concerns
```
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for the specific PR objective features across the codebase

# Search for "IMMEDIATE ACTION" directive
echo "=== Searching for IMMEDIATE ACTION directive ==="
rg -n "IMMEDIATE ACTION.*DO NOT PLAN" -C 2 2>/dev/null || echo "Not found with exact pattern"

# Try broader search for IMMEDIATE ACTION
rg -n "IMMEDIATE ACTION" --type md -C 2 2>/dev/null || echo "Not found"

# Search for "Style Check" requirement
echo -e "\n=== Searching for Style Check requirement ==="
rg -n "Style Check|style.*check" --type md -C 2 2>/dev/null || echo "Not found"

# Search for persistence check/verification
echo -e "\n=== Searching for persistence checks ==="
rg -n "persistence.*check|verify.*commit|staged.*files" --type md -C 2 2>/dev/null || echo "Not found"

# Look for PR objectives in the document
echo -e "\n=== Searching for PR objectives definition ==="
rg -n "PR objectives|objectives:" --type md -C 3 2>/dev/null | head -40

# Check the mentioned file directly for all these terms
echo -e "\n=== Checking docs/plans/2026-02-06-amplifier-integration.md ==="
rg -n "IMMEDIATE ACTION|Style Check|persistence|PR objectives" docs/plans/2026-02-06-amplifier-integration.md -C 2 2>/dev/null | head -60

Repository: obra/superpowers

Length of output: 1392


🏁 Script executed:

#!/bin/bash
# Read the actual implementer-prompt.md file to compare with the template in the plans document
echo "=== Content of skills/subagent-driven-development/implementer-prompt.md ==="
cat -n skills/subagent-driven-development/implementer-prompt.md

echo -e "\n=== Check for persistence/staging references ==="
rg -n "persist|stage|commit|git" skills/subagent-driven-development/implementer-prompt.md -C 1

Repository: obra/superpowers

Length of output: 6483


🏁 Script executed:

#!/bin/bash
# Check the exact content in the plans document
echo "=== Lines 639-727 from docs/plans/2026-02-06-amplifier-integration.md ==="
sed -n '639,727p' docs/plans/2026-02-06-amplifier-integration.md

echo -e "\n=== Search for features in the plans document ==="
rg -n "IMMEDIATE ACTION|Style Check|Verify Commit" docs/plans/2026-02-06-amplifier-integration.md -C 1

Repository: obra/superpowers

Length of output: 3027


Update plans document to reflect actual implementer prompt template.

The template in docs/plans/2026-02-06-amplifier-integration.md (lines 639-727) is outdated. The actual implementer-prompt.md file at skills/subagent-driven-development/implementer-prompt.md contains important reliability improvements that are missing from the plans document:

  1. IMMEDIATE ACTION section (lines 14-17 in actual file) - "DO NOT PLAN" and "EXECUTE IMMEDIATELY" directives to prevent wasted planning
  2. Style Check requirement (line 38 in actual file) - Explicit upfront instruction to read 1-2 related files and mimic the project's style before starting
  3. Verify Commit persistence check (lines 46-47 in actual file) - Required git log -1 verification after committing work

Update the template snippet to match the current implementer-prompt.md file so that the plans document provides accurate guidance.

🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 650-650: Code block style
Expected: fenced; Actual: indented

(MD046, code-block-style)


[warning] 726-726: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@docs/plans/2026-02-06-amplifier-integration.md` around lines 639 - 727,
Update the Implementer Subagent Prompt template in the plans doc to match the
current skills/subagent-driven-development/implementer-prompt.md: insert the
IMMEDIATE ACTION section (with "DO NOT PLAN" and "EXECUTE IMMEDIATELY"
directives), add the upfront Style Check requirement to read 1–2 related files
and mimic project style before starting, and include the post-commit Verify
Commit persistence step requiring a `git log -1` check; update the template text
and headings in the snippet (the prompt block and "Before You Begin"/"Before
Reporting Back" sections) so they mirror the implementer-prompt.md content
exactly.

{
  "type": "command",
-  "command": "${CLAUDE_PLUGIN_ROOT}/hooks/session-start.sh",
+  "command": "${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd session-start",

⚠️ Potential issue | 🟠 Major

Missing single quotes around ${CLAUDE_PLUGIN_ROOT} — paths with spaces will break.

Claude Code expands variables before passing the command to the shell. Without quotes, paths containing spaces (e.g., C:\Users\Name With Spaces\...) will be split incorrectly.

Proposed fix
-            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd session-start",
+            "command": "'${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd' session-start",

Based on learnings: "single quotes around the variable are necessary because Claude Code expands variables before passing the command to the shell. The quotes protect against word splitting when the expanded path contains spaces."

🤖 Prompt for AI Agents
In `@hooks/hooks.json` at line 9, The command string in hooks.json uses
${CLAUDE_PLUGIN_ROOT} unquoted so expanded paths with spaces will be split;
update the "command" value to wrap the variable in single quotes around the path
portion (i.e., quote ${CLAUDE_PLUGIN_ROOT} when building the command that
invokes run-hook.cmd session-start) so the shell treats the entire path as one
token; modify the entry that currently references
"${CLAUDE_PLUGIN_ROOT}/hooks/run-hook.cmd session-start" to include single
quotes around the variable portion (preserving the rest of the command) to
prevent word-splitting.

Comment on lines +74 to +78
  ws.on('message', (data) => {
    const event = JSON.parse(data.toString());
    console.log(JSON.stringify({ source: 'user-event', ...event }));
  });
});

⚠️ Potential issue | 🟠 Major

Unguarded JSON.parse on client WebSocket messages can crash the server.

A malformed message from any connected browser will throw and potentially crash the Node process. This is more critical here than on the client since it affects all connected users.

🛡️ Proposed fix
   ws.on('message', (data) => {
-    const event = JSON.parse(data.toString());
-    console.log(JSON.stringify({ source: 'user-event', ...event }));
+    try {
+      const event = JSON.parse(data.toString());
+      console.log(JSON.stringify({ source: 'user-event', ...event }));
+    } catch (e) {
+      console.error(JSON.stringify({ source: 'error', message: 'Invalid JSON from client' }));
+    }
   });
🤖 Prompt for AI Agents
In `@lib/brainstorm-server/index.js` around lines 74 - 78, The message handler
registered on ws via ws.on('message', ...) currently calls JSON.parse directly
and will throw on malformed client input; wrap the parse and subsequent use in a
try/catch (or use a small safeParse helper) inside that handler to catch JSON
parsing errors, log the parse error and the raw payload, and return early (or
close the socket if you want to treat it as malicious) instead of letting the
exception bubble and crash the process; update the handler around
JSON.parse(data.toString()) and the console.log(...) usage accordingly.

Comment on lines +52 to +53
cd "$SCRIPT_DIR"
BRAINSTORM_DIR="$SCREEN_DIR" node index.js > "$LOG_FILE" 2>&1 &

⚠️ Potential issue | 🟠 Major

cd without error handling — node will run from the wrong directory if it fails.

As flagged by static analysis (SC2164), if cd "$SCRIPT_DIR" fails, node index.js will execute from the current directory, likely producing confusing errors or serving wrong files.

Suggested fix
-cd "$SCRIPT_DIR"
+cd "$SCRIPT_DIR" || { echo '{"error": "Failed to cd to script directory"}'; exit 1; }
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 52-52: Use 'cd ... || exit' or 'cd ... || return' in case cd fails.

(SC2164)

🤖 Prompt for AI Agents
In `@lib/brainstorm-server/start-server.sh` around lines 52 - 53, Ensure the
script checks the result of cd "$SCRIPT_DIR" before launching Node: after the cd
"$SCRIPT_DIR" command validate it succeeded and if not log an error and exit (or
otherwise abort starting the background process) so that the subsequent
BRAINSTORM_DIR="$SCREEN_DIR" node index.js > "$LOG_FILE" 2>&1 & line cannot run
from the wrong directory; update the start-up flow around the cd "$SCRIPT_DIR"
and node invocation (the commands shown) to perform the check and fail fast with
a clear error message when cd fails.

Comment on lines +17 to +60
export function validate(state) {
    for (const key in state) {
        if (!SCHEMA[key]) {
            throw new Error(`Validation failed: Invalid top-level key "${key}"`);
        }

        if (key === 'metadata') {
            const metadata = state[key];
            if (typeof metadata !== 'object' || metadata === null) {
                throw new Error('Validation failed: "metadata" must be an object');
            }
            for (const mKey in metadata) {
                if (!SCHEMA.metadata[mKey]) {
                    throw new Error(`Validation failed: Invalid metadata key "${mKey}"`);
                }
                if (typeof metadata[mKey] !== SCHEMA.metadata[mKey]) {
                    throw new Error(`Validation failed: Metadata key "${mKey}" must be a ${SCHEMA.metadata[mKey]}`);
                }
            }
        } else if (key === 'knowledge_base') {
            const kb = state[key];
            if (typeof kb !== 'object' || kb === null) {
                throw new Error('Validation failed: "knowledge_base" must be an object');
            }
            for (const kKey in kb) {
                if (!SCHEMA.knowledge_base[kKey]) {
                    throw new Error(`Validation failed: Invalid knowledge_base key "${kKey}"`);
                }
                const expectedType = SCHEMA.knowledge_base[kKey];
                if (expectedType === 'array') {
                    if (!Array.isArray(kb[kKey])) {
                        throw new Error(`Validation failed: knowledge_base key "${kKey}" must be an array`);
                    }
                } else if (typeof kb[kKey] !== expectedType) {
                    throw new Error(`Validation failed: knowledge_base key "${kKey}" must be a ${expectedType}`);
                }
            }
        } else {
            if (typeof state[key] !== 'object' || state[key] === null) {
                throw new Error(`Validation failed: "${key}" must be an object`);
            }
        }
    }
}

⚠️ Potential issue | 🟠 Major

SCHEMA[key] lookup traverses the prototype chain — accepts invalid keys like hasOwnProperty, toString, constructor.

The check if (!SCHEMA[key]) on Line 19 uses bracket notation, which resolves properties from Object.prototype. A state object containing { hasOwnProperty: {} } would pass validation despite not being a valid schema key. The same applies to SCHEMA.metadata[mKey] on Line 29 and SCHEMA.knowledge_base[kKey] on Line 42.

Use Object.hasOwn() (or Object.prototype.hasOwnProperty.call()) for safe lookups.

Suggested fix
 export function validate(state) {
     for (const key in state) {
-        if (!SCHEMA[key]) {
+        if (!Object.hasOwn(SCHEMA, key)) {
             throw new Error(`Validation failed: Invalid top-level key "${key}"`);
         }
         
         if (key === 'metadata') {
             const metadata = state[key];
             if (typeof metadata !== 'object' || metadata === null) {
                 throw new Error('Validation failed: "metadata" must be an object');
             }
             for (const mKey in metadata) {
-                if (!SCHEMA.metadata[mKey]) {
+                if (!Object.hasOwn(SCHEMA.metadata, mKey)) {
                     throw new Error(`Validation failed: Invalid metadata key "${mKey}"`);
                 }

Apply the same change at Line 42 for SCHEMA.knowledge_base.

🤖 Prompt for AI Agents
In `@lib/state-schema.js` around lines 17 - 60, The validate function currently
uses truthy checks like if (!SCHEMA[key]) and if (!SCHEMA.metadata[mKey]) / if
(!SCHEMA.knowledge_base[kKey]) which traverse the prototype chain and allow keys
like "hasOwnProperty"; replace those checks with own-property checks (e.g.,
Object.hasOwn(SCHEMA, key) and Object.hasOwn(SCHEMA.metadata, mKey) and
Object.hasOwn(SCHEMA.knowledge_base, kKey) or fallback to
Object.prototype.hasOwnProperty.call) so only actual schema keys are accepted
while leaving the rest of the validation logic (type checks, array handling for
expectedType === 'array', and object/null checks) unchanged.

Comment on lines +69 to +74
**Auto-memory (MEMORY.md)** — information needed EVERY session regardless of task:
- Network topology, server IPs, credentials
- Deployment commands and paths
- API endpoints and keys
- IIS sites, database connection strings


⚠️ Potential issue | 🟠 Major

Security concern: guidance to store credentials and API keys in MEMORY.md.

Lines 70-73 recommend storing server IPs, credentials, API keys, and database connection strings in a plain-text file that is "always loaded into system prompt" (line 17). Secrets injected into LLM context can be inadvertently echoed in model outputs, logged, or sent to third-party APIs. Consider recommending environment variables or a secret manager instead, and limiting MEMORY.md to non-sensitive references (e.g., "credentials are in 1Password vault X" or "use $DB_CONNECTION_STRING env var").

🤖 Prompt for AI Agents
In `@MEMORY-WORKFLOW.md` around lines 69 - 74, The MEMORY-WORKFLOW.md guidance
currently instructs storing sensitive secrets directly in "Auto-memory
(MEMORY.md)"; update this to remove credentials, API keys, connection strings
and server credentials from MEMORY.md and instead recommend referencing secrets
via environment variables or a dedicated secret manager (e.g., "credentials
stored in 1Password vault X" or "use $DB_CONNECTION_STRING env var"); also add a
note near the system-prompt loading instruction (the line referencing "always
loaded into system prompt") to explicitly prohibit injecting secrets into the
LLM context and document safe placeholders or references only.

Comment on lines +3 to +9
Use this template when dispatching an implementation agent. The `subagent_type` parameter comes from the task's `Agent:` field in the plan.

**CRITICAL: Use the Agent: field value as the `subagent_type` parameter.** This dispatches the specialized Amplifier agent, not a generic one.

```
Task tool (general-purpose):
Task tool:
subagent_type: "general"

⚠️ Potential issue | 🟠 Major

subagent_type: "general" in the template contradicts the surrounding instructions.

Lines 3 and 5 emphasize using the task's Agent: field as subagent_type, but the template example on line 9 hardcodes "general". Anyone copying this template verbatim will dispatch a generic agent instead of the intended specialist.

Proposed fix
 ```
 Task tool:
-  subagent_type: "general"
+  subagent_type: "[value from task's Agent: field]"
   description: "Implement Task N: [task name]"
🤖 Prompt for AI Agents
In `@skills/subagent-driven-development/implementer-prompt.md` around lines 3 - 9,
The template incorrectly hardcodes subagent_type: "general" which contradicts
the surrounding instructions; update the template so the Task tool's
subagent_type uses the task's Agent field value (e.g., replace the literal
"general" with a placeholder indicating "[value from task's Agent: field]" or
equivalent interpolation) and ensure the description remains "Implement Task N:
[task name]" so dispatch uses the specialized Amplifier agent rather than a
generic agent.

Comment on lines +9 to +17
// Reset state for test
// Note: This might affect the actual git notes in the repo.
// In a real test environment we'd use a temporary repo,
// but here I'll just try to work with what I have.

updateState({ architecture: { component: 'A' } });
updateState({ architecture: { details: 'B' } });
let state = getState();
assert.deepStrictEqual(state.architecture, { component: 'A', details: 'B' });

⚠️ Potential issue | 🟠 Major

Test operates on real git-notes state with no isolation — assertions are order- and environment-dependent.

As the inline comments acknowledge (lines 10-12), this test mutates the actual repository's git-notes. If prior state exists under architecture, the deepStrictEqual on line 17 will fail because deepMerge preserves existing keys. This makes the test flaky in CI or when run after other coordination tests.

Consider using a temporary git repo (git init in a temp directory) for test isolation, or at minimum resetting the note ref before the test runs.
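A minimal sketch of that isolation, assuming the coordination helpers resolve the repository from the process working directory (if they locate the repo another way, the temp path would need to be passed in explicitly):

```js
// Hypothetical setup/teardown for running the assertions inside a throwaway repo.
import { execFileSync } from 'child_process';
import { mkdtempSync, rmSync } from 'fs';
import { tmpdir } from 'os';
import path from 'path';

const repoDir = mkdtempSync(path.join(tmpdir(), 'coordination-test-'));
execFileSync('git', ['init'], { cwd: repoDir, stdio: 'pipe' });
// git notes need a commit to attach to; inline identity avoids relying on global git config.
execFileSync(
  'git',
  ['-c', 'user.email=test@example.com', '-c', 'user.name=Test', 'commit', '--allow-empty', '-m', 'init'],
  { cwd: repoDir, stdio: 'pipe' }
);

const originalCwd = process.cwd();
process.chdir(repoDir);
try {
  // ...run the updateState()/getState() assertions here...
} finally {
  process.chdir(originalCwd);
  rmSync(repoDir, { recursive: true, force: true });
}
```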

🤖 Prompt for AI Agents
In `@tests/coordination/test-schema-validation.js` around lines 9 - 17, The test
mutates real git-notes making it order- and environment-dependent; instead,
modify the test to isolate state by either creating a temporary git repo (run
git init in a tmp dir and point the coordination code to it) or explicitly
reset/clear the architecture note before running assertions; locate and update
the test around the calls to updateState and getState to ensure you clear the
note ref (or use a temp repo) so that updateState({ architecture: ... }) and
subsequent getState() produce deterministic results regardless of prior
repository state.

Comment on lines +1 to +8

import assert from 'assert';
import { execSync } from 'child_process';
import { updateState, getState } from '../../lib/git-notes-state.js';
import * as memoryOps from '../../lib/memory-ops.js';

const TEST_REF = 'refs/notes/superpowers-test';
process.env.SUPERPOWERS_NOTES_REF = TEST_REF;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# Find git-notes-state.js and examine how REF is defined
fd git-notes-state.js --type f | head -5

Repository: obra/superpowers

Length of output: 82


🏁 Script executed:

# Find the test file and lib structure
fd test-memory-ops.js --type f

Repository: obra/superpowers

Length of output: 91


🏁 Script executed:

# Find memory-ops.js
fd memory-ops.js --type f

Repository: obra/superpowers

Length of output: 109


🏁 Script executed:

# Read git-notes-state.js to see how REF is defined
cat -n lib/git-notes-state.js

Repository: obra/superpowers

Length of output: 1078


🏁 Script executed:

# Read memory-ops.js to see if it depends on REF
cat -n lib/memory-ops.js

Repository: obra/superpowers

Length of output: 3022


🏁 Script executed:

# Read the full test file to understand context
cat -n tests/memory/test-memory-ops.js

Repository: obra/superpowers

Length of output: 3844


🏁 Script executed:

# Verify the import execution order and confirm no code loads REF after import
rg "SUPERPOWERS_NOTES_REF|REF\s*=" lib/git-notes-state.js lib/memory-ops.js tests/memory/test-memory-ops.js

Repository: obra/superpowers

Length of output: 315


Import-before-env-set defeats test isolation—tests run against production ref.

SUPERPOWERS_NOTES_REF is set on line 8, but git-notes-state and memory-ops are imported on lines 4–5. The REF constant in git-notes-state.js is captured at module load time (line 4: const REF = process.env.SUPERPOWERS_NOTES_REF || 'refs/notes/superpowers'). When the test imports execute before the env var is set, REF defaults to the production ref, silently breaking test isolation.

Move the env assignment above the imports, or use dynamic import() after setting the env var:

Proposed fix using dynamic imports
-import assert from 'assert';
-import { execSync } from 'child_process';
-import { updateState, getState } from '../../lib/git-notes-state.js';
-import * as memoryOps from '../../lib/memory-ops.js';
-
 const TEST_REF = 'refs/notes/superpowers-test';
 process.env.SUPERPOWERS_NOTES_REF = TEST_REF;
+
+import assert from 'assert';
+import { execSync } from 'child_process';
+const { updateState, getState } = await import('../../lib/git-notes-state.js');
+const memoryOps = await import('../../lib/memory-ops.js');

Note: This requires the file to be treated as a top-level await context (ESM with Node ≥14.8). Alternatively, move the env assignment into a separate preload script.

🤖 Prompt for AI Agents
In `@tests/memory/test-memory-ops.js` around lines 1 - 8, The test sets
process.env.SUPERPOWERS_NOTES_REF too late—imports for updateState/getState
(from git-notes-state.js) and memory-ops happen before the env var is read into
the REF constant, causing tests to use the production ref; fix by moving the
process.env.SUPERPOWERS_NOTES_REF assignment above the imports in
tests/memory/test-memory-ops.js or, alternatively, set the env then perform
dynamic imports (await import('../../lib/git-notes-state.js') and await
import('../../lib/memory-ops.js')) so git-notes-state.js captures the test REF
at module load time.

psklarkins and others added 5 commits February 13, 2026 22:35
Session start now dispatches a haiku subagent to gather git status,
episodic memory, recall decisions/glossary, and agent mapping. Returns
~500 word summary instead of dumping ~5,000-13,000 tokens of raw tool
output into the main context window.

🤖 Generated with [Amplifier](https://github.com/microsoft/amplifier)

Co-Authored-By: Amplifier <240397093+microsoft-amplifier@users.noreply.github.com>
Spec writing, self-review, and git commit now happen in a separate
context window. Main context receives only file path and status
instead of accumulating ~4,000-9,000 tokens of spec content and
review loop exchanges.

🤖 Generated with [Amplifier](https://github.com/microsoft/amplifier)

Co-Authored-By: Amplifier <240397093+microsoft-amplifier@users.noreply.github.com>
Plans with 5+ tasks are now generated in a separate context window.
Main context receives file path and 200-word summary instead of
accumulating ~12,000-31,000 tokens of plan content. Small plans
(<5 tasks) still generate in-context.

🤖 Generated with [Amplifier](https://github.com/microsoft/amplifier)

Co-Authored-By: Amplifier <240397093+microsoft-amplifier@users.noreply.github.com>
Subagents now return concise summaries (200 lines max for implementers,
100 lines max for reviewers) instead of full file contents. Protects
the main context window from large agent outputs.

🤖 Generated with [Amplifier](https://github.com/microsoft/amplifier)

Co-Authored-By: Amplifier <240397093+microsoft-amplifier@users.noreply.github.com>
Reviewed: All subagent dispatch patterns correct, code fences balanced, output discipline properly placed.
@coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@skills/brainstorming/SKILL.md`:
- Around line 114-132: The Task's subagent spec-writing flow currently tells the
agent to run "git add <file> && git commit -m 'docs: add <topic> design spec'"
and return the file path and git commit hash but doesn't explicitly verify the
commit succeeded; update the instructions for this Task/subagent to (1) run a
verification after committing (e.g., confirm exit status, run git rev-parse HEAD
or git log -1 --pretty=%H and ensure it matches the returned hash, and confirm
the file is tracked via git ls-files or git status --porcelain), (2) only return
the git commit hash and review status after that verification passes, and (3) on
failure return a clear error status and message instead of a hash; reference the
commit command and the returned fields (file path, git commit hash, review
status) so the implementation point is unambiguous.
- Line 64: Replace the incorrect flag-style invocation "recall --path
knowledge_base.decisions" with the positional-argument form expected by the
recall implementation; use "recall knowledge_base.decisions" (or "node
${CLAUDE_PLUGIN_ROOT}/../commands/recall.js knowledge_base.decisions" when
invoking the script directly). Update the SKILL.md line to match the recall
command signature (recall <path>) used by commands/recall.js and consistent with
the examples on lines 28-29.

In `@skills/subagent-driven-development/implementer-prompt.md`:
- Around line 7-8: The fenced code block that begins with "Task tool:" is
missing a language tag; update that block to use a YAML language tag (e.g.,
```yaml) and format the block as valid YAML (include keys like subagent_type and
description) so linters detect the language and the content is properly
structured; locate the fenced block containing the literal "Task tool:" and
change the opening fence to ```yaml and adjust the inner lines to be valid YAML
keys and values (for example include subagent_type: "general" and description:
"Implement Task N: [task name]").

In `@skills/writing-plans/SKILL.md`:
- Around line 18-21: Update the two command examples in SKILL.md so the
CLAUDE_PLUGIN_ROOT variable is quoted to avoid shell word-splitting on paths
with spaces; change the lines that invoke recall.js (the examples containing
node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js knowledge_base.decisions and
node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js knowledge_base.patterns) to use
single quotes around ${CLAUDE_PLUGIN_ROOT} (i.e., node
'${CLAUDE_PLUGIN_ROOT}'/../commands/recall.js ...) so the recall.js invocation
works on platforms with spaces in the plugin path.
🧹 Nitpick comments (5)
skills/subagent-driven-development/spec-reviewer-prompt.md (3)

44-44: Consider more comprehensive persistence verification.

The instruction to check git log -1 verifies a commit exists, but doesn't detect uncommitted or staged changes. Consider adding a git status check to ensure the working directory is clean.

📝 Suggested enhancement
-    - **Verify Persistence:** Check `git log -1` to ensure the work was actually committed.
+    - **Verify Persistence:** Check `git status` for uncommitted changes and `git log -1` to ensure the work was actually committed.

53-56: Clarify the persistence verification methodology.

The persistence check mentions detecting "modified but not committed" files and "old commit[s]", but doesn't specify the exact verification commands. While line 44 mentions git log -1, explicitly mentioning git status here would make the methodology clearer.

📝 Suggested clarification
 **Persistence Check:**
 - Did the implementer actually commit the code?
-- If files are modified but not committed, or if `git log` shows an old commit, **FAIL** the review immediately.
+- Check `git status` for uncommitted changes and `git log -1` for the latest commit.
+- If files are modified but not committed, or if the latest commit is unrelated to this task, **FAIL** the review immediately.

9-9: Optional: Add language identifier to fenced code block.

The fenced code block could specify a language identifier (e.g., yaml, text, or markdown) to satisfy the markdownlint rule and improve syntax highlighting.

📝 Suggested fix
-```
+```yaml
 Task tool (test-coverage):

As per coding guidelines, "Fenced code blocks should have a language specified" (MD040, fenced-code-language).

skills/brainstorming/SKILL.md (1)

96-96: Add language specifier to fenced code block.

The markdown code block starting at line 96 lacks a language specifier. Add markdown after the opening backticks for proper syntax highlighting and linting compliance.

📝 Proposed fix
-```
+```markdown
 ## Agent Allocation
skills/writing-plans/SKILL.md (1)

138-169: Clarify code block syntax with escaped backticks.

The code blocks in the examples use ` ` ` (backticks separated by spaces) instead of standard ``` markdown syntax. This appears to be an attempt to display literal code block delimiters in the documentation.

Consider one of these approaches:

  1. If showing how to write code blocks: Add a note explaining this is escaped for display purposes
  2. If this is a rendering error: Remove the spaces to use standard triple-backtick syntax
  3. Use quadruple backticks to wrap the outer code block if you need to show triple backticks literally
📝 Example: Standard approach for displaying code blocks in markdown

In markdown, to show a code block example that itself contains code blocks, use quadruple backticks for the outer block:

````markdown
```python
def example():
    pass
```
````

Or add an explanatory note:

```markdown
(Note: Replace spaces in ` ` ` with actual triple backticks ```)
```

- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria
- Before proposing a design, use `recall --path knowledge_base.decisions` to review existing architectural constraints.

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify recall command syntax and usage

# Find recall.js implementation
echo "=== Searching for recall.js ==="
fd -t f "recall.js"

# Check for recall usage patterns in the codebase
echo -e "\n=== Checking recall usage patterns ==="
rg -n "recall" --type md -A 2 -B 2

# Also search in other files for recall command
echo -e "\n=== Checking recall in all files ==="
rg -n "recall" -A 1 -B 1 | head -100

Repository: obra/superpowers

Length of output: 10729


🏁 Script executed:

cat -n commands/recall.js | head -20

Repository: obra/superpowers

Length of output: 669


Fix incorrect recall command syntax on line 64.

The syntax recall --path knowledge_base.decisions is incorrect. The recall command only accepts positional arguments, not flags. The implementation in commands/recall.js expects: recall <path>

Use the correct syntax:

  • recall knowledge_base.decisions

Or if using the full path with node:

  • node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js knowledge_base.decisions

This matches the correct usage in lines 28-29 of the same file.

🤖 Prompt for AI Agents
In `@skills/brainstorming/SKILL.md` at line 64, Replace the incorrect flag-style
invocation "recall --path knowledge_base.decisions" with the positional-argument
form expected by the recall implementation; use "recall
knowledge_base.decisions" (or "node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js
knowledge_base.decisions" when invoking the script directly). Update the
SKILL.md line to match the recall command signature (recall <path>) used by
commands/recall.js and consistent with the examples on lines 28-29.

Comment on lines +114 to +132
Task(subagent_type="general-purpose", model="sonnet", description="Write and validate design spec", prompt="
Write a design spec document from the following validated design.

## Validated Design
[paste the complete design text including Agent Allocation table]

## Instructions
1. Write the spec to: docs/superpowers/specs/YYYY-MM-DD-<topic>-design.md
2. Include all sections: Problem, Goal, Changes, Impact, Files Changed, Agent Allocation, Test Plan
3. Self-review against this checklist:
- All requirements from the design are captured
- Agent allocation table is included
- File paths are concrete (not placeholder)
- No ambiguous language ('should', 'could', 'might' replaced with specifics)
- Acceptance criteria are testable
4. Fix any issues found during self-review
5. Commit the spec: git add <file> && git commit -m 'docs: add <topic> design spec'
6. Return: file path, git commit hash, review status (pass/fail), any concerns (MAX 100 words)
")

🛠️ Refactor suggestion | 🟠 Major

Consider adding commit verification to the subagent spec-writing task.

The subagent is instructed to commit the spec (line 130) and return the git commit hash (line 131), but there's no explicit instruction to verify the commit succeeded before returning. While requesting the commit hash implies verification, consider making this explicit.

Based on the PR objectives mentioning "persistence checks" for subagents, this might be an area where the enforcement should be stronger.

🔒 Proposed enhancement for commit verification
   4. Fix any issues found during self-review
   5. Commit the spec: git add <file> && git commit -m 'docs: add <topic> design spec'
-  6. Return: file path, git commit hash, review status (pass/fail), any concerns (MAX 100 words)
+  6. Verify commit succeeded: git log -1 --oneline
+  7. Return: file path, git commit hash, review status (pass/fail), any concerns (MAX 100 words)
+     - If commit failed, return 'COMMIT_FAILED' as status and explain why
 ")
🤖 Prompt for AI Agents
In `@skills/brainstorming/SKILL.md` around lines 114 - 132, The Task's subagent
spec-writing flow currently tells the agent to run "git add <file> && git commit
-m 'docs: add <topic> design spec'" and return the file path and git commit hash
but doesn't explicitly verify the commit succeeded; update the instructions for
this Task/subagent to (1) run a verification after committing (e.g., confirm
exit status, run git rev-parse HEAD or git log -1 --pretty=%H and ensure it
matches the returned hash, and confirm the file is tracked via git ls-files or
git status --porcelain), (2) only return the git commit hash and review status
after that verification passes, and (3) on failure return a clear error status
and message instead of a hash; reference the commit command and the returned
fields (file path, git commit hash, review status) so the implementation point
is unambiguous.

Comment on lines 7 to +8
```
Task tool (general-purpose):
Task tool:

⚠️ Potential issue | 🟡 Minor

Add a language tag to the fenced code block.
MD040: fenced code blocks should specify a language (e.g., ```yaml).

🧩 Proposed fix
-```
+```yaml
 Task tool:
   subagent_type: "general"
   description: "Implement Task N: [task name]"
🤖 Prompt for AI Agents
In `@skills/subagent-driven-development/implementer-prompt.md` around lines 7 - 8,
The fenced code block that begins with "Task tool:" is missing a language tag;
update that block to use a YAML language tag (e.g., ```yaml) and format the
block as valid YAML (include keys like subagent_type and description) so linters
detect the language and the content is properly structured; locate the fenced
block containing the literal "Task tool:" and change the opening fence to
```yaml and adjust the inner lines to be valid YAML keys and values (for example
include subagent_type: "general" and description: "Implement Task N: [task
name]").

Comment on lines +18 to +21
**Before planning:** Recall existing project decisions and patterns to avoid contradicting established architecture:
- `node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js knowledge_base.decisions`
- `node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js knowledge_base.patterns`


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check file structure and CLAUDE_PLUGIN_ROOT usage patterns

# 1. Find recall.js in the repository
echo "=== Searching for recall.js ==="
fd -t f "recall.js"

# 2. Check directory structure relative to SKILL.md location
echo -e "\n=== Directory structure from skills/writing-plans/ ==="
ls -la skills/writing-plans/ 2>/dev/null | head -20

# 3. Check for ../commands from that location
echo -e "\n=== Contents of skills/commands (if it exists) ==="
ls -la skills/commands/ 2>/dev/null || echo "skills/commands/ does not exist"

# 4. Search for CLAUDE_PLUGIN_ROOT usage in the codebase
echo -e "\n=== CLAUDE_PLUGIN_ROOT usage patterns ==="
rg -n "CLAUDE_PLUGIN_ROOT" --type md --type json -C 2

# 5. Check hooks.json for quote patterns around environment variables
echo -e "\n=== Hooks configuration examples ==="
fd -t f "hooks.json" -exec cat {} \;

Repository: obra/superpowers

Length of output: 11080


Add quotes around ${CLAUDE_PLUGIN_ROOT} to handle paths with spaces.

The commands should use single quotes around the variable reference because Claude Code expands variables before passing to the shell. Without quotes, paths with spaces (e.g., Windows: C:\Program Files\...) will break word splitting:

- `node '${CLAUDE_PLUGIN_ROOT}'/../commands/recall.js knowledge_base.decisions`
- `node '${CLAUDE_PLUGIN_ROOT}'/../commands/recall.js knowledge_base.patterns`

The recall.js file exists and CLAUDE_PLUGIN_ROOT is properly documented throughout the plugin, but this quoting pattern is necessary for cross-platform compatibility.

🤖 Prompt for AI Agents
In `@skills/writing-plans/SKILL.md` around lines 18 - 21, Update the two command
examples in SKILL.md so the CLAUDE_PLUGIN_ROOT variable is quoted to avoid shell
word-splitting on paths with spaces; change the lines that invoke recall.js (the
examples containing node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js
knowledge_base.decisions and node ${CLAUDE_PLUGIN_ROOT}/../commands/recall.js
knowledge_base.patterns) to use single quotes around ${CLAUDE_PLUGIN_ROOT}
(i.e., node '${CLAUDE_PLUGIN_ROOT}'/../commands/recall.js ...) so the recall.js
invocation works on platforms with spaces in the plugin path.

Merge upstream obra/superpowers v4.3.0 with our Amplifier integration.

Upstream changes (v4.3.0):
- Brainstorming hard gates prevent skipping design phase
- EnterPlanMode intercept forces brainstorming first
- SessionStart hook now synchronous (fixes race condition)
- writing-plans nested code fence fix (4-backtick)
- stream-json --verbose fix

Conflict resolutions:
- hooks.json: keep our run-hook.cmd wrapper + take async:false fix
- brainstorming: merge hard gates with our Amplifier agent awareness
- writing-plans: keep 4-backtick fix + our Review Tasks section
- RELEASE-NOTES: reorganize with both fork and upstream sections

🤖 Generated with [Amplifier](https://github.com/microsoft/amplifier)

Co-Authored-By: Amplifier <240397093+microsoft-amplifier@users.noreply.github.com>
@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@skills/writing-plans/SKILL.md`:
- Line 96: The fenced code block containing the Task(...) snippet in SKILL.md is
missing a language tag; update the opening triple backtick to include a language
(for example `text`) so the block starts with ```text and leave the
Task(subagent_type="general-purpose", model="sonnet", description="Generate
implementation plan for [feature]", prompt="...") content unchanged.
🧹 Nitpick comments (1)
skills/using-superpowers/SKILL.md (1)

108-109: Consider clarifying vague references for better usability.

Two items could be clearer:

  • Line 108: "Avoid changing the system prompt" - it's unclear how users would control or influence this, or what actions cause system prompt changes.
  • Line 109: "Skill Index in the system prompt" - the location and access method for this index isn't explained, which may confuse users trying to follow this guidance.

Consider adding brief explanations or cross-references to help users understand and apply these recommendations.


For plans with 5+ tasks, delegate the heavy generation work to a subagent to protect the main context window:

```

⚠️ Potential issue | 🟡 Minor

Add a language to the fenced block.

Markdownlint flags the code fence without a language. Please specify one (e.g., text).

🔧 Proposed fix
-```
+```text
 Task(subagent_type="general-purpose", model="sonnet", description="Generate implementation plan for [feature]", prompt="
   You are writing an implementation plan. Follow these instructions EXACTLY.
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 96-96: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@skills/writing-plans/SKILL.md` at line 96, The fenced code block containing
the Task(...) snippet in SKILL.md is missing a language tag; update the opening
triple backtick to include a language (for example `text`) so the block starts
with ```text and leave the Task(subagent_type="general-purpose", model="sonnet",
description="Generate implementation plan for [feature]", prompt="...") content
unchanged.
