
feat(skills): add rules-distill — extract cross-cutting principles from skills into rules#561

Open
shimo4228 wants to merge 5 commits into affaan-m:main from shimo4228:feat/skills/rules-distill

Conversation


@shimo4228 shimo4228 commented Mar 17, 2026

Summary

Adds /rules-distill, a meta-tool that scans installed skills, extracts cross-cutting principles appearing in 2+ skills, and proposes changes to rule files (append, revise, new section, or new file).

This applies the skill-stocktake pattern to rules maintenance — just as skill-stocktake keeps skills healthy, rules-distill keeps rules aligned with the knowledge encoded in skills.

How it works

  1. Phase 1 (Deterministic): Scripts enumerate all skills and rule files
  2. Phase 2 (LLM Judgment): Subagent cross-reads skills + full rules text, extracts candidates with 6-type verdict system
  3. Phase 3 (User Review): Summary table presented; user approves/modifies/skips each candidate
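
Phase 1 can be sketched as a small deterministic scan; the directory layout and JSON field names below are illustrative assumptions, not the PR's exact schema:

```shell
# Minimal sketch of a Phase 1 skill scan: enumerate SKILL.md files and
# emit a JSON index via jq. Field names (path, name, total, skills) are
# assumptions for illustration.
scan_skills() {
  local dir="$1"
  find "$dir" -name 'SKILL.md' -print0 |
    while IFS= read -r -d '' f; do
      # One JSON object per skill; jq --arg escapes paths safely.
      jq -n --arg path "$f" --arg name "$(basename "$(dirname "$f")")" \
        '{path: $path, name: $name}'
    done | jq -s '{total: length, skills: .}'
}
```

The LLM in Phase 2 then receives this index plus the full rule text, so the deterministic and judgment-based steps stay cleanly separated.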

Key design decisions

  • No grep pre-filter: Rules are ~800 lines total — pass full text to LLM for accurate semantic matching (catches concepts expressed in different words)
  • Anti-abstraction safeguard: 3-layer filter prevents overly abstract principles:
    • Must appear in 2+ skills
    • Must be writable as "do X" / "don't do Y" (actionable behavior test)
    • Must have clear violation risk
  • 6 verdict types: Append, Revise, New Section, New File, Already Covered, Too Specific
  • Never auto-modifies rules — always requires user approval

Tested locally

Ran full distillation against 56 skills + 22 rule files. Results:

  • 4 actionable candidates (high confidence)
  • 2 correctly identified as Already Covered
  • 2 correctly identified as Too Specific

Type

  • Skill
  • Command
  • Agent
  • Hook

Testing

  • Tested with Claude Code CLI (Opus 4.6, 1M context)
  • Full scan: 56 skills, 22 rules → 4 actionable candidates
  • Verified Already Covered detection catches semantically equivalent rules (not just keyword matches)
  • Verified Too Specific filter correctly excludes language-specific patterns

Checklist

  • Follows file naming conventions
  • Includes required sections
  • No hardcoded paths or personal information
  • English only

Summary by cubic

Adds /rules-distill, a self-contained tool that scans installed skills, finds principles shared across 2+ skills, and proposes rule updates with user approval. Removes the skill-stocktake dependency by shipping built-in skill and rules scanners.

  • New Features

    • Added skills/rules-distill with a 3-phase workflow and built-in scanners (scripts/scan-skills.sh, scripts/scan-rules.sh).
    • Cross-reads full rules text for semantic matching with six verdicts; never auto-edits.
    • Cross-batch merge enforces the 2+ skills requirement across clusters.
    • Writes results.json with UTC timestamps, kebab-case IDs, and Revise fields (reason/before/after).
  • Bug Fixes

    • Safer bash loops/quoting; removed silent truncation; fixed fenced code blocks.
    • Standardized to RULES_DISTILL_* env vars.
    • Switched scan-rules.sh JSON construction to jq for safe escaping.
    • Removed unused counters and fixed header in scan-skills.sh.
    • Docs cleanup and counts updated in README.md and AGENTS.md to 109 skills and 58 commands.

Written for commit b62cbbf. Summary will update on new commits.
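
The results.json conventions noted above (UTC timestamps, kebab-case IDs) can be sketched as follows; the exact id-derivation rule is an assumption:

```shell
# Derive a kebab-case candidate id from a principle string, plus a UTC
# timestamp in the documented format. How the PR actually derives ids is
# not specified here, so this is only one plausible rule.
kebab_id() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//'
}
stamp=$(date -u +%Y-%m-%dT%H:%M:%SZ)
```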

Summary by CodeRabbit

  • Documentation

    • Added comprehensive Rules Distill workflow docs (three-phase process, prompts, output formats, approval model) and a new command description for the feature.
    • Updated AGENTS.md and README to reflect updated counts and a new MCP configs entry in project structure.
  • New Features

    • Added command-line scanning tools to enumerate and summarize installed rules and skills, emitting structured JSON summaries and metrics for review.

…om skills into rules

Applies the skill-stocktake pattern to rules maintenance:
scan skills → extract shared principles → propose rule changes.

Key design decisions:
- Deterministic collection (scan scripts) + LLM judgment (cross-read & verdict)
- 6 verdict types: Append, Revise, New Section, New File, Already Covered, Too Specific
- Anti-abstraction safeguard: 2+ skills evidence, actionable behavior test, violation risk
- Rules full text passed to LLM (no grep pre-filter) for accurate matching
- Never modifies rules automatically — always requires user approval

coderabbitai bot commented Mar 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new rules-distill command doc and a Rules Distill skill with documentation plus two new scanner scripts that enumerate rules and skills, extract metadata/frontmatter and file metrics, and emit structured JSON for downstream cross-reading and human review. Also updates docs counts and project structure entries.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Command metadata: `commands/rules-distill.md` | New command doc with YAML front matter describing the rules-distill workflow and linking to the skill. |
| Skill documentation: `skills/rules-distill/SKILL.md` | New SKILL.md describing the three-phase distillation workflow, detailed prompts, output schema, verdict taxonomy, and user-approval model (documentation only). |
| Rule scanner script: `skills/rules-distill/scripts/scan-rules.sh` | New Bash script enumerating Markdown rule files (excludes _archived), extracting H2 headings and line counts, resolving HOME paths, and emitting a JSON index; includes strict shell options and error handling. |
| Skill scanner script: `skills/rules-distill/scripts/scan-skills.sh` | New Bash script scanning global and project skill Markdown files, extracting single-line frontmatter (name, description), computing UTC mtimes, validating project paths, and emitting combined JSON with scan summary and merged skills list. |
| Docs updates: `AGENTS.md`, `README.md` | Updated counts (skills +1, commands +1) and added a project structure entry; documentation-only content changes. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant RulesScanner as "scan-rules.sh"
  participant SkillsScanner as "scan-skills.sh"
  participant LLM as "Cross-read LLM"
  participant User as "Reviewer"
  participant Store as "results.json"

  RulesScanner->>SkillsScanner: provide rules index (JSON)
  SkillsScanner->>LLM: send skill metadata and file metrics
  LLM->>LLM: cross-read skills + rules, extract candidate rules & verdicts
  LLM->>User: present candidate summaries and drafts
  User->>LLM: approve/modify/skip candidates
  User->>Store: persist final statuses, metadata, and report
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested reviewers

  • affaan-m

Poem

🐇 I hop through headings, lines counted bright,
I gather names and times by moonlit byte.
Scripts hum JSON, LLMs think through the night,
You click approve — rules ready for flight. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 50.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title clearly and specifically describes the main feature added: a rules-distill tool that extracts cross-cutting principles from skills into rules. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



greptile-apps bot commented Mar 17, 2026

Greptile Summary

This PR adds /rules-distill, a meta-skill that scans installed skills, extracts cross-cutting principles appearing in 2+ skills, and proposes targeted changes to rule files — following the same "deterministic collection + LLM judgment" pattern used by skill-stocktake. It ships two self-contained bash scanning scripts (scan-skills.sh, scan-rules.sh), a 264-line SKILL.md specifying the 3-phase workflow, a thin commands/rules-distill.md entry-point, and updated counts in AGENTS.md and README.md.

Key concerns:

  • The main workflow gap between Phase 1 and Phase 2 is undocumented: scan-skills.sh emits only metadata (name, description, mtime) yet Phase 2's subagent prompt requires {full text of skills in this batch}. Without an explicit step instructing the agent to read actual SKILL.md file contents, an LLM executing this skill may substitute the metadata JSON as the "full text," leading to low-quality or empty extraction.
  • scan-rules.sh's early-exit error message (line 14) still uses raw shell string interpolation with $RULES_DIR, while the main JSON output was correctly hardened with jq --arg. Any special character in the path (e.g., a user-supplied RULES_DISTILL_DIR with a double-quote) produces malformed JSON on stderr.
  • scan-skills.sh uses date -u -r "$file" which is not available on busybox date (Alpine-based Docker/CI), potentially breaking the scan in container environments.
  • Previously-noted issues from earlier review threads (scanning all *.md instead of SKILL.md only; duplicate skills when running from $HOME) remain open in the current codebase.
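
The Phase 1→2 gap flagged above could be bridged with an explicit read step, sketched here; the `.skills[].path` field name is an assumption about the scan output:

```shell
# Bridge the metadata-to-full-text gap: given the Phase 1 JSON index,
# concatenate the actual SKILL.md contents for a batch so the subagent
# receives real skill text rather than metadata alone.
batch_full_text() {
  echo "$1" | jq -r '.skills[].path' | while IFS= read -r p; do
    printf '=== %s ===\n' "$p"   # header so the LLM can attribute evidence
    cat "$p"
  done
}
```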

Confidence Score: 3/5

  • Safe to merge with caveats — the skill is LLM-driven and never auto-modifies rules, but a workflow documentation gap could undermine Phase 2 extraction quality, and two shell script issues should be fixed before wide distribution.
  • The core safety guarantee (never auto-modifies rules, always requires user approval) is solid. The bash scripts have been significantly hardened compared to earlier iterations (proper jq --arg usage, safe while IFS= read -r -d '' loops). However, the undocumented Phase 1→2 transition — where metadata-only scan output is expected to become full skill text — is a functional gap that would silently degrade extraction quality. The scan-rules.sh line 14 JSON injection and the date -r portability issue are additional correctness concerns. Several issues flagged in previous review rounds also remain unresolved.
  • skills/rules-distill/SKILL.md (Phase 1→2 transition gap), skills/rules-distill/scripts/scan-rules.sh (line 14 JSON injection), skills/rules-distill/scripts/scan-skills.sh (all-.md scan, home-dir deduplication, date -r portability)

Important Files Changed

| Filename | Overview |
|---|---|
| `skills/rules-distill/SKILL.md` | Core 264-line workflow spec for the 3-phase distillation process; contains a documentation gap between Phase 1 (metadata-only scan output) and Phase 2 (requires full skill file text), which could lead to low-quality subagent extraction if not bridged explicitly by the executing agent. |
| `skills/rules-distill/scripts/scan-rules.sh` | Rules directory scanner that correctly uses jq --arg for all main JSON output, but the early-exit error message on line 14 still uses raw shell string interpolation with $RULES_DIR, creating a JSON injection vector in the stderr path. |
| `skills/rules-distill/scripts/scan-skills.sh` | Skills directory scanner with well-structured frontmatter extraction and safe JSON via jq --arg; scans all *.md files (not just SKILL.md), has no deduplication guard when GLOBAL_DIR equals CWD_SKILLS_DIR, and uses date -r which is absent on busybox/Alpine. |
| `commands/rules-distill.md` | Thin command entry-point that delegates entirely to the rules-distill skill; correctly structured with frontmatter, and does not hardcode paths or env vars. |
| `AGENTS.md` | Updated skill and command counts (108→109 skills, 57→58 commands) to match the new addition; straightforward doc-only change. |
| `README.md` | Updated count strings in three locations (install banner, comparison table header); consistent with the AGENTS.md update and the new skill/command being added. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant U as User
    participant A as Claude Agent
    participant SS as scan-skills.sh
    participant SR as scan-rules.sh
    participant SA as Subagent (per batch)
    participant FS as Rule Files

    U->>A: /rules-distill
    A->>SS: bash scan-skills.sh
    SS-->>A: JSON (name, description, mtime per skill)
    A->>SR: bash scan-rules.sh
    SR-->>A: JSON (path, file, headings per rule file)
    A->>U: Phase 1 summary (N skills, M rules)

    Note over A: Phase 2 — batching
    loop Per thematic cluster
        A->>SA: full skill text + full rules text
        SA-->>A: candidates[] with verdicts
    end
    A->>A: Cross-batch merge & deduplicate

    A->>U: Distillation Report (summary table + details)
    U->>A: Approve / Modify / Skip per candidate

    loop Per approved candidate
        A->>FS: Append / Revise / New Section / New File
    end

    A->>A: Save results.json (UTC timestamp, kebab-case IDs)
    A-->>U: Results saved
```

Last reviewed commit: "fix(skills): address..."


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
skills/rules-distill/SKILL.md (2)

93-101: Add language identifier to code block.

The JSON structure at lines 93-101 is missing a language identifier. Adding json to the code fence improves syntax highlighting and clarity.

✨ Suggested fix
````diff
 ## Output Format (per candidate)

+```json
 {
   "principle": "1-2 sentences in 'do X' / 'don't do Y' form",
````
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/SKILL.md` around lines 93 - 101, The fenced code block
containing the JSON example (keys like "principle", "evidence",
"violation_risk", "verdict", "target_rule", "confidence", "draft") in SKILL.md
is missing a language identifier; update the opening code fence to "```json" so
the block reads as a JSON code block to enable proper syntax highlighting and
clarity.

19-184: Consider grouping phases under "How It Works" section.

The three phases (Phase 1, 2, 3) describe how the skill works, but they're not explicitly grouped under a "How It Works" heading. While the current structure is clear, adding an explicit "How It Works" parent section would improve compliance with the skill documentation guidelines.

♻️ Suggested restructure
```diff
+## How It Works
+
+The rules distillation process follows three phases:
+
 ## Phase 1: Inventory (Deterministic Collection)

 ### 1a. Collect skill inventory
```

As per coding guidelines: Skills should be formatted as Markdown with clear sections for When to Use, How It Works, and Examples.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/SKILL.md` around lines 19 - 184, The document SKILL.md
is missing a top-level "How It Works" parent heading that groups the existing
Phase 1: Inventory, Phase 2: Cross-read, Match & Verdict, and Phase 3: User
Review & Execution sections; add a "## How It Works" (or appropriate H2) above
the "Phase 1" heading and indent/organize the three phase headings (Phase 1:
Inventory, Phase 2: Cross-read, Match & Verdict, Phase 3: User Review &
Execution) under that section so the skill follows the required "When to Use /
How It Works / Examples" structure while keeping all existing content and
headings intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@skills/rules-distill/SKILL.md`:
- Line 167: The README currently hardcodes the results path as
`~/.claude/skills/rules-distill/results.json`; replace that hardcoded path with
a configurable/reference placeholder and document how to resolve it (e.g., use
an environment variable or resolved home directory). Update the SKILL.md line
that contains the literal `~/.claude/skills/rules-distill/results.json` to show
a configurable token like `${CLAUDE_HOME}/skills/rules-distill/results.json` or
instruct users to use their home directory (e.g., expand with os.homedir or
$HOME) and add a short note on how the application determines the final path
(env var name or config key).
- Line 35: Replace the hardcoded path in the shell snippet "find ~/.claude/rules
-name '*.md' -not -path '*/_archived/*' | while read f; do" with a configurable
variable (e.g. "${CLAUDE_RULES_DIR:-$HOME/.claude/rules}") so callers can
override it; update the snippet to use that variable in the find invocation and
ensure any README or comments document the new CLAUDE_RULES_DIR env var and its
default.
- Line 26: Replace the hardcoded path "bash
~/.claude/skills/skill-stocktake/scripts/scan.sh" with a configurable reference:
update the invocation in SKILL.md to use either a relative path like
"./scripts/scan.sh" (if the README sits at repo root) or an environment variable
placeholder such as "$SKILL_DIR/scripts/scan.sh" (and document setting
SKILL_DIR), and remove the tilde expansion; ensure the example shows the generic
form and that any CI or startup scripts set the env var if needed.
- Around line 1-191: Add a new "Examples" Markdown section immediately after the
"Design Principles" heading that includes: a complete end-to-end run showing the
command used to start rules-distill and the Phase 1 banner (e.g., "Rules
Distillation — Phase 1: Inventory" with counts), representative sample outputs
for Phase 2 candidate JSON entries (principle, evidence, violation_risk,
verdict, draft) and the Phase 3 "Rules Distillation Report" summary table, plus
a short example of the user approval interaction (e.g., "User: Approve 1, Modify
2") — ensure headings mirror existing ones (Phase 1/Phase 2/Phase 3, "Rules
Distillation Report") and include "See skill: [name]" links in draft examples to
comply with the Design Principles.

---

Nitpick comments:
In `@skills/rules-distill/SKILL.md`:
- Around line 93-101: The fenced code block containing the JSON example (keys
like "principle", "evidence", "violation_risk", "verdict", "target_rule",
"confidence", "draft") in SKILL.md is missing a language identifier; update the
opening code fence to "```json" so the block reads as a JSON code block to
enable proper syntax highlighting and clarity.
- Around line 19-184: The document SKILL.md is missing a top-level "How It
Works" parent heading that groups the existing Phase 1: Inventory, Phase 2:
Cross-read, Match & Verdict, and Phase 3: User Review & Execution sections; add
a "## How It Works" (or appropriate H2) above the "Phase 1" heading and
indent/organize the three phase headings (Phase 1: Inventory, Phase 2:
Cross-read, Match & Verdict, Phase 3: User Review & Execution) under that
section so the skill follows the required "When to Use / How It Works /
Examples" structure while keeping all existing content and headings intact.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 001b4032-f29a-4b10-836c-5f925443dc47

📥 Commits

Reviewing files that changed from the base of the PR and between fce4513 and b5326c5.

📒 Files selected for processing (2)
  • commands/rules-distill.md
  • skills/rules-distill/SKILL.md


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 2 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="skills/rules-distill/SKILL.md">

<violation number="1" location="skills/rules-distill/SKILL.md:100">
P3: The output schema doesn’t define fields for Revise “reason” and “before/after” text, yet the Verdict Reference requires them. This mismatch leaves Revise proposals underspecified for Phase 3 review.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Fixes raised by CodeRabbit, Greptile, and cubic:

- Add Prerequisites section documenting skill-stocktake dependency
- Add fallback command when skill-stocktake is not installed
- Fix shell quoting: add IFS= and -r to while-read loops
- Replace hardcoded paths with env var placeholders ($CLAUDE_RULES_DIR, $SKILL_STOCKTAKE_DIR)
- Add json language identifier to code blocks
- Add "How It Works" parent heading for Phase 1/2/3
- Add "Example" section with end-to-end run output
- Add revision.reason/before/after fields to output schema for Revise verdict
- Document timestamp format (date -u +%Y-%m-%dT%H:%M:%SZ)
- Document candidate-id format (kebab-case from principle)
- Use concrete examples in results.json schema
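
The shell-quoting fix listed above (`IFS=` and `-r` on while-read loops) amounts to this pattern; the directory variable is illustrative:

```shell
# NUL-delimited enumeration with IFS= and -r: paths containing spaces,
# leading whitespace, or backslashes survive intact, which a plain
# `for f in $(find ...)` loop would mangle.
list_markdown() {
  local dir="$1"
  while IFS= read -r -d '' f; do
    printf '%s\n' "$f"
  done < <(find "$dir" -name '*.md' -print0)
}
```

The process substitution (`< <(...)`) rather than a pipe keeps the loop in the current shell, so any counters it updates remain visible afterwards.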

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@skills/rules-distill/SKILL.md`:
- Line 13: Rename the Markdown section heading "When to Activate" to "When to
Use" in SKILL.md so the skill follows the documentation standard; locate the
heading string "When to Activate" and replace it with "When to Use", ensuring
the rest of the Markdown structure (other sections like "How It Works" and
"Examples") remains unchanged.
- Line 41: The SKILL discovery fallback uses the wrong env var name—replace
CLAUDE_RULES_DIR with a correctly named CLAUDE_SKILLS_DIR in the find command so
it searches "${CLAUDE_SKILLS_DIR:-$HOME/.claude/skills}" for SKILL.md and other
.md files; update any references to CLAUDE_RULES_DIR in the SKILL.md discovery
logic to CLAUDE_SKILLS_DIR to keep semantics consistent (look for the find
invocation that mentions SKILL.md and CLAUDE_RULES_DIR).
- Line 21: The Prerequisites line in SKILL.md contains a hardcoded path
(~/.claude/skills/skill-stocktake/scripts/scan.sh); update the Prerequisites
section so it does not reference a fixed filesystem location — either remove the
explicit path or replace it with an environment-variable placeholder (e.g.,
$SKILL_STOCKTAKE_SCRIPT or $CLAUDE_SKILLS_DIR) and describe how to set that
variable or fall back to the find command used in the same sentence; edit the
sentence mentioning "skill-stocktake" and the scan script accordingly to
reference the placeholder or the alternative manual enumeration command.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8fdb1476-73e8-49e1-b8bc-9da8eeebb0c8

📥 Commits

Reviewing files that changed from the base of the PR and between b5326c5 and c3e3ae6.

📒 Files selected for processing (1)
  • skills/rules-distill/SKILL.md


@cubic-dev-ai cubic-dev-ai bot left a comment


3 issues found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="skills/rules-distill/SKILL.md">

<violation number="1" location="skills/rules-distill/SKILL.md:41">
P2: Fallback inventory command points at the rules directory, matches arbitrary `.md` files, and truncates with `head -100`, so Phase 1 can scan the wrong tree and produce a non‑exhaustive/incorrect skill inventory.</violation>

<violation number="2" location="skills/rules-distill/SKILL.md:70">
P2: Batch-local 2+ filtering is specified without a global cross-batch merge step, which can drop valid cross-cutting principles split across clusters and undermine the stated exhaustive collection goal.</violation>

<violation number="3" location="skills/rules-distill/SKILL.md:107">
P2: Nested triple-backtick fences inside the subagent prompt will terminate the outer fence early; use a longer outer fence (e.g., ````) to keep the prompt intact.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

…ripts

Address P1 review feedback:
- Add scan-skills.sh and scan-rules.sh directly in rules-distill/scripts/
  (no external dependency on skill-stocktake)
- Remove Prerequisites section (no longer needed)
- Add cross-batch merge step to prevent 2+ skills requirement
  from being silently broken across batch boundaries
- Fix nested triple-backtick fences (use quadruple backticks)
- Remove head -100 cap (silent truncation)
- Rename "When to Activate" → "When to Use" (ECC standard)
- Remove unnecessary env var placeholders (SKILL.md is a prompt, not a script)

@cubic-dev-ai cubic-dev-ai bot left a comment


6 issues found across 3 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="skills/rules-distill/SKILL.md">

<violation number="1" location="skills/rules-distill/SKILL.md:60">
P2: Cross-batch promotion is unreachable because per-batch extraction still suppresses singleton-in-batch candidates, leaving nothing for merge to promote.</violation>
</file>

<file name="skills/rules-distill/scripts/scan-skills.sh">

<violation number="1" location="skills/rules-distill/scripts/scan-skills.sh:100">
P2: GNU-specific `date -r "$file"` will fail on macOS/BSD (expects epoch seconds), and `set -e` will abort the scan. Use a portable stat/date fallback for file mtimes.</violation>

<violation number="2" location="skills/rules-distill/scripts/scan-skills.sh:103">
P2: Observation count matching breaks for file paths containing spaces because `uniq -c` output is split on whitespace and `awk` compares only field 2 to the full path, causing `use_7d`/`use_30d` to silently return 0 for such files.</violation>
</file>

<file name="skills/rules-distill/scripts/scan-rules.sh">

<violation number="1" location="skills/rules-distill/scripts/scan-rules.sh:30">
P2: `sort -z` is a GNU-only option; on macOS/BSD `sort` rejects `-z`, causing the pipeline to fail and the script to exit under `set -euo pipefail`. This makes the script non-portable for common developer environments.</violation>

<violation number="2" location="skills/rules-distill/scripts/scan-rules.sh:43">
P2: The script always rewrites paths as home-relative, which corrupts paths when RULES_DIR is outside $HOME (e.g., "~//tmp/rules/foo.md"), so the emitted JSON path no longer identifies the scanned file.</violation>

<violation number="3" location="skills/rules-distill/scripts/scan-rules.sh:67">
P2: JSON output is assembled with unescaped string interpolation, so headings or filenames containing quotes/backslashes will produce invalid JSON.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
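
The unescaped-interpolation finding (violation 3 on scan-rules.sh) could be addressed with jq-built entries, sketched here; the `path`/`headings` field names mirror the review's description of the output and are assumptions about the real schema:

```shell
# Build one rule entry entirely with jq so quotes and backslashes in
# headings or filenames are escaped rather than interpolated raw.
emit_rule_json() {
  local path="$1"; shift
  # jq -R wraps each heading line as a JSON string; -s collects them.
  printf '%s\n' "$@" | jq -R . | jq -s --arg path "$path" \
    '{path: $path, headings: .}'
}
```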


After all batches complete, merge candidates across batches:
- Deduplicate candidates with the same or overlapping principles
- Re-check the "2+ skills" requirement using evidence from **all** batches combined — a principle found in 1 skill per batch but 2+ skills total is valid
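
The merge step described above can be sketched as a jq pass over per-batch outputs; the batch file layout and field names are assumptions:

```shell
# Combine evidence per principle across all batch outputs, then re-apply
# the 2+ skills requirement on the merged evidence — so a principle seen
# in one skill per batch but two skills overall is promoted, not dropped.
merge_batches() {
  jq -s '
    flatten
    | group_by(.principle)
    | map({principle: .[0].principle,
           evidence: (map(.evidence[]) | unique)})
    | map(select((.evidence | length) >= 2))
  ' "$@"
}
```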

@cubic-dev-ai cubic-dev-ai bot Mar 17, 2026


P2: Cross-batch promotion is unreachable because per-batch extraction still suppresses singleton-in-batch candidates, leaving nothing for merge to promote.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/rules-distill/SKILL.md, line 60:

<comment>Cross-batch promotion is unreachable because per-batch extraction still suppresses singleton-in-batch candidates, leaving nothing for merge to promote.</comment>

<file context>
@@ -71,11 +53,17 @@ Extraction and matching are unified in a single pass. Rules files are small enou
+
+After all batches complete, merge candidates across batches:
+- Deduplicate candidates with the same or overlapping principles
+- Re-check the "2+ skills" requirement using evidence from **all** batches combined — a principle found in 1 skill per batch but 2+ skills total is valid
+
 #### Subagent Prompt
</file context>

```shell
local name desc mtime u7 u30 dp
name=$(extract_field "$file" "name")
desc=$(extract_field "$file" "description")
mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ)
```

@cubic-dev-ai cubic-dev-ai bot Mar 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2: GNU-specific date -r "$file" will fail on macOS/BSD (expects epoch seconds), and set -e will abort the scan. Use a portable stat/date fallback for file mtimes.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/rules-distill/scripts/scan-skills.sh, line 100:

<comment>GNU-specific `date -r "$file"` will fail on macOS/BSD (expects epoch seconds), and `set -e` will abort the scan. Use a portable stat/date fallback for file mtimes.</comment>

<file context>
@@ -0,0 +1,168 @@
+    local name desc mtime u7 u30 dp
+    name=$(extract_field "$file" "name")
+    desc=$(extract_field "$file" "description")
+    mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ)
+    # Use awk exact field match to avoid substring false-positives from grep -F.
+    # uniq -c output format: "   N /path/to/file" — path is always field 2.
</file context>
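
One portable fallback for the `date -r` concern above, trying GNU `stat`, then BSD `stat`, then the two `date` dialects (a sketch, not the PR's actual fix):

```shell
# Portable UTC mtime: GNU stat -c %Y, falling back to BSD stat -f %m for
# the epoch; then GNU/BusyBox `date -d @epoch`, falling back to BSD
# `date -r epoch`. Avoids GNU-only `date -r FILE` entirely.
mtime_utc() {
  local epoch
  epoch=$(stat -c %Y "$1" 2>/dev/null || stat -f %m "$1") || return 1
  date -u -d "@$epoch" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null \
    || date -u -r "$epoch" +%Y-%m-%dT%H:%M:%SZ
}
```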

```shell
files=()
while IFS= read -r -d '' f; do
  files+=("$f")
done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print0 | sort -z)
```

@cubic-dev-ai cubic-dev-ai bot Mar 17, 2026


P2: sort -z is a GNU-only option; on macOS/BSD sort rejects -z, causing the pipeline to fail and the script to exit under set -euo pipefail. This makes the script non-portable for common developer environments.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/rules-distill/scripts/scan-rules.sh, line 30:

<comment>`sort -z` is a GNU-only option; on macOS/BSD `sort` rejects `-z`, causing the pipeline to fail and the script to exit under `set -euo pipefail`. This makes the script non-portable for common developer environments.</comment>

<file context>
@@ -0,0 +1,72 @@
+files=()
+while IFS= read -r -d '' f; do
+  files+=("$f")
+done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print0 | sort -z)
+
+total=${#files[@]}
</file context>
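For reference, the NUL-delimited `sort -z` can be avoided entirely when filenames are known not to contain newlines (a safe assumption for repo-managed rule files). A minimal portable sketch, not the script's exact code:

```shell
# Minimal portable sketch: sort newline-delimited find output, which works
# with both GNU and BSD sort. Assumes no newlines in filenames, which is
# reasonable for repo-managed .md rule files. LC_ALL=C gives a stable
# bytewise order on both platforms.
list_rules_sorted() {
  local dir="$1"
  find "$dir" -type f -name '*.md' -not -path '*/_archived/*' | LC_ALL=C sort
}
```

When populating the array, reading via process substitution (`while IFS= read -r f; do files+=("$f"); done < <(list_rules_sorted "$RULES_DIR")`) rather than piping into `while` keeps the `files` array in the parent shell.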

for i in "${!files[@]}"; do
file="${files[$i]}"
rel_path="${file#"$HOME"/}"
rel_path="~/$rel_path"

@cubic-dev-ai cubic-dev-ai bot Mar 17, 2026


P2: The script always rewrites paths as home-relative, which corrupts paths when RULES_DIR is outside $HOME (e.g., "~//tmp/rules/foo.md"), so the emitted JSON path no longer identifies the scanned file.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/rules-distill/scripts/scan-rules.sh, line 43:

<comment>The script always rewrites paths as home-relative, which corrupts paths when RULES_DIR is outside $HOME (e.g., "~//tmp/rules/foo.md"), so the emitted JSON path no longer identifies the scanned file.</comment>

<file context>
@@ -0,0 +1,72 @@
+for i in "${!files[@]}"; do
+  file="${files[$i]}"
+  rel_path="${file#"$HOME"/}"
+  rel_path="~/$rel_path"
+
+  # Extract H2 headings (## Title)
</file context>
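A hedged sketch of the guard described above: abbreviate to `~` only when the file actually lives under `$HOME`, and pass other paths through unchanged. The function name is illustrative, not the script's actual code:

```shell
# Hedged sketch: only rewrite as home-relative when the file is under
# $HOME; paths outside it (e.g. /tmp/rules/foo.md) are emitted intact,
# so the JSON path always identifies the scanned file.
display_path() {
  local file="$1"
  case "$file" in
    "$HOME"/*) printf '~/%s\n' "${file#"$HOME"/}" ;;
    *)         printf '%s\n' "$file" ;;
  esac
}
```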


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@skills/rules-distill/scripts/scan-rules.sh`:
- Around line 13-15: Replace manual string concatenation JSON prints in
scan-rules.sh (e.g., the RULES_DIR existence error block and the rule-emission
logic around functions that use basename and extracted headings) with jq-driven
JSON construction: use jq -n --arg for each string value (RULES_DIR, basename
output, each extracted heading) and --argjson where needed so all strings are
properly escaped, emit the error object via jq -n --arg path "$RULES_DIR"
'{"error":"rules directory not found","path":$path}', and similarly build each
rule object with jq -n --arg name "$basename" --arg heading "$heading"
'{name:$name,heading:$heading,...}' instead of echoing concatenated strings;
apply the same jq approach to the blocks covering lines ~35-68 that produce rule
objects.
- Around line 28-30: The use of sort -z in the find pipeline is not portable
(BSD/macos sort lacks -z) and causes the script to fail under set -euo pipefail;
update the pipeline that builds the files array (the while IFS= read -r -d '' f;
do files+=("$f"); done < <(find "$RULES_DIR" -name '*.md' -not -path
'*/_archived/*' -print0 | sort -z)) to a portable approach: remove sort -z and
instead convert NULs to newlines, sort normally, then read lines into the files
array (e.g., use find ... -print0 | tr '\0' '\n' | sort | while IFS= read -r f;
do files+=("$f"); done) so RULES_DIR scanning and ordering work on macOS and
Linux.

In `@skills/rules-distill/scripts/scan-skills.sh`:
- Around line 87-92: The jq selectors in obs_7d_counts and obs_30d_counts still
look for top-level .tool and .path but the current observation schema nests
skill metadata under .skill and only provides timestamp + skill.id/skill.path;
update both jq invocations that read from $OBSERVATIONS (variables obs_7d_counts
and obs_30d_counts) to filter by timestamp and emit the nested skill path (or
id) instead, e.g. change the filter to 'select(.timestamp >= $c and .skill.path
!= null) | .skill.path' (or use .skill.id where you index by id) so the counts
reflect current observations and still pipe through sort | uniq -c as before.
- Line 119: The scan currently inventories any Markdown under "$dir" by using
find "$dir" -name "*.md", which picks up non-canonical files; change the find
invocation to only match SKILL.md (e.g., find "$dir" -type f -name "SKILL.md")
so the script only counts canonical skill records; update the pipeline that
feeds the while/read loop (the line using done < <(find "$dir" -name "*.md" ...
| sort)) to use the new find pattern and keep the sort/redirect behavior intact.
- Around line 47-52: Create a portable file_mtime_utc() helper to replace the
incorrect use of date -r with a file path: use stat (e.g., stat -c %Y or stat -f
%m) to get the file's epoch mtime, then pass that epoch to the
platform-appropriate date command (date -u -r EPOCH on macOS/BSD or date -u -d
"@EPOCH" on GNU) to produce the UTC ISO timestamp; update callers that
previously used date -u -r "$file" to call file_mtime_utc "$file" and keep the
existing date_ago() helper unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5339a74d-f52d-4a0e-a6ab-4db64b15cfee

📥 Commits

Reviewing files that changed from the base of the PR and between c3e3ae6 and 525acd8.

📒 Files selected for processing (3)
  • skills/rules-distill/SKILL.md
  • skills/rules-distill/scripts/scan-rules.sh
  • skills/rules-distill/scripts/scan-skills.sh
🚧 Files skipped from review as they are similar to previous changes (1)
  • skills/rules-distill/SKILL.md

Comment on lines +28 to +30
while IFS= read -r -d '' f; do
files+=("$f")
done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print0 | sort -z)


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Does BSD/macOS sort support the -z option for NUL-delimited input?

💡 Result:

No, the BSD/macOS sort command does not support the -z option for NUL-delimited input. This option is a GNU coreutils extension for using NUL as the line delimiter instead of newline. macOS uses a BSD-derived sort (Darwin sort), which lacks -z/--zero-terminated. Official man pages for macOS/Darwin (e.g., from unix.com/osx, ss64.com/mac, keith.github.io/xcode-man-pages) list options like -b, -d, -f, -i, -M, -n, -r, -k, -t, -o, -s, -S, -T, -u, but no -z. These pages show GNU-like options inconsistently, but primary BSD sources (OpenBSD man.openbsd.org/sort.1, FreeBSD) confirm -z only in some modern BSDs like OpenBSD/FreeBSD, not Darwin/macOS. Secondary sites (amazingalgorithms.com) claim -z, but contradicted by authoritative sources and lack of mention in Apple docs or recent release notes (developer.apple.com/macos-sonoma-14). Users needing NUL-delimited sorting on macOS often install GNU coreutils via Homebrew (gsort -z) or use workarounds, as noted in Stack Overflow/Super User discussions on macOS sort limitations (e.g., no -V, implying non-GNU). To verify locally: man sort on macOS shows no -z; sort --help or sort -z errors "illegal option".



🏁 Script executed:

cat -n skills/rules-distill/scripts/scan-rules.sh | head -35

Repository: affaan-m/everything-claude-code

Length of output: 1099


sort -z makes this scanner incompatible with macOS.

The script uses set -euo pipefail, which causes it to exit immediately when sort -z fails. On macOS, BSD sort does not support the -z (NUL-delimited) option—this is a GNU coreutils extension. The script will fail before scanning any rules on macOS systems. Switch to a portable ordering strategy (e.g., sort with a different approach, or GNU coreutils via Homebrew) or remove the sort entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-rules.sh` around lines 28 - 30, The use of
sort -z in the find pipeline is not portable (BSD/macos sort lacks -z) and
causes the script to fail under set -euo pipefail; update the pipeline that
builds the files array (the while IFS= read -r -d '' f; do files+=("$f"); done <
<(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print0 | sort -z))
to a portable approach: remove sort -z and instead convert NULs to newlines,
sort normally, then read lines into the files array (e.g., use find ... -print0
| tr '\0' '\n' | sort | while IFS= read -r f; do files+=("$f"); done) so
RULES_DIR scanning and ordering work on macOS and Linux.

Comment on lines +47 to +52
# Get UTC timestamp N days ago (supports both macOS and GNU date)
date_ago() {
local n="$1"
date -u -v-"${n}d" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
date -u -d "${n} days ago" +%Y-%m-%dT%H:%M:%SZ
}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

On macOS/BSD date, does date -r accept a file path, or does -r expect epoch seconds?

💡 Result:

On macOS and BSD systems like FreeBSD, the date -r command does not accept a file path directly as the argument to -r. The -r option expects epoch seconds (number of seconds since the Unix epoch, January 1, 1970 UTC). It can be specified in decimal, octal, or hexadecimal. Some man pages (e.g., FreeBSD and secondary sources like SS64) show syntax allowing -r filename to print the file's last modification time, but official macOS man pages (e.g., unix.com/osx) describe -r seconds only, without mentioning files. Stack Exchange discussions confirm that on FreeBSD/macOS date, -r requires a numeric timestamp; providing a filename causes an error like "illegal time specification" because it fails to parse as a number. To display a file's modification time on macOS/BSD, use stat -f %Sm -t "%Y-%m-%d %H:%M:%S" or ls -l instead.



🏁 Script executed:

cat -n skills/rules-distill/scripts/scan-skills.sh | sed -n '95,105p'

Repository: affaan-m/everything-claude-code

Length of output: 711


🏁 Script executed:

grep -n "date.*-r" skills/rules-distill/scripts/scan-skills.sh

Repository: affaan-m/everything-claude-code

Length of output: 130


Fix the macOS portability issue with date -r.

Line 100 uses date -u -r "$file" which fails on macOS/BSD—the -r flag expects epoch seconds, not a file path. Add a portable file_mtime_utc() helper that uses stat to extract the epoch first, then passes it to the OS-appropriate date command.

Portable helper
 # Get UTC timestamp N days ago (supports both macOS and GNU date)
 date_ago() {
   local n="$1"
   date -u -v-"${n}d" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
   date -u -d "${n} days ago" +%Y-%m-%dT%H:%M:%SZ
 }
+
+file_mtime_utc() {
+  local file="$1" epoch
+  epoch=$(stat -f %m "$file" 2>/dev/null || stat -c %Y "$file")
+  date -u -r "$epoch" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null ||
+  date -u -d "@$epoch" +%Y-%m-%dT%H:%M:%SZ
+}
-    mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ)
+    mtime=$(file_mtime_utc "$file")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-skills.sh` around lines 47 - 52, Create a
portable file_mtime_utc() helper to replace the incorrect use of date -r with a
file path: use stat (e.g., stat -c %Y or stat -f %m) to get the file's epoch
mtime, then pass that epoch to the platform-appropriate date command (date -u -r
EPOCH on macOS/BSD or date -u -d "@EPOCH" on GNU) to produce the UTC ISO
timestamp; update callers that previously used date -u -r "$file" to call
file_mtime_utc "$file" and keep the existing date_ago() helper unchanged.

Comment on lines +87 to +92
obs_7d_counts=$(jq -r --arg c "$c7" \
'select(.tool=="Read" and .timestamp>=$c) | .path' \
"$OBSERVATIONS" 2>/dev/null | sort | uniq -c)
obs_30d_counts=$(jq -r --arg c "$c30" \
'select(.tool=="Read" and .timestamp>=$c) | .path' \
"$OBSERVATIONS" 2>/dev/null | sort | uniq -c)


⚠️ Potential issue | 🟠 Major

The usage rollups still target the old observation schema.

scripts/lib/skill-improvement/observations.js:51-74 emits timestamp plus skill.id / skill.path; there is no top-level .tool or .path in the current schema. As written, these selectors never match current observations, so every emitted skill ends up with use_7d: 0 and use_30d: 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-skills.sh` around lines 87 - 92, The jq
selectors in obs_7d_counts and obs_30d_counts still look for top-level .tool and
.path but the current observation schema nests skill metadata under .skill and
only provides timestamp + skill.id/skill.path; update both jq invocations that
read from $OBSERVATIONS (variables obs_7d_counts and obs_30d_counts) to filter
by timestamp and emit the nested skill path (or id) instead, e.g. change the
filter to 'select(.timestamp >= $c and .skill.path != null) | .skill.path' (or
use .skill.id where you index by id) so the counts reflect current observations
and still pipe through sort | uniq -c as before.

'{path:$path,name:$name,description:$description,use_7d:$use_7d,use_30d:$use_30d,mtime:$mtime}' \
> "$tmpdir/$i.json"
i=$((i+1))
done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)


⚠️ Potential issue | 🟠 Major

Only inventory canonical skill files.

find "$dir" -name "*.md" will treat any extra Markdown bundled with a skill as another skill record. That skews the inventory and can create false positives for the "appears in 2+ skills" check. Restrict this scan to SKILL.md.

🎯 Narrow the scan target
-  done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)
+  done < <(find "$dir" -name 'SKILL.md' -type f 2>/dev/null | sort)

Based on learnings, the standard convention is to use SKILL.md as the canonical skill file inside each skill folder.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-skills.sh` at line 119, The scan currently
inventories any Markdown under "$dir" by using find "$dir" -name "*.md", which
picks up non-canonical files; change the find invocation to only match SKILL.md
(e.g., find "$dir" -type f -name "SKILL.md") so the script only counts canonical
skill records; update the pipeline that feeds the while/read loop (the line
using done < <(find "$dir" -name "*.md" ... | sort)) to use the new find pattern
and keep the sort/redirect behavior intact.


@leegonzales leegonzales left a comment


Servitor Review — PR #561

Good work here, Shimo. This is a well-designed meta-tool that follows the project's formats and conventions. Let me give you the full picture.

What's right

  • Clean, focused PR: One skill, one command, two support scripts. This is how PRs should look.
  • Proper formatting: Frontmatter on both files is correct. File naming follows conventions.
  • Required sections present: When to Use, How It Works, Example — all accounted for.
  • Smart design: The 3-phase workflow (deterministic collection → LLM judgment → user review) is solid. The anti-abstraction safeguard (2+ skills, actionable behavior test, violation risk) is exactly the kind of thoughtfulness that prevents rules from becoming abstract noise.
  • Never auto-modifies rules: This is the right call. Always requiring user approval keeps humans in the loop.
  • Security: No hardcoded paths or secrets. set -euo pipefail in scripts. Cleanup traps. Defensive.

Things to address

  1. scan-skills.sh env var naming: The script uses SKILL_STOCKTAKE_GLOBAL_DIR and SKILL_STOCKTAKE_PROJECT_DIR env var names, but this is the rules-distill skill, not skill-stocktake. Should use RULES_DISTILL_GLOBAL_DIR / RULES_DISTILL_PROJECT_DIR for clarity. The scan-rules.sh already uses RULES_DISTILL_DIR — be consistent across both scripts.

  2. scan-skills.sh carries unnecessary scope: Lines 64–100 implement observation counting (use_7d, use_30d) from observations.jsonl. Rules-distill only needs skill names, descriptions, and content for cross-reading — not usage metrics. Consider trimming this to keep the script focused on what rules-distill actually needs.

  3. Header comment inconsistency: scan-skills.sh header says # scan.sh — should say # scan-skills.sh.

  4. scan-rules.sh JSON construction: Headings are embedded directly into echo'd JSON strings without escaping. A heading containing " or \ would produce invalid JSON. Consider using jq for JSON construction (as scan-skills.sh already does) or at minimum escape special characters.
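The jq-based escaping in item 4 can be sketched minimally; `$name` and `$heading` here are illustrative variable names, not the script's exact ones:

```shell
# Hedged sketch: build one rule object with jq -n --arg so quotes and
# backslashes in headings are escaped correctly, instead of echoing
# concatenated strings into the JSON output.
emit_rule_json() {
  local name="$1" heading="$2"
  jq -n --arg name "$name" --arg heading "$heading" \
    '{name: $name, headings: [$heading]}'
}
```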

CI failures

All 3 CI failures are pre-existing on main — your PR doesn't introduce any new ones:

  • Lint: buildTemplateVariables unused, provenance unused (both in main's code)
  • Test: orchestrator slug validation, catalog counts (both known issues since heartbeat 3)

You're clean here.

Verdict

This is a good contribution. Address the 4 items above and it's ready. Hit it. ⭐


— Servitor (Christopher Pike), Keeper of everything-claude-code

rules-distill added 1 skill + 1 command:
- skills: 108 → 109
- commands: 57 → 58

Updates all count references to pass CI catalog validation.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@AGENTS.md`:
- Line 3: The skills count in AGENTS.md is inconsistent: the header block ("This
is a **production-ready AI coding plugin**..." which currently states "109
skills") does not match the Project Structure entry ("skills/ — 102..."); verify
the actual number of skill modules in the repo and update both places in
AGENTS.md so they match (either change the header to 102 or update the Project
Structure entry to 109), ensuring the exact numeric value is used in the header
sentence and the "skills/ — ..." line under Project Structure.

In `@README.md`:
- Line 194: Update the README so the counts are consistent: find the sentence "✨
**That's it!** You now have access to 25 agents, 109 skills, and 58 commands."
and update the corresponding entries in the "Cross-Tool Feature Parity" table
(Claude Code counts and any other cells showing agent/skill/command totals) to
match 25/109/58; ensure any other mentions of those numbers in the document
(table captions, badges, or summary lines) are changed to the same values so the
README is one source of truth.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b423ad14-28a3-44c8-925e-82a8fd0d628c

📥 Commits

Reviewing files that changed from the base of the PR and between 525acd8 and a68714f.

📒 Files selected for processing (2)
  • AGENTS.md
  • README.md

# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** providing 25 specialized agents, 108 skills, 57 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 25 specialized agents, 109 skills, 58 commands, and automated hook workflows for software development.


⚠️ Potential issue | 🟡 Minor

Resolve internal count mismatch in AGENTS.md.

Line 3 now says 109 skills, but the Project Structure section still lists skills/ — 102... (Line 138 in this file). Please align both values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` at line 3, The skills count in AGENTS.md is inconsistent: the
header block ("This is a **production-ready AI coding plugin**..." which
currently states "109 skills") does not match the Project Structure entry
("skills/ — 102..."); verify the actual number of skill modules in the repo and
update both places in AGENTS.md so they match (either change the header to 102
or update the Project Structure entry to 109), ensuring the exact numeric value
is used in the header sentence and the "skills/ — ..." line under Project
Structure.

```

✨ **That's it!** You now have access to 25 agents, 108 skills, and 57 commands.
✨ **That's it!** You now have access to 25 agents, 109 skills, and 58 commands.


⚠️ Potential issue | 🟡 Minor

Synchronize count updates across all README sections.

Line 194 was updated to 25/109/58, but the later "Cross-Tool Feature Parity" table still shows older Claude Code counts. Please update those values too so the README has one consistent source of truth.

📝 Proposed doc fix
-| **Agents** | 21 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
-| **Commands** | 52 | Shared | Instruction-based | 31 |
-| **Skills** | 102 | Shared | 10 (native format) | 37 |
+| **Agents** | 25 | Shared (AGENTS.md) | Shared (AGENTS.md) | 12 |
+| **Commands** | 58 | Shared | Instruction-based | 31 |
+| **Skills** | 109 | Shared | 16 (native format) | 37 |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` at line 194, Update the README so the counts are consistent: find
the sentence "✨ **That's it!** You now have access to 25 agents, 109 skills, and
58 commands." and update the corresponding entries in the "Cross-Tool Feature
Parity" table (Claude Code counts and any other cells showing
agent/skill/command totals) to match 25/109/58; ensure any other mentions of
those numbers in the document (table captions, badges, or summary lines) are
changed to the same values so the README is one source of truth.


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="README.md">

<violation number="1" location="README.md:1047">
P2: README catalog totals are internally inconsistent: updated Claude Code counts (109 skills/58 commands) conflict with older values still shown in the later cross-tool parity table.</violation>
</file>

<file name="AGENTS.md">

<violation number="1" location="AGENTS.md:3">
P2: AGENTS.md now reports two different skill totals (109 vs 102), leaving the catalog counts internally inconsistent after this PR.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

| Commands | ✅ 57 commands | ✅ 31 commands | **Claude Code leads** |
| Skills | ✅ 108 skills | ✅ 37 skills | **Claude Code leads** |
| Commands | ✅ 58 commands | ✅ 31 commands | **Claude Code leads** |
| Skills | ✅ 109 skills | ✅ 37 skills | **Claude Code leads** |

@cubic-dev-ai cubic-dev-ai bot Mar 18, 2026


P2: README catalog totals are internally inconsistent: updated Claude Code counts (109 skills/58 commands) conflict with older values still shown in the later cross-tool parity table.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At README.md, line 1047:

<comment>README catalog totals are internally inconsistent: updated Claude Code counts (109 skills/58 commands) conflict with older values still shown in the later cross-tool parity table.</comment>

<file context>
@@ -1043,8 +1043,8 @@ The configuration is automatically detected from `.opencode/opencode.json`.
-| Commands | ✅ 57 commands | ✅ 31 commands | **Claude Code leads** |
-| Skills | ✅ 108 skills | ✅ 37 skills | **Claude Code leads** |
+| Commands | ✅ 58 commands | ✅ 31 commands | **Claude Code leads** |
+| Skills | ✅ 109 skills | ✅ 37 skills | **Claude Code leads** |
 | Hooks | ✅ 8 event types | ✅ 11 events | **OpenCode has more!** |
 | Rules | ✅ 29 rules | ✅ 13 instructions | **Claude Code leads** |
</file context>

# Everything Claude Code (ECC) — Agent Instructions

This is a **production-ready AI coding plugin** providing 25 specialized agents, 108 skills, 57 commands, and automated hook workflows for software development.
This is a **production-ready AI coding plugin** providing 25 specialized agents, 109 skills, 58 commands, and automated hook workflows for software development.

@cubic-dev-ai cubic-dev-ai bot Mar 18, 2026


P2: AGENTS.md now reports two different skill totals (109 vs 102), leaving the catalog counts internally inconsistent after this PR.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At AGENTS.md, line 3:

<comment>AGENTS.md now reports two different skill totals (109 vs 102), leaving the catalog counts internally inconsistent after this PR.</comment>

<file context>
@@ -1,6 +1,6 @@
 # Everything Claude Code (ECC) — Agent Instructions
 
-This is a **production-ready AI coding plugin** providing 25 specialized agents, 108 skills, 57 commands, and automated hook workflows for software development.
+This is a **production-ready AI coding plugin** providing 25 specialized agents, 109 skills, 58 commands, and automated hook workflows for software development.
 
 ## Core Principles
</file context>

1. Rename SKILL_STOCKTAKE_* env vars to RULES_DISTILL_* for consistency
2. Remove unnecessary observation counting (use_7d/use_30d) from scan-skills.sh
3. Fix header comment: scan.sh → scan-skills.sh
4. Use jq for JSON construction in scan-rules.sh to properly escape
   headings containing special characters (", \)
@ecc-tools

ecc-tools bot commented Mar 18, 2026

Analyzing 5000 commits...

@ecc-tools

ecc-tools bot commented Mar 18, 2026

Analysis Failed

Not Found - https://docs.github.com/rest/git/refs#get-a-reference

Troubleshooting

| Cause | Resolution |
| --- | --- |
| Large repository | Analysis may timeout on repos with extensive history |
| API rate limits | Wait 15 minutes before retrying |
| Network issues | Queue timeout is 15 minutes; retry may succeed |
| Permissions | Verify app has Contents: Read access |

Retry: /ecc-tools analyze



@shimo4228

Addressed all 4 items from the Servitor review in b62cbbf:

  1. Env var naming: SKILL_STOCKTAKE_* → RULES_DISTILL_*
  2. Unnecessary scope: Removed observation counting (use_7d/use_30d, date_ago, count_obs) from scan-skills.sh — rules-distill only needs name, description, path, mtime
  3. Header comment: # scan.sh → # scan-skills.sh
  4. JSON escaping: Replaced raw echo JSON construction in scan-rules.sh with jq -n --arg to properly escape headings containing " or \

Note: CI failures (ESLint no-unused-vars in tmux-worktree-orchestrator.js and skill-dashboard.test.js) are pre-existing on main — not introduced by this PR.

RULES_DIR="${RULES_DISTILL_DIR:-${1:-$HOME/.claude/rules}}"

if [[ ! -d "$RULES_DIR" ]]; then
echo '{"error":"rules directory not found","path":"'"$RULES_DIR"'"}' >&2


P2 JSON injection in error-path output

The error message on this line still uses raw shell string interpolation for $RULES_DIR:

echo '{"error":"rules directory not found","path":"'"$RULES_DIR"'"}' >&2

If RULES_DISTILL_DIR (which controls $RULES_DIR) contains a double-quote, backslash, or newline, the emitted JSON is malformed. The main output (lines 40–57) was correctly hardened with jq --arg, but this early-exit path was missed. Any tooling or log parser that reads stderr as JSON will silently fail or crash.

Suggested change
echo '{"error":"rules directory not found","path":"'"$RULES_DIR"'"}' >&2
jq -n --arg path "$RULES_DIR" '{error:"rules directory not found",path:$path}' >&2

Comment on lines +62 to +71
#### Subagent Prompt

Launch a general-purpose Agent with the following prompt:

````
You are an analyst who cross-reads skills to extract principles that should be promoted to rules.

## Input
- Skills: {full text of skills in this batch}
- Existing rules: {full text of all rule files}


P2 Phase 2 subagent prompt requires full skill text, but Phase 1 only collects metadata

Phase 1 runs scan-skills.sh, which produces a JSON object containing only name, description, and mtime for each skill — not the full file content. Phase 2 then asks the agent to launch a subagent with Skills: {full text of skills in this batch}, which requires the actual content of every skill file.

There is no documented step between Phase 1 and Phase 2 that reads the actual SKILL.md files. An LLM following these instructions literally might substitute the metadata JSON as the "full text," making the extraction in Phase 2 ineffective (the LLM would see only names and one-line descriptions, not the rich technique sections needed for principle extraction).

Consider adding an explicit bridging step — e.g., after running scan-skills.sh, instruct the agent to read each SKILL.md file referenced in the output before constructing the subagent prompt:

#### 1d. Read full skill content
For each skill listed in the scan output, read the full SKILL.md file. Pass the
combined content (not the JSON metadata) as {full text of skills in this batch}
to the Phase 2 subagent prompt.
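The bridging step above could be sketched in shell; the `.skills[].path` shape is an assumption about the scan-skills.sh output, not verified here:

```shell
# Hedged sketch of a bridging step: read every skill file path from the
# Phase 1 scan output and emit the concatenated full text, suitable for
# substitution into the Phase 2 subagent prompt.
collect_skill_text() {
  local scan_json="$1"
  jq -r '.skills[].path' "$scan_json" | while IFS= read -r p; do
    printf '=== %s ===\n' "$p"
    cat "$p"
  done
}
```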


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="skills/rules-distill/scripts/scan-rules.sh">

<violation number="1" location="skills/rules-distill/scripts/scan-rules.sh:36">
P2: Heading extraction now exits the script when a markdown file has no `##` matches due to `set -euo pipefail` + missing `|| true` around `grep`.</violation>

<violation number="2" location="skills/rules-distill/scripts/scan-rules.sh:56">
P2: Rules are reassembled from lexicographically globbed temp files, which reorders entries for 10+ files (`10.json` before `2.json`).</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

jq -n \
--arg dir "$RULES_DIR" \
--argjson total "$total" \
--argjson rules "$(jq -s '.' "$tmpdir"/*.json)" \

@cubic-dev-ai cubic-dev-ai bot Mar 18, 2026


P2: Rules are reassembled from lexicographically globbed temp files, which reorders entries for 10+ files (10.json before 2.json).

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/rules-distill/scripts/scan-rules.sh, line 56:

<comment>Rules are reassembled from lexicographically globbed temp files, which reorders entries for 10+ files (`10.json` before `2.json`).</comment>

<file context>
@@ -16,57 +16,43 @@ if [[ ! -d "$RULES_DIR" ]]; then
+  jq -n \
+    --arg dir "$RULES_DIR" \
+    --argjson total "$total" \
+    --argjson rules "$(jq -s '.' "$tmpdir"/*.json)" \
+    '{rules_dir:$dir,total:$total,rules:$rules}'
+fi
</file context>

rel_path="~/$rel_path"

# Extract H2 headings (## Title) into a JSON array via jq
headings_json=$(grep -E '^## ' "$file" 2>/dev/null | sed 's/^## //' | jq -R . | jq -s '.')

@cubic-dev-ai bot Mar 18, 2026


P2: Heading extraction now exits the script when a markdown file has no ## matches due to set -euo pipefail + missing || true around grep.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At skills/rules-distill/scripts/scan-rules.sh, line 36:

<comment>Heading extraction now exits the script when a markdown file has no `##` matches due to `set -euo pipefail` + missing `|| true` around `grep`.</comment>

<file context>
@@ -16,57 +16,43 @@ if [[ ! -d "$RULES_DIR" ]]; then
-    headings="$headings\"$heading\""
-  done < <(grep -E '^## ' "$file" 2>/dev/null || true)
+  # Extract H2 headings (## Title) into a JSON array via jq
+  headings_json=$(grep -E '^## ' "$file" 2>/dev/null | sed 's/^## //' | jq -R . | jq -s '.')
 
   # Get line count
</file context>
Suggested change
-  headings_json=$(grep -E '^## ' "$file" 2>/dev/null | sed 's/^## //' | jq -R . | jq -s '.')
+  headings_json=$({ grep -E '^## ' "$file" 2>/dev/null || true; } | sed 's/^## //' | jq -R . | jq -s '.')
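The failure mode is easy to reproduce outside the script. A minimal sketch (assuming jq is installed; the sample file is invented) comparing the bare pipeline with the guarded one:

```shell
#!/bin/bash
set -euo pipefail

tmp=$(mktemp)
printf '# Title\nno H2 headings here\n' > "$tmp"

# Bare pipeline: grep exits 1 on zero matches; with pipefail that status
# becomes the pipeline's status. Run it in an if to observe the failure
# without aborting this demo.
if headings_json=$(grep -E '^## ' "$tmp" | sed 's/^## //' | jq -R . | jq -s '.'); then
  echo "bare pipeline succeeded"
else
  echo "bare pipeline failed"
fi

# Guarded pipeline: || true masks grep's status, jq -s turns the empty
# stream into [], and the script keeps going.
headings_json=$({ grep -E '^## ' "$tmp" 2>/dev/null || true; } | sed 's/^## //' | jq -R . | jq -s '.')
echo "guarded: $headings_json"

rm -f "$tmp"
```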


@coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (4)
skills/rules-distill/scripts/scan-rules.sh (2)

13-15: ⚠️ Potential issue | 🟠 Major

Use jq for the error object serialization.

Line 14 still builds JSON via string concatenation; paths containing " or \ can emit invalid JSON.

Proposed fix
-if [[ ! -d "$RULES_DIR" ]]; then
-  echo '{"error":"rules directory not found","path":"'"$RULES_DIR"'"}' >&2
+if [[ ! -d "$RULES_DIR" ]]; then
+  jq -n --arg path "$RULES_DIR" '{"error":"rules directory not found","path":$path}' >&2
   exit 1
 fi
Repro script (jq fails to parse the concatenated string when the path contains a double quote):
#!/bin/bash
set -euo pipefail
RULES_DIR='/tmp/rules-with-"quote'
echo '{"error":"rules directory not found","path":"'"$RULES_DIR"'"}' | jq . >/dev/null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-rules.sh` around lines 13 - 15, Replace the
manual JSON string construction that uses RULES_DIR with a safe jq-based
serialization: when the directory check in scan-rules.sh (the if [[ ! -d
"$RULES_DIR" ]] branch) fails, invoke jq -n with --arg to set "error" and "path"
fields so paths with quotes/backslashes are escaped correctly, and write the jq
output to stderr before exiting; update the echo/exit sequence to use this
jq-generated JSON instead of concatenating strings.

20-22: ⚠️ Potential issue | 🟠 Major

Avoid GNU-only sort -z in a cross-platform scanner.

Line 22 uses sort -z, which is not supported by BSD/macOS sort and can hard-fail under set -euo pipefail.

Portable fix (newline-delimited)
-while IFS= read -r -d '' f; do
+while IFS= read -r f; do
   files+=("$f")
-done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print0 | sort -z)
+done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print | LC_ALL=C sort)
Does BSD/macOS `sort` support the `-z` (`--zero-terminated`) option?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-rules.sh` around lines 20 - 22, The loop
uses GNU-only sort -z which breaks on BSD/macOS; replace the null-terminated
pipeline with a newline-delimited one: run find "$RULES_DIR" -name '*.md' -not
-path '*/_archived/*' -print | sort and change the read loop to while IFS= read
-r f; do files+=("$f"); done < <( ... ); this keeps the use of RULES_DIR and the
files array and avoids sort -z while preserving the sorted list of markdown
files for the rest of the script.
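Assembled, the portable collection loop might look like this — a sketch; the directory layout in the demo is made up:

```shell
#!/bin/bash
set -euo pipefail

RULES_DIR=$(mktemp -d)
mkdir -p "$RULES_DIR/_archived"
: > "$RULES_DIR/b-style.md"
: > "$RULES_DIR/a-security.md"
: > "$RULES_DIR/_archived/old.md"

files=()
# Newline-delimited output + LC_ALL=C sort behaves identically on GNU
# and BSD userlands; the trade-off is that filenames must not contain
# newlines, which is safe for rule files.
while IFS= read -r f; do
  files+=("$f")
done < <(find "$RULES_DIR" -name '*.md' -not -path '*/_archived/*' -print | LC_ALL=C sort)

printf '%s\n' "${files[@]}"
rm -rf "$RULES_DIR"
```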
skills/rules-distill/scripts/scan-skills.sh (2)

71-71: ⚠️ Potential issue | 🟠 Major

Restrict scan target to canonical SKILL.md files.

Line 71 inventories every Markdown file, which can include non-skill docs and distort skill counts used by rules-distill.

Proposed fix
-  done < <(find "$dir" -name "*.md" -type f 2>/dev/null | sort)
+  done < <(find "$dir" -type f -name "SKILL.md" 2>/dev/null | sort)

Based on learnings, the canonical skill file in this repository is SKILL.md inside each skill folder.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-skills.sh` at line 71, The loop currently
feeds every Markdown file into the scanner via the find invocation (the line
using done < <(find "$dir" -name "*.md" ...)), which pulls non-canonical docs;
change the find invocation to only match canonical SKILL.md files (e.g., use
-name "SKILL.md" or a path filter matching "*/SKILL.md") so the script scans
only each skill's SKILL.md. Update the find call inside scan-skills.sh (the
pipeline feeding the while/read loop) to restrict results to SKILL.md and keep
the existing sort and error-redirection.

60-60: ⚠️ Potential issue | 🟠 Major

Make mtime extraction portable across GNU/BSD date.

Line 60 uses date -u -r "$file" with a file path; GNU date accepts a file argument for -r (--reference), but BSD/macOS date expects epoch seconds there, so the call fails outside GNU environments.

Proposed fix
+file_mtime_utc() {
+  local file="$1" epoch
+  epoch=$(stat -f %m "$file" 2>/dev/null || stat -c %Y "$file")
+  date -u -r "$epoch" +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -d "@$epoch" +%Y-%m-%dT%H:%M:%SZ
+}
+
 ...
-    mtime=$(date -u -r "$file" +%Y-%m-%dT%H:%M:%SZ)
+    mtime=$(file_mtime_utc "$file")
On macOS/BSD `date`, does `date -r` accept a file path or epoch seconds?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-skills.sh` at line 60, Replace the
non-portable mtime extraction with a small portable routine: detect which
stat/date flags are available and compute mtime by first getting the file's
epoch seconds (try stat -c %Y "$file" for GNU, fallback to stat -f %m "$file"
for BSD) into a variable (e.g., secs) and then format it to UTC ISO8601 using
date with the appropriate option (use date -u -d "@$secs" +%Y-%m-%dT%H:%M:%SZ
for GNU date, fallback to date -u -r "$secs" +%Y-%m-%dT%H:%M:%SZ for BSD);
update the code that sets mtime=$(...) to call this logic (you can implement as
a helper function like get_mtime and replace the mtime=$(date -u -r "$file"
+%Y-%m-%dT%H:%M:%SZ) line).


📥 Commits

Reviewing files that changed from the base of the PR and between a68714f and b62cbbf.

📒 Files selected for processing (2)
  • skills/rules-distill/scripts/scan-rules.sh
  • skills/rules-distill/scripts/scan-skills.sh

Comment on lines +32 to +33
rel_path="${file#"$HOME"/}"
rel_path="~/$rel_path"

⚠️ Potential issue | 🟡 Minor

Fix home-relative path rewriting for non-$HOME files.

Lines 32-33 produce malformed values like ~//tmp/... when RULES_DIR is outside $HOME (common in tests via RULES_DISTILL_DIR).

Proposed fix
-  rel_path="${file#"$HOME"/}"
-  rel_path="~/$rel_path"
+  if [[ "$file" == "$HOME/"* ]]; then
+    rel_path="~/${file#"$HOME"/}"
+  else
+    rel_path="$file"
+  fi
🧰 Tools
🪛 Shellcheck (0.11.0)

[warning] 33-33: Tilde does not expand in quotes. Use $HOME.

(SC2088)
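The conditional form packaged as a helper — a sketch; `display_path` and the sample paths are invented for illustration:

```shell
#!/bin/bash
set -euo pipefail

# Rewrite to ~/... only when the file actually lives under $HOME;
# anything else (e.g. /tmp fixtures used by tests) passes through.
display_path() {
  local file="$1"
  if [[ "$file" == "$HOME/"* ]]; then
    printf '~/%s\n' "${file#"$HOME"/}"
  else
    printf '%s\n' "$file"
  fi
}

display_path "$HOME/rules/example.md"
display_path "/tmp/rules-fixture/example.md"
```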

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-rules.sh` around lines 32 - 33, The current
rel_path rewriting always prefixes "~/" which yields malformed "~//..." when
$file is not under $HOME; change scan-rules.sh to only rewrite to a
home-relative path when the file actually starts with $HOME (check using the
$file value and $HOME), e.g. if $file begins with "$HOME/" then set
rel_path="${file#"$HOME"/}" and rel_path="~/$rel_path", otherwise leave rel_path
equal to the original $file; update the logic around the rel_path assignment in
the script (the lines where rel_path is computed) so non-$HOME paths are not
rewritten.

Comment on lines +35 to +36
# Extract H2 headings (## Title) into a JSON array via jq
headings_json=$(grep -E '^## ' "$file" 2>/dev/null | sed 's/^## //' | jq -R . | jq -s '.')

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
tmp="$(mktemp)"
printf '# Title\nNo H2 here\n' > "$tmp"
( grep -E '^## ' "$tmp" | sed 's/^## //' | jq -R . | jq -s '.' ) >/dev/null
rm -f "$tmp"

Repository: affaan-m/everything-claude-code

Length of output: 57


🏁 Script executed:

# Check if the file exists and view the relevant lines
head -40 skills/rules-distill/scripts/scan-rules.sh | tail -10



🏁 Script executed:

# Test the awk-based fix under pipefail
#!/bin/bash
set -euo pipefail
tmp="$(mktemp)"
printf '# Title\nNo H2 here\n' > "$tmp"

# Test the proposed awk fix
result=$(awk '/^## /{sub(/^## /,""); print}' "$tmp" | jq -R . | jq -s '.')
echo "awk fix result: $result"
echo "Exit code: $?"

rm -f "$tmp"



🏁 Script executed:

# Also test with a file that HAS H2 headings to make sure the fix works correctly
#!/bin/bash
set -euo pipefail
tmp="$(mktemp)"
printf '# Title\n## Heading One\n## Heading Two\nSome text\n' > "$tmp"

echo "Testing with file that has H2 headings:"
result=$(awk '/^## /{sub(/^## /,""); print}' "$tmp" | jq -R . | jq -s '.')
echo "Result: $result"

rm -f "$tmp"



grep exits 1 when no matches are found, causing the pipeline to fail under pipefail.

Line 36 terminates the script for files without ## headings: grep returns exit code 1, and that status becomes the pipeline's status because the script sets set -euo pipefail.

Proposed fix
-  headings_json=$(grep -E '^## ' "$file" 2>/dev/null | sed 's/^## //' | jq -R . | jq -s '.')
+  headings_json=$(awk '/^## /{sub(/^## /,""); print}' "$file" | jq -R . | jq -s '.')

The awk alternative handles files without matches gracefully (returning [] instead of failing) while producing identical output for files with headings.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@skills/rules-distill/scripts/scan-rules.sh` around lines 35 - 36, Replace the
failing grep pipeline used to build headings_json with an awk-based extraction
that doesn't exit nonzero when no matches are found: locate the assignment to
headings_json (the line using grep -E '^## ' "$file" ... | jq -R . | jq -s '.')
and change it to invoke awk to emit the matched title text (stripping the
leading "## ") so the rest of the pipeline (jq -R . | jq -s '.') can run
successfully and produce [] for files without headings; ensure you update the
same variable name headings_json and preserve the jq steps so behavior for files
with headings remains identical.
