diff --git a/.cursor/commands/bug-fix.md b/.agents/commands/bug-fix.md similarity index 74% rename from .cursor/commands/bug-fix.md rename to .agents/commands/bug-fix.md index 5ecc88a..d75bd46 100644 --- a/.cursor/commands/bug-fix.md +++ b/.agents/commands/bug-fix.md @@ -1,8 +1,5 @@ # Bug Fix Command -あなたは Bun + TypeScript のコードベースのバグ修正のスペシャリストです。 -現在起きている error や bug をソースコードや context から解釈し、その問題を最小限の修正で解決することが mission です。 - ## Steps 1. まず以下のerrorをstackをよく確認し、問題となるfile, codeを直接読み込める場合はコンテキストとして読み込み具体的な原因を特定してください。 diff --git a/.cursor/commands/check-simirality.md b/.agents/commands/check-simirality.md similarity index 95% rename from .cursor/commands/check-simirality.md rename to .agents/commands/check-simirality.md index d191397..5a082a6 100644 --- a/.cursor/commands/check-simirality.md +++ b/.agents/commands/check-simirality.md @@ -1,7 +1,4 @@ -# Check Similarity Command - -You are a expert of codebase similarity detection. -Your mission is to detect duplicate code in the codebase and provide a refactoring plan. +# similarity-ts: AI Assistant Guide ## Purpose diff --git a/.cursor/commands/commit.md b/.agents/commands/commit.md similarity index 97% rename from .cursor/commands/commit.md rename to .agents/commands/commit.md index b122d55..2709ecf 100644 --- a/.cursor/commands/commit.md +++ b/.agents/commands/commit.md @@ -1,7 +1,5 @@ ### Command: Commit current changes in logical groups (simple) -You are a expert of Git commit message. - Do exactly this, non-interactively, from repo root. 1. Ignore when staging: diff --git a/.agents/commands/final-check.md b/.agents/commands/final-check.md new file mode 100644 index 0000000..92e900c --- /dev/null +++ b/.agents/commands/final-check.md @@ -0,0 +1,8 @@ +# Final Check Command + +## Steps + +1. lint, format, typecheckを実行し、error, warningが出ていないことをしっかりと確認してください。 +2. error, warningが出ている場合はその根本原因を冷静に特定し、その原因を解決するための最小限の修正を行ってください。 +3. 修正が完了したら再度lint, format, typecheckを実行し、error, warningが出ていないことを確認してください。 +4. 
error, warningがなくなるまで2, 3を繰り返してください。 diff --git a/.cursor/commands/refactor.md b/.agents/commands/refactor.md similarity index 66% rename from .cursor/commands/refactor.md rename to .agents/commands/refactor.md index c0c778b..a9dfede 100644 --- a/.cursor/commands/refactor.md +++ b/.agents/commands/refactor.md @@ -4,7 +4,7 @@ --- -## プロジェクト規約(厳守) +プロジェクト規約(厳守) - 対象は Bun + TypeScript。コードは src/ に集約し、co-location を優先。ファイル乱立を避ける。 - 公開 API の互換を維持。破壊的変更が不可避な場合は移行シムと非推奨注記を同時に用意。 @@ -14,53 +14,53 @@ --- -## コーディングルール(適用指針) +コーディングルール(適用指針) -- FP: 純粋関数優先/不変更新/副作用分離/型安全。 -- DDD: 値オブジェクト/エンティティの区別、集約で整合性保証、リポジトリで永続化抽象化、境界付けられたコンテキスト意識。 -- TDD: Red-Green-Refactor、小さな反復、テストを仕様として扱う。 +FP: 純粋関数優先/不変更新/副作用分離/型安全。 +DDD: 値オブジェクト/エンティティの区別、集約で整合性保証、リポジトリで永続化抽象化、境界付けられたコンテキスト意識。 +TDD: Red-Green-Refactor、小さな反復、テストを仕様として扱う。 -## 型とパターン +型とパターン -```ts -type Branded = T & { _brand: B }; +type Branded = T & { \_brand: B }; type Result = { ok: true; value: T } | { ok: false; error: E }; // 値オブジェクトは不変・自己検証・ドメイン操作を持つ -``` -## リポジトリ/アダプタ: ドメインのみを扱い外部依存を抽象化。テスト用インメモリ実装を用意。 +リポジトリ/アダプタ: ドメインのみを扱い外部依存を抽象化。テスト用インメモリ実装を用意。 -## 準備(健康チェック) +--- + +準備(健康チェック) - 型: bun run typecheck - Lint: bun run lint - テスト: bun run test --coverage - デッドコード(任意): bunx ts-prune -p tsconfig.json -## 解析ステップ(similarity) +--- + +解析ステップ(similarity) 目的: 重複/類似コードを検出し、「影響度 = lines × similarity」で優先度付け。 実行 -```bash similarity-ts ./src --threshold 0.80 --min-lines 8 --cross-file --print -``` -## 必要に応じて部分重複を深掘り +# 必要に応じて部分重複を深掘り -```bash similarity-ts ./src --experimental-overlap \ --threshold 0.75 --overlap-min-window 8 --overlap-max-window 25 \ --overlap-size-tolerance 0.25 --print -``` -## 分析 +分析 - 出力を重複グループ単位に集計し、similarity(%)/lines/priority=lines×similarity を算出。 - 優先度降順で対応計画を作成。 -## 設計原則(抽出・統合) +--- + +設計原則(抽出・統合) - 同一ロジックは utility / service / strategy / hook に抽出して再利用。 - 引数差や前後処理差は コールバック注入・テンプレートメソッドで吸収。 @@ -68,47 +68,42 @@ similarity-ts ./src --experimental-overlap \ - 例外・エラーは Result で明示化し早期リターンを徹底。 - 公開 API 変更は 薄いラッパーで段階的移行(旧 
→ 新を委譲)し、非推奨注記を付与。 -## 実装サイクル(serena で安全適用) +--- -1 重複グループ = 1 サイクル で反復。各サイクルは「探索 → 設計 → 編集 → 検証 → サマリ」。 +実装サイクル(serena で安全適用) -1. 探索 - serena 検索で該当箇所と呼び出し元を列挙し、影響範囲を固定。 -2. 設計(明文化) - 抽出先モジュール/関数名、引数・戻り値の型、例外/Result 方針、副作用位置と logger.debug を定義。 - 公開 API に触れる場合は 移行シム(旧署名 → 新署名) と非推奨注記を同時に設計。 -3. 編集(最小差分) - serena で該当ファイルを開き、co-location を保ちつつ抽出/統合。過度な新規ファイルは作らない。 -4. 検証(即時) +1 重複グループ = 1 サイクル で反復。各サイクルは「探索 → 設計 → 編集 → 検証 → サマリ」。 1. 探索 +serena 検索で該当箇所と呼び出し元を列挙し、影響範囲を固定。 2. 設計(明文化) +抽出先モジュール/関数名、引数・戻り値の型、例外/Result 方針、副作用位置と logger.debug を定義。 +公開 API に触れる場合は 移行シム(旧署名 → 新署名) と非推奨注記を同時に設計。 3. 編集(最小差分) +serena で該当ファイルを開き、co-location を保ちつつ抽出/統合。過度な新規ファイルは作らない。 4. 検証(即時) -```bash bun run typecheck && bun run lint && bun run test --coverage -``` -失敗時は差分最小で手戻りし再実行。 +失敗時は差分最小で手戻りし再実行。 5. サマリ出力(下記フォーマットに厳密準拠)。 -5. サマリ出力(下記フォーマットに厳密準拠)。 +--- 出力フォーマット(各サイクル) -```txt 【対象グループ】<ファイルとシンボルの一覧> 【検出指標】similarity=<%> / lines= / priority= 【方針】抽出/統合/汎用化の要点(1〜3 行) 【編集内容】影響ファイルと主要変更点(関数名, 引数, 戻り値, 例外/Result, ログ) 【検証結果】tsc/eslint/test のステータス要約 【フォローアップ】残タスク/次候補/移行ガイド(旧 API→ 新 API) -``` --- -## 改善フェーズ(継続的リファクタ) +改善フェーズ(継続的リファクタ) - サイクルごとに bun run typecheck && bun run lint && bun run test --coverage を回す。 - デッドコード削除(任意): bunx ts-prune -p tsconfig.json - 値オブジェクト化・ドメイン語彙の型化を継続。過度な抽象化は避け、複雑性に応じて調整。 -## 終了条件 +--- + +終了条件 - 上位グループの合計影響度の残余が全体の ≤20% になった時点で完了提案。 - 最終サマリに 適用一覧/非推奨 API/移行ガイド を提示し終了。 diff --git a/.agents/commands/worktree-pr.md b/.agents/commands/worktree-pr.md new file mode 100644 index 0000000..5103e1d --- /dev/null +++ b/.agents/commands/worktree-pr.md @@ -0,0 +1,213 @@ +# Worktree PR Command + +Migrate changes from a worktree to a new branch cut from main (or a specified base branch), commit with proper messages, push, and create a pull request. 
+ +## Arguments + +$ARGUMENTS = `[branch-name] [base-branch=main]` + +- **branch-name** (optional): Name of the branch to create (e.g., `feat/add-buy-agent`) +- **base-branch** (optional): Base branch to cut from (default: `main`) + +### Auto-generated Branch Name (when branch-name is omitted) + +When branch-name is omitted, the branch name is automatically determined from the **change analysis in Step 2**. + +#### Auto-naming Rules + +1. Determine the dominant change type: `feat` / `fix` / `refactor` / `chore` / `docs` / `test` / `perf` / `style` +2. Determine the primary scope (see Scope Guidelines) +3. Generate a descriptive slug (English, kebab-case, 3-5 words) summarizing the changes +4. Format: `${type}/${slug}` + +#### Auto-naming Examples + +| Changes | Auto-generated branch name | +| ------------------------------- | ------------------------------ | +| Add purchase agent feature | `feat/add-purchase-agent` | +| Add purchase table to DB schema | `feat/add-purchase-schema` | +| Refactor notification logic | `refactor/notification-logic` | +| Fix CI workflow | `fix/ci-workflow` | +| Fix API endpoint bug | `fix/api-endpoint-error` | +| Add unit tests | `test/add-purchase-unit-tests` | +| Update Docker configuration | `chore/update-docker-config` | + +--- + +## Steps + +Do exactly this, non-interactively, from repo root. + +### 1. Assess Current State + +Run the following commands **in parallel** to fully understand the current state: + +```bash +# Worktree info +git worktree list + +# Current branch +git branch --show-current + +# Full picture of changes (staged + unstaged + untracked) +git status --porcelain=v1 + +# Committed changes ahead of origin/base-branch +git log origin/${base-branch}..HEAD --oneline 2>/dev/null || echo "No commits ahead" + +# Diff stat against origin/base-branch +git diff origin/${base-branch} --stat 2>/dev/null + +# Fetch latest from remote +git fetch origin ${base-branch} +``` + +### 2. Analyze Changes + +1. 
Analyze **all changes** by combining `git diff origin/${base-branch}` (committed + unstaged) with untracked files +2. Read the content of each changed file and determine: + - Functional grouping (which responsibility/layer each change belongs to) + - Intent of each group (feat / fix / refactor / chore / docs / test / style / perf) + - Appropriate scope (e.g., db, api, web, infra, auth, etc.) +3. Plan the commit strategy (following the format described below) +4. **If branch-name was omitted**: Determine the branch name from the analysis above using the auto-naming rules + - Generate `${type}/${slug}` from the most dominant type and scope across all changes + - If an active spec exists under `.kiro/specs/`, reflect its feature name in the slug + +### 3. Create Branch and Migrate Changes + +**Case A: Uncommitted changes only (HEAD is the same as origin/${base-branch}, or no commits ahead)** + +```bash +# Stash all changes +git stash push -u -m "worktree-pr: temp stash for ${branch-name}" + +# Create new branch from origin/${base-branch} +git checkout -b ${branch-name} origin/${base-branch} + +# Apply stashed changes +git stash pop +``` + +**Case B: Committed changes exist** + +```bash +# Record the range of committed changes +FIRST_COMMIT=$(git log origin/${base-branch}..HEAD --reverse --format='%H' | head -1) +LAST_COMMIT=$(git rev-parse HEAD) + +# Stash uncommitted changes if any +git stash push -u -m "worktree-pr: temp stash for ${branch-name}" 2>/dev/null + +# Create new branch from origin/${base-branch} +git checkout -b ${branch-name} origin/${base-branch} + +# Cherry-pick committed changes +git cherry-pick ${FIRST_COMMIT}^..${LAST_COMMIT} +# If cherry-pick conflicts: fall back to soft reset for manual commit +# git reset --soft origin/${base-branch} + +# Apply stash if present +git stash pop 2>/dev/null +``` + +**Case C: Patch-based migration (fallback when Case B conflicts)** + +```bash +# Create patch of all diffs against origin/${base-branch} +git diff 
origin/${base-branch} > /tmp/worktree-pr.patch + +# Archive untracked files +git ls-files --others --exclude-standard -z | xargs -0 tar czf /tmp/worktree-pr-untracked.tar.gz 2>/dev/null + +# Create new branch +git checkout -b ${branch-name} origin/${base-branch} + +# Apply patch +git apply /tmp/worktree-pr.patch +tar xzf /tmp/worktree-pr-untracked.tar.gz 2>/dev/null + +# Cleanup +rm -f /tmp/worktree-pr.patch /tmp/worktree-pr-untracked.tar.gz +``` + +### 4. Commit in Logical Groups + +Based on the analysis from Step 2, commit changes in logical groups. **Follow `.cursor/rules/commit-style.mdc`** for format, type/emoji, order, scope, and commands. + +### 5. Push + +```bash +git push -u origin ${branch-name} +``` + +### 6. Create PR + +Create a pull request using `gh pr create`. + +#### PR Title + +- If there is only 1 commit: use that commit message as-is +- If there are multiple commits: craft a title that summarizes all changes +- Format: `${emoji} ${type}(${scope}): ${summary}` (same format as commit messages) + +#### PR Body Template + +```markdown +## Summary + +<1-3 bullet points summarizing the overall changes> + +## Changes + + + +### ${scope1} + +- ${change1} +- ${change2} + +### ${scope2} + +- ${change3} + +## Test plan + +- [ ] +- [ ] +``` + +#### Command + +```bash +gh pr create \ + --base ${base-branch} \ + --title "${pr_title}" \ + --body "$(cat <<'EOF' +${pr_body} +EOF +)" +``` + +### 7. Final Verification + +```bash +# Verify no remaining changes +git status --porcelain=v1 + +# Display PR URL +gh pr view --web 2>/dev/null || gh pr view +``` + +Report the PR URL to the user as the final output. + +--- + +## Important Notes + +- Follow `.gitignore` strictly. 
Additionally, never stage `.env`, `.cursor/**` (except commands) +- Never commit files containing credentials or secrets +- If a conflict occurs, report the situation to the user and ask for instructions +- Default base-branch to `main` when omitted +- Auto-generate branch-name from change analysis when omitted (see auto-naming rules) +- If the branch-name (specified or auto-generated) already exists, report an error and suggest an alternative name diff --git a/.agents/memory/todo.md b/.agents/memory/todo.md new file mode 100644 index 0000000..4e2fd00 --- /dev/null +++ b/.agents/memory/todo.md @@ -0,0 +1,73 @@ +# Task Plan + +- [x] Reproduce the reported diagnostics with a deterministic command. +- [x] Fix Anchor feature wiring for `idl-build` so macro expansion works under `clippy --all-features`. +- [x] Remove the unused `initialize` instruction/context that triggers the lifetime diagnostic in macro expansion. +- [x] Run `cargo fmt`. +- [x] Run `cargo check -p doom-nft-program`. +- [x] Run `cargo clippy -p doom-nft-program --all-features --all-targets`. +- [x] Run `cargo test`. + +# Notes + +- `docs/` was not present in this repository, so implementation context came from source code and existing workspace config. +- The reported errors reproduce under `cargo clippy -p doom-nft-program --all-features --all-targets`, which matches the workspace rust-analyzer setting using `clippy`. + +# Review + +- Added `anchor-spl/idl-build` to the crate `idl-build` feature so Anchor's IDL-related macro expansion works for SPL account types under `clippy --all-features`. +- Added a direct `solana-program` dependency so macro-generated references to `solana_program` resolve under rust-analyzer/clippy. +- Removed the unused `initialize` instruction and empty `Initialize` accounts struct, which was the source of the bogus lifetime diagnostic around `Context`. 
+- Verification passed with `cargo fmt`, `cargo check -p doom-nft-program`, `cargo clippy -p doom-nft-program --all-features --all-targets`, and `cargo test`. + +# Task Plan: Anchor Docs Summary (2026-03-11) + +- [x] Read project context and current Anchor workspace files +- [x] Check latest official Anchor documentation for local deployment and client calls +- [x] Summarize concrete local deploy/call workflows for this repo + +# Review: Anchor Docs Summary (2026-03-11) + +- Confirmed against official Anchor docs that `anchor test` deploys workspace programs before running tests. +- Confirmed that `anchor test` auto-starts a local validator when the configured cluster is `localnet`. +- Confirmed that `anchor test --skip-local-validator` is the path for reusing an already running validator. +- Confirmed that `anchor shell` starts a Node.js shell with an Anchor client configured from local workspace config. +- Confirmed that `anchor build` writes artifacts under `target/deploy`, `target/idl`, and `target/types`. +- Repo-specific caveat: local `anchor-cli` is `0.32.1`, while this workspace uses `anchor-lang = 0.29.0` and `@coral-xyz/anchor = ^0.31.1`, so version mismatch should be called out. + +# Task Plan: Tooling Setup (2026-03-11) + +- [x] Inspect current workspace tooling, lockfiles, and existing CI/hook files +- [x] Add Rust formatter/linter configuration and package scripts +- [x] Add lefthook configuration and install wiring +- [x] Add GitHub Actions CI for format, lint, and tests +- [x] Run verification commands and record outcomes + +# Review: Tooling Setup (2026-03-11) + +- Added `rust-toolchain.toml` to require the stable toolchain with `rustfmt` and `clippy`. +- Switched Anchor's JS package manager setting from `yarn` to `bun` to match the existing lockfile and local workflow. +- Upgraded `prettier` to `3.8.1`, added `lefthook` `2.1.3`, and rewrote `package.json` scripts around `format`, `lint`, `test`, and `check`. 
+- Added `lefthook.yml` with pre-commit checks for Rust fmt, Clippy, and Prettier, plus a pre-push test gate. +- Added `.github/workflows/ci.yml` to run Bun install, Prettier check, Rust fmt, Clippy, and workspace tests on push/PR. +- Added `Makefile` targets for `install`, `build`, `test`, `lint`, `lint:fix`, `format`, and `format:fix`, using a catch-all alias rule for the colon forms. +- Verified locally with `bun run prepare`, `bun run format:ts:check`, `cargo fmt --all --check`, `cargo clippy --workspace --all-targets --all-features -- -D warnings`, `cargo test --workspace`, and `bun run check`. +- Verified locally with `make install`, `make format`, `make lint`, `make test`, `make build`, `make -n lint:fix`, and `make -n format:fix`. +- Remaining note: Rust commands emit a future-incompatibility warning from `solana-client v1.18.26`, but the checks pass. + +# Task Plan: Create PR (2026-03-11) + +- [x] Load project memory and inspect repository state +- [ ] Review the pending diff for tests, silent failures, comments, types, and general risks +- [ ] Create a feature branch from `main` for the PR +- [ ] Commit the current repository changes with a concise message +- [ ] Push the branch to `origin` +- [ ] Review the full PR diff against `origin/main` +- [ ] Create or update the GitHub PR with a concise body +- [ ] Poll CI and triage actionable feedback +- [ ] Check mergeability against `origin/main` +- [ ] Record PR outcome and any follow-up needed + +# Review: Create PR (2026-03-11) + +- In progress. diff --git a/.agents/rules/coderabbit.mdc b/.agents/rules/coderabbit.mdc new file mode 100644 index 0000000..d95642f --- /dev/null +++ b/.agents/rules/coderabbit.mdc @@ -0,0 +1,9 @@ +--- +alwaysApply: true +--- + +# Running the CodeRabbit CLI + +CodeRabbit is already installed in the terminal. Run it as a way to review your code. Run `cr -h` for details on the available commands. In general, I want you to run coderabbit with the `--prompt-only` flag.
To review uncommitted changes (this is what we'll use most of the time) run: `coderabbit --prompt-only -t uncommitted`. + +IMPORTANT: When running CodeRabbit to review code changes, don't run it more than 3 times in a given set of changes. \ No newline at end of file diff --git a/.agents/rules/commit-style.mdc b/.agents/rules/commit-style.mdc new file mode 100644 index 0000000..819555b --- /dev/null +++ b/.agents/rules/commit-style.mdc @@ -0,0 +1,59 @@ +--- +description: Commit message format and conventions +alwaysApply: false +globs: ["**/.claude/commands/**", "**/lefthook.yml"] +--- + +# Commit Style + +Use this format when committing changes (e.g. via `agent-commit-push`, `worktree-pr`, or manual commits). + +## Message Format + +``` +${emoji} ${type}(${scope}): ${summary} + +- ${change_detail_1} +- ${change_detail_2} +``` + +## Type → Emoji Mapping + +| type | emoji | usage | +| -------- | ----- | --------------------------- | +| chore | 🍱 | config, dependencies, build | +| docs | 📝 | documentation | +| style | 💄 | formatting, UI | +| refactor | ♻️ | refactoring | +| perf | 🚀 | performance improvement | +| feat | ✨ | new feature | +| fix | 🐛 | bug fix | +| test | 💚 | tests | + +## Commit Order + +`chore → docs → style → refactor → perf → feat → fix → test` + +## Scope Guidelines + +- `db` — schema, migrations, D1/Drizzle +- `api` or `server` — API routes, server logic +- `web` or `app` — pages, components, client UI +- `infra` — docker, .github/, wrangler, deployment +- `deps` — package.json, lockfiles +- `openclaw` — OpenClaw/Moltworker agent prompts and config +- When changes span multiple scopes: pick the most impactful scope, or use `core` + +## Commands (agent commits only) + +Set `GIT_AUTHOR_*` / `GIT_COMMITTER_*` to your agent identity before commit: + +```bash +git add -A -- ${file1} ${file2} ${fileN} +GIT_AUTHOR_NAME="..." GIT_AUTHOR_EMAIL="..." GIT_COMMITTER_NAME="..." GIT_COMMITTER_EMAIL="..." 
git commit -m "${emoji} ${type}(${scope}): ${summary}" -m "- ${detail1}\n- ${detail2}" +``` + +## Staging Rules + +- Follow `.gitignore` strictly. Never stage `.env`, `.cursor/**` (except commands), credentials +- Group by intent/responsibility, not only by folder diff --git a/.agents/rules/dotenvx.mdc b/.agents/rules/dotenvx.mdc new file mode 100644 index 0000000..46c6515 --- /dev/null +++ b/.agents/rules/dotenvx.mdc @@ -0,0 +1,77 @@ +--- +description: Environment variable management with dotenvx encryption +alwaysApply: true +--- + +# Environment Variable Management with dotenvx + +## Principles + +- Encrypt **only sensitive values** (API keys, secrets, passwords, tokens, credentials) using `dotenvx set` or `dotenvx encrypt` +- Non-sensitive parameters (ports, log levels, feature flags, hostnames, etc.) may remain plaintext in `.env` files +- **NEVER commit `.env.keys`** — it contains private decryption keys +- Encrypted `.env` files (`.env`, `.env.production`, `.env.staging`, etc.) SHOULD be committed for version control + +## .gitignore Required Entries + +``` +# dotenvx - never commit private decryption keys +.env.keys +``` + +**Note**: Only `.env.keys` is excluded. Encrypted `.env*` files are safe and intended to be committed. 
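The "never stage `.env.keys`" rule above can also be expressed as a small pure check over the output of `git diff --cached --name-only`. This is a minimal sketch, assuming a hook would shell out for the staged list; the function name is illustrative and not an existing repo helper:

```typescript
// Illustrative guard (not an existing repo helper): given the output of
// `git diff --cached --name-only`, return any staged paths that must never
// be committed. Encrypted .env files are allowed; only .env.keys is blocked.
function findForbiddenStaged(stagedOutput: string): string[] {
  return stagedOutput
    .split("\n")
    .map((line) => line.trim())
    .filter((file) => /(^|\/)\.env\.keys$/.test(file));
}
```

A pre-commit hook could fail whenever this returns a non-empty list, mirroring the shell one-liner in the checklist while staying easy to unit-test.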
+ +## Workflow + +### Adding / Updating Secrets + +```bash +# Set a sensitive value (automatically encrypted) +dotenvx set SECRET_KEY "value" + +# Set in a specific environment file +dotenvx set DATABASE_PASSWORD "value" -f .env.production +``` + +### Adding Non-Sensitive Config + +Add plaintext values directly to the `.env` file — no encryption needed: + +``` +PORT=3000 +LOG_LEVEL=info +NODE_ENV=production +``` + +### Encrypting Existing Plaintext Secrets + +```bash +# Encrypt all unencrypted values in .env files +dotenvx encrypt +``` + +### Pre-Commit Checklist + +```bash +# Verify .env.keys is NOT staged +git diff --cached --name-only | grep -q '.env.keys' && echo "ERROR: Remove .env.keys from staging" && exit 1 + +# Ensure sensitive values are encrypted +dotenvx encrypt +``` + +### Runtime Decryption + +```bash +# Auto-decrypts using .env.keys or DOTENV_PRIVATE_KEY env var +dotenvx run -- <command> + +# In production, pass the private key via environment variable +DOTENV_PRIVATE_KEY_PRODUCTION="..." dotenvx run -- <command> +``` + +## Prohibited + +- Committing or pushing `.env.keys` +- Committing unencrypted secrets (API keys, passwords, tokens, etc.) +- Logging, hardcoding, or commenting private keys in source code diff --git a/.cursor/rules/mermaid.mdc b/.agents/rules/mermaid.mdc similarity index 97% rename from .cursor/rules/mermaid.mdc rename to .agents/rules/mermaid.mdc index d33555a..64b5ec3 100644 --- a/.cursor/rules/mermaid.mdc +++ b/.agents/rules/mermaid.mdc @@ -3,12 +3,13 @@ description: globs: *.md,*.mmd alwaysApply: false --- + Rule Name: mermaid Description: Mermaid Diagram Syntax and Best Practices ## Syntax Pitfalls & Rules -- **Parentheses/Spaces/Special Chars in Labels:** When using parentheses `()`, spaces, or special characters like `:` or `#` inside node labels (`[]`, `""`, `()`, `{}` etc.), always enclose the *entire* label text in double quotes `""`.
+- **Parentheses/Spaces/Special Chars in Labels:** When using parentheses `()`, spaces, or special characters like `:` or `#` inside node labels (`[]`, `""`, `()`, `{}` etc.), always enclose the _entire_ label text in double quotes `""`. - Correct (Flowchart): `A["Node text with (parentheses)"] --> B` - Correct (Flowchart): `Cron_Twitter["fa:fa-clock Schedule: Twitter Scrape (5m)"]` - Incorrect: `A[Node text with (parentheses)] --> B` @@ -24,7 +25,7 @@ Description: Mermaid Diagram Syntax and Best Practices - **Layout (Flowchart):** Experiment with `graph TD/LR/etc.` directions. Use invisible links (`~~~`) sparingly to adjust layout if necessary. - **Styling (Flowchart):** `classDef` and `class` are useful for styling in Flowcharts. Ensure color contrast for light/dark themes. - **Icons & Images:** - - **Font Awesome (`fa:`):** Generally the **most compatible** way to add icons across different environments, *provided* the environment loads the Font Awesome CSS. Syntax: `NodeId["fa:fa-icon Label Text"]`. + - **Font Awesome (`fa:`):** Generally the **most compatible** way to add icons across different environments, _provided_ the environment loads the Font Awesome CSS. Syntax: `NodeId["fa:fa-icon Label Text"]`. - **Built-in Icons (`architecture-beta`):** This diagram type has a few built-in icons like `cloud`, `database`, `server`, `disk`, `internet`. These are reliable within `architecture-beta`. Syntax: `service myService(iconName)[Label]`. - **Image URLs (`@{ img: ... }` in Flowchart):** Embedding external images can work but depends on the renderer's ability to fetch and display them. Syntax: `NodeId@{ img: "URL", ... }`. - **Iconify/Prefixed Icons (`logos:`, `simple-icons:`, etc.):** **Use with caution.** These often require specific extensions (like `vscode-markdown-mermaid`) or environment setup (registering icon packs). They are **not guaranteed** to work universally (e.g., in standard GitHub Markdown or the Mermaid Live Editor). 
The `@icon` syntax (`NodeId@{ icon: "pack:name" }`) in newer Flowchart versions also falls into this category. **For broad compatibility, prefer `fa:` or basic built-in icons.** diff --git a/.agents/rules/proactive-subagent-and-skills.mdc b/.agents/rules/proactive-subagent-and-skills.mdc new file mode 100644 index 0000000..079ff22 --- /dev/null +++ b/.agents/rules/proactive-subagent-and-skills.mdc @@ -0,0 +1,18 @@ +--- +description: 通常タスクで該当 Skill/Subagent を積極的に探して使う +alwaysApply: true +--- + +# Skill と Subagent を積極活用する + +## 使い分け + +- **Skill**: 専門知識が必要なタスク → 作業開始前に `http://SKILL.md` を読み、手順/制約をそのまま適用する。宣言だけで終わらせない。 +- **Subagent**: 独立コンテキストが有効なタスク(リファクタリング・レビュー・広範囲探索など)、または並列実行したい場合に委任する。 + +※ 両者は併用可。Skill の知識を Subagent に渡して実行することもある。 +※ 小さいタスクで該当する Skill/Subagent がない場合は、通常フローで進める。 + +## 利用時は明示 + +使用する Skill/Subagent と理由は1行で明記してから進める diff --git a/.agents/rules/test.mdc b/.agents/rules/test.mdc new file mode 100644 index 0000000..cf862be --- /dev/null +++ b/.agents/rules/test.mdc @@ -0,0 +1,303 @@ +--- +alwaysApply: false +--- +# Test Rules + +## 基本方針 + +- testを通すことを目的としないで下さい。anyやunknownによってtestがpassしてもproductの品質が担保できていなかったり、specを満たしていなければ意味がありません。 +- テストの独立性を確保し、グローバル状態への依存を避けて下さい。 +- 型安全なモック実装を心がけ、必要な場合のみ型アサーションを使用して下さい。 + +## Test Implementation Flow + +1. 境界値などを考慮しながら必要なビジネス要件をすべて満たす様にtest caseを過不足なく書き出す。 +2. 必ず`src/`以下の実装をimportしてtestコードを実行する。 +3. 
外部依存は明示的にモックし、テスト実行順序に依存しないようにする。 + +## Bun Test Mocking Best Practices + +### 型安全なモック実装 + +参考: [Bun Test Mocks](https://bun.com/docs/test/mocks), [Mock Functions Guide](https://bun.com/docs/guides/test/mock-functions) + +#### ✅ 良い例: 完全な型定義でモックを作成 + +```typescript +import { mock } from "bun:test"; + +interface UserService { + getUser(id: string): Promise<User>; + createUser(data: CreateUserData): Promise<User>; +} + +// 型安全なモック +const mockUserService: UserService = { + getUser: mock(async (id: string) => ({ id, name: "Test User" })), + createUser: mock(async (data: CreateUserData) => ({ id: "new-id", ...data })), +}; +``` + +#### ✅ 良い例: モックの呼び出し履歴を型安全にアクセス + +```typescript +import { mock } from "bun:test"; + +const mockGenerate = mock((request: ImageRequest) => Promise.resolve(ok({ imageBuffer: new ArrayBuffer(8) }))); + +// 型アサーションで安全にアクセス +const calls = mockGenerate.mock.calls as unknown as Array<[ImageRequest, unknown?]>; +const request = calls[0]![0]; + +// または安全なチェック +const call = mockGenerate.mock.calls[0]; +if (call && call.length > 0 && call[0]) { + const request = call[0] as ImageRequest; + expect(request.referenceImageUrl).toBe("https://example.com/image.png"); +} +``` + +#### ❌ 悪い例: 型アサーションなしの直接アクセス + +```typescript +// TypeScript エラーになる +const request = mockGenerate.mock.calls[0][0]; // Type error!
+``` + +### テストの独立性 + +#### ⚠️ Bunのmock isolation問題と対策 + +**重要**: Bunでは `mock`/`spyOn`/`mock.module()` がテストスイート間で漏れる既知の問題があります([oven-sh/bun#6040](https://github.com/oven-sh/bun/issues/6040), [oven-sh/bun#12823](https://github.com/oven-sh/bun/issues/12823), [oven-sh/bun#7823](https://github.com/oven-sh/bun/issues/7823))。 +これにより、CIや `--randomize` で順序が変わるとテストが不安定になります。 + +**対策チェックリスト**(実装時に必ず確認): + +- [ ] **`tests/**`では`mock.module()` を使わない\*\* + - 例外: `tests/preload.ts` での import解決目的のみ(`cloudflare:workers`, `jose` browser build回避など) + - ビジネスロジックの差し替えには `spyOn` を使用 + +- [ ] **spyOnでmockしたら `afterEach` で必ず restore** + - `mock.restore()` と `mock.clearAllMocks()` を呼ぶ + - `tests/mocks/**` のヘルパを使う場合は、各ヘルパの `restore*()` を `afterEach` で呼ぶ + +- [ ] **Promiseを返す関数は `mockResolvedValue`/`async` を維持** + - `mock(() => true)` ではなく `mock(async () => true)` または `mockResolvedValue(true)` を使用 + - 型崩れ(`Promise` → `boolean`)を防ぐ + +- [ ] **`global.fetch` 等のグローバルは direct代入せず `spyOn` で差し替え** + - `global.fetch = mock(...)` ではなく `setupFetchMock()` を使用 + - `afterEach` で `restoreFetch()` を呼ぶ + +- [ ] **署名検証などの本質ロジックは、可能なら実装を通す** + - `verifySignature` を無条件 `true` でモックせず、`generateQuickNodeSignature()` で正しい署名を生成して実装を通す + - テストの意味を落とさない + +- [ ] **`tests/mocks/**` のヘルパを優先的に使用\*\* + - 既存のヘルパ(`setupLoggerMock`, `setupTelegramMock`, `setupGrammyMock` など)を活用 + - 新しいモックが必要な場合は、`tests/mocks/` にspyOnベースのヘルパを追加 + +- [ ] **`tests/setup.ts` は使わない(削除済み)** + - `tests/preload.ts` のみを使用(env/polyfill/import解決のみ) + - テストごとのセットアップは各テストファイルの `beforeEach`/`afterEach` で行う + +#### ⚠️ `mock.module()` はテスト間で共有される(詳細) + +Bun の `mock.module()` はプロセス全体に影響し、他のテストファイルにも波及します。 +**unit テストで実装を直接検証したいモジュールは、integration テストで `mock.module()` しないでください。** + +```typescript +// ❌ 悪い例: integration テストでグローバルにモック +// tests/integration/app/gallery-page.integration.test.tsx +mock.module("@/lib/glb-export-service", createGlbExportServiceMock()); +// → tests/unit/lib/glb-export-service.test.ts にも影響し、 +// 実装ではなくモックがテストされてしまう + +// ✅ 良い例: 
spyOnでモックし、afterEachでrestore +import { setupLoggerMock, restoreLogger } from "../mocks/logger"; + +beforeEach(() => { + setupLoggerMock(); +}); + +afterEach(() => { + restoreLogger(); // 必ずrestore +}); +``` + +#### ✅ 良い例: 明示的なパラメータ渡し + +```typescript +it("should fail when API key is not set", async () => { + // 明示的に空文字列を渡してテスト + const client = createTavilyClient({ apiKey: "" }); + const result = await client.searchToken(input); + expect(result.isErr()).toBe(true); +}); +``` + +#### ❌ 悪い例: グローバル状態への依存 + +```typescript +it("should fail when API key is not set", async () => { + // グローバル状態を変更(他のテストに影響する可能性) + delete process.env.TAVILY_API_KEY; + const client = createTavilyClient(); + // env.ts のモジュールキャッシュにより期待通り動作しない +}); +``` + +### 外部ライブラリの型定義変更への対応 + +#### 依存ライブラリ更新時のチェックリスト + +```bash +# 1. 依存関係を更新 +bun update + +# 2. 型エラーを即座に検出 +bun run typecheck + +# 3. テストの互換性確認 +bun run test + +# 4. すべて成功したらコミット +git add package.json bun.lockb +git commit -m "chore: update dependencies" +``` + +#### モックデータの型定義を最新に保つ + +```typescript +// CoinGecko API の型が変更された場合 +const mockResponse: CoinsMarketsResponse = [ + { + id: "bitcoin", + symbol: "btc", + name: "Bitcoin", + // 型定義の変更に追従 + max_supply: null, // number → number | null + ath_date: new Date("2021-11-10T14:24:11.849Z"), // string → Date + atl_date: new Date("2013-07-06T00:00:00.000Z"), + last_updated: new Date("2025-11-21T00:00:00.000Z"), + }, +]; +``` + +### 型アサーションの使用ガイドライン + +#### @ts-expect-error の適切な使用 + +```typescript +// ✅ 良い例: 理由を明記 +// @ts-expect-error - Cloudflare Workers types mismatch between test and runtime +const client = createWorkersAiClient({ aiBinding: mockAiBinding }); + +// ✅ 良い例: テスト用の型互換性問題 +// @ts-expect-error - BunSQLiteDatabase type mismatch but works at runtime +repository = new MarketSnapshotsRepository(db as any); +``` + +#### as any の使用は最小限に + +```typescript +// ❌ 避ける: 理由なく as any を使用 +const result = someFunction() as any; + +// ✅ 良い例: 具体的な型を指定 +const result = someFunction() as SpecificType; + +// 
✅ より良い例: unknown を経由して安全に変換 +const result = someFunction() as unknown as SpecificType; +``` + +### モックのクリーンアップ + +**重要**: Bunでは `mock.restore()` だけでは不十分な場合があります。`tests/mocks/**` のヘルパを使う場合は、各ヘルパの `restore*()` を明示的に呼び出してください。 + +```typescript +import { beforeEach, afterEach } from "bun:test"; +import { setupLoggerMock, restoreLogger } from "../mocks/logger"; +import { setupTelegramMock, restoreTelegram } from "../mocks/telegram"; +import { setupFetchMock, restoreFetch } from "../mocks/fetch"; + +beforeEach(() => { + // テストごとにモックをセットアップ + setupLoggerMock(); + setupTelegramMock(); + setupFetchMock(); +}); + +afterEach(() => { + // 必ずrestore(順序は逆順が安全) + restoreFetch(); + restoreTelegram(); + restoreLogger(); + + // 念のため全体もrestore + mock.restore(); + mock.clearAllMocks(); +}); +``` + +#### ✅ 良い例: cleanupヘルパを使う + +```typescript +import { createCleanup } from "../mocks/cleanup"; +import { setupLoggerMock, restoreLogger } from "../mocks/logger"; +import { setupTelegramMock, restoreTelegram } from "../mocks/telegram"; + +describe("my test", () => { + const cleanup = createCleanup(); + + beforeEach(() => { + cleanup.add(() => restoreLogger()); + cleanup.add(() => restoreTelegram()); + setupLoggerMock(); + setupTelegramMock(); + }); + + afterEach(() => { + cleanup.run(); // すべてのcleanup関数を実行 + }); +}); +``` + +## Bun Mock Isolation問題の詳細 + +詳細は [docs/test/bun-test-mock-isolation.md](../docs/test/bun-test-mock-isolation.md) を参照してください。 + +### 主な問題点 + +1. **`mock.module()` はモジュール単位でグローバルに効く** + - 一度モックすると、他のテストファイルにも影響 + - `mock.restore()` や `jest.restoreAllMocks()` でも完全に復元できないケースが多い + +2. **`spyOn` もテスト間で漏れる** + - `afterEach` で明示的に `mockRestore()` しないと、次のテストに影響 + +3. 
**Surfaces under parallel execution and order randomization**
+   - When `--randomize` changes the test order, dependencies that are normally invisible get exposed
+
+### Verification commands
+
+```bash
+# detect flakes by randomizing test order
+bun test tests/unit --preload=./tests/preload.ts --randomize --seed=1
+bun test tests/unit --preload=./tests/preload.ts --randomize --seed=2
+bun test tests/unit --preload=./tests/preload.ts --randomize --seed=3
+
+# detect flakes by re-running each test multiple times
+bun test tests/unit --preload=./tests/preload.ts --rerun-each 20
+```
+
+## References
+
+- [Bun Test Mocks Documentation](https://bun.com/docs/test/mocks)
+- [Bun Mock Functions Guide](https://bun.com/docs/guides/test/mock-functions)
+- [Bun test mock isolation: problem and countermeasures](../docs/test/bun-test-mock-isolation.md)
+- [oven-sh/bun#6040](https://github.com/oven-sh/bun/issues/6040) - mock/spyOn are not reset
+- [oven-sh/bun#12823](https://github.com/oven-sh/bun/issues/12823) - mock.module restore is not isolated
+- [oven-sh/bun#7823](https://github.com/oven-sh/bun/issues/7823) - module mocks conflict with each other
+- [oven-sh/bun#7376](https://github.com/oven-sh/bun/issues/7376) - mocked modules conflict with each other
+- [oven-sh/bun#18900](https://github.com/oven-sh/bun/issues/18900) - test order differs between CI and local
diff --git a/.cursor/rules/typescript.mdc b/.agents/rules/typescript.mdc
similarity index 98%
rename from .cursor/rules/typescript.mdc
rename to .agents/rules/typescript.mdc
index 22572d8..62f3488 100644
--- a/.cursor/rules/typescript.mdc
+++ b/.agents/rules/typescript.mdc
@@ -18,8 +18,8 @@ General best practices for coding in TypeScript
 1. Use concrete types
 
-   - any is forbidden!!!
-   - also avoid using unknown as much as possible; libraries usually provide the types, so use those
+   - Avoid using any
+   - Use unknown, then narrow the type
    - Make use of Utility Types
 
 2. Naming type aliases
diff --git a/.agents/skills/bug-fix/SKILL.md b/.agents/skills/bug-fix/SKILL.md
new file mode 100644
index 0000000..7d6a53d
--- /dev/null
+++ b/.agents/skills/bug-fix/SKILL.md
@@ -0,0 +1,15 @@
+---
+name: bug-fix
+description: Diagnose and fix implementation errors from stack traces, identify root cause, apply minimum effective fixes, and validate with lint/format/build/test loops until green.
Use for debugging and defect resolution tasks.
+---
+
+# Bug Fix
+
+## Steps
+
+1. Read the error and stack trace carefully, then inspect the related files and code to identify a concrete root cause.
+2. If the root cause cannot be identified because required information is missing, report exactly what is missing to the developer.
+3. Choose the minimal but effective method to solve that root cause.
+4. Apply the fix.
+5. Run lint, format, build, and test (when available), then verify all tests pass.
+6. If tests fail, use the new failure as input and restart this flow from step 1.
diff --git a/.agents/skills/code-review/SKILL.md b/.agents/skills/code-review/SKILL.md
new file mode 100644
index 0000000..193df4f
--- /dev/null
+++ b/.agents/skills/code-review/SKILL.md
@@ -0,0 +1,125 @@
+---
+name: code-review
+description: "AI-powered code review using CodeRabbit. Default code-review skill. Trigger for any explicit review request AND autonomously when the agent thinks a review is needed (code/PR/quality/security)."
+---
+
+# CodeRabbit Code Review
+
+AI-powered code review using CodeRabbit. Enables developers to implement features, review code, and fix issues in autonomous cycles without manual intervention.
+
+## Capabilities
+
+- Finds bugs, security issues, and quality risks in changed code
+- Groups findings by severity (Critical, Warning, Info)
+- Works on staged, committed, or all changes; supports base branch/commit
+- Provides fix suggestions (`--plain`) or minimal output for agents (`--prompt-only`)
+
+## When to Use
+
+When the user asks to:
+
+- Review code changes / Review my code
+- Check code quality / Find bugs or security issues
+- Get PR feedback / Pull request review
+- What's wrong with my code / my changes
+- Run coderabbit / Use coderabbit
+
+## How to Review
+
+### 1.
Check Prerequisites + +```bash +coderabbit --version 2>/dev/null || echo "NOT_INSTALLED" +coderabbit auth status 2>&1 +``` + +**If CLI not installed**, tell user: + +```text +Please install CodeRabbit CLI first: +curl -fsSL https://cli.coderabbit.ai/install.sh | sh +``` + +**If not authenticated**, tell user: + +```text +Please authenticate first: +coderabbit auth login +``` + +### 2. Run Review + +Use `--prompt-only` for minimal output optimized for AI agents: + +```bash +coderabbit review --prompt-only +``` + +Or use `--plain` for detailed feedback with fix suggestions: + +```bash +coderabbit review --plain +``` + +**Options:** + +| Flag | Description | +| ---------------- | -------------------------------------- | +| `-t all` | All changes (default) | +| `-t committed` | Committed changes only | +| `-t uncommitted` | Uncommitted changes only | +| `--base main` | Compare against specific branch | +| `--base-commit` | Compare against specific commit hash | +| `--prompt-only` | Minimal output optimized for AI agents | +| `--plain` | Detailed feedback with fix suggestions | + +**Shorthand:** `cr` is an alias for `coderabbit`: + +```bash +cr review --prompt-only +``` + +### 3. Present Results + +Group findings by severity: + +1. **Critical** - Security vulnerabilities, data loss risks, crashes +2. **Warning** - Bugs, performance issues, anti-patterns +3. **Info** - Style issues, suggestions, minor improvements + +Create a task list for issues found that need to be addressed. + +### 4. Fix Issues (Autonomous Workflow) + +When user requests implementation + review: + +1. Implement the requested feature +2. Run `coderabbit review --prompt-only` +3. Create task list from findings +4. Fix critical and warning issues systematically +5. Re-run review to verify fixes +6. Repeat until clean or only info-level issues remain + +### 5. 
Review Specific Changes + +**Review only uncommitted changes:** + +```bash +cr review --prompt-only -t uncommitted +``` + +**Review against a branch:** + +```bash +cr review --prompt-only --base main +``` + +**Review a specific commit range:** + +```bash +cr review --prompt-only --base-commit abc123 +``` + +## Documentation + +For more details: diff --git a/.agents/skills/create-pr/SKILL.md b/.agents/skills/create-pr/SKILL.md new file mode 100644 index 0000000..198c28c --- /dev/null +++ b/.agents/skills/create-pr/SKILL.md @@ -0,0 +1,95 @@ +--- +name: create-pr +description: Create or update a PR from current branch to main, watch CI, and address feedback +--- + +The user likes the state of the code. + +There are $`git status --porcelain | wc -l | tr -d ' '` uncommitted changes. +The current branch is $`git branch --show-current`. +The target branch is origin/main. + +$`git rev-parse --abbrev-ref @{upstream} 2>/dev/null && echo "Upstream branch exists." || echo "There is no upstream branch yet."` + +**Existing PR:** $`gh pr view --json number,title,url --jq '"#\(.number): \(.title) - \(.url)"' 2>/dev/null || echo "None"` + +The user requested a PR. + +Follow these exact steps: + +## Phase 1: Review the code + +1. Review test coverage +2. Check for silent failures +3. Verify code comments are accurate +4. Review any new types +5. General code review + +## Phase 2: Create/Update PR + +6. Run `git diff` to review uncommitted changes +7. Commit them. Follow any instructions the user gave you about writing commit messages. +8. Push to origin. +9. Use `git diff origin/main...` to review the full PR diff +10. Check if a PR already exists for this branch: + +- **If PR exists**: + - Draft/update the description in a temp file (e.g. `/tmp/pr-body.txt`). + - Update the PR body using the non-deprecated script: + - `./.agents/skills/create-pr/scripts/pr-body-update.sh --file /tmp/pr-body.txt` + - Re-fetch the body with `gh pr view --json body --jq .body` to confirm it changed. 
+- **If no PR exists**: Use `gh pr create --base main` to create a new PR. Keep the title under 80 characters and the description under five sentences. + +The PR description should summarize ALL commits in the PR, not just the latest changes. + +## Phase 3: Monitor CI and Address Issues + +Note: Keep commands CI-safe and avoid interactive `gh` prompts. Ensure `GH_TOKEN` or `GITHUB_TOKEN` is set in CI. + +11. Watch CI status and feedback using the polling script (instead of running `gh` in a loop): + +- Run `./.agents/skills/create-pr/scripts/poll-pr.sh --triage-on-change --exit-when-green` (polls every 15s for 10 mins). +- If checks fail, use `gh pr checks` or `gh run list` to find the failing run id, then: + - Fetch the failed check logs using `gh run view --log-failed` + - Analyze the failure and fix the issue + - Commit and push the fix + - Continue polling until all checks pass + +12. Check for merge conflicts: + +- Run `git fetch origin main && git merge origin/main` +- If conflicts exist, resolve them sensibly +- Commit the merge resolution and push + +13. Use the polling script output to notice new reviews and comments (avoid direct polling via `gh`): + +- If you need a full snapshot, run `./.agents/skills/create-pr/scripts/triage-pr.sh` once. +- If you need full context after the script reports a new item, fetch details once with `gh pr view --comments` or `gh api ...`. +- **Address feedback**: + - For bot reviews, read the review body and any inline comments carefully + - Address comments that are clearly actionable (bug fixes, typos, simple improvements) + - Skip comments that require design decisions or user input + - For addressed feedback, commit fixes with a message referencing the review/comment + +## Phase 4: Merge and Cleanup + +14. Once CI passes and the PR is approved, ask the user if they want to merge the PR. + +15. 
If the user confirms, merge the PR: + - Use `gh pr merge --squash --delete-branch` to squash-merge and delete the remote branch + +16. After successful merge, check if we're in a git worktree: + - Run: `[ "$(git rev-parse --git-common-dir)" != "$(git rev-parse --git-dir)" ]` + - **If in a worktree**: Use the ask user question tool (`request_user_input`) to ask if they want to clean up the worktree. If yes, run `wt remove --yes --force` to remove the worktree and local branch, then switch back to the main worktree. + - **If not in a worktree**: Just switch back to main with `git checkout main && git pull` + +## Completion + +Report the final PR status to the user, including: + +- PR URL +- CI status (passed/merged) +- Any unresolved review comments that need user attention +- Cleanup status (worktree removed or branch switched) + +If any step fails in a way you cannot resolve, ask the user for help. diff --git a/.agents/skills/create-pr/scripts/poll-pr.sh b/.agents/skills/create-pr/scripts/poll-pr.sh new file mode 100755 index 0000000..005cdce --- /dev/null +++ b/.agents/skills/create-pr/scripts/poll-pr.sh @@ -0,0 +1,247 @@ +#!/usr/bin/env bash +set -euo pipefail + +interval="${POLL_INTERVAL:-15}" +minutes="${POLL_MINUTES:-10}" +poll_once="${POLL_ONCE:-0}" +pr="" +repo="" +exit_when_green=0 +triage_on_change=0 + +usage() { + cat <<'USAGE' +Usage: poll-pr.sh [--pr ] [--repo ] [--interval ] [--minutes ] [--exit-when-green] [--triage-on-change] + +Polls PR checks, review comments, and conversation comments every 15s for 10 minutes by default. +Environment overrides: POLL_INTERVAL, POLL_MINUTES, POLL_ONCE=1. 
+USAGE +} + +while [[ $# -gt 0 ]]; do + case "$1" in + --pr) + pr="$2" + shift 2 + ;; + --repo) + repo="$2" + shift 2 + ;; + --interval) + interval="$2" + shift 2 + ;; + --minutes) + minutes="$2" + shift 2 + ;; + --exit-when-green) + exit_when_green=1 + shift 1 + ;; + --triage-on-change) + triage_on_change=1 + shift 1 + ;; + -h|--help) + usage + exit 0 + ;; + *) + echo "Unknown arg: $1" >&2 + usage >&2 + exit 1 + ;; + esac +done + +if [[ -z "$pr" ]]; then + pr="$(gh pr view --json number --jq .number 2>/dev/null || true)" +fi + +if [[ -z "$pr" ]]; then + echo "Could not determine PR number. Use --pr ." >&2 + exit 1 +fi + +if [[ -z "$repo" ]]; then + repo="$(gh repo view --json nameWithOwner --jq .nameWithOwner 2>/dev/null || true)" +fi + +if [[ -z "$repo" ]]; then + echo "Could not determine repo. Use --repo owner/name." >&2 + exit 1 +fi + +if ! [[ "$interval" =~ ^[0-9]+$ && "$minutes" =~ ^[0-9]+$ ]]; then + echo "interval and minutes must be integers." >&2 + exit 1 +fi + +iterations=$(( (minutes * 60) / interval )) +if (( iterations < 1 )); then + iterations=1 +fi +if [[ "$poll_once" == "1" ]]; then + iterations=1 +fi + +echo "Polling PR #$pr in $repo every ${interval}s for ${minutes}m (${iterations} iterations)." + +last_issue_comment_id="" +last_review_comment_id="" +last_review_id="" +last_failed_signature="" + +if ! gh auth status >/dev/null 2>&1; then + if [[ -z "${GITHUB_TOKEN:-}" && -z "${GH_TOKEN:-}" ]]; then + echo "Warning: gh auth not configured (set GH_TOKEN or GITHUB_TOKEN)." 
+ fi +fi + +script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +print_new() { + local kind="$1" + local time="$2" + local user="$3" + local url="$4" + local body="$5" + + echo "New $kind by @$user at $time" + if [[ -n "$body" ]]; then + echo " $body" + fi + if [[ -n "$url" ]]; then + echo " $url" + fi +} + +for i in $(seq 1 "$iterations"); do + now=$(date -u +"%Y-%m-%dT%H:%M:%SZ") + echo "[$now] Poll $i/$iterations" + changed=0 + + total=0 + pending=0 + failed=0 + success=0 + failed_checks="" + failed_signature="" + + # NOTE: + # - `gh pr checks --json ...` exits non-zero when any required check fails. + # - In that case, stdout can be empty, which previously caused false "unavailable". + # - We intentionally parse tabular output from `gh pr checks` so failures are still observable. + checks_output=$(gh pr checks "$pr" --repo "$repo" 2>/dev/null || true) + if [[ -n "$checks_output" ]]; then + failed_signature_lines=() + while IFS=$'\t' read -r check_name check_state _check_age check_url _rest; do + if [[ -z "$check_name" || -z "$check_state" ]]; then + continue + fi + + total=$((total + 1)) + check_state_lc=$(echo "$check_state" | tr '[:upper:]' '[:lower:]') + case "$check_state_lc" in + pass|success) + success=$((success + 1)) + ;; + pending|in_progress|queued|requested|waiting) + pending=$((pending + 1)) + ;; + skip|skipped|neutral) + ;; + *) + failed=$((failed + 1)) + failed_checks+="${check_name} (${check_state})"$'\t'"${check_url}"$'\n' + failed_signature_lines+=("${check_name}:${check_state}") + ;; + esac + done <<< "$checks_output" + + if [[ ${#failed_signature_lines[@]} -gt 0 ]]; then + failed_signature=$(printf '%s\n' "${failed_signature_lines[@]}" | sort | paste -sd'|' -) + fi + + echo "Checks: total=$total pending=$pending failed=$failed success=$success" + else + echo "Checks: unavailable" + fi + + if [[ -n "$failed_checks" ]]; then + echo "Failed checks:" + while IFS=$'\t' read -r name url; do + if [[ -n "$name" ]]; then + if [[ -n "$url" ]]; 
then + echo " - $name $url" + else + echo " - $name" + fi + fi + done <<< "$failed_checks" + fi + if [[ -n "$failed_signature" && "$failed_signature" != "$last_failed_signature" ]]; then + last_failed_signature="$failed_signature" + changed=1 + fi + + issue_line=$(gh api "repos/$repo/issues/$pr/comments?per_page=100" --jq ' + if length == 0 then "" else + (max_by(.created_at)) | "\(.id)\t\(.created_at)\t\(.user.login)\t\(.html_url)\t\(.body | gsub("\\n"; " ") | gsub("\\t"; " ") | .[0:200])" + end + ' 2>/dev/null || true) + if [[ -n "$issue_line" ]]; then + IFS=$'\t' read -r issue_id issue_time issue_user issue_url issue_body <<< "$issue_line" + if [[ "$issue_id" != "$last_issue_comment_id" ]]; then + last_issue_comment_id="$issue_id" + print_new "conversation comment" "$issue_time" "$issue_user" "$issue_url" "$issue_body" + changed=1 + fi + fi + + review_comment_line=$(gh api "repos/$repo/pulls/$pr/comments?per_page=100" --jq ' + if length == 0 then "" else + (max_by(.created_at)) | "\(.id)\t\(.created_at)\t\(.user.login)\t\(.html_url)\t\(.body | gsub("\\n"; " ") | gsub("\\t"; " ") | .[0:200])" + end + ' 2>/dev/null || true) + if [[ -n "$review_comment_line" ]]; then + IFS=$'\t' read -r rc_id rc_time rc_user rc_url rc_body <<< "$review_comment_line" + if [[ "$rc_id" != "$last_review_comment_id" ]]; then + last_review_comment_id="$rc_id" + print_new "inline review comment" "$rc_time" "$rc_user" "$rc_url" "$rc_body" + changed=1 + fi + fi + + review_line=$(gh api "repos/$repo/pulls/$pr/reviews?per_page=100" --jq ' + [ .[] | select(.submitted_at != null) ] | + if length == 0 then "" else + (max_by(.submitted_at)) | "\(.id)\t\(.submitted_at)\t\(.user.login)\t\(.html_url)\t\(.state)\t\(.body | gsub("\\n"; " ") | gsub("\\t"; " ") | .[0:200])" + end + ' 2>/dev/null || true) + if [[ -n "$review_line" ]]; then + IFS=$'\t' read -r r_id r_time r_user r_url r_state r_body <<< "$review_line" + if [[ "$r_id" != "$last_review_id" ]]; then + last_review_id="$r_id" + print_new 
"review ($r_state)" "$r_time" "$r_user" "$r_url" "$r_body" + changed=1 + fi + fi + + if [[ "$triage_on_change" == "1" && "$changed" == "1" ]]; then + bash "$script_dir/triage-pr.sh" --pr "$pr" --repo "$repo" || true + fi + + if [[ "$exit_when_green" == "1" && -n "${pending:-}" ]]; then + if (( pending == 0 && failed == 0 && total > 0 )); then + echo "Checks green; exiting early." + break + fi + fi + + if (( i < iterations )); then + sleep "$interval" + fi +done diff --git a/.agents/skills/create-pr/scripts/pr-body-update.sh b/.agents/skills/create-pr/scripts/pr-body-update.sh new file mode 100755 index 0000000..adc7231 --- /dev/null +++ b/.agents/skills/create-pr/scripts/pr-body-update.sh @@ -0,0 +1,98 @@ +#!/usr/bin/env bash +set -euo pipefail + +body_file="" +pr="" +repo="" + +usage() { + cat <<'USAGE' +Usage: pr-body-update.sh --file [--pr ] [--repo ] + +Updates a PR body using the GraphQL updatePullRequest mutation and verifies the result. +USAGE +} + +while [[ $# -gt 0 ]]; do + case "$1" in + --file) + body_file="$2" + shift 2 + ;; + --pr) + pr="$2" + shift 2 + ;; + --repo) + repo="$2" + shift 2 + ;; + -h|--help) + usage + exit 0 + ;; + *) + echo "Unknown arg: $1" >&2 + usage >&2 + exit 1 + ;; + esac +done + +if [[ -z "$body_file" ]]; then + echo "--file is required." >&2 + exit 1 +fi + +if [[ ! -f "$body_file" ]]; then + echo "Body file not found: $body_file" >&2 + exit 1 +fi + +if [[ ! -s "$body_file" ]]; then + echo "Body file is empty: $body_file" >&2 + exit 1 +fi + +if [[ -z "$pr" ]]; then + pr="$(gh pr view --json number --jq .number 2>/dev/null || true)" +fi + +if [[ -z "$pr" ]]; then + echo "Could not determine PR number. Use --pr ." >&2 + exit 1 +fi + +if [[ -z "$repo" ]]; then + repo="$(gh repo view --json nameWithOwner --jq .nameWithOwner 2>/dev/null || true)" +fi + +if [[ -z "$repo" ]]; then + echo "Could not determine repo. Use --repo owner/name." 
>&2 + exit 1 +fi + +pr_id="$(gh pr view "$pr" --repo "$repo" --json id --jq .id 2>/dev/null || true)" +if [[ -z "$pr_id" ]]; then + echo "Could not determine PR id for #$pr in $repo." >&2 + exit 1 +fi + +gh api graphql \ + -f query='mutation($id:ID!,$body:String!){updatePullRequest(input:{pullRequestId:$id, body:$body}){pullRequest{id}}}' \ + -f id="$pr_id" \ + -f body="$(cat "$body_file")" \ + >/dev/null + +updated_body="$(gh pr view "$pr" --repo "$repo" --json body --jq .body 2>/dev/null || true)" +if [[ -z "$updated_body" ]]; then + echo "Failed to fetch updated PR body for #$pr in $repo." >&2 + exit 1 +fi + +if [[ "$updated_body" != "$(cat "$body_file")" ]]; then + echo "PR body mismatch after update." >&2 + exit 1 +fi + +echo "Updated PR #$pr body in $repo." \ No newline at end of file diff --git a/.agents/skills/create-pr/scripts/triage-pr.sh b/.agents/skills/create-pr/scripts/triage-pr.sh new file mode 100755 index 0000000..6337e32 --- /dev/null +++ b/.agents/skills/create-pr/scripts/triage-pr.sh @@ -0,0 +1,129 @@ +#!/usr/bin/env bash +set -euo pipefail + +pr="" +repo="" + +usage() { + cat <<'USAGE' +Usage: triage-pr.sh [--pr ] [--repo ] + +Prints a single-shot summary of CI status, latest review, and latest comments. +USAGE +} + +while [[ $# -gt 0 ]]; do + case "$1" in + --pr) + pr="$2" + shift 2 + ;; + --repo) + repo="$2" + shift 2 + ;; + -h|--help) + usage + exit 0 + ;; + *) + echo "Unknown arg: $1" >&2 + usage >&2 + exit 1 + ;; + esac +done + +if [[ -z "$pr" ]]; then + pr="$(gh pr view --json number --jq .number 2>/dev/null || true)" +fi + +if [[ -z "$pr" ]]; then + echo "Could not determine PR number. Use --pr ." >&2 + exit 1 +fi + +if [[ -z "$repo" ]]; then + repo="$(gh repo view --json nameWithOwner --jq .nameWithOwner 2>/dev/null || true)" +fi + +if [[ -z "$repo" ]]; then + echo "Could not determine repo. Use --repo owner/name." 
>&2 + exit 1 +fi + +total=0 +pending=0 +failed=0 +success=0 +failed_checks="" + +# Same rationale as poll-pr.sh: JSON mode exits non-zero on failed checks. +checks_output=$(gh pr checks "$pr" --repo "$repo" 2>/dev/null || true) +if [[ -n "$checks_output" ]]; then + while IFS=$'\t' read -r check_name check_state _check_age check_url _rest; do + if [[ -z "$check_name" || -z "$check_state" ]]; then + continue + fi + + total=$((total + 1)) + check_state_lc=$(echo "$check_state" | tr '[:upper:]' '[:lower:]') + case "$check_state_lc" in + pass|success) + success=$((success + 1)) + ;; + pending|in_progress|queued|requested|waiting) + pending=$((pending + 1)) + ;; + skip|skipped|neutral) + ;; + *) + failed=$((failed + 1)) + failed_checks+="${check_name}"$'\t'"${check_state}"$'\t'"${check_url}"$'\n' + ;; + esac + done <<< "$checks_output" + + echo "CI: total=$total pending=$pending failed=$failed success=$success" +else + echo "CI: unavailable" +fi + +if [[ -n "$failed_checks" ]]; then + while IFS=$'\t' read -r name conclusion url; do + if [[ -n "$name" ]]; then + echo "FAIL: $name $conclusion $url" + fi + done <<< "$failed_checks" +fi + +review_line=$(gh api "repos/$repo/pulls/$pr/reviews?per_page=100" --jq ' + [ .[] | select(.submitted_at != null) ] | + if length == 0 then "" else + (max_by(.submitted_at)) | "\(.state)\t\(.user.login)\t\(.submitted_at)\t\(.html_url)" + end +' 2>/dev/null || true) +if [[ -n "$review_line" ]]; then + IFS=$'\t' read -r r_state r_user r_time r_url <<< "$review_line" + echo "REVIEW: $r_state $r_user $r_time $r_url" +fi + +issue_line=$(gh api "repos/$repo/issues/$pr/comments?per_page=100" --jq ' + if length == 0 then "" else + (max_by(.created_at)) | "\(.user.login)\t\(.created_at)\t\(.html_url)\t\(.body | gsub("\\n"; " ") | gsub("\\t"; " ") | .[0:200])" + end +' 2>/dev/null || true) +if [[ -n "$issue_line" ]]; then + IFS=$'\t' read -r c_user c_time c_url c_body <<< "$issue_line" + echo "COMMENT: conversation $c_user $c_time $c_url $c_body" +fi 
+ +review_comment_line=$(gh api "repos/$repo/pulls/$pr/comments?per_page=100" --jq ' + if length == 0 then "" else + (max_by(.created_at)) | "\(.user.login)\t\(.created_at)\t\(.html_url)\t\(.body | gsub("\\n"; " ") | gsub("\\t"; " ") | .[0:200])" + end +' 2>/dev/null || true) +if [[ -n "$review_comment_line" ]]; then + IFS=$'\t' read -r rc_user rc_time rc_url rc_body <<< "$review_comment_line" + echo "COMMENT: inline $rc_user $rc_time $rc_url $rc_body" +fi diff --git a/.agents/skills/final-check/SKILL.md b/.agents/skills/final-check/SKILL.md new file mode 100644 index 0000000..b8f4cc2 --- /dev/null +++ b/.agents/skills/final-check/SKILL.md @@ -0,0 +1,13 @@ +--- +name: final-check +description: Run final quality gates (lint, format, typecheck), fix root causes with minimal changes, and repeat until no errors or warnings remain. Use before completion or handoff. +--- + +# Final Check + +## Steps + +1. Run lint, format, and typecheck, then verify no errors or warnings exist. +2. If any errors or warnings are found, identify the root cause and apply the minimum effective fix. +3. Re-run lint, format, and typecheck after each fix. +4. Repeat steps 2 and 3 until all errors and warnings are cleared. diff --git a/.agents/skills/find-skills/SKILL.md b/.agents/skills/find-skills/SKILL.md new file mode 100644 index 0000000..c797184 --- /dev/null +++ b/.agents/skills/find-skills/SKILL.md @@ -0,0 +1,133 @@ +--- +name: find-skills +description: Helps users discover and install agent skills when they ask questions like "how do I do X", "find a skill for X", "is there a skill that can...", or express interest in extending capabilities. This skill should be used when the user is looking for functionality that might exist as an installable skill. +--- + +# Find Skills + +This skill helps you discover and install skills from the open agent skills ecosystem. 
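As a minimal sketch, the discovery-and-install flow can be driven from the shell. The skill slug below is an illustrative example of the `owner/repo@skill` format taken from this guide's examples, not a guaranteed package:

```shell
# Hypothetical discovery-and-install session. Real slugs come from the
# output of `npx skills find`; this one is an example only.
query="react performance"
slug="vercel-labs/agent-skills@vercel-react-best-practices"

# Echo the commands rather than running them, so the sketch has no side effects.
echo "npx skills find $query"
echo "npx skills add -g -y $slug"
# → npx skills find react performance
# → npx skills add -g -y vercel-labs/agent-skills@vercel-react-best-practices
```

Running the echoed commands for real follows the same shapes described in the sections below.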
+ +## When to Use This Skill + +Use this skill when the user: + +- Asks "how do I do X" where X might be a common task with an existing skill +- Says "find a skill for X" or "is there a skill for X" +- Asks "can you do X" where X is a specialized capability +- Expresses interest in extending agent capabilities +- Wants to search for tools, templates, or workflows +- Mentions they wish they had help with a specific domain (design, testing, deployment, etc.) + +## What is the Skills CLI? + +The Skills CLI (`npx skills`) is the package manager for the open agent skills ecosystem. Skills are modular packages that extend agent capabilities with specialized knowledge, workflows, and tools. + +**Key commands:** + +- `npx skills find [query]` - Search for skills interactively or by keyword +- `npx skills add ` - Install a skill from GitHub or other sources +- `npx skills check` - Check for skill updates +- `npx skills update` - Update all installed skills + +**Browse skills at:** https://skills.sh/ + +## How to Help Users Find Skills + +### Step 1: Understand What They Need + +When a user asks for help with something, identify: + +1. The domain (e.g., React, testing, design, deployment) +2. The specific task (e.g., writing tests, creating animations, reviewing PRs) +3. Whether this is a common enough task that a skill likely exists + +### Step 2: Search for Skills + +Run the find command with a relevant query: + +```bash +npx skills find [query] +``` + +For example: + +- User asks "how do I make my React app faster?" → `npx skills find react performance` +- User asks "can you help me with PR reviews?" 
→ `npx skills find pr review` +- User asks "I need to create a changelog" → `npx skills find changelog` + +The command will return results like: + +``` +Install with npx skills add + +vercel-labs/agent-skills@vercel-react-best-practices +└ https://skills.sh/vercel-labs/agent-skills/vercel-react-best-practices +``` + +### Step 3: Present Options to the User + +When you find relevant skills, present them to the user with: + +1. The skill name and what it does +2. The install command they can run +3. A link to learn more at skills.sh + +Example response: + +``` +I found a skill that might help! The "vercel-react-best-practices" skill provides +React and Next.js performance optimization guidelines from Vercel Engineering. + +To install it: +npx skills add vercel-labs/agent-skills@vercel-react-best-practices + +Learn more: https://skills.sh/vercel-labs/agent-skills/vercel-react-best-practices +``` + +### Step 4: Offer to Install + +If the user wants to proceed, you can install the skill for them: + +```bash +npx skills add -g -y +``` + +The `-g` flag installs globally (user-level) and `-y` skips confirmation prompts. + +## Common Skill Categories + +When searching, consider these common categories: + +| Category | Example Queries | +| --------------- | ---------------------------------------- | +| Web Development | react, nextjs, typescript, css, tailwind | +| Testing | testing, jest, playwright, e2e | +| DevOps | deploy, docker, kubernetes, ci-cd | +| Documentation | docs, readme, changelog, api-docs | +| Code Quality | review, lint, refactor, best-practices | +| Design | ui, ux, design-system, accessibility | +| Productivity | workflow, automation, git | + +## Tips for Effective Searches + +1. **Use specific keywords**: "react testing" is better than just "testing" +2. **Try alternative terms**: If "deploy" doesn't work, try "deployment" or "ci-cd" +3. 
**Check popular sources**: Many skills come from `vercel-labs/agent-skills` or `ComposioHQ/awesome-claude-skills` + +## When No Skills Are Found + +If no relevant skills exist: + +1. Acknowledge that no existing skill was found +2. Offer to help with the task directly using your general capabilities +3. Suggest the user could create their own skill with `npx skills init` + +Example: + +``` +I searched for skills related to "xyz" but didn't find any matches. +I can still help you with this task directly! Would you like me to proceed? + +If this is something you do often, you could create your own skill: +npx skills init my-xyz-skill +``` diff --git a/.agents/skills/mermaid-er-diagram/SKILL.md b/.agents/skills/mermaid-er-diagram/SKILL.md new file mode 100644 index 0000000..8770d56 --- /dev/null +++ b/.agents/skills/mermaid-er-diagram/SKILL.md @@ -0,0 +1,237 @@ +--- +name: mermaid-er-diagram +description: Generate professional, accurate Mermaid ER diagrams with complete database metadata. Use when creating ER diagrams that need to display all database information including primary keys (PK), foreign keys (FK), indexes, composite keys, data types, cardinality (1:1, 1:n, n:n), and comments. Triggers on requests for database diagrams, schema visualization, table relationship diagrams, or any Mermaid erDiagram creation. +--- + +# Mermaid ER Diagram Skill + +Generate precise, readable Mermaid ER diagrams with complete database metadata. 
+ +## Syntax Reference + +### Basic Structure + +```mermaid +erDiagram + TABLE_NAME { + type column_name PK "comment" + } + TABLE1 ||--o{ TABLE2 : "relationship_label" +``` + +### Data Types + +Use standard SQL-like types for clarity: + +- `int`, `bigint`, `serial` - Integer types +- `varchar`, `text`, `char` - String types +- `boolean`, `bool` - Boolean +- `date`, `datetime`, `timestamp` - Temporal +- `decimal`, `float`, `double` - Numeric +- `uuid`, `json`, `jsonb` - Special types + +### Key Annotations + +| Annotation | Meaning | Usage | +| ---------- | ----------- | ------------------------- | +| `PK` | Primary Key | Single column primary key | +| `FK` | Foreign Key | References another table | +| `UK` | Unique Key | Unique constraint | + +### Attribute Format + +``` +type column_name [PK|FK|UK] ["comment"] +``` + +Examples: + +```mermaid +erDiagram + users { + bigint id PK "Auto-increment" + varchar email UK "Unique email" + varchar name "Display name" + timestamp created_at "Creation timestamp" + } +``` + +### Cardinality Symbols + +| Left | Right | Meaning | +| ------ | ------ | ------------------- | +| `\|o` | `o\|` | Zero or one (0..1) | +| `\|\|` | `\|\|` | Exactly one (1) | +| `}o` | `o{` | Zero or more (0..n) | +| `}\|` | `\|{` | One or more (1..n) | + +### Relationship Patterns + +**1:1 Relationship (One-to-One)** + +```mermaid +erDiagram + users ||--|| user_profiles : "has" +``` + +**1:n Relationship (One-to-Many)** + +```mermaid +erDiagram + users ||--o{ orders : "places" + categories ||--|{ products : "contains" +``` + +**n:n Relationship (Many-to-Many)** +Use junction/bridge table: + +```mermaid +erDiagram + students }o--o{ courses : "enrolls" + students ||--o{ enrollments : "has" + courses ||--o{ enrollments : "has" + + enrollments { + bigint student_id PK "Composite PK (part 1)" + bigint course_id PK "Composite PK (part 2)" + date enrolled_at "Enrollment date" + } +``` + +### Line Styles + +| Syntax | Meaning | +| ------ | 
------------------------------------------ | +| `--` | Solid line (identifying relationship) | +| `..` | Dashed line (non-identifying relationship) | + +## Layout Best Practices + +### 1. Logical Grouping + +Group related entities together: + +- **Transaction tables**: Center/left, arranged by workflow order +- **Master tables**: Right side, referenced by transactions +- **Junction tables**: Between the tables they connect + +### 2. Relationship Flow + +- Parent tables above or left of child tables +- Transaction flow: left-to-right chronologically +- Foreign keys point from child to parent + +### 3. Readability Rules + +- Minimize crossing lines +- Keep related entities close +- Use meaningful relationship labels +- Add comments for non-obvious columns + +## Complete Example + +```mermaid +erDiagram + %% Master Tables + users { + bigint id PK "Auto-increment" + varchar email UK "Login email" + varchar password_hash "Bcrypt hash" + varchar display_name + boolean is_active "Soft delete flag" + timestamp created_at + timestamp updated_at + } + + categories { + int id PK + varchar name UK "Category name" + int parent_id FK "Self-reference for hierarchy" + int sort_order "Display order" + } + + products { + bigint id PK + int category_id FK + varchar sku UK "Stock keeping unit" + varchar name + text description + decimal price "Unit price" + int stock_quantity + boolean is_published + timestamp created_at + } + + %% Transaction Tables + orders { + bigint id PK + bigint user_id FK + varchar order_number UK "ORD-YYYYMMDD-XXXXX" + varchar status "pending/confirmed/shipped/delivered" + decimal subtotal + decimal tax + decimal total + timestamp ordered_at + } + + order_items { + bigint id PK + bigint order_id FK + bigint product_id FK + int quantity + decimal unit_price "Price at purchase time" + decimal subtotal + } + + payments { + bigint id PK + bigint order_id FK + varchar payment_method "card/bank/wallet" + varchar status "pending/completed/failed" + decimal amount + 
timestamp paid_at + } + + %% Relationships + categories ||--o{ categories : "parent" + categories ||--o{ products : "contains" + users ||--o{ orders : "places" + orders ||--|{ order_items : "contains" + products ||--o{ order_items : "purchased_in" + orders ||--o| payments : "paid_by" +``` + +## Index & Constraint Notation + +For indexes and composite keys, use comments: + +```mermaid +erDiagram + order_items { + bigint id PK + bigint order_id FK "idx: order_items_order_id" + bigint product_id FK "idx: order_items_product_id" + int quantity + } + + %% Composite unique constraint example + user_roles { + bigint user_id PK "Composite PK (part 1)" + bigint role_id PK "Composite PK (part 2)" + timestamp assigned_at "Assigned time" + } +``` + +## Output Checklist + +Before finalizing, verify: + +- [ ] All PKs marked with `PK` +- [ ] All FKs marked with `FK` +- [ ] Composite keys are represented by marking each participating column as `PK` +- [ ] Data types are specified for all columns +- [ ] Cardinality accurately reflects business rules +- [ ] Relationship labels are meaningful +- [ ] Comments explain non-obvious columns +- [ ] Indexes noted in comments where relevant +- [ ] Tables logically grouped (masters vs transactions) diff --git a/.agents/skills/mermaid-er-diagram/references/advanced-patterns.md b/.agents/skills/mermaid-er-diagram/references/advanced-patterns.md new file mode 100644 index 0000000..4be4271 --- /dev/null +++ b/.agents/skills/mermaid-er-diagram/references/advanced-patterns.md @@ -0,0 +1,244 @@ +# Advanced ER Diagram Patterns + +## Table of Contents + +1. [Self-Referencing Tables](#self-referencing-tables) +2. [Polymorphic Relationships](#polymorphic-relationships) +3. [Audit Columns Pattern](#audit-columns-pattern) +4. [Soft Delete Pattern](#soft-delete-pattern) +5. [Multi-Tenant Pattern](#multi-tenant-pattern) +6. [Versioning Pattern](#versioning-pattern) +7. 
[State Machine Pattern](#state-machine-pattern) + +--- + +## Self-Referencing Tables + +Hierarchical data (categories, org charts, comments): + +```mermaid +erDiagram + categories { + int id PK + varchar name + int parent_id FK "Self-ref, nullable" + int depth "Calculated hierarchy level" + varchar path "Materialized path: 1/2/5" + } + categories ||--o{ categories : "parent_of" + + %% Alternative: Closure table for complex queries + category_closure { + int ancestor_id PK,FK + int descendant_id PK,FK + int depth + } + categories ||--o{ category_closure : "ancestor" + categories ||--o{ category_closure : "descendant" +``` + +## Polymorphic Relationships + +When multiple tables share a relationship type: + +```mermaid +erDiagram + %% Option 1: Shared FK columns (nullable) + comments { + bigint id PK + bigint post_id FK "Nullable" + bigint product_id FK "Nullable" + text content + varchar commentable_type "post/product - for validation" + } + posts ||--o{ comments : "has" + products ||--o{ comments : "has" + + %% Option 2: Separate junction tables (cleaner) + post_comments { + bigint id PK + bigint post_id FK + text content + } + product_comments { + bigint id PK + bigint product_id FK + text content + } +``` + +## Audit Columns Pattern + +Standard audit fields for all entities: + +```mermaid +erDiagram + %% Include these in every table + any_table { + bigint id PK + timestamp created_at "NOT NULL DEFAULT NOW()" + bigint created_by FK "User who created" + timestamp updated_at "ON UPDATE CURRENT_TIMESTAMP" + bigint updated_by FK "User who last modified" + } + users ||--o{ any_table : "created_by" + users ||--o{ any_table : "updated_by" +``` + +## Soft Delete Pattern + +```mermaid +erDiagram + %% Option 1: Boolean flag + users_v1 { + bigint id PK + varchar email + boolean is_deleted "Default false" + timestamp deleted_at "Nullable" + } + + %% Option 2: Status enum (more flexible) + users_v2 { + bigint id PK + varchar email + varchar status "active/suspended/deleted" + 
timestamp status_changed_at + } +``` + +## Multi-Tenant Pattern + +```mermaid +erDiagram + %% Every table includes tenant_id + tenants { + uuid id PK + varchar name + varchar subdomain UK + } + + users { + bigint id PK + uuid tenant_id FK "idx: tenant_users" + varchar email "UK per tenant" + } + + projects { + bigint id PK + uuid tenant_id FK "idx: tenant_projects" + varchar name + } + + tenants ||--o{ users : "owns" + tenants ||--o{ projects : "owns" +``` + +## Versioning Pattern + +Track changes over time: + +```mermaid +erDiagram + %% Current state table + products { + bigint id PK + varchar name + decimal price + int current_version + timestamp updated_at + } + + %% History table + product_versions { + bigint id PK + bigint product_id FK + int version_number "idx: product_version" + varchar name + decimal price + bigint changed_by FK + timestamp changed_at + varchar change_type "create/update/delete" + } + + products ||--o{ product_versions : "history" + users ||--o{ product_versions : "changed_by" +``` + +## State Machine Pattern + +Order lifecycle with valid transitions: + +```mermaid +erDiagram + orders { + bigint id PK + varchar current_status "FK to order_statuses" + timestamp status_updated_at + } + + order_statuses { + varchar code PK "draft/pending/paid/shipped/delivered/cancelled" + varchar display_name + boolean is_terminal "Cannot transition from" + int sort_order + } + + order_status_transitions { + varchar from_status PK,FK + varchar to_status PK,FK + varchar required_role "Who can perform" + } + + order_status_history { + bigint id PK + bigint order_id FK + varchar from_status FK + varchar to_status FK + bigint changed_by FK + timestamp changed_at + text notes + } + + order_statuses ||--o{ order_status_transitions : "from" + order_statuses ||--o{ order_status_transitions : "to" + orders ||--o{ order_status_history : "transitions" +``` + +## Index Notation Conventions + +Document indexes in comments: + +| Prefix | Meaning | Example | +| -------- | 
--------------- | ---------------------------- | +| `idx:` | B-tree index | `"idx: users_email"` | +| `uidx:` | Unique index | `"uidx: users_tenant_email"` | +| `gist:` | GiST index | `"gist: locations_coords"` | +| `gin:` | GIN index | `"gin: posts_tags"` | +| `btree:` | Explicit B-tree | `"btree: orders_created_at"` | + +Example: + +```mermaid +erDiagram + products { + bigint id PK + int category_id FK "idx: products_category" + varchar name "idx: products_name" + tsvector search_vector "gin: products_search" + jsonb metadata "gin: products_metadata" + } +``` + +## Composite Index Notation + +```mermaid +erDiagram + order_items { + bigint id PK + bigint order_id FK + bigint product_id FK + int quantity + } + %% Note: Composite index on (order_id, product_id) + %% idx: order_items_order_product +``` diff --git a/.agents/skills/solana-dev/SKILL.md b/.agents/skills/solana-dev/SKILL.md new file mode 100644 index 0000000..dc0c517 --- /dev/null +++ b/.agents/skills/solana-dev/SKILL.md @@ -0,0 +1,104 @@ +--- +name: solana-dev +description: End-to-end Solana development playbook (Jan 2026). Prefer Solana Foundation framework-kit (@solana/client + @solana/react-hooks) for React/Next.js UI. Prefer @solana/kit for all new client/RPC/transaction code. When legacy dependencies require web3.js, isolate it behind @solana/web3-compat (or @solana/web3.js as a true legacy fallback). Covers wallet-standard-first connection (incl. ConnectorKit), Anchor/Pinocchio programs, Codama-based client generation, LiteSVM/Mollusk/Surfpool testing, and security checklists. 
+user-invocable: true +--- + +# Solana Development Skill (framework-kit-first) + +## What this Skill is for + +Use this Skill when the user asks for: + +- Solana dApp UI work (React / Next.js) +- Wallet connection + signing flows +- Transaction building / sending / confirmation UX +- On-chain program development (Anchor or Pinocchio) +- Client SDK generation (typed program clients) +- Local testing (LiteSVM, Mollusk, Surfpool) +- Security hardening and audit-style reviews + +## Default stack decisions (opinionated) + +1. **UI: framework-kit first** + +- Use `@solana/client` + `@solana/react-hooks`. +- Prefer Wallet Standard discovery/connect via the framework-kit client. + +2. **SDK: @solana/kit first** + +- Prefer Kit types (`Address`, `Signer`, transaction message APIs, codecs). +- Prefer `@solana-program/*` instruction builders over hand-rolled instruction data. + +3. **Legacy compatibility: web3.js only at boundaries** + +- If you must integrate a library that expects web3.js objects (`PublicKey`, `Transaction`, `Connection`), + use `@solana/web3-compat` as the boundary adapter. +- Do not let web3.js types leak across the entire app; contain them to adapter modules. + +4. **Programs** + +- Default: Anchor (fast iteration, IDL generation, mature tooling). +- Performance/footprint: Pinocchio when you need CU optimization, minimal binary size, + zero dependencies, or fine-grained control over parsing/allocations. + +5. **Testing** + +- Default: LiteSVM or Mollusk for unit tests (fast feedback, runs in-process). +- Use Surfpool for integration tests against realistic cluster state (mainnet/devnet) locally. +- Use solana-test-validator only when you need specific RPC behaviors not emulated by LiteSVM. + +## Operating procedure (how to execute tasks) + +When solving a Solana task: + +### 1. Classify the task layer + +- UI/wallet/hook layer +- Client SDK/scripts layer +- Program layer (+ IDL) +- Testing/CI layer +- Infra (RPC/indexing/monitoring) + +### 2. 
Pick the right building blocks + +- UI: framework-kit patterns. +- Scripts/backends: @solana/kit directly. +- Legacy library present: introduce a web3-compat adapter boundary. +- High-performance programs: Pinocchio over Anchor. + +### 3. Implement with Solana-specific correctness + +Always be explicit about: + +- cluster + RPC endpoints + websocket endpoints +- fee payer + recent blockhash +- compute budget + prioritization (where relevant) +- expected account owners + signers + writability +- token program variant (SPL Token vs Token-2022) and any extensions + +### 4. Add tests + +- Unit test: LiteSVM or Mollusk. +- Integration test: Surfpool. +- For "wallet UX", add mocked hook/provider tests where appropriate. + +### 5. Deliverables expectations + +When you implement changes, provide: + +- exact files changed + diffs (or patch-style output) +- commands to install/build/test +- a short "risk notes" section for anything touching signing/fees/CPIs/token transfers + +## Progressive disclosure (read when needed) + +- UI + wallet + hooks: [frontend-framework-kit.md](frontend-framework-kit.md) +- Kit ↔ web3.js boundary: [kit-web3-interop.md](kit-web3-interop.md) +- Anchor programs: [programs-anchor.md](programs-anchor.md) +- Pinocchio programs: [programs-pinocchio.md](programs-pinocchio.md) +- Testing strategy: [testing.md](testing.md) +- IDLs + codegen: [idl-codegen.md](idl-codegen.md) +- Payments: [payments.md](payments.md) +- Security checklist: [security.md](security.md) +- Reference links: [resources.md](resources.md) diff --git a/.agents/skills/solana-dev/frontend-framework-kit.md b/.agents/skills/solana-dev/frontend-framework-kit.md new file mode 100644 index 0000000..a5cda0e --- /dev/null +++ b/.agents/skills/solana-dev/frontend-framework-kit.md @@ -0,0 +1,90 @@ +# Frontend with framework-kit (Next.js / React) + +## Goals + +- One Solana client instance for the app (RPC + WS + wallet connectors) +- Wallet Standard-first discovery/connect +- Minimal "use 
client" footprint in Next.js (hooks only in leaf components) +- Transaction sending that is observable, cancelable, and UX-friendly + +## Recommended dependencies + +- @solana/client +- @solana/react-hooks +- @solana/kit +- @solana-program/system, @solana-program/token, etc. (only what you need) + +## Bootstrap recommendation + +Prefer `create-solana-dapp` and pick a kit/framework-kit compatible template for new projects. + +## Provider setup (Next.js App Router) + +Create a single client and provide it via SolanaProvider. + +Example `app/providers.tsx`: + +```tsx +"use client"; + +import React from "react"; +import { SolanaProvider } from "@solana/react-hooks"; +import { autoDiscover, createClient } from "@solana/client"; + +const endpoint = process.env.NEXT_PUBLIC_SOLANA_RPC_URL ?? "https://api.devnet.solana.com"; + +// Some environments prefer an explicit WS endpoint; default to wss derived from https. +const websocketEndpoint = + process.env.NEXT_PUBLIC_SOLANA_WS_URL ?? endpoint.replace("https://", "wss://").replace("http://", "ws://"); + +export const solanaClient = createClient({ + endpoint, + websocketEndpoint, + walletConnectors: autoDiscover(), +}); + +export function Providers({ children }: { children: React.ReactNode }) { + return <SolanaProvider client={solanaClient}>{children}</SolanaProvider>; +} +``` + +Then wrap `app/layout.tsx` with `<Providers>`. + +## Hook usage patterns (high-level) + +Prefer framework-kit hooks before writing your own store/subscription logic: + +- `useWalletConnection()` for connect/disconnect and wallet discovery +- `useBalance(...)` for lamports balance +- `useSolTransfer(...)` for SOL transfers +- `useSplToken(...)` / token helpers for token balances/transfers +- `useTransactionPool(...)` for managing send + status + retry flows + +When you need custom instructions, build them using `@solana-program/*` and send them via the framework-kit transaction helpers. + +## Data fetching and subscriptions + +- Prefer watchers/subscriptions rather than manual polling.
+- Clean up subscriptions with abort handles returned by watchers. +- For Next.js: keep server components server-side; only leaf components that call hooks should be client components. + +## Transaction UX checklist + +- Disable inputs while a transaction is pending +- Provide a signature immediately after send +- Track confirmation states (processed/confirmed/finalized) based on UX need +- Show actionable errors: + - user rejected signing + - insufficient SOL for fees / rent + - blockhash expired / dropped + - account already in use / already initialized + - program error (custom error code) + +## When to use ConnectorKit (optional) + +If you need a headless connector with composable UI elements and explicit state control, use ConnectorKit. +Typical reasons: + +- You want a headless wallet connection core (useful across frameworks) +- You want more control over wallet/account state than a single provider gives +- You need production diagnostics/health checks for wallet sessions diff --git a/.agents/skills/solana-dev/idl-codegen.md b/.agents/skills/solana-dev/idl-codegen.md new file mode 100644 index 0000000..5d6a3b5 --- /dev/null +++ b/.agents/skills/solana-dev/idl-codegen.md @@ -0,0 +1,49 @@ +# IDLs + client generation (Codama / Shank / Kinobi) + +## Goal + +Never hand-maintain multiple program clients by manually re-implementing serializers. +Prefer an IDL-driven, code-generated workflow. + +## Codama (preferred) + +- Use Codama as the "single program description format" to generate: + - TypeScript clients (including Kit-friendly output) + - Rust clients (when available/needed) + - documentation artifacts + +## Anchor → Codama + +If the program is Anchor: + +1. Produce Anchor IDL from the build +2. Convert Anchor IDL to Codama nodes (nodes-from-anchor) +3. Render a Kit-native TypeScript client (codama renderers) + +## Native Rust → Shank → Codama + +If the program is native: + +1. Use Shank macros to extract a Shank IDL from annotated Rust +2. 
Convert Shank IDL to Codama +3. Generate clients via Codama renderers + +## Repository structure recommendation + +- `programs/<program-name>/` (program source) +- `idl/<program-name>.json` (Anchor/Shank IDL) +- `codama/<program-name>.json` (Codama IDL) +- `clients/ts/<program-name>/` (generated TS client) +- `clients/rust/<program-name>/` (generated Rust client) + +## Generation guardrails + +- Codegen outputs should be checked into git if: + - you need deterministic builds + - you want users to consume the client without running codegen +- Otherwise, keep codegen in CI and publish artifacts. + +## "Do not do this" + +- Do not write IDLs by hand unless you have no alternative. +- Do not hand-write Borsh layouts for programs you own; use the IDL/codegen pipeline. diff --git a/.agents/skills/solana-dev/kit-web3-interop.md b/.agents/skills/solana-dev/kit-web3-interop.md new file mode 100644 index 0000000..fcccee8 --- /dev/null +++ b/.agents/skills/solana-dev/kit-web3-interop.md @@ -0,0 +1,62 @@ +# Kit ↔ web3.js Interop (boundary patterns) + +## The rule + +- New code: Kit types and Kit-first APIs. +- Legacy dependencies: isolate web3.js-shaped types behind an adapter boundary. + +## Preferred bridge: @solana/web3-compat + +Use `@solana/web3-compat` when: + +- A dependency expects `PublicKey`, `Keypair`, `Transaction`, `VersionedTransaction`, `Connection`, etc. +- You are migrating an existing web3.js codebase incrementally. + +### Why this approach works + +- web3-compat re-exports web3.js-like types and delegates to Kit where possible. +- It includes helper conversions to move between web3.js and Kit representations.
+ +## Practical boundary layout + +Keep these modules separate: + +- `src/solana/kit/`: + - all Kit-first code: addresses, instruction builders, tx assembly, typed codecs, generated clients + +- `src/solana/web3/`: + - adapters for legacy libs (Anchor TS client, older SDKs) + - conversions between `PublicKey` and Kit `Address` + - conversions between web3 `TransactionInstruction` and Kit instruction shapes (only at edges) + +## Conversion helpers (examples) + +Use web3-compat helpers such as: + +- `toAddress(...)` +- `toPublicKey(...)` +- `toWeb3Instruction(...)` +- `toKitSigner(...)` + +## When you still need @solana/web3.js + +Some methods outside web3-compat's compatibility surface may fall back to a legacy web3.js implementation. +If that happens: + +- keep `@solana/web3.js` as an explicit dependency +- isolate fallback usage to adapter modules only +- avoid letting `PublicKey` bleed into your core domain types + +## Common mistakes to prevent + +- Mixing `Address` and `PublicKey` throughout the app (causes type drift and confusion) +- Building transactions in one stack and signing in another without explicit conversion +- Passing web3.js `Connection` into Kit-native code (or vice versa) rather than using a single source of truth + +## Decision checklist + +If you're about to add web3.js: + +1. Is there a Kit-native equivalent? Prefer Kit. +2. Is the only reason a dependency? Use web3-compat at the boundary. +3. Can you generate a Kit-native client (Codama) instead? Prefer codegen. 
diff --git a/.agents/skills/solana-dev/payments.md b/.agents/skills/solana-dev/payments.md new file mode 100644 index 0000000..202d9b8 --- /dev/null +++ b/.agents/skills/solana-dev/payments.md @@ -0,0 +1,48 @@ +# Payments and commerce (optional) + +## When payments are in scope + +Use this guidance when the user asks about: + +- checkout flows, tips, payment buttons +- payment request URLs / QR codes +- fee abstraction / gasless transactions + +## Commerce Kit (preferred) + +Use Commerce Kit as the default for payment experiences: + +- drop-in payment UI components (buttons, modals, checkout flows) +- headless primitives for building custom checkout experiences +- React hooks for merchant/payment workflows +- built-in payment verification and confirmation handling +- support for SOL and SPL token payments + +### When to use Commerce Kit + +- You want a production-ready payment flow with minimal setup +- You need both UI components and headless APIs +- You want built-in best practices for payment verification +- You're building merchant experiences (tipping, checkout, subscriptions) + +### Commerce Kit patterns + +- Use the provided hooks for payment state management +- Leverage the built-in confirmation tracking (don't roll your own) +- Use the headless APIs when you need custom UI but want the payment logic handled + +## Kora (gasless / fee abstraction) + +Consider Kora when you need: + +- sponsored transactions (user doesn't pay gas) +- users paying fees in tokens other than SOL +- a trusted signing / paymaster component + +## UX and security checklist for payments + +- Always show recipient + amount + token clearly before signing. +- Protect against replay (use unique references / memoing where appropriate). +- Confirm settlement by querying chain state, not by trusting client-side callbacks. +- Handle partial failures gracefully (transaction sent but not confirmed). 
+- Provide clear error messages for common failure modes (insufficient balance, rejected signature). diff --git a/.agents/skills/solana-dev/programs-anchor.md b/.agents/skills/solana-dev/programs-anchor.md new file mode 100644 index 0000000..367090b --- /dev/null +++ b/.agents/skills/solana-dev/programs-anchor.md @@ -0,0 +1,316 @@ +# Programs with Anchor (default choice) + +## When to use Anchor + +Use Anchor by default when: + +- You want fast iteration with reduced boilerplate +- You want an IDL and TypeScript client story out of the box +- You want mature testing and workspace tooling +- You need built-in security through automatic account validation + +## Core Advantages + +- **Reduced Boilerplate**: Abstracts repetitive account management, instruction serialization, and error handling +- **Built-in Security**: Automatic account-ownership verification and data validation +- **IDL Generation**: Automatic interface definition for client generation + +## Core Macros + +### `declare_id!()` + +Declares the onchain address where the program resides—a unique public key derived from the project's keypair. + +### `#[program]` + +Marks the module containing every instruction entrypoint and business-logic function. + +### `#[derive(Accounts)]` + +Lists accounts an instruction requires and automatically enforces their constraints: + +- Declares all necessary accounts for specific instructions +- Enforces constraint checks automatically to block bugs and exploits +- Generates helper methods for safe account access and mutation + +### `#[error_code]` + +Enables custom, human-readable error types with `#[msg(...)]` attributes for clearer debugging. 
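To make the `#[error_code]` behavior concrete: Anchor numbers custom error variants sequentially starting from an offset of 6000 (lower codes are reserved for framework errors), and attaches the `#[msg(...)]` string to each. A simplified, dependency-free sketch of roughly what the macro generates (the struct and messages here are illustrative, not Anchor's actual expansion):

```rust
// Sketch of the mapping #[error_code] produces: unit variants are
// numbered sequentially from Anchor's custom-error offset of 6000.
#[derive(Debug, Clone, Copy, PartialEq)]
enum MyError {
    CustomError,     // -> 6000
    AccountInactive, // -> 6001
}

impl MyError {
    // Anchor reserves codes below 6000 for framework errors.
    const OFFSET: u32 = 6000;

    fn code(self) -> u32 {
        // Unit-only enums can be cast to their discriminant.
        Self::OFFSET + self as u32
    }

    fn msg(self) -> &'static str {
        match self {
            MyError::CustomError => "Custom error message",
            MyError::AccountInactive => "Account is inactive",
        }
    }
}

fn main() {
    assert_eq!(MyError::CustomError.code(), 6000);
    assert_eq!(MyError::AccountInactive.code(), 6001);
    println!("{}: {}", MyError::AccountInactive.code(), MyError::AccountInactive.msg());
}
```

This is why client-side error decoding subtracts 6000 before indexing into the IDL's error list.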
+ +## Account Types + +| Type | Purpose | +| ------------------------- | ----------------------------------------------- | +| `Signer<'info>` | Verifies the account signed the transaction | +| `SystemAccount<'info>` | Confirms System Program ownership | +| `Program<'info, T>` | Validates executable program accounts | +| `Account<'info, T>` | Typed program account with automatic validation | +| `UncheckedAccount<'info>` | Raw account requiring manual validation | + +## Account Constraints + +### Initialization + +```rust +#[account( + init, + payer = payer, + space = 8 + CustomAccount::INIT_SPACE +)] +pub account: Account<'info, CustomAccount>, +``` + +### PDA Validation + +```rust +#[account( + seeds = [b"vault", owner.key().as_ref()], + bump +)] +pub vault: SystemAccount<'info>, +``` + +### Ownership and Relationships + +```rust +#[account( + has_one = authority @ CustomError::InvalidAuthority, + constraint = account.is_active @ CustomError::AccountInactive +)] +pub account: Account<'info, CustomAccount>, +``` + +### Reallocation + +```rust +#[account( + mut, + realloc = new_space, + realloc::payer = payer, + realloc::zero = true // Clear old data when shrinking +)] +pub account: Account<'info, CustomAccount>, +``` + +### Closing Accounts + +```rust +#[account( + mut, + close = destination +)] +pub account: Account<'info, CustomAccount>, +``` + +## Account Discriminators + +Default discriminators use `sha256("account:<StructName>")[0..8]`. Custom discriminators (Anchor 0.31+): + +```rust +#[account(discriminator = 1)] +pub struct Escrow { ...
} +``` + +**Constraints:** + +- Discriminators must be unique across your program +- Using `[1]` prevents using `[1, 2, ...]` which also start with `1` +- `[0]` conflicts with uninitialized accounts + +## Instruction Patterns + +### Basic Structure + +```rust +#[program] +pub mod my_program { + use super::*; + + pub fn initialize(ctx: Context<Initialize>, data: u64) -> Result<()> { + ctx.accounts.account.data = data; + Ok(()) + } +} +``` + +### Context Implementation Pattern + +Move logic to context struct implementations for organization and testability: + +```rust +impl<'info> Transfer<'info> { + pub fn transfer_tokens(&mut self, amount: u64) -> Result<()> { + // Implementation + Ok(()) + } +} +``` + +## Cross-Program Invocations (CPIs) + +### Basic CPI + +```rust +let cpi_accounts = Transfer { + from: ctx.accounts.from.to_account_info(), + to: ctx.accounts.to.to_account_info(), +}; +let cpi_program = ctx.accounts.system_program.to_account_info(); +let cpi_ctx = CpiContext::new(cpi_program, cpi_accounts); + +transfer(cpi_ctx, amount)?; +``` + +### PDA-Signed CPIs + +```rust +let seeds = &[b"vault".as_ref(), &[ctx.bumps.vault]]; +let signer = &[&seeds[..]]; +let cpi_ctx = CpiContext::new_with_signer(cpi_program, cpi_accounts, signer); +``` + +## Error Handling + +```rust +#[error_code] +pub enum MyError { + #[msg("Custom error message")] + CustomError, + #[msg("Value too large")] + ValueTooLarge, +} + +// Usage. Note: #[error_code] variants are unit variants and #[msg] takes a +// static string; they cannot carry data like ValueError(u64). +require!(value > 0, MyError::CustomError); +require!(value < 100, MyError::ValueTooLarge); +``` + +## Token Accounts + +### SPL Token + +```rust +#[account( + mint::decimals = 9, + mint::authority = authority, +)] +pub mint: Account<'info, Mint>, + +#[account( + mut, + associated_token::mint = mint, + associated_token::authority = owner, +)] +pub token_account: Account<'info, TokenAccount>, +``` + +### Token2022 Compatibility + +Use `InterfaceAccount` for dual compatibility: + +```rust +use anchor_spl::token_interface::{Mint, TokenAccount}; + +pub mint:
InterfaceAccount<'info, Mint>, +pub token_account: InterfaceAccount<'info, TokenAccount>, +pub token_program: Interface<'info, TokenInterface>, +``` + +## LazyAccount (Anchor 0.31+) + +Heap-allocated, read-only account access for efficient memory usage: + +```rust +// Cargo.toml +anchor-lang = { version = "0.31.1", features = ["lazy-account"] } + +// Usage +pub account: LazyAccount<'info, CustomAccountType>, + +pub fn handler(ctx: Context<MyInstruction>) -> Result<()> { + let value = ctx.accounts.account.get_value()?; + Ok(()) +} +``` + +**Note:** LazyAccount is read-only. After CPIs, use `unload()` to refresh cached values. + +## Zero-Copy Accounts + +For accounts exceeding stack/heap limits: + +```rust +#[account(zero_copy)] +pub struct LargeAccount { + pub data: [u8; 10000], +} +``` + +Accounts under 10,240 bytes use `init`; larger accounts require external creation then `zero` constraint initialization. + +## Remaining Accounts + +Pass dynamic accounts beyond fixed instruction structure: + +```rust +pub fn batch_operation(ctx: Context<BatchOperation>, amounts: Vec<u64>) -> Result<()> { + let remaining = &ctx.remaining_accounts; + require!(remaining.len() % 2 == 0, BatchError::InvalidSchema); + + for (i, chunk) in remaining.chunks(2).enumerate() { + process_pair(&chunk[0], &chunk[1], amounts[i])?; + } + Ok(()) +} +``` + +## Version Management + +- Use AVM (Anchor Version Manager) for reproducible builds +- Keep Solana CLI + Anchor versions aligned in CI and developer setup +- Pin versions in `Anchor.toml` + +## Compatibility Notes for Anchor 0.32.0 + +To resolve build conflicts with certain crates in Anchor 0.32.0, run these cargo update commands in your project root: + +```bash +cargo update base64ct --precise 1.6.0 +cargo update constant_time_eq --precise 0.4.1 +cargo update blake3 --precise 1.5.5 +``` + +Additionally, if you encounter warnings about `solana-program` conflicts, add `solana-program = "3"` to the `[dependencies]` section in your program's `Cargo.toml` file (e.g.,
`programs/your-program/Cargo.toml`). + +## Security Best Practices + +### Account Validation + +- Use typed accounts (`Account<'info, T>`) over `UncheckedAccount` when possible +- Always validate signer requirements explicitly +- Use `has_one` for ownership relationships +- Validate PDA seeds and bumps + +### CPI Safety + +- Use `Program<'info, T>` to validate CPI targets (prevents arbitrary CPI attacks) +- Never pass extra privileges to CPI callees +- Prefer explicit program IDs for known CPIs + +### Common Gotchas + +- **Avoid `init_if_needed`**: Permits reinitialization attacks +- **Legacy IDL formats**: Ensure tooling agrees on format (pre-0.30 vs new spec) +- **PDA seeds**: Ensure all seed material is stable and canonical + +## Testing + +- Use `anchor test` for end-to-end tests +- Prefer Mollusk or LiteSVM for fast unit tests +- Use Surfpool for integration tests with mainnet state + +## IDL and Clients + +- Treat the program's IDL as a product artifact +- Prefer generating Kit-native clients via Codama +- If using Anchor TS client in Kit-first app, put it behind web3-compat boundary diff --git a/.agents/skills/solana-dev/programs-pinocchio.md b/.agents/skills/solana-dev/programs-pinocchio.md new file mode 100644 index 0000000..4ca4d9d --- /dev/null +++ b/.agents/skills/solana-dev/programs-pinocchio.md @@ -0,0 +1,665 @@ +# Programs with Pinocchio + +Pinocchio is a minimalist Rust library for crafting Solana programs without the heavyweight `solana-program` crate. It delivers significant performance gains through zero-copy techniques and minimal dependencies. 
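The zero-copy idea can be illustrated in plain Rust without any Solana dependencies: instead of deserializing account bytes into an owned struct, read each field directly out of the raw slice at a fixed offset. The two-field layout below is a hypothetical example, not a real on-chain format:

```rust
// Hypothetical account layout: [authority: 32 bytes][balance: u64 LE].
// Zero-copy access: fields are borrowed or read straight from the slice,
// with no allocation and no intermediate deserialized struct.
const AUTHORITY_LEN: usize = 32;
const LEN: usize = AUTHORITY_LEN + 8;

fn authority(data: &[u8]) -> Option<&[u8; 32]> {
    // Borrow the 32-byte key in place rather than copying it out.
    data.get(..AUTHORITY_LEN)?.try_into().ok()
}

fn balance(data: &[u8]) -> Option<u64> {
    let bytes = data.get(AUTHORITY_LEN..LEN)?;
    Some(u64::from_le_bytes(bytes.try_into().ok()?))
}

fn main() {
    let mut data = [0u8; LEN];
    data[..AUTHORITY_LEN].copy_from_slice(&[7u8; 32]);
    data[AUTHORITY_LEN..].copy_from_slice(&42u64.to_le_bytes());

    assert_eq!(authority(&data), Some(&[7u8; 32]));
    assert_eq!(balance(&data), Some(42));
}
```

On-chain, the same discipline applied to account data and instruction bytes is where much of Pinocchio's CU savings comes from.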
+ +## When to Use Pinocchio + +Use Pinocchio when you need: + +- **Maximum compute efficiency**: 84% CU savings compared to Anchor +- **Minimal binary size**: Leaner code paths and smaller deployments +- **Zero external dependencies**: Only Solana SDK types required +- **Fine-grained control**: Direct memory access and byte-level operations +- **no_std environments**: Embedded or constrained contexts + +## Core Architecture + +### Program Structure Validation Checklist + +Before building/deploying, verify lib.rs contains all required components: + +- [ ] `entrypoint!(process_instruction)` macro +- [ ] `pub const ID: Address = Address::new_from_array([...])` with correct program ID +- [ ] `fn process_instruction(program_id: &Address, accounts: &[AccountView], data: &[u8]) -> ProgramResult` +- [ ] Instruction routing logic with proper discriminators +- [ ] `pub mod instructions; pub use instructions::*;` + +### Entrypoint Pattern + +```rust +use pinocchio::{ + account::AccountView, + address::Address, + entrypoint, + error::ProgramError, + ProgramResult, +}; + +entrypoint!(process_instruction); + +fn process_instruction( + _program_id: &Address, + accounts: &[AccountView], + instruction_data: &[u8], +) -> ProgramResult { + match instruction_data.split_first() { + Some((&0, data)) => Deposit::try_from((data, accounts))?.process(), + Some((&1, _)) => Withdraw::try_from(accounts)?.process(), + _ => Err(ProgramError::InvalidInstructionData) + } +} +``` + +Single-byte discriminators support 256 instructions; use two bytes for up to 65,536 variants. + +### Panic Handler Configuration + +**For std environments (SBF builds):** + +```rust +entrypoint!(process_instruction); +// Remove nostd_panic_handler!() - std provides panic handling +``` + +**For no_std environments:** + +```rust +#![no_std] +entrypoint!(process_instruction); +nostd_panic_handler!(); +``` + +**Critical**: Never include both - causes a duplicate lang item error in SBF builds.
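The discriminator routing in the entrypoint pattern can be exercised off-chain as ordinary Rust. This standalone sketch (a toy instruction set, no Pinocchio types) shows how `split_first` peels the one-byte discriminator off the instruction data before the payload is parsed:

```rust
// Toy dispatcher mirroring the entrypoint pattern: the first byte selects
// the instruction, the remainder is that instruction's payload.
#[derive(Debug, PartialEq)]
enum Instruction<'a> {
    Deposit { amount: u64, rest: &'a [u8] },
    Withdraw,
}

fn route(data: &[u8]) -> Result<Instruction<'_>, &'static str> {
    match data.split_first() {
        Some((&0, rest)) => {
            // Deposit payload: u64 amount, little-endian.
            let bytes: [u8; 8] = rest
                .get(..8)
                .ok_or("invalid instruction data")?
                .try_into()
                .map_err(|_| "invalid instruction data")?;
            Ok(Instruction::Deposit { amount: u64::from_le_bytes(bytes), rest: &rest[8..] })
        }
        Some((&1, _)) => Ok(Instruction::Withdraw),
        _ => Err("invalid instruction data"),
    }
}

fn main() {
    let mut deposit = vec![0u8];
    deposit.extend_from_slice(&100u64.to_le_bytes());
    assert_eq!(route(&deposit), Ok(Instruction::Deposit { amount: 100, rest: &[] }));
    assert_eq!(route(&[1]), Ok(Instruction::Withdraw));
    assert!(route(&[]).is_err());
}
```

Note that `split_first` yields `Option<(&u8, &[u8])>`, so the literal patterns must be written as `&0` and `&1` (or matched against a `&u8` discriminator constant).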
+ +### Program ID Declaration + +```rust +pub const ID: Address = Address::new_from_array([ + // Your 32-byte program ID as bytes + 0xXX, 0xXX, ..., 0xXX, +]); +``` + +Note: use `Address::new_from_array()`, not `Address::new()`. + +### Recommended Import Structure + +```rust +use pinocchio::{ + account::AccountView, + address::Address, + entrypoint, + error::ProgramError, + ProgramResult, +}; +// Add CPI imports only when needed: +// cpi::{invoke_signed, Seed, Signer}, +// Add system program imports only when needed: +// pinocchio_system::instructions::Transfer, +``` + +### Instruction Structure + +Separate validation from business logic using the `TryFrom` trait: + +```rust +pub struct Deposit<'a> { + pub accounts: DepositAccounts<'a>, + pub data: DepositData, +} + +impl<'a> TryFrom<(&'a [u8], &'a [AccountView])> for Deposit<'a> { + type Error = ProgramError; + + fn try_from((data, accounts): (&'a [u8], &'a [AccountView])) -> Result<Self, Self::Error> { + let accounts = DepositAccounts::try_from(accounts)?; + let data = DepositData::try_from(data)?; + Ok(Self { accounts, data }) + } +} + +impl<'a> Deposit<'a> { + pub const DISCRIMINATOR: &'a u8 = &0; + + pub fn process(&self) -> ProgramResult { + // Business logic only - validation already complete + Ok(()) + } +} +``` + +## Account Validation + +Pinocchio requires manual validation. Wrap all checks in `TryFrom` implementations: + +### Account Struct Validation + +```rust +pub struct DepositAccounts<'a> { + pub owner: &'a AccountView, + pub vault: &'a AccountView, + pub system_program: &'a AccountView, +} + +impl<'a> TryFrom<&'a [AccountView]> for DepositAccounts<'a> { + type Error = ProgramError; + + fn try_from(accounts: &'a [AccountView]) -> Result<Self, Self::Error> { + let [owner, vault, system_program, _remaining @ ..]
= accounts else { + return Err(ProgramError::NotEnoughAccountKeys); + }; + + // Signer check + if !owner.is_signer() { + return Err(ProgramError::MissingRequiredSignature); + } + + // Owner check + if !vault.is_owned_by(&pinocchio_system::ID) { + return Err(ProgramError::InvalidAccountOwner); + } + + // Program ID check (prevents arbitrary CPI) + if system_program.address() != &pinocchio_system::ID { + return Err(ProgramError::IncorrectProgramId); + } + + Ok(Self { owner, vault, system_program }) + } +} +``` + +### Instruction Data Validation + +```rust +pub struct DepositData { + pub amount: u64, +} + +impl<'a> TryFrom<&'a [u8]> for DepositData { + type Error = ProgramError; + + fn try_from(data: &'a [u8]) -> Result<Self, Self::Error> { + if data.len() != core::mem::size_of::<u64>() { + return Err(ProgramError::InvalidInstructionData); + } + + let amount = u64::from_le_bytes(data.try_into().unwrap()); + + if amount == 0 { + return Err(ProgramError::InvalidInstructionData); + } + + Ok(Self { amount }) + } +} +``` + +## Token Account Helpers + +### SPL Token Validation + +```rust +pub struct Mint; + +impl Mint { + pub fn check(account: &AccountView) -> Result<(), ProgramError> { + if !account.is_owned_by(&pinocchio_token::ID) { + return Err(ProgramError::InvalidAccountOwner); + } + if account.data_len() != pinocchio_token::state::Mint::LEN { + return Err(ProgramError::InvalidAccountData); + } + Ok(()) + } + + pub fn init( + account: &AccountView, + payer: &AccountView, + decimals: u8, + mint_authority: &[u8; 32], + freeze_authority: Option<&[u8; 32]>, + ) -> ProgramResult { + let lamports = Rent::get()?.minimum_balance(pinocchio_token::state::Mint::LEN); + 
CreateAccount { + from: payer, + to: account, + lamports, + space: pinocchio_token::state::Mint::LEN as u64, + owner: &pinocchio_token::ID, + }.invoke()?; + + InitializeMint2 { + mint: account, + decimals, + mint_authority, + freeze_authority, + }.invoke() + } +} +``` + +### Token2022 Support + +Token2022 requires discriminator-based validation due to variable account sizes with extensions: + +```rust +pub const TOKEN_2022_PROGRAM_ID: [u8; 32] = [...]; +const TOKEN_2022_ACCOUNT_DISCRIMINATOR_OFFSET: usize = 165; +pub const TOKEN_2022_MINT_DISCRIMINATOR: u8 = 0x01; +pub const TOKEN_2022_TOKEN_ACCOUNT_DISCRIMINATOR: u8 = 0x02; + +pub struct Mint2022; + +impl Mint2022 { + pub fn check(account: &AccountView) -> Result<(), ProgramError> { + if !account.is_owned_by(&TOKEN_2022_PROGRAM_ID) { + return Err(ProgramError::InvalidAccountOwner); + } + + let data = account.try_borrow_data()?; + + if data.len() != pinocchio_token::state::Mint::LEN { + if data.len() <= TOKEN_2022_ACCOUNT_DISCRIMINATOR_OFFSET { + return Err(ProgramError::InvalidAccountData); + } + if data[TOKEN_2022_ACCOUNT_DISCRIMINATOR_OFFSET] != TOKEN_2022_MINT_DISCRIMINATOR { + return Err(ProgramError::InvalidAccountData); + } + } + Ok(()) + } +} +``` + +### Token Interface (Both Programs) + +```rust +pub struct MintInterface; + +impl MintInterface { + pub fn check(account: &AccountView) -> Result<(), ProgramError> { + if account.is_owned_by(&pinocchio_token::ID) { + if account.data_len() != pinocchio_token::state::Mint::LEN { + return Err(ProgramError::InvalidAccountData); + } + } else if account.is_owned_by(&TOKEN_2022_PROGRAM_ID) { + Mint2022::check(account)?; + } else { + return Err(ProgramError::InvalidAccountOwner); + } + Ok(()) + } +} +``` + +## Cross-Program Invocations (CPIs) + +### Basic CPI + +```rust +use pinocchio_system::instructions::Transfer; + +Transfer { + from: self.accounts.owner, + to: self.accounts.vault, + lamports: self.data.amount, +}.invoke()?; +``` + +### PDA-Signed CPI + +```rust 
+use pinocchio::cpi::{Seed, Signer}; + +let seeds = [ + Seed::from(b"vault"), + Seed::from(self.accounts.owner.address().as_ref()), + Seed::from(&[bump]), +]; +let signers = [Signer::from(&seeds)]; + +Transfer { + from: self.accounts.vault, + to: self.accounts.owner, + lamports: self.accounts.vault.lamports(), +}.invoke_signed(&signers)?; +``` + +## Reading and Writing Data + +### Struct Field Ordering + +Order fields from largest to smallest alignment to minimize padding: + +```rust +// Good: 16 bytes total +#[repr(C)] +struct GoodOrder { + big: u64, // 8 bytes, 8-byte aligned + medium: u16, // 2 bytes, 2-byte aligned + small: u8, // 1 byte, 1-byte aligned + // 5 bytes padding +} + +// Bad: 24 bytes due to padding +#[repr(C)] +struct BadOrder { + small: u8, // 1 byte + // 7 bytes padding + big: u64, // 8 bytes + medium: u16, // 2 bytes + // 6 bytes padding +} +``` + +### Zero-Copy Reading (Safe Pattern) + +Use byte arrays with accessor methods to avoid alignment issues: + +```rust +#[repr(C)] +pub struct Config { + pub authority: Pubkey, + pub mint: Pubkey, + seed: [u8; 8], // Store as bytes + fee: [u8; 2], // Store as bytes + pub state: u8, + pub bump: u8, +} + +impl Config { + pub const LEN: usize = core::mem::size_of::<Self>(); + + pub fn from_bytes(data: &[u8]) -> Result<&Self, ProgramError> { + if data.len() != Self::LEN { + return Err(ProgramError::InvalidAccountData); + } + // Safe: all fields are byte-aligned + Ok(unsafe { &*(data.as_ptr() as *const Self) }) + } + + pub fn seed(&self) -> u64 { + u64::from_le_bytes(self.seed) + } + + pub fn fee(&self) -> u16 { + u16::from_le_bytes(self.fee) + } + + pub fn set_seed(&mut self, seed: u64) { + self.seed = seed.to_le_bytes(); + } + + pub fn set_fee(&mut self, fee: u16) { + self.fee = fee.to_le_bytes(); + } +} +``` + +### Field-by-Field Serialization (Safest) + +```rust +impl Config { + pub fn write_to_buffer(&self, data: &mut [u8]) -> Result<(), ProgramError> { + if data.len() != Self::LEN { + return
Err(ProgramError::InvalidAccountData); + } + + let mut offset = 0; + + data[offset..offset + 32].copy_from_slice(self.authority.as_ref()); + offset += 32; + + data[offset..offset + 32].copy_from_slice(self.mint.as_ref()); + offset += 32; + + data[offset..offset + 8].copy_from_slice(&self.seed); + offset += 8; + + data[offset..offset + 2].copy_from_slice(&self.fee); + offset += 2; + + data[offset] = self.state; + data[offset + 1] = self.bump; + + Ok(()) + } +} +``` + +### Dangerous Patterns to Avoid + +```rust +// ❌ transmute with unaligned data +let value: u64 = unsafe { core::mem::transmute(bytes_slice) }; + +// ❌ Pointer casting to packed structs +#[repr(C, packed)] +pub struct Packed { pub a: u8, pub b: u64 } +let config = unsafe { &*(data.as_ptr() as *const Packed) }; + +// ❌ Direct field access on packed structs creates unaligned references +let b_ref = &packed.b; + +// ❌ Assuming alignment without verification +let config = unsafe { &*(data.as_ptr() as *const Config) }; +``` + +## Error Handling + +Use `thiserror` for descriptive errors (supports `no_std`): + +```rust +use thiserror::Error; +use num_derive::FromPrimitive; +use pinocchio::program_error::ProgramError; + +#[derive(Clone, Debug, Eq, Error, FromPrimitive, PartialEq)] +pub enum VaultError { + #[error("Lamport balance below rent-exempt threshold")] + NotRentExempt, + #[error("Invalid account owner")] + InvalidOwner, + #[error("Account not initialized")] + NotInitialized, +} + +impl From<VaultError> for ProgramError { + fn from(e: VaultError) -> Self { + ProgramError::Custom(e as u32) + } +} +``` + +## Closing Accounts Securely + +Prevent revival attacks by marking closed accounts: + +```rust +pub fn close(account: &AccountView, destination: &AccountView) -> ProgramResult { + // Mark as closed (prevents reinitialization) + { + let mut data = account.try_borrow_mut_data()?; + data[0] = 0xff; + } + + // Transfer lamports + *destination.try_borrow_mut_lamports()?
+= *account.try_borrow_lamports()?; + + // Shrink and close + account.realloc(1, true)?; + account.close() +} +``` + +## Performance Optimization + +### Feature Flags + +```toml +[features] +default = ["perf"] +perf = [] +``` + +```rust +#[cfg(not(feature = "perf"))] +pinocchio::msg!("Instruction: Deposit"); +``` + +### Bitwise Flags for Storage + +Pack up to 8 booleans in one byte: + +```rust +const FLAG_ACTIVE: u8 = 1 << 0; +const FLAG_FROZEN: u8 = 1 << 1; +const FLAG_ADMIN: u8 = 1 << 2; + +// Set flag +flags |= FLAG_ACTIVE; + +// Check flag +if flags & FLAG_ACTIVE != 0 { /* active */ } + +// Clear flag +flags &= !FLAG_ACTIVE; +``` + +### Zero-Allocation Architecture + +Use references instead of heap allocations: + +```rust +// Good: references with borrowed lifetimes +pub struct Instruction<'a> { + pub accounts: &'a [AccountView], + pub data: &'a [u8], +} + +// Enforce no heap usage +no_allocator!(); +``` + +Respect Solana's memory limits: 4KB stack per function, 32KB total heap. + +### Skip Redundant Checks + +If a downstream CPI will fail on incorrect accounts anyway, skip redundant pre-validation. When a check is still required, compare the account directly against its expected derived address: + +```rust +// Compute the expected ATA address and compare it directly +let expected_ata = find_program_address( + &[owner.address(), token_program.address(), mint.address()], + &pinocchio_associated_token_account::ID, +).0; + +if account.address() != &expected_ata { + return Err(ProgramError::InvalidAccountData); +} +``` + +## Batch Instructions + +Process multiple operations in a single instruction (saves ~1000 CU per batched operation): + +```rust +const IX_HEADER_SIZE: usize = 2; // account_count + data_length + +pub fn process_batch(mut accounts: &[AccountView], mut data: &[u8]) -> ProgramResult { + loop { + if data.len() < IX_HEADER_SIZE { + return Err(ProgramError::InvalidInstructionData); + } + + let account_count = data[0] as usize; + let data_len = data[1] as usize; + let data_offset = IX_HEADER_SIZE + data_len; + + if accounts.len() < account_count || data.len() < data_offset
{ + return Err(ProgramError::InvalidInstructionData); + } + + let (ix_accounts, ix_data) = (&accounts[..account_count], &data[IX_HEADER_SIZE..data_offset]); + + process_inner_instruction(ix_accounts, ix_data)?; + + if data_offset == data.len() { + break; + } + + accounts = &accounts[account_count..]; + data = &data[data_offset..]; + } + + Ok(()) +} +``` + +## Testing + +Use Mollusk or LiteSVM for fast Rust-based testing: + +```rust +#[cfg(test)] +pub mod tests; + +// Run with: cargo test-sbf +``` + +See [testing.md](testing.md) for detailed testing patterns with Mollusk and LiteSVM. + +## Build & Deployment + +### Build Validation + +After `cargo build-sbf`: + +- [ ] Check .so file size (>1KB, typically 5-15KB for Pinocchio programs) +- [ ] Verify file type: `file target/deploy/program.so` should show "ELF 64-bit LSB shared object" +- [ ] Test regular compilation: `cargo build` should succeed +- [ ] Run tests: `cargo test` should pass + +### Dependency Compatibility Issues + +**If SBF build fails with "edition2024" errors:** + +```bash +# Downgrade problematic dependencies to compatible versions +cargo update base64ct --precise 1.6.0 +cargo update constant_time_eq --precise 0.4.1 +cargo update blake3 --precise 1.5.5 +``` + +**When to apply**: Only when encountering Cargo "edition2024" errors during `cargo build-sbf`. These downgrades resolve toolchain compatibility issues while maintaining functionality. + +**Note**: These specific versions were tested and verified to work with current Solana toolchain. Regular `cargo update` may pull incompatible versions. + +## Security Checklist + +- [ ] Validate all account owners in `TryFrom` implementations +- [ ] Check signer status for authority accounts +- [ ] Verify PDA derivation matches expected seeds +- [ ] Validate program IDs before CPIs (prevent arbitrary CPI) +- [ ] Use checked math (`checked_add`, `checked_sub`, etc.) 
+- [ ] Mark closed accounts to prevent revival attacks +- [ ] Validate instruction data length before parsing +- [ ] Check for duplicate mutable accounts when accepting multiple of same type diff --git a/.agents/skills/solana-dev/resources.md b/.agents/skills/solana-dev/resources.md new file mode 100644 index 0000000..4db7df4 --- /dev/null +++ b/.agents/skills/solana-dev/resources.md @@ -0,0 +1,92 @@ +# Curated Resources (Source-of-Truth First) + +## Learning Platforms + +- [Blueshift](https://learn.blueshift.gg/) - Free, open-source Solana learning platform +- [Blueshift GitHub](https://github.com/blueshift-gg) - Course content and tools +- [Solana Cookbook](https://solanacookbook.com/) + +## Core Solana Docs + +- [Solana Documentation](https://solana.com/docs) (Core, RPC, Frontend, Programs) +- [Next.js + Solana React Hooks](https://solana.com/docs/frontend/nextjs-solana) +- [@solana/web3-compat](https://solana.com/docs/frontend/web3-compat) +- [RPC API Reference](https://solana.com/docs/rpc) + +## Modern JS/TS SDK + +- [@solana/kit Repository](https://github.com/anza-xyz/kit) +- [Solana Kit Docs](https://solana.com/docs/clients/kit) (installation, upgrade guide) + +## UI and Wallet Infrastructure + +- [framework-kit Repository](https://github.com/solana-foundation/framework-kit) (@solana/client, @solana/react-hooks) +- [ConnectorKit](https://github.com/civic-io/connector-kit) (headless Wallet Standard connector) + +## Scaffolding + +- [create-solana-dapp](https://github.com/solana-developers/create-solana-dapp) + +## Program Frameworks + +### Anchor + +- [Anchor Repository](https://github.com/coral-xyz/anchor) +- [Anchor Documentation](https://www.anchor-lang.com/) +- [Anchor Version Manager (AVM)](https://www.anchor-lang.com/docs/avm) + +### Pinocchio + +- [Pinocchio Repository](https://github.com/anza-xyz/pinocchio) +- [pinocchio-system](https://crates.io/crates/pinocchio-system) +- [pinocchio-token](https://crates.io/crates/pinocchio-token) +- [Pinocchio 
Guide](https://github.com/vict0rcarvalh0/pinocchio-guide) +- [How to Build with Pinocchio (Helius)](https://www.helius.dev/blog/pinocchio) + +## Testing + +### LiteSVM + +- [LiteSVM Repository](https://github.com/LiteSVM/litesvm) +- [litesvm crate](https://crates.io/crates/litesvm) +- [litesvm npm](https://www.npmjs.com/package/litesvm) + +### Mollusk + +- [Mollusk Repository](https://github.com/buffalojoec/mollusk) +- [mollusk-svm crate](https://crates.io/crates/mollusk-svm) + +### Surfpool + +- [Surfpool Documentation](https://docs.surfpool.dev/) +- [Surfpool Repository](https://github.com/txtx/surfpool) + +## IDLs and Codegen + +- [Codama Repository](https://github.com/codama-idl/codama) +- [Codama Generating Clients](https://solana.com/docs/programs/codama-generating-clients) +- [Shank (Metaplex)](https://github.com/metaplex-foundation/shank) +- [Kinobi (Metaplex)](https://github.com/metaplex-foundation/kinobi) + +## Tokens and NFTs + +- [SPL Token Documentation](https://spl.solana.com/token) +- [Token-2022 Documentation](https://spl.solana.com/token-2022) +- [Metaplex Documentation](https://developers.metaplex.com/) + +## Payments + +- [Commerce Kit Repository](https://github.com/solana-foundation/commerce-kit) +- [Commerce Kit Documentation](https://commercekit.solana.com/) +- [Kora Documentation](https://docs.kora.network/) + +## Security + +- [Blueshift Program Security Course](https://learn.blueshift.gg/en/courses/program-security) +- [Solana Security Best Practices](https://solana.com/docs/programs/security) + +## Performance and Optimization + +- [Solana Optimized Programs](https://github.com/Laugharne/solana_optimized_programs) +- [sBPF Assembly SDK](https://github.com/blueshift-gg/sbpf) +- [Doppler Oracle (21 CU)](https://github.com/blueshift-gg/doppler) diff --git a/.agents/skills/solana-dev/security.md b/.agents/skills/solana-dev/security.md new file mode 100644 index 0000000..11b83b2 --- /dev/null +++ b/.agents/skills/solana-dev/security.md @@ -0,0 
+1,310 @@ +# Solana Security Checklist (Program + Client) + +## Core Principle + +Assume the attacker controls: + +- Every account passed into an instruction +- Every instruction argument +- Transaction ordering (within reason) +- CPI call graphs (via composability) + +--- + +## Vulnerability Categories + +### 1. Missing Owner Checks + +**Risk**: Attacker creates fake accounts with identical data structure and correct discriminator. + +**Attack**: Without owner checks, deserialization succeeds for both legitimate and counterfeit accounts. + +**Anchor Prevention**: + +```rust +// Option 1: Use typed accounts (automatic) +pub account: Account<'info, ProgramAccount>, + +// Option 2: Explicit constraint +#[account(owner = program_id)] +pub account: UncheckedAccount<'info>, +``` + +**Pinocchio Prevention**: + +```rust +if !account.is_owned_by(&crate::ID) { + return Err(ProgramError::InvalidAccountOwner); +} +``` + +--- + +### 2. Missing Signer Checks + +**Risk**: Any account can perform operations that should be restricted to specific authorities. + +**Attack**: Attacker locates target account, extracts owner pubkey, constructs transaction using real owner's address without their signature. + +**Anchor Prevention**: + +```rust +// Option 1: Use Signer type +pub authority: Signer<'info>, + +// Option 2: Explicit constraint +#[account(signer)] +pub authority: UncheckedAccount<'info>, + +// Option 3: Manual check +if !ctx.accounts.authority.is_signer { + return Err(ProgramError::MissingRequiredSignature); +} +``` + +**Pinocchio Prevention**: + +```rust +if !self.accounts.authority.is_signer() { + return Err(ProgramError::MissingRequiredSignature); +} +``` + +--- + +### 3. Arbitrary CPI Attacks + +**Risk**: Program blindly calls whatever program is passed as parameter, becoming a proxy for malicious code. + +**Attack**: Attacker substitutes malicious program mimicking expected interface (e.g., fake SPL Token that reverses transfers). 
+ +**Anchor Prevention**: + +```rust +// Use typed Program accounts +pub token_program: Program<'info, Token>, + +// Or explicit validation +if ctx.accounts.token_program.key() != spl_token::ID { + return Err(ProgramError::IncorrectProgramId); +} +``` + +**Pinocchio Prevention**: + +```rust +if self.accounts.token_program.key() != &pinocchio_token::ID { + return Err(ProgramError::IncorrectProgramId); +} +``` + +--- + +### 4. Reinitialization Attacks + +**Risk**: Calling initialization functions on already-initialized accounts overwrites existing data. + +**Attack**: Attacker reinitializes account to become new owner, then drains controlled assets. + +**Anchor Prevention**: + +```rust +// Use init constraint (automatic protection) +#[account(init, payer = payer, space = 8 + Data::LEN)] +pub account: Account<'info, Data>, + +// Manual check if needed +if ctx.accounts.account.is_initialized { + return Err(ProgramError::AccountAlreadyInitialized); +} +``` + +**Critical**: Avoid `init_if_needed` - it permits reinitialization. + +**Pinocchio Prevention**: + +```rust +// Check discriminator before initialization +let data = account.try_borrow_data()?; +if data[0] == ACCOUNT_DISCRIMINATOR { + return Err(ProgramError::AccountAlreadyInitialized); +} +``` + +--- + +### 5. PDA Sharing Vulnerabilities + +**Risk**: Same PDA used across multiple users enables unauthorized access. + +**Attack**: Shared PDA authority becomes "master key" unlocking multiple users' assets. + +**Vulnerable Pattern**: + +```rust +// BAD: Only mint in seeds - all vaults for same token share authority +seeds = [b"pool", pool.mint.as_ref()] +``` + +**Secure Pattern**: + +```rust +// GOOD: Include user-specific identifiers +seeds = [b"pool", vault.key().as_ref(), owner.key().as_ref()] +``` + +--- + +### 6. Type Cosplay Attacks + +**Risk**: Accounts with identical data structures but different purposes can be substituted.
+ +**Attack**: Attacker passes controlled account type as different type parameter, bypassing authorization. + +**Prevention**: Use discriminators to distinguish account types. + +**Anchor**: Automatic 8-byte discriminator with `#[account]` macro. + +**Pinocchio**: + +```rust +// Validate discriminator before processing +let data = account.try_borrow_data()?; +if data[0] != EXPECTED_DISCRIMINATOR { + return Err(ProgramError::InvalidAccountData); +} +``` + +--- + +### 7. Duplicate Mutable Accounts + +**Risk**: Passing same account twice causes program to overwrite its own changes. + +**Attack**: Sequential mutations on identical accounts cancel earlier changes. + +**Prevention**: + +```rust +// Anchor +if ctx.accounts.account_1.key() == ctx.accounts.account_2.key() { + return Err(ProgramError::InvalidArgument); +} + +// Pinocchio +if self.accounts.account_1.key() == self.accounts.account_2.key() { + return Err(ProgramError::InvalidArgument); +} +``` + +--- + +### 8. Revival Attacks + +**Risk**: Closed accounts can be restored within same transaction by refunding lamports. + +**Attack**: Multi-instruction transaction drains account, refunds rent, exploits "closed" account. + +**Secure Closure Pattern**: + +```rust +// Anchor: Use close constraint +#[account(mut, close = destination)] +pub account: Account<'info, Data>, + +// Pinocchio: Full secure closure +pub fn close(account: &AccountInfo, destination: &AccountInfo) -> ProgramResult { + // 1. Mark as closed + { + let mut data = account.try_borrow_mut_data()?; + data[0] = 0xff; // Closed discriminator + } + + // 2. Transfer lamports + *destination.try_borrow_mut_lamports()? += *account.try_borrow_lamports()?; + + // 3. Shrink and close + account.realloc(1, true)?; + account.close() +} +``` + +--- + +### 9. Data Matching Vulnerabilities + +**Risk**: Correct type/ownership validation but incorrect assumptions about data relationships. + +**Attack**: Signer matches transaction but not stored owner field. 
+ +**Prevention**: + +```rust +// Anchor: has_one constraint +#[account(has_one = authority)] +pub account: Account<'info, Data>, + +// Pinocchio: Manual validation +let data = Config::from_bytes(&account.try_borrow_data()?)?; +if data.authority != *authority.key() { + return Err(ProgramError::InvalidAccountData); +} +``` + +--- + +## Program-Side Checklist + +### Account Validation + +- [ ] Validate account owners match expected program +- [ ] Validate signer requirements explicitly +- [ ] Validate writable requirements explicitly +- [ ] Validate PDAs match expected seeds + bump +- [ ] Validate token mint ↔ token account relationships +- [ ] Validate rent exemption / initialization status +- [ ] Check for duplicate mutable accounts + +### CPI Safety + +- [ ] Validate program IDs before CPIs (no arbitrary CPI) +- [ ] Do not pass extra writable or signer privileges to callees +- [ ] Ensure invoke_signed seeds are correct and canonical + +### Arithmetic and Invariants + +- [ ] Use checked math (`checked_add`, `checked_sub`, `checked_mul`, `checked_div`) +- [ ] Avoid unchecked casts +- [ ] Re-validate state after CPIs when required + +### State Lifecycle + +- [ ] Close accounts securely (mark discriminator, drain lamports) +- [ ] Avoid leaving "zombie" accounts with lamports +- [ ] Gate upgrades and ownership transfers +- [ ] Prevent reinitialization of existing accounts + +--- + +## Client-Side Checklist + +- [ ] Cluster awareness: never hardcode mainnet endpoints in dev flows +- [ ] Simulate transactions for UX where feasible +- [ ] Handle blockhash expiry and retry with fresh blockhash +- [ ] Treat "signature received" as not-final; track confirmation +- [ ] Never assume token program variant; detect Token-2022 vs classic +- [ ] Validate transaction simulation results before signing +- [ ] Show clear error messages for common failure modes + +--- + +## Security Review Questions + +1. Can an attacker pass a fake account that passes validation? +2. 
Can an attacker call this instruction without proper authorization? +3. Can an attacker substitute a malicious program for CPI targets? +4. Can an attacker reinitialize an existing account? +5. Can an attacker exploit shared PDAs across users? +6. Can an attacker pass the same account for multiple parameters? +7. Can an attacker revive a closed account in the same transaction? +8. Can an attacker exploit mismatches between stored and provided data? diff --git a/.agents/skills/solana-dev/testing.md b/.agents/skills/solana-dev/testing.md new file mode 100644 index 0000000..dbf8d26 --- /dev/null +++ b/.agents/skills/solana-dev/testing.md @@ -0,0 +1,362 @@ +# Testing Strategy (LiteSVM / Mollusk / Surfpool) + +## Testing Pyramid + +1. **Unit tests (fast)**: LiteSVM or Mollusk +2. **Integration tests (realistic state)**: Surfpool +3. **Cluster smoke tests**: devnet/testnet/mainnet as needed + +## LiteSVM + +A lightweight Solana Virtual Machine that runs directly in your test process. Created by Aursen from Exotic Markets. 
+ +### When to Use LiteSVM + +- Fast execution without validator overhead +- Direct account state manipulation +- Built-in performance profiling +- Multi-language support (Rust, TypeScript, Python) + +### Rust Setup + +```bash +cargo add --dev litesvm +``` + +```rust +use litesvm::LiteSVM; +use solana_sdk::{pubkey::Pubkey, signature::Keypair, transaction::Transaction}; + +#[test] +fn test_deposit() { + let mut svm = LiteSVM::new(); + + // Load your program + let program_id = pubkey!("YourProgramId11111111111111111111111111111"); + svm.add_program_from_file(program_id, "target/deploy/program.so"); + + // Create accounts + let payer = Keypair::new(); + svm.airdrop(&payer.pubkey(), 1_000_000_000).unwrap(); + + // Build and send transaction + let tx = Transaction::new_signed_with_payer( + &[/* instructions */], + Some(&payer.pubkey()), + &[&payer], + svm.latest_blockhash(), + ); + + let result = svm.send_transaction(tx); + assert!(result.is_ok()); +} +``` + +### TypeScript Setup + +```bash +npm i --save-dev litesvm +``` + +```typescript +import { LiteSVM } from "litesvm"; +import { PublicKey, Transaction, Keypair } from "@solana/web3.js"; + +const programId = new PublicKey("YourProgramId11111111111111111111111111111"); +const svm = new LiteSVM(); +svm.addProgramFromFile(programId, "target/deploy/program.so"); + +// Build transaction +const tx = new Transaction(); +tx.recentBlockhash = svm.latestBlockhash(); +tx.add(/* instructions */); +tx.sign(payer); + +// Simulate first (optional) +const simulation = svm.simulateTransaction(tx); + +// Execute +const result = svm.sendTransaction(tx); +``` + +### Account Types in LiteSVM + +**System Accounts:** + +- Payer accounts (contain lamports) +- Uninitialized accounts (empty, awaiting setup) + +**Program Accounts:** + +- Serialize with `borsh`, `bincode`, or `solana_program_pack` +- Calculate rent-exempt minimum balance + +**Token Accounts:** + +- Use `spl_token::state::Mint` and `spl_token::state::Account` +- Serialize with 
Pack trait + +### Advanced LiteSVM Features + +```rust +// Modify clock sysvar +svm.set_sysvar(&Clock { slot: 1000, ..Clock::default() }); + +// Warp to slot +svm.warp_to_slot(5000); + +// Configure compute budget +svm.set_compute_budget(ComputeBudget { max_units: 400_000, ..ComputeBudget::default() }); + +// Toggle signature verification (useful for testing) +svm.with_sigverify(false); + +// Check compute units used +let result = svm.send_transaction(tx)?; +println!("CUs used: {}", result.compute_units_consumed); +``` + +## Mollusk + +A lightweight test harness providing a direct interface to program execution without the full validator runtime. Best for Rust-only testing with fine-grained control. + +### When to Use Mollusk + +- Fast execution for rapid development cycles +- Precise account state manipulation for edge cases +- Detailed performance metrics and CU benchmarking +- Custom syscall testing + +### Setup + +```bash +cargo add --dev mollusk-svm +cargo add --dev mollusk-svm-programs-token # For SPL token helpers +cargo add --dev solana-sdk solana-program +``` + +### Basic Usage + +```rust +use mollusk_svm::Mollusk; +use mollusk_svm::result::Check; +use solana_sdk::{account::Account, pubkey::Pubkey, instruction::Instruction}; + +#[test] +fn test_instruction() { + let program_id = Pubkey::new_unique(); + let mollusk = Mollusk::new(&program_id, "target/deploy/program"); + + // Create accounts + let payer = ( + Pubkey::new_unique(), + Account { + lamports: 1_000_000_000, + data: vec![], + owner: solana_sdk::system_program::ID, + executable: false, + rent_epoch: 0, + }, + ); + + // Build instruction + let instruction = Instruction { + program_id, + accounts: vec![/* account metas */], + data: vec![/* instruction data */], + }; + + // Execute with validation + mollusk.process_and_validate_instruction( + &instruction, + &[payer], + &[ + Check::success(), + Check::compute_units(50_000), + ], + ); +} +``` + +### Token Program Helpers + +```rust +use mollusk_svm_programs_token::token; + +// Add token program to
test environment +token::add_program(&mut mollusk); + +// Create pre-configured token accounts +let mint_account = token::mint_account(decimals, supply, mint_authority); +let token_account = token::token_account(mint, owner, amount); +``` + +### CU Benchmarking + +```rust +use mollusk_svm::MolluskComputeUnitBencher; + +let bencher = MolluskComputeUnitBencher::new(mollusk) + .must_pass(true) + .out_dir("../target/benches"); + +bencher.bench( + "deposit_instruction", + &instruction, + &accounts, +); +// Generates markdown report with CU usage and deltas +``` + +### Advanced Configuration + +```rust +// Set compute budget +mollusk.set_compute_budget(200_000); + +// Enable all feature flags +mollusk.set_feature_set(FeatureSet::all_enabled()); + +// Customize sysvars +mollusk.sysvars.clock = Clock { + slot: 1000, + epoch: 5, + unix_timestamp: 1700000000, + ..Default::default() +}; +``` + +## Surfpool + +SDK and tooling suite for integration testing with realistic cluster state. Surfnet is the local network component (drop-in replacement for solana-test-validator). 
+ +### When to Use Surfpool + +- Complex CPIs requiring mainnet programs (e.g., Jupiter with 40+ accounts) +- Testing against realistic account state +- Time travel and block manipulation +- Account/program cloning between environments + +### Setup + +```bash +# Install Surfpool CLI +cargo install surfpool + +# Start local Surfnet +surfpool start +``` + +### Connection Setup + +```typescript +import { Connection } from "@solana/web3.js"; + +const connection = new Connection("http://localhost:8899", "confirmed"); +``` + +### System Variable Control + +```typescript +// Time travel to specific slot +await connection._rpcRequest("surfnet_timeTravel", [ + { + absoluteSlot: 250000000, + }, +]); + +// Pause/resume block production +await connection._rpcRequest("surfnet_pauseClock", []); +await connection._rpcRequest("surfnet_resumeClock", []); +``` + +### Account Manipulation + +```typescript +// Set account state +await connection._rpcRequest("surfnet_setAccount", [ + { + pubkey: accountPubkey.toString(), + lamports: 1000000000, + data: Buffer.from(accountData).toString("base64"), + owner: programId.toString(), + }, +]); + +// Set token account +await connection._rpcRequest("surfnet_setTokenAccount", [ + { + pubkey: ownerPubkey.toString(), // Owner of the token account (wallet) + mint: mintPubkey.toString(), + owner: ownerPubkey.toString(), + amount: "1000000", + }, +]); + +// Clone account from another program +await connection._rpcRequest("surfnet_cloneProgramAccount", [ + { + source: sourceProgramId.toString(), + destination: destProgramId.toString(), + account: accountPubkey.toString(), + }, +]); +``` + +### SOL Supply Configuration + +```typescript +// Configure supply for economic edge case testing +await connection._rpcRequest("surfnet_setSupply", [ + { + circulating: "500000000000000000", + nonCirculating: "100000000000000000", + total: "600000000000000000", + }, +]); +``` + +## Test Layout Recommendation + +``` +tests/ +├── unit/ +│ ├── deposit.rs # LiteSVM or 
Mollusk +│ ├── withdraw.rs +│ └── mod.rs +├── integration/ +│ ├── full_flow.rs # Surfpool +│ └── mod.rs +└── fixtures/ + └── accounts.rs # Shared test account setup +``` + +## CI Guidance + +```yaml +jobs: + unit-tests: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Run unit tests + run: cargo test-sbf + + integration-tests: + runs-on: ubuntu-latest + needs: unit-tests + steps: + - uses: actions/checkout@v4 + - name: Start Surfpool + run: surfpool start --background + - name: Run integration tests + run: cargo test --test integration +``` + +## Best Practices + +- Keep unit tests as the default CI gate (fast feedback) +- Use deterministic PDAs and seeded keypairs for reproducibility +- Minimize fixtures; prefer programmatic account creation +- Profile CU usage during development to catch regressions +- Run integration tests in separate CI stage to control runtime diff --git a/.agents/skills/test-driven-development/SKILL.md b/.agents/skills/test-driven-development/SKILL.md new file mode 100644 index 0000000..1416045 --- /dev/null +++ b/.agents/skills/test-driven-development/SKILL.md @@ -0,0 +1,399 @@ +--- +name: test-driven-development +description: Use when implementing any feature or bugfix, before writing implementation code +--- + +# Test-Driven Development (TDD) + +## Overview + +Write the test first. Watch it fail. Write minimal code to pass. + +**Core principle:** If you didn't watch the test fail, you don't know if it tests the right thing. + +**Violating the letter of the rules is violating the spirit of the rules.** + +## When to Use + +**Always:** + +- New features +- Bug fixes +- Refactoring +- Behavior changes + +**Exceptions (ask your human partner):** + +- Throwaway prototypes +- Generated code +- Configuration files + +Thinking "skip TDD just this once"? Stop. That's rationalization. + +## The Iron Law + +``` +NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST +``` + +Write code before the test? Delete it. Start over. 
+ +**No exceptions:** + +- Don't keep it as "reference" +- Don't "adapt" it while writing tests +- Don't look at it +- Delete means delete + +Implement fresh from tests. Period. + +## Red-Green-Refactor + +```mermaid +flowchart LR + red["RED
Write failing test"] + verify_red{"Verify fails
correctly"} + green["GREEN
Minimal code"] + verify_green{"Verify passes
All green"} + refactor["REFACTOR
Clean up"] + next(("Next")) + + red --> verify_red + verify_red -->|yes| green + verify_red -->|wrong
failure| red + green --> verify_green + verify_green -->|yes| refactor + verify_green -->|no| green + refactor -->|stay
green| verify_green + verify_green --> next + next --> red +``` + +### RED - Write Failing Test + +Write one minimal test showing what should happen. + + + +```typescript +import { test, expect } from "bun:test"; + +test("retries failed operations 3 times", async () => { + let attempts = 0; + const operation = async () => { + attempts++; + if (attempts < 3) throw new Error("fail"); + return "success"; + }; + + const result = await retryOperation(operation); + + expect(result).toBe("success"); + expect(attempts).toBe(3); +}); +``` + +Clear name, tests real behavior, one thing + + + + + +```typescript +import { test, expect, jest } from "bun:test"; + +test("retry works", async () => { + const mock = jest + .fn() + .mockRejectedValueOnce(new Error()) + .mockRejectedValueOnce(new Error()) + .mockResolvedValueOnce("success"); + await retryOperation(mock); + expect(mock).toHaveBeenCalledTimes(3); +}); +``` + +Vague name, tests mock not code + + +**Requirements:** + +- One behavior +- Clear name +- Real code (no mocks unless unavoidable) + +### Verify RED - Watch It Fail + +**MANDATORY. Never skip.** + +```bash +bun test path/to/test.test.ts +``` + +Confirm: + +- Test fails (not errors) +- Failure message is expected +- Fails because feature missing (not typos) + +**Test passes?** You're testing existing behavior. Fix test. + +**Test errors?** Fix error, re-run until it fails correctly. + +### GREEN - Minimal Code + +Write simplest code to pass the test. 
+
+
+
+```typescript
+async function retryOperation<T>(fn: () => Promise<T>): Promise<T> {
+  for (let i = 0; i < 3; i++) {
+    try {
+      return await fn();
+    } catch (e) {
+      if (i === 2) throw e;
+    }
+  }
+  throw new Error("unreachable");
+}
+```
+
+Just enough to pass
+
+
+
+
+
+```typescript
+async function retryOperation<T>(
+  fn: () => Promise<T>,
+  options?: {
+    maxRetries?: number;
+    backoff?: "linear" | "exponential";
+    onRetry?: (attempt: number) => void;
+  },
+): Promise<T> {
+  // YAGNI
+}
+```
+
+Over-engineered
+
+
+Don't add features, refactor other code, or "improve" beyond the test.
+
+### Verify GREEN - Watch It Pass
+
+**MANDATORY.**
+
+```bash
+bun test path/to/test.test.ts
+```
+
+Confirm:
+
+- Test passes
+- Other tests still pass
+- Output pristine (no errors, warnings)
+
+**Test fails?** Fix code, not test.
+
+**Other tests fail?** Fix now.
+
+### REFACTOR - Clean Up
+
+After green only:
+
+- Remove duplication
+- Improve names
+- Extract helpers
+
+Keep tests green. Don't add behavior.
+
+### Repeat
+
+Next failing test for next feature.
+
+## Good Tests
+
+| Quality          | Good                                | Bad                                                 |
+| ---------------- | ----------------------------------- | --------------------------------------------------- |
+| **Minimal**      | One thing. "and" in name? Split it. | `test('validates email and domain and whitespace')` |
+| **Clear**        | Name describes behavior             | `test('test1')`                                     |
+| **Shows intent** | Demonstrates desired API            | Obscures what code should do                        |
+
+## Why Order Matters
+
+**"I'll write tests after to verify it works"**
+
+Tests written after code pass immediately. Passing immediately proves nothing:
+
+- Might test wrong thing
+- Might test implementation, not behavior
+- Might miss edge cases you forgot
+- You never saw it catch the bug
+
+Test-first forces you to see the test fail, proving it actually tests something.
+
+**"I already manually tested all the edge cases"**
+
+Manual testing is ad-hoc.
You think you tested everything but: + +- No record of what you tested +- Can't re-run when code changes +- Easy to forget cases under pressure +- "It worked when I tried it" ≠ comprehensive + +Automated tests are systematic. They run the same way every time. + +**"Deleting X hours of work is wasteful"** + +Sunk cost fallacy. The time is already gone. Your choice now: + +- Delete and rewrite with TDD (X more hours, high confidence) +- Keep it and add tests after (30 min, low confidence, likely bugs) + +The "waste" is keeping code you can't trust. Working code without real tests is technical debt. + +**"TDD is dogmatic, being pragmatic means adapting"** + +TDD IS pragmatic: + +- Finds bugs before commit (faster than debugging after) +- Prevents regressions (tests catch breaks immediately) +- Documents behavior (tests show how to use code) +- Enables refactoring (change freely, tests catch breaks) + +"Pragmatic" shortcuts = debugging in production = slower. + +**"Tests after achieve the same goals - it's spirit not ritual"** + +No. Tests-after answer "What does this do?" Tests-first answer "What should this do?" + +Tests-after are biased by your implementation. You test what you built, not what's required. You verify remembered edge cases, not discovered ones. + +Tests-first force edge case discovery before implementing. Tests-after verify you remembered everything (you didn't). + +30 minutes of tests after ≠ TDD. You get coverage, lose proof tests work. + +## Common Rationalizations + +| Excuse | Reality | +| -------------------------------------- | ----------------------------------------------------------------------- | +| "Too simple to test" | Simple code breaks. Test takes 30 seconds. | +| "I'll test after" | Tests passing immediately prove nothing. | +| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" | +| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. 
| +| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. | +| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. | +| "Need to explore first" | Fine. Throw away exploration, start with TDD. | +| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. | +| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. | +| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. | +| "Existing code has no tests" | You're improving it. Add tests for existing code. | + +## Red Flags - STOP and Start Over + +- Code before test +- Test after implementation +- Test passes immediately +- Can't explain why test failed +- Tests added "later" +- Rationalizing "just this once" +- "I already manually tested it" +- "Tests after achieve the same purpose" +- "It's about spirit not ritual" +- "Keep as reference" or "adapt existing code" +- "Already spent X hours, deleting is wasteful" +- "TDD is dogmatic, I'm being pragmatic" +- "This is different because..." + +**All of these mean: Delete code. Start over with TDD.** + +## Example: Bug Fix + +**Bug:** Empty email accepted + +**RED** + +```typescript +test("rejects empty email", async () => { + const result = await submitForm({ email: "" }); + expect(result.error).toBe("Email required"); +}); +``` + +**Verify RED** + +```bash +$ bun test +FAIL: expected 'Email required', got undefined +``` + +**GREEN** + +```typescript +function submitForm(data: FormData) { + if (!data.email?.trim()) { + return { error: "Email required" }; + } + // ... +} +``` + +**Verify GREEN** + +```bash +$ bun test +PASS +``` + +**REFACTOR** +Extract validation for multiple fields if needed. 
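One possible shape for that extraction, as a hedged sketch (the `requireField` helper and the extra `name` field are illustrative assumptions, not part of the example above):

```typescript
// Sketch only: pulls the empty-check out of submitForm so multiple
// required fields share one validator. Names here are hypothetical.
type FieldError = { error: string };

function requireField(value: string | undefined, label: string): FieldError | null {
  // Same rule as the bug fix above: reject missing or whitespace-only values
  if (!value?.trim()) {
    return { error: `${label} required` };
  }
  return null;
}

function submitForm(data: { email?: string; name?: string }): { ok: boolean; error?: string } {
  const invalid = requireField(data.email, "Email") ?? requireField(data.name, "Name");
  if (invalid) {
    return { ok: false, error: invalid.error };
  }
  return { ok: true };
}
```

Each new required field then becomes one more `requireField` call instead of another hand-written guard, and the original "rejects empty email" test stays green.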
+ +## Verification Checklist + +Before marking work complete: + +- [ ] Every new function/method has a test +- [ ] Watched each test fail before implementing +- [ ] Each test failed for expected reason (feature missing, not typo) +- [ ] Wrote minimal code to pass each test +- [ ] All tests pass +- [ ] Output pristine (no errors, warnings) +- [ ] Tests use real code (mocks only if unavoidable) +- [ ] Edge cases and errors covered + +Can't check all boxes? You skipped TDD. Start over. + +## When Stuck + +| Problem | Solution | +| ---------------------- | -------------------------------------------------------------------- | +| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. | +| Test too complicated | Design too complicated. Simplify interface. | +| Must mock everything | Code too coupled. Use dependency injection. | +| Test setup huge | Extract helpers. Still complex? Simplify design. | + +## Debugging Integration + +Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression. + +Never fix bugs without a test. + +## Testing Anti-Patterns + +When adding mocks or test utilities, read @testing-anti-patterns.md to avoid common pitfalls: + +- Testing mock behavior instead of real behavior +- Adding test-only methods to production classes +- Mocking without understanding dependencies + +## Final Rule + +``` +Production code → test exists and failed first +Otherwise → not TDD +``` + +No exceptions without your human partner's permission. diff --git a/.agents/skills/test-driven-development/testing-anti-patterns.md b/.agents/skills/test-driven-development/testing-anti-patterns.md new file mode 100644 index 0000000..e63e485 --- /dev/null +++ b/.agents/skills/test-driven-development/testing-anti-patterns.md @@ -0,0 +1,317 @@ +# Testing Anti-Patterns + +**Load this reference when:** writing or changing tests, adding mocks, or tempted to add test-only methods to production code. 
+
+## Overview
+
+Tests must verify real behavior, not mock behavior. Mocks are a means to isolate, not the thing being tested.
+
+**Core principle:** Test what the code does, not what the mocks do.
+
+**Following strict TDD prevents these anti-patterns.**
+
+## The Iron Laws
+
+```
+1. NEVER test mock behavior
+2. NEVER add test-only methods to production classes
+3. NEVER mock without understanding dependencies
+```
+
+## Anti-Pattern 1: Testing Mock Behavior
+
+**The violation:**
+
+```typescript
+// ❌ BAD: Testing that the mock exists
+test("renders sidebar", () => {
+  render(<Page />);
+  expect(screen.getByTestId("sidebar-mock")).toBeInTheDocument();
+});
+```
+
+**Why this is wrong:**
+
+- You're verifying the mock works, not that the component works
+- Test passes when mock is present, fails when it's not
+- Tells you nothing about real behavior
+
+**your human partner's correction:** "Are we testing the behavior of a mock?"
+
+**The fix:**
+
+```typescript
+// ✅ GOOD: Test real component or don't mock it
+test("renders sidebar", () => {
+  render(<Page />); // Don't mock sidebar
+  expect(screen.getByRole("navigation")).toBeInTheDocument();
+});
+
+// OR if sidebar must be mocked for isolation:
+// Don't assert on the mock - test Page's behavior with sidebar present
+```
+
+### Gate Function
+
+```
+BEFORE asserting on any mock element:
+  Ask: "Am I testing real component behavior or just mock existence?"
+
+  IF testing mock existence:
+    STOP - Delete the assertion or unmock the component
+
+  Test real behavior instead
+```
+
+## Anti-Pattern 2: Test-Only Methods in Production
+
+**The violation:**
+
+```typescript
+// ❌ BAD: destroy() only used in tests
+class Session {
+  async destroy() {
+    // Looks like production API!
+    await this._workspaceManager?.destroyWorkspace(this.id);
+    // ... cleanup
+  }
+}
+
+// In tests
+afterEach(() => session.destroy());
+```
+
+**Why this is wrong:**
+
+- Production class polluted with test-only code
+- Dangerous if accidentally called in production
+- Violates YAGNI and separation of concerns
+- Confuses object lifecycle with entity lifecycle
+
+**The fix:**
+
+```typescript
+// ✅ GOOD: Test utilities handle test cleanup
+// Session has no destroy() - it's stateless in production
+
+// In test-utils/
+export async function cleanupSession(session: Session) {
+  const workspace = session.getWorkspaceInfo();
+  if (workspace) {
+    await workspaceManager.destroyWorkspace(workspace.id);
+  }
+}
+
+// In tests
+afterEach(() => cleanupSession(session));
+```
+
+### Gate Function
+
+```
+BEFORE adding any method to production class:
+  Ask: "Is this only used by tests?"
+
+  IF yes:
+    STOP - Don't add it
+    Put it in test utilities instead
+
+  Ask: "Does this class own this resource's lifecycle?"
+
+  IF no:
+    STOP - Wrong class for this method
+```
+
+## Anti-Pattern 3: Mocking Without Understanding
+
+**The violation:**
+
+```typescript
+// ❌ BAD: Mock breaks test logic
+test("detects duplicate server", async () => {
+  // Mock prevents config write that test depends on!
+  vi.mock("ToolCatalog", () => ({
+    discoverAndCacheTools: vi.fn().mockResolvedValue(undefined),
+  }));
+
+  await addServer(config);
+  await addServer(config); // Should throw - but won't!
+});
+```
+
+**Why this is wrong:**
+
+- Mocked method had side effect test depended on (writing config)
+- Over-mocking to "be safe" breaks actual behavior
+- Test passes for wrong reason or fails mysteriously
+
+**The fix:**
+
+```typescript
+// ✅ GOOD: Mock at correct level
+test("detects duplicate server", async () => {
+  // Mock the slow part, preserve behavior test needs
+  vi.mock("MCPServerManager"); // Just mock slow server startup
+
+  await addServer(config); // Config written
+  await addServer(config); // Duplicate detected ✓
+});
+```
+
+### Gate Function
+
+```
+BEFORE mocking any method:
+  STOP - Don't mock yet
+
+  1. Ask: "What side effects does the real method have?"
+  2. Ask: "Does this test depend on any of those side effects?"
+  3. Ask: "Do I fully understand what this test needs?"
+
+  IF depends on side effects:
+    Mock at lower level (the actual slow/external operation)
+    OR use test doubles that preserve necessary behavior
+    NOT the high-level method the test depends on
+
+  IF unsure what test depends on:
+    Run test with real implementation FIRST
+    Observe what actually needs to happen
+    THEN add minimal mocking at the right level
+
+  Red flags:
+  - "I'll mock this to be safe"
+  - "This might be slow, better mock it"
+  - Mocking without understanding the dependency chain
+```
+
+## Anti-Pattern 4: Incomplete Mocks
+
+**The violation:**
+
+```typescript
+// ❌ BAD: Partial mock - only fields you think you need
+const mockResponse = {
+  status: "success",
+  data: { userId: "123", name: "Alice" },
+  // Missing: metadata that downstream code uses
+};
+
+// Later: breaks when code accesses response.metadata.requestId
+```
+
+**Why this is wrong:**
+
+- **Partial mocks hide structural assumptions** - You only mocked fields you know about
+- **Downstream code may depend on fields you didn't include** - Silent failures
+- **Tests pass but integration fails** - Mock incomplete, real API complete
+- **False confidence** - Test proves nothing about real behavior
+ +**The Iron Rule:** Mock the COMPLETE data structure as it exists in reality, not just fields your immediate test uses. + +**The fix:** + +```typescript +// ✅ GOOD: Mirror real API completeness +const mockResponse = { + status: "success", + data: { userId: "123", name: "Alice" }, + metadata: { requestId: "req-789", timestamp: 1234567890 }, + // All fields real API returns +}; +``` + +### Gate Function + +``` +BEFORE creating mock responses: + Check: "What fields does the real API response contain?" + + Actions: + 1. Examine actual API response from docs/examples + 2. Include ALL fields system might consume downstream + 3. Verify mock matches real response schema completely + + Critical: + If you're creating a mock, you must understand the ENTIRE structure + Partial mocks fail silently when code depends on omitted fields + + If uncertain: Include all documented fields +``` + +## Anti-Pattern 5: Integration Tests as Afterthought + +**The violation:** + +``` +✅ Implementation complete +❌ No tests written +"Ready for testing" +``` + +**Why this is wrong:** + +- Testing is part of implementation, not optional follow-up +- TDD would have caught this +- Can't claim complete without tests + +**The fix:** + +``` +TDD cycle: +1. Write failing test +2. Implement to pass +3. Refactor +4. THEN claim complete +``` + +## When Mocks Become Too Complex + +**Warning signs:** + +- Mock setup longer than test logic +- Mocking everything to make test pass +- Mocks missing methods real components have +- Test breaks when mock changes + +**your human partner's question:** "Do we need to be using a mock here?" + +**Consider:** Integration tests with real components often simpler than complex mocks + +## TDD Prevents These Anti-Patterns + +**Why TDD helps:** + +1. **Write test first** → Forces you to think about what you're actually testing +2. **Watch it fail** → Confirms test tests real behavior, not mocks +3. **Minimal implementation** → No test-only methods creep in +4. 
**Real dependencies** → You see what the test actually needs before mocking + +**If you're testing mock behavior, you violated TDD** - you added mocks without watching test fail against real code first. + +## Quick Reference + +| Anti-Pattern | Fix | +| ------------------------------- | --------------------------------------------- | +| Assert on mock elements | Test real component or unmock it | +| Test-only methods in production | Move to test utilities | +| Mock without understanding | Understand dependencies first, mock minimally | +| Incomplete mocks | Mirror real API completely | +| Tests as afterthought | TDD - tests first | +| Over-complex mocks | Consider integration tests | + +## Red Flags + +- Assertion checks for `*-mock` test IDs +- Methods only called in test files +- Mock setup is >50% of test +- Test fails when you remove mock +- Can't explain why mock is needed +- Mocking "just to be safe" + +## The Bottom Line + +**Mocks are tools to isolate, not things to test.** + +If TDD reveals you're testing mock behavior, you've gone wrong. + +Fix: Test real behavior or question why you're mocking at all. diff --git a/.agents/skills/update-pr/SKILL.md b/.agents/skills/update-pr/SKILL.md new file mode 100644 index 0000000..e40be7e --- /dev/null +++ b/.agents/skills/update-pr/SKILL.md @@ -0,0 +1,132 @@ +--- +name: update-pr +description: Update an existing pull request with new changes or respond to review feedback. Use when addressing PR comments, making requested changes, or updating a PR after review. +--- + +# Update Pull Request + +## Steps + +### 1. Identify the PR + +```bash +# List open PRs for current branch +gh pr list --head $(git branch --show-current) + +# Or get PR details by number +gh pr view +``` + +### 2. Fetch Review Comments + +```bash +# View PR reviews and comments +gh pr view --comments + +# View the PR diff to understand context +gh pr diff +``` + +### 3. Address Feedback + +For each review comment: + +1. 
Read and understand the feedback +2. Make the necessary code changes +3. Stage and commit with a descriptive message + +```bash +# Stage changes +git add -u + +# Commit with reference to what was addressed +git commit -m "address review: " +``` + +### 4. Push Updates + +```bash +# Push to the same branch (PR updates automatically) +git push +``` + +### 5. Respond to Review Comments (Optional) + +If you need to reply to specific comments: + +```bash +# Reply to a review comment +gh api repos/{owner}/{repo}/pulls//comments//replies \ + -f body="Done - updated the implementation as suggested" +``` + +Or use the GitHub web interface for complex discussions. + +### 6. Re-request Review (if needed) + +```bash +# Re-request review from specific reviewers +gh pr edit --add-reviewer +``` + +## Handling Common Review Requests + +### "Please add tests" + +1. Identify the appropriate test file in `packages/*/src/__tests__/` +2. Add test cases covering the new functionality +3. Run `pnpm test` to verify + +### "Update types" + +1. Check TypeScript errors with `pnpm build` +2. Update type definitions as needed +3. Ensure no type errors remain + +### "Fix lint issues" + +```bash +pnpm format # Auto-fix formatting +pnpm lint # Check and fix lint issues +``` + +### "Update snapshots" + +```bash +pnpm test:storybook:update +git add packages/*/__image_snapshots__/ +git commit -m "chore: update storybook snapshots" +``` + +## Squashing Commits (if requested) + +If the reviewer asks to squash commits: + +```bash +# Interactive rebase to squash +git rebase -i origin/main + +# In the editor, change 'pick' to 'squash' for commits to combine +# Save and edit the combined commit message + +# Force push (safe for PR branches) +git push --force-with-lease +``` + +## Example Workflow + +```bash +# 1. Fetch latest review comments +gh pr view 42 --comments + +# 2. Make changes based on feedback +# ... edit files ... + +# 3. 
Commit and push +git add -u +git commit -m "address review: add error handling for edge case" +git push + +# 4. Notify reviewer +echo "Updated PR #42 - addressed all review comments" +``` diff --git a/.claude/commands b/.claude/commands new file mode 120000 index 0000000..1ea3574 --- /dev/null +++ b/.claude/commands @@ -0,0 +1 @@ +../.agents/commands \ No newline at end of file diff --git a/.claude/rules b/.claude/rules new file mode 120000 index 0000000..2d5c9a9 --- /dev/null +++ b/.claude/rules @@ -0,0 +1 @@ +../.agents/rules \ No newline at end of file diff --git a/.claude/skills b/.claude/skills new file mode 120000 index 0000000..2b7a412 --- /dev/null +++ b/.claude/skills @@ -0,0 +1 @@ +../.agents/skills \ No newline at end of file diff --git a/.codex/commands b/.codex/commands new file mode 120000 index 0000000..1ea3574 --- /dev/null +++ b/.codex/commands @@ -0,0 +1 @@ +../.agents/commands \ No newline at end of file diff --git a/.codex/rules b/.codex/rules new file mode 120000 index 0000000..2d5c9a9 --- /dev/null +++ b/.codex/rules @@ -0,0 +1 @@ +../.agents/rules \ No newline at end of file diff --git a/.codex/skills b/.codex/skills new file mode 120000 index 0000000..2b7a412 --- /dev/null +++ b/.codex/skills @@ -0,0 +1 @@ +../.agents/skills \ No newline at end of file diff --git a/.cursor/commands b/.cursor/commands new file mode 120000 index 0000000..1ea3574 --- /dev/null +++ b/.cursor/commands @@ -0,0 +1 @@ +../.agents/commands \ No newline at end of file diff --git a/.cursor/commands/anchor-build.md b/.cursor/commands/anchor-build.md deleted file mode 100644 index 04cbce5..0000000 --- a/.cursor/commands/anchor-build.md +++ /dev/null @@ -1,16 +0,0 @@ -# Anchorプログラムのビルド - -Anchorプログラムをビルドします。 - -```bash -anchor build -``` - -## 説明 -- IDLファイルの生成 -- Rustプログラムのコンパイル -- TypeScriptクライアントコードの生成 - -## 関連コマンド -- `anchor clean` - ビルドキャッシュのクリア -- `anchor verify` - ビルドの検証 diff --git a/.cursor/commands/anchor-deploy.md b/.cursor/commands/anchor-deploy.md deleted file 
mode 100644 index a207dd1..0000000 --- a/.cursor/commands/anchor-deploy.md +++ /dev/null @@ -1,20 +0,0 @@ -# Anchorプログラムのデプロイ - -AnchorプログラムをSolanaブロックチェーンにデプロイします。 - -```bash -anchor deploy -``` - -## 前提条件 -- Solana CLIのインストール -- ウォレットの設定 -- 十分なSOL残高 - -## オプション -- `--provider.cluster mainnet` - メインネットへのデプロイ -- `--provider.cluster devnet` - デブネットへのデプロイ - -## 関連コマンド -- `solana balance` - 残高確認 -- `solana airdrop 2` - テストSOLの取得 diff --git a/.cursor/commands/anchor-test.md b/.cursor/commands/anchor-test.md deleted file mode 100644 index 0726383..0000000 --- a/.cursor/commands/anchor-test.md +++ /dev/null @@ -1,22 +0,0 @@ -# Anchorテストの実行 - -プログラムのテストを実行します。 - -```bash -anchor test -``` - -## 説明 -- ローカルバリデータを起動 -- Rust単体テストの実行 -- TypeScript統合テストの実行 -- テスト後のクリーンアップ - -## オプション -- `--skip-build` - ビルドをスキップ -- `--skip-deploy` - デプロイをスキップ -- `--skip-lint` - リンターをスキップ - -## テストファイル -- `tests/*.ts` - TypeScript統合テスト -- `programs/*/src/lib.rs` - Rust単体テスト diff --git a/.cursor/commands/anchor-verify.md b/.cursor/commands/anchor-verify.md deleted file mode 100644 index 2dfe896..0000000 --- a/.cursor/commands/anchor-verify.md +++ /dev/null @@ -1,41 +0,0 @@ -# Anchorプログラムの検証 - -プログラムのビルドとテストを包括的に検証します。 - -```bash -# 完全な検証スイート -anchor build && anchor test && cargo clippy && cargo fmt --check -``` - -## 個別検証 - -### ビルド検証 -```bash -anchor build -``` - -### テスト実行 -```bash -anchor test -``` - -### コード品質チェック -```bash -cargo clippy -- -D warnings -``` - -### フォーマットチェック -```bash -cargo fmt --check -``` - -## CI/CDでの使用例 - -```yaml -- name: Verify Anchor Program - run: | - anchor build - anchor test - cargo clippy -- -D warnings - cargo fmt --check -``` diff --git a/.cursor/commands/check-script.md b/.cursor/commands/check-script.md deleted file mode 100644 index 9399ebd..0000000 --- a/.cursor/commands/check-script.md +++ /dev/null @@ -1,12 +0,0 @@ -# Check Command - -あなたは Bun + TypeScript のコードベースのスクリプトの実行結果を確認するスペシャリストです。 -スクリプトの実行結果が期待通りに動作することを確認することが mission です。 - -1. 
対象のscript, file, コマンドを適切なparameterを考えcliで実行して下さい -2. その結果をコンテキストとして読み込み、その結果が正しいかを確認・検証して下さい -3. その結果が正しくない場合、その原因を特定してください -4. その原因を踏まえて、その原因を解決するための最小限の修正を行ってください -5. その修正を実行してください -6. lint, format, build, testがあれば実行し、全てのtestが成功することを確認してください -7. testがfailした場合そのerrorについてまた1からやり直してください diff --git a/.cursor/commands/dce.md b/.cursor/commands/dce.md deleted file mode 100644 index 0b0f68a..0000000 --- a/.cursor/commands/dce.md +++ /dev/null @@ -1,162 +0,0 @@ -# Dead Code Elimination - -## Overview - -This document explains how to detect dead code in TypeScript projects. - -## Tool: ts-remove-unused (tsr) - -### Installation and Execution - -```bash -# Run directly with npx (recommended) -npx -y tsr [options] [...entrypoints] - -# Or, run with old package name (deprecated) -npx -y @line/ts-remove-unused # -> warns to use tsr -``` - -### Basic Usage - -1. **Check help** - -```bash -npx -y tsr --help -``` - -2. **Check with single entrypoint** - -```bash -npx -y tsr 'src/index\.ts$' -``` - -3. **Check with multiple entrypoints** - -```bash -npx -y tsr 'src/index\.ts$' 'src/cli/cli\.ts$' -``` - -4. **Check including test files** - -```bash -npx -y tsr 'src/index\.ts$' 'src/cli/cli\.ts$' 'test/.*\.ts$' 'src/.*_test\.ts$' -``` - -### Options - -- `-w, --write`: Write changes directly to files -- `-r, --recursive`: Recursively check until project is clean -- `-p, --project `: Path to custom tsconfig.json -- `--include-d-ts`: Include .d.ts files in the check - -## Real Analysis Example - -### 1. Initial Run - -```bash -$ npx -y tsr 'src/index\.ts$' -``` - -Results: - -- 67 unused exports -- 15 unused files - -### 2. Run including CLI - -```bash -$ npx -y tsr 'src/index\.ts$' 'src/cli/cli\.ts$' -``` - -Results: - -- Unused files reduced to 14 (excluding those used by CLI) - -### 3. 
Run including test files - -```bash -$ npx -y tsr 'src/index\.ts$' 'src/cli/cli\.ts$' 'test/.*\.ts$' 'src/.*_test\.ts$' -``` - -Results: - -- Unused files reduced to 4 (excluding those used in tests) - -## Interpreting Analysis Results - -### Types of Unused Exports - -1. **Type Definitions** (`oxc_types.ts`) - - Many AST types are exported but unused - - Action: Export only actually used types - -2. **Internal Utility Functions** - - Example: `getNodeLabel`, `getNodeChildren` (apted.ts) - - Action: Remove `export` as they are internal implementation - -3. **Helper Functions** - - Example: `collectNodes`, `findNode` (ast_traversal.ts) - - Action: Consider if needed as public API - -### Types of Unused Files - -1. **Test-only Files** - - `*_test.ts` files - - Action: Include as test entrypoints - -2. **Duplicate Functionality** - - Example: `function_body_comparer.ts` (integrated elsewhere) - - Action: Delete - -3. **Experimental Code** - - Example: `ast_traversal_with_context.ts` - - Action: Delete or move to `experimental/` - -## Recommended Workflow - -1. **First run analysis only** - -```bash -npx -y tsr 'src/index\.ts$' 'src/cli/cli\.ts$' -``` - -2. **Review results and decide action plan** - -- Items that can be deleted -- Items to remove export but keep as internal implementation -- Items to keep for future use - -3. **Clean up incrementally** - -- First delete obviously unnecessary items -- Then remove `export` from internal implementations -- Finally organize type definitions - -4. **Automatic fixes (carefully)** - -```bash -# Take backup before running -git stash -npx -y tsr --write 'src/index\.ts$' 'src/cli/cli\.ts$' -git diff # Check changes -``` - -## Notes - -1. **Dynamic imports**: tsr uses static analysis and cannot detect dynamic imports -2. **Type-only exports**: `export type` is also detected as unused -3. **Re-exports**: Be careful with barrel files (index.ts) - -## Example in This Project - -1. 
**Unused code found in diagnostics** - - Unused imports in `semantic_normalizer.ts` - - `extractSemanticPatterns` function (commented out for potential future use) - -2. **Actions taken** - - Removed unused imports - - Kept potentially useful code commented out - -3. **Results** - - Cleaner codebase - - Expected reduction in build size diff --git a/.cursor/commands/final-check.md b/.cursor/commands/final-check.md deleted file mode 100644 index f515ba5..0000000 --- a/.cursor/commands/final-check.md +++ /dev/null @@ -1,11 +0,0 @@ -# Final Check Command - -あなたは Bun + TypeScript のコードベースの最終確認のスペシャリストです。 -Agent の実装が完了した後、その機能が期待通りに動作することを確認することが mission です。 - -## Steps - -1. lint, format, typecheck, test, buildを実行し、error, warningが出ていないことをしっかりと確認してください。 -2. error, warningが出ている場合はその根本原因を冷静に特定し、その原因を解決するための最小限の修正を行ってください。 -3. 修正が完了したら再度lint, format, typecheck, test, buildを実行し、error, warningが出ていないことを確認してください。 -4. error, warningがなくなるまで2, 3を繰り返してください。 diff --git a/.cursor/commands/kiro/spec-design.md b/.cursor/commands/kiro/spec-design.md deleted file mode 100644 index 295164c..0000000 --- a/.cursor/commands/kiro/spec-design.md +++ /dev/null @@ -1,544 +0,0 @@ - -description: Create comprehensive technical design for a specification -argument-hint: [feature-name] [-y] - - -# Technical Design - -Generate a **technical design document** for feature **[feature-name]**. - -**CRITICAL**: Generate COMPLETE content without abbreviations, placeholders ("...", "[details]"), or omissions. Continue until all sections are fully written. - -## Task: Create Technical Design Document - -Tool policy: Use Cursor file tools (read_file/list_dir/glob_file_search/apply_patch/edit_file); no shell. - -Prime: Always perform Discovery & Analysis first, then construct the design document. -Process Reminder: Reference discovery findings throughout Overview/Architecture/Components/Testing; if unknowns remain, note "Pending discovery: ..." and avoid assumptions. - -### 1. 
Prerequisites & File Handling - -- **Requirements Approval Check**: - - If invoked with `-y`, set `requirements.approved=true` in `spec.json` - - Otherwise, **stop** with an actionable message if requirements are missing or unapproved -- **Design File Handling**: - - If design.md does not exist: Create new design.md file - - If design.md exists: Interactive prompt with options: - - **[o] Overwrite**: Generate completely new design document - - **[m] Merge**: Generate new design document using existing content as reference context - - **[c] Cancel**: Stop execution for manual review -- **Context Loading**: Read `.kiro/specs/[feature-name]/requirements.md`, core steering documents, and existing design.md (if merge mode) - -### 2. Discovery & Analysis Phase - -**CRITICAL**: Before generating the design, conduct thorough research and analysis: - -#### Feature Classification & Process Adaptation - -**Classify feature type to adapt process scope**: - -- **New Feature** (greenfield): Full process including technology selection and architecture decisions -- **Extension** (existing system): Focus on integration analysis, minimal architectural changes -- **Simple Addition** (CRUD, UI): Streamlined process, follow established patterns -- **Complex Integration** (external systems, new domains): Comprehensive analysis and risk assessment - -**Process Adaptation**: Skip or streamline analysis steps based on classification above - -#### A. Requirements to Technical Components Mapping - -- Map requirements (EARS format) to technical components -- Extract non-functional requirements (performance, security, scalability) -- Identify core technical challenges and constraints - -#### B. Existing Implementation Analysis - -**MANDATORY when modifying or extending existing features**: - -- Analyze codebase structure, dependencies, patterns -- Map reusable modules, services, utilities -- Understand domain boundaries, layers, data flow -- Determine extension vs. refactor vs. 
wrap approach -- Prioritize minimal changes and file reuse - -**Optional for completely new features**: Review existing patterns for consistency and reuse opportunities - -#### C. Steering Alignment Check - -- Verify alignment with core steering documents (`structure.md`, `tech.md`, `product.md`) and any custom steering files (`*.md`) in `.kiro/steering/` - - **Core steering**: @.kiro/steering/structure.md, @.kiro/steering/tech.md, @.kiro/steering/product.md - - **Custom steering**: All additional `.md` files in `.kiro/steering/` discovered via list_dir or glob_file_search (excluding `structure.md`, `tech.md`, `product.md`). Do not run shell commands. -- Document deviations with rationale for steering updates - -#### D. Technology & Alternative Analysis - -**For New Features or Unknown Technology Areas**: - -- Research latest best practices using WebSearch/WebFetch when needed in parallel -- Compare relevant architecture patterns (MVC, Clean, Hexagonal) if pattern selection is required -- Assess technology stack alternatives only when technology choices are being made -- Document key findings that impact design decisions - -**Skip this step if**: Using established team technology stack and patterns for straightforward feature additions - -#### E. 
Implementation-Specific Investigation - -**When new technology or complex integration is involved**: - -- Verify specific API capabilities needed for requirements -- Check version compatibility with existing dependencies -- Identify configuration and setup requirements -- Document any migration or integration challenges - -**For ANY external dependencies (libraries, APIs, services)**: - -- Use WebSearch to find official documentation and community resources -- Use WebFetch to analyze specific documentation pages -- Document authentication flows, rate limits, and usage constraints -- Note any gaps in understanding for implementation phase - -**Skip only if**: Using well-established internal libraries with no external dependencies - -#### F. Technical Risk Assessment - -- Performance/scalability risks: bottlenecks, capacity, growth -- Security vulnerabilities: attack vectors, compliance gaps -- Maintainability risks: complexity, knowledge, support -- Integration complexity: dependencies, coupling, API changes -- Technical debt: new creation vs. 
existing resolution - -## Design Document Structure & Guidelines - -### Core Principles - -- **Complete output**: Write all sections fully - never abbreviate or use ellipsis -- **Review-optimized structure**: Critical technical decisions prominently placed to prevent oversight -- **Contextual relevance**: Include sections only when applicable to project type and scope -- **Visual-first design**: Essential Mermaid diagrams for architecture and data flow -- **Design focus only**: Architecture and interfaces, NO implementation code -- **Type safety**: Never use `any` type - define explicit types and interfaces -- **Formal tone**: Use definitive, declarative statements without hedging language -- **Language**: Use language from `spec.json.language` field, default to English - -### Document Sections - -**CORE SECTIONS** (Include when relevant): - -- Overview, Architecture, Components and Interfaces (always) -- Data Models, Error Handling, Testing Strategy (when applicable) -- Security Considerations (when security implications exist) - -**CONDITIONAL SECTIONS** (Include only when specifically relevant): - -- Performance & Scalability (for performance-critical features) -- Migration Strategy (for existing system modifications) - - -## Overview -2-3 paragraphs max -**Purpose**: This feature delivers [specific value] to [target users]. -**Users**: [Target user groups] will utilize this for [specific workflows]. -**Impact** (if applicable): Changes the current [system state] by [specific modifications]. 
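As a minimal illustration of the type-safety principle above (never `any`), the branded-type pattern already used in this repo's refactor rules could look like the following sketch; the `FeatureName` brand and its validation regex are illustrative assumptions, not part of the template:

```typescript
// Sketch only: explicit branded types instead of `any`.
// The FeatureName brand and its kebab-case rule are assumed examples.
type Branded<T, B extends string> = T & { readonly _brand: B };
type FeatureName = Branded<string, "FeatureName">;

// Self-validating constructor: only kebab-case names become a FeatureName,
// so downstream code can trust the value without re-checking it.
function toFeatureName(raw: string): FeatureName | null {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(raw) ? (raw as FeatureName) : null;
}
```

Callers then accept `FeatureName` rather than a bare `string`, making invalid states unrepresentable at the type level.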
- -### Goals - -- Primary objective 1 -- Primary objective 2 -- Success criteria - -### Non-Goals - -- Explicitly excluded functionality -- Future considerations outside current scope -- Integration points deferred - -## Architecture - -### Existing Architecture Analysis (if applicable) - -When modifying existing systems: - -- Current architecture patterns and constraints -- Existing domain boundaries to be respected -- Integration points that must be maintained -- Technical debt addressed or worked around - -### High-Level Architecture - -**RECOMMENDED**: Include Mermaid diagram showing system architecture (required for complex features, optional for simple additions) - -**Architecture Integration**: - -- Existing patterns preserved: [list key patterns] -- New components rationale: [why each is needed] -- Technology alignment: [how it fits current stack] -- Steering compliance: [principles maintained] - -### Technology Stack and Design Decisions - -**Generation Instructions** (DO NOT include this section in design.md): -Adapt content based on feature classification from Discovery & Analysis Phase: - -**For New Features (greenfield)**: -Generate Technology Stack section with ONLY relevant layers: - -- Include only applicable technology layers (e.g., skip Frontend for CLI tools, skip Infrastructure for libraries) -- For each technology choice, provide: selection, rationale, and alternatives considered -- Include Architecture Pattern Selection if making architectural decisions - -**For Extensions/Additions to Existing Systems**: -Generate Technology Alignment section instead: - -- Document how feature aligns with existing technology stack -- Note any new dependencies or libraries being introduced -- Justify deviations from established patterns if necessary - -**Key Design Decisions**: -Generate 1-3 critical technical decisions that significantly impact the implementation. 
-Each decision should follow this format: - -- **Decision**: [Specific technical choice made] -- **Context**: [Problem or requirement driving this decision] -- **Alternatives**: [2-3 other approaches considered] -- **Selected Approach**: [What was chosen and how it works] -- **Rationale**: [Why this is optimal for the specific context] -- **Trade-offs**: [What we gain vs. what we sacrifice] - -Skip this entire section for simple CRUD operations or when following established patterns without deviation. - -## System Flows - -**Flow Design Generation Instructions** (DO NOT include this section in design.md): -Generate appropriate flow diagrams ONLY when the feature requires flow visualization. Select from: - -- **Sequence Diagrams**: For user interactions across multiple components -- **Process Flow Charts**: For complex algorithms, decision branches, or state machines -- **Data Flow Diagrams**: For data transformations, ETL processes, or data pipelines -- **State Diagrams**: For complex state transitions -- **Event Flow**: For async/event-driven architectures - -Skip this section entirely for simple CRUD operations or features without complex flows. -When included, provide concise Mermaid diagrams specific to the actual feature requirements. - -## Requirements Traceability - -**Traceability Generation Instructions** (DO NOT include this section in design.md): -Generate traceability mapping ONLY for complex features with multiple requirements or when explicitly needed for compliance/validation. 
- -When included, create a mapping table showing how each EARS requirement is realized: -| Requirement | Requirement Summary | Components | Interfaces | Flows | -|---------------|-------------------|------------|------------|-------| -| 1.1 | Brief description | Component names | API/Methods | Relevant flow diagrams | - -Alternative format for simpler cases: - -- **1.1**: Realized by [Component X] through [Interface Y] -- **1.2**: Implemented in [Component Z] with [Flow diagram reference] - -Skip this section for simple features with straightforward 1:1 requirement-to-component mappings. - -## Components and Interfaces - -**Component Design Generation Instructions** (DO NOT include this section in design.md): -Structure components by domain boundaries or architectural layers. Generate only relevant subsections based on component type. -Group related components under domain/layer headings for clarity. - -### [Domain/Layer Name] - -#### [Component Name] - -**Responsibility & Boundaries** - -- **Primary Responsibility**: Single, clear statement of what this component does -- **Domain Boundary**: Which domain/subdomain this belongs to -- **Data Ownership**: What data this component owns and manages -- **Transaction Boundary**: Scope of transactional consistency (if applicable) - -**Dependencies** - -- **Inbound**: Components/services that depend on this component -- **Outbound**: Components/services this component depends on -- **External**: Third-party services, libraries, or external systems - -**External Dependencies Investigation** (when using external libraries/services): - -- Use WebSearch to locate official documentation, GitHub repos, and community resources -- Use WebFetch to retrieve and analyze documentation pages, API references, and usage examples -- Verify API signatures, authentication methods, and rate limits -- Check version compatibility, breaking changes, and migration guides -- Investigate common issues, best practices, and performance 
considerations -- Document any assumptions, unknowns, or risks for implementation phase -- If critical information is missing, clearly note "Requires investigation during implementation: [specific concern]" - -**Contract Definition** - -Select and generate ONLY the relevant contract types for each component: - -**Service Interface** (for business logic components): - -```typescript -interface [ComponentName]Service { - // Method signatures with clear input/output types - // Include error types in return signatures - methodName(input: InputType): Result; -} -``` - -- **Preconditions**: What must be true before calling -- **Postconditions**: What is guaranteed after successful execution -- **Invariants**: What remains true throughout - -**API Contract** (for REST/GraphQL endpoints): -| Method | Endpoint | Request | Response | Errors | -|--------|----------|---------|----------|--------| -| POST | /api/resource | CreateRequest | Resource | 400, 409, 500 | - -With detailed schemas only for complex payloads - -**Event Contract** (for event-driven components): - -- **Published Events**: Event name, schema, trigger conditions -- **Subscribed Events**: Event name, handling strategy, idempotency -- **Ordering**: Guaranteed order requirements -- **Delivery**: At-least-once, at-most-once, or exactly-once - -**Batch/Job Contract** (for scheduled/triggered processes): - -- **Trigger**: Schedule, event, or manual trigger conditions -- **Input**: Data source and validation rules -- **Output**: Results destination and format -- **Idempotency**: How repeat executions are handled -- **Recovery**: Failure handling and retry strategy - -**State Management** (only if component maintains state): - -- **State Model**: States and valid transitions -- **Persistence**: Storage strategy and consistency model -- **Concurrency**: Locking, optimistic/pessimistic control - -**Integration Strategy** (when modifying existing systems): - -- **Modification Approach**: Extend, wrap, or refactor 
existing code -- **Backward Compatibility**: What must be maintained -- **Migration Path**: How to transition from current to target state - -## Data Models - -**Data Model Generation Instructions** (DO NOT include this section in design.md): -Generate only relevant data model sections based on the system's data requirements and chosen architecture. -Progress from conceptual to physical as needed for implementation clarity. - -### Domain Model - -**When to include**: Complex business domains with rich behavior and rules - -**Core Concepts**: - -- **Aggregates**: Define transactional consistency boundaries -- **Entities**: Business objects with unique identity and lifecycle -- **Value Objects**: Immutable descriptive aspects without identity -- **Domain Events**: Significant state changes in the domain - -**Business Rules & Invariants**: - -- Constraints that must always be true -- Validation rules and their enforcement points -- Cross-aggregate consistency strategies - -Include conceptual diagram (Mermaid) only when relationships are complex enough to benefit from visualization - -### Logical Data Model - -**When to include**: When designing data structures independent of storage technology - -**Structure Definition**: - -- Entity relationships and cardinality -- Attributes and their types -- Natural keys and identifiers -- Referential integrity rules - -**Consistency & Integrity**: - -- Transaction boundaries -- Cascading rules -- Temporal aspects (versioning, audit) - -### Physical Data Model - -**When to include**: When implementation requires specific storage design decisions - -**For Relational Databases**: - -- Table definitions with data types -- Primary/foreign keys and constraints -- Indexes and performance optimizations -- Partitioning strategy for scale - -**For Document Stores**: - -- Collection structures -- Embedding vs referencing decisions -- Sharding key design -- Index definitions - -**For Event Stores**: - -- Event schema definitions -- Stream 
aggregation strategies -- Snapshot policies -- Projection definitions - -**For Key-Value/Wide-Column Stores**: - -- Key design patterns -- Column families or value structures -- TTL and compaction strategies - -### Data Contracts & Integration - -**When to include**: Systems with service boundaries or external integrations - -**API Data Transfer**: - -- Request/response schemas -- Validation rules -- Serialization format (JSON, Protobuf, etc.) - -**Event Schemas**: - -- Published event structures -- Schema versioning strategy -- Backward/forward compatibility rules - -**Cross-Service Data Management**: - -- Distributed transaction patterns (Saga, 2PC) -- Data synchronization strategies -- Eventual consistency handling - -Skip any section not directly relevant to the feature being designed. -Focus on aspects that influence implementation decisions. - -## Error Handling - -### Error Strategy - -Concrete error handling patterns and recovery mechanisms for each error type. - -### Error Categories and Responses - -**User Errors** (4xx): Invalid input → field-level validation; Unauthorized → auth guidance; Not found → navigation help -**System Errors** (5xx): Infrastructure failures → graceful degradation; Timeouts → circuit breakers; Exhaustion → rate limiting -**Business Logic Errors** (422): Rule violations → condition explanations; State conflicts → transition guidance - -**Process Flow Visualization** (when complex business logic exists): -Include Mermaid flowchart only for complex error scenarios with business workflows. - -### Monitoring - -Error tracking, logging, and health monitoring implementation. 
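The error categories above can be made concrete with the `Result` pattern this repository already favors; the category names, status codes, and the `renameFeature` helper below are illustrative assumptions for a sketch, not a prescribed implementation:

```typescript
// Sketch only: the template's error categories modeled as an explicit
// discriminated union, surfaced through a Result instead of thrown exceptions.
type AppError =
  | { kind: "user"; status: 400 | 401 | 404; field?: string; message: string }
  | { kind: "system"; status: 500 | 503; retryable: boolean; message: string }
  | { kind: "business"; status: 422; rule: string; message: string };

type Result<T, E = AppError> = { ok: true; value: T } | { ok: false; error: E };

// Early-return style: each failure comes back as a categorized error the
// caller can map to field-level validation, retries, or rule explanations.
function renameFeature(name: string): Result<string> {
  if (name.trim() === "") {
    return {
      ok: false,
      error: { kind: "user", status: 400, field: "name", message: "Name is required" },
    };
  }
  if (name.length > 64) {
    return {
      ok: false,
      error: { kind: "business", status: 422, rule: "max-name-length", message: "Name exceeds 64 chars" },
    };
  }
  return { ok: true, value: name.trim() };
}
```

Because every branch is typed, the monitoring layer can log `error.kind` and `error.status` without string matching.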
- -## Testing Strategy - -### Default sections (adapt names/sections to fit the domain) - -- Unit Tests: 3–5 items from core functions/modules (e.g., auth methods, subscription logic) -- Integration Tests: 3–5 cross-component flows (e.g., webhook handling, notifications) -- E2E/UI Tests (if applicable): 3–5 critical user paths (e.g., forms, dashboards) -- Performance/Load (if applicable): 3–4 items (e.g., concurrency, high-volume ops) - -## Optional Sections (include when relevant) - -### Security Considerations - -**Include when**: Features handle authentication, sensitive data, external integrations, or user permissions - -- Threat modeling, security controls, compliance requirements -- Authentication and authorization patterns -- Data protection and privacy considerations - -### Performance & Scalability - -**Include when**: Features have specific performance requirements, high load expectations, or scaling concerns - -- Target metrics and measurement strategies -- Scaling approaches (horizontal/vertical) -- Caching strategies and optimization techniques - -### Migration Strategy - -**REQUIRED**: Include Mermaid flowchart showing migration phases - -**Process**: Phase breakdown, rollback triggers, validation checkpoints - - ---- - -## Process Instructions (NOT included in design.md) - -### Visual Design Guidelines - -**Include based on complexity**: - -- **Simple features**: Basic component diagram or none if trivial -- **Complex features**: Architecture diagram, data flow diagram, ER diagram (if complex) -- **When helpful**: State machines, component interactions, decision trees, process flows, auth flows, approval workflows, data pipelines - -**Mermaid Diagram Rules**: - -- Use only basic graph syntax with nodes and relationships -- Exclude all styling elements (no style definitions, classDef, fill colors) -- Avoid visual customization (backgrounds, custom CSS) -- Example: `graph TB` → `A[Login] --> B[Dashboard]` → `B --> C[Settings]` -- Use simple 
alphanumeric labels for nodes/participants; avoid parentheses, commas, slashes, quotes, and other special characters in labels. -Prefer short labels without punctuation, e.g., write "Nextjs React TS" instead of "Next.js (React, TypeScript)". - -### Quality Checklist - -- [ ] Requirements covered with traceability -- [ ] Existing implementation respected -- [ ] Steering compliant, deviations documented -- [ ] Architecture visualized with clear diagrams -- [ ] Components and Interfaces have Purpose, Key Features, Interface Design -- [ ] Data models individually documented -- [ ] Integration with existing system explained - -### 3. Design Document Generation & Metadata Update - -- Generate complete design document following structure guidelines (no omissions or placeholders) -- Update `.kiro/specs/[feature-name]/spec.json`: - -```json -{ - "phase": "design-generated", - "approvals": { - "requirements": { "generated": true, "approved": true }, - "design": { "generated": true, "approved": false } - }, - "updated_at": "current_timestamp" -} -``` - -JSON update: update via file tools, set ISO `updated_at`, merge only needed keys; avoid duplicates. - -Final Reminder: Do not skip discovery. - -### Actionable Messages - -If requirements are not approved and no `-y` flag is provided: - -- **Error Message**: "Requirements must be approved before generating design. Run `/kiro/spec-requirements [feature-name]` to review requirements, then run `/kiro/spec-design [feature-name] -y` to proceed." -- **Alternative**: "Or run `/kiro/spec-design [feature-name] -y` to auto-approve requirements and generate design." - -### Conversation Guidance - -After generation: - -- Guide the user to review the design narrative and visualizations -- Suggest specific diagram additions if needed -- Direct to run `/kiro/spec-tasks [feature-name] -y` when approved - -Create a design document that tells a complete story through a clear narrative, structured components, and effective visualizations. 
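The metadata update described above (merge only the changed keys, set an ISO `updated_at`) can be sketched as a pure function; the `markDesignGenerated` name and the `SpecJson` shape below are assumptions for illustration — real updates go through the file tools:

```typescript
// Sketch only: a pure merge step for spec.json, as described above.
// Helper name and types are assumed, not part of the command spec.
type Approval = { generated: boolean; approved: boolean };
type SpecJson = {
  phase: string;
  approvals: Record<string, Approval>;
  updated_at: string;
  [key: string]: unknown;
};

function markDesignGenerated(spec: SpecJson, now: Date): SpecJson {
  // Merge only the keys that change; unrelated keys are carried over untouched.
  return {
    ...spec,
    phase: "design-generated",
    approvals: {
      ...spec.approvals,
      requirements: { ...spec.approvals.requirements, approved: true },
      design: { generated: true, approved: false },
    },
    updated_at: now.toISOString(), // ISO timestamp, as the template requires
  };
}
```

Keeping the merge pure makes the "avoid duplicates, merge only needed keys" rule trivially testable before any file is written.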
- -**BEFORE FINISHING**: Verify all sections are complete, no placeholders used, and spec.json is updated. -think deeply diff --git a/.cursor/commands/kiro/spec-impl.md b/.cursor/commands/kiro/spec-impl.md deleted file mode 100644 index a17f9bf..0000000 --- a/.cursor/commands/kiro/spec-impl.md +++ /dev/null @@ -1,133 +0,0 @@ - -description: Execute spec tasks using TDD methodology -argument-hint: [feature-name] - - -# Execute Spec Tasks with TDD - -Execute implementation tasks from spec using Kent Beck's Test-Driven Development methodology. - -## Arguments: [feature-name] - -Tool policy: Use Cursor file tools (read_file/list_dir/glob_file_search/apply_patch/edit_file); no shell. - -## Current Specs - -Available specs: Discover via list_dir/glob_file_search under `.kiro/specs/` - -## Instructions - -### Help Mode (--help) - -If arguments contain "--help", show usage: - -``` -/kiro/spec-impl [feature-name] - -Examples: - /kiro/spec-impl auth-system 1.1 # Execute task 1.1 - /kiro/spec-impl auth-system 1,2,3 # Execute tasks 1, 2, 3 - /kiro/spec-impl auth-system --all # Execute all pending tasks -``` - -### Pre-Execution Validation - -Feature name: Parse first token of `[feature-name]` argument - -Validate required files exist: - -- Requirements: Check `.kiro/specs/[feature-name]/requirements.md` via read_file -- Design: Check `.kiro/specs/[feature-name]/design.md` via read_file -- Tasks: Check `.kiro/specs/[feature-name]/tasks.md` via read_file -- Metadata: Check `.kiro/specs/[feature-name]/spec.json` via read_file - -### Context Loading - -**Load all required content before execution:** - -**Core Steering:** - -- Structure: `.kiro/steering/structure.md` -- Tech Stack: `.kiro/steering/tech.md` -- Product: `.kiro/steering/product.md` - -**Custom Steering:** -Additional files: Discover via list_dir/glob_file_search in `.kiro/steering` excluding `structure.md`, `tech.md`, `product.md` - -**Spec Documents:** -Feature directory: Parse from `[feature-name]` argument - -- 
Requirements: `.kiro/specs/[feature-name]/requirements.md` -- Design: `.kiro/specs/[feature-name]/design.md` -- Tasks: `.kiro/specs/[feature-name]/tasks.md` - -**Note**: [feature-name] will be replaced with actual feature name during execution - -### Task Execution - -1. **Parse feature name and task numbers** from arguments -2. **Load all context** (steering + spec documents) -3. **Extract checkboxes** from tasks.md: Read file and parse `- [ ]` / `- [x]` lines programmatically (no shell) -4. **Execute each checkbox** using TDD methodology directly - -### For Each Task Checkbox - -Execute using TDD methodology directly: - -**Implementation Steps:** - -1. **Load Project Context** (read these files first): - - Structure: `.kiro/steering/structure.md` - - Tech Stack: `.kiro/steering/tech.md` - - Product: `.kiro/steering/product.md` - - Custom steering files: Discover via list_dir/glob_file_search in `.kiro/steering` excluding `structure.md`, `tech.md`, `product.md` - - Spec Metadata: `.kiro/specs/[feature-name]/spec.json` - - Requirements: `.kiro/specs/[feature-name]/requirements.md` - - Design: `.kiro/specs/[feature-name]/design.md` - - All Tasks: `.kiro/specs/[feature-name]/tasks.md` - -2. **TDD Implementation** for each specific task: - - **RED**: Write failing tests first - - **GREEN**: Write minimal code to pass tests - - **REFACTOR**: Clean up and improve code structure - -3. **Task Completion**: - - Verify all tests pass - - Update checkbox from `- [ ]` to `- [x]` in `.kiro/specs/[feature-name]/tasks.md` - - Ensure no regressions in existing tests - -**For each task:** - -- Extract exact checkbox content from tasks.md -- Follow Kent Beck's TDD methodology strictly -- Implement only the specific task requirements -- Maintain code quality and test coverage - -## Implementation Logic - -1. **Parse Arguments**: - - Feature name: First argument - - Task numbers: Second argument (support: "1", "1,2,3", "--all") - -2. 
**Validate**: - - Spec directory exists - - Required files (requirements.md, design.md, tasks.md, spec.json) exist - - Spec is approved for implementation - -3. **Execute**: - - **Load all file contents** into memory first - - **Build complete context** for implementation - - **Execute each task sequentially** using TDD methodology - - Each task implementation receives complete project knowledge - -## Error Handling - -- Spec not found: Run /kiro/spec-init first -- Not approved: Complete spec workflow first -- Task failure: Keep checkbox unchecked, show error - -## Success Metrics - -- All selected checkboxes marked [x] in tasks.md -- Tests pass -- No regressions diff --git a/.cursor/commands/kiro/spec-init.md b/.cursor/commands/kiro/spec-init.md deleted file mode 100644 index 513c929..0000000 --- a/.cursor/commands/kiro/spec-init.md +++ /dev/null @@ -1,101 +0,0 @@ - -description: Initialize a new specification with detailed project description and requirements -argument-hint: - - -# Spec Initialization - -Initialize a new specification based on the provided project description: - -**Project Description**: $ARGUMENTS - -## Task: Initialize Specification Structure - -**SCOPE**: This command initializes the directory structure and metadata based on the detailed project description provided. - -Tool policy: Use Cursor file tools (read_file/list_dir/glob_file_search/apply_patch/edit_file); no shell. - -### 1. Generate Feature Name - -Create a concise, descriptive feature name from the project description ($ARGUMENTS). -**Check existing `.kiro/specs/` directory to ensure the generated feature name is unique. If a conflict exists, append a number suffix (e.g., feature-name-2).** - -### 2. 
Create Spec Directory - -Create `.kiro/specs/[generated-feature-name]/` directory with: - -- `spec.json` - Metadata and approval tracking -- `requirements.md` - Lightweight template with project description - -**Note**: design.md and tasks.md will be created by their respective commands during the development process. - -### 3. Initialize spec.json Metadata - -Write initial metadata with approval tracking: - -```json -{ - "feature_name": "[generated-feature-name]", - "created_at": "current_timestamp", - "updated_at": "current_timestamp", - "language": "ja", - "phase": "initialized", - "approvals": { - "requirements": { - "generated": false, - "approved": false - }, - "design": { - "generated": false, - "approved": false - }, - "tasks": { - "generated": false, - "approved": false - } - }, - "ready_for_implementation": false -} -``` - -JSON update: set ISO timestamps; merge needed keys only, avoid duplicates. - -### 4. Create Requirements Template - -Write requirements.md with project description: - -```markdown -# Requirements Document - -## Project Description (Input) - - - -## Requirements - - -``` - -### 5. Update AGENTS.md Reference - -Add the new spec to the active specifications list with the generated feature name and a brief description. - -## Next Steps After Initialization - -Follow the strict spec-driven development workflow: - -1. **`/kiro/spec-requirements [feature-name]`** - Create and generate requirements.md -2. **`/kiro/spec-design [feature-name]`** - Create and generate design.md (requires approved requirements) -3. **`/kiro/spec-tasks [feature-name]`** - Create and generate tasks.md (requires approved design) - -**Important**: Each phase creates its respective file and requires approval before proceeding to the next phase. - -## Output Format - -After initialization, provide: - -1. Generated feature name and rationale -2. Brief project summary -3. Created spec.json path -4. **Clear next step**: `/kiro/spec-requirements [feature-name]` -5. 
Explanation that only spec.json was created, following stage-by-stage development principles diff --git a/.cursor/commands/kiro/spec-requirements.md b/.cursor/commands/kiro/spec-requirements.md deleted file mode 100644 index f40e146..0000000 --- a/.cursor/commands/kiro/spec-requirements.md +++ /dev/null @@ -1,170 +0,0 @@ - -description: Generate comprehensive requirements for a specification -argument-hint: [feature-name] - - - - - - -- Principle: Use Cursor file tools only (read_file, list_dir, glob_file_search, apply_patch, edit_file). -- Shell: Do not use shell. If a capability gap is encountered, stop and report instead of attempting a workaround. - - - - - - - - Architecture: @.kiro/steering/structure.md - - Technical constraints: @.kiro/steering/tech.md - - Product context: @.kiro/steering/product.md - - Custom steering: Load all "Always" mode custom steering files from .kiro/steering/ - - - - - Spec directory: Use list_dir or glob_file_search (no shell) for `.kiro/specs/[feature-name]/` - - Requirements: `.kiro/specs/[feature-name]/requirements.md` - - Spec metadata: `.kiro/specs/[feature-name]/spec.json` - - - - - - Purpose: `spec.json: language` specifies the OUTPUT LANGUAGE of the generated document only. - - Validation: Read and parse JSON; ensure `language` is a non-empty string (e.g., `ja`, `en`). - - Behavior: - - If valid: Generate all document text strictly in `language`. - - If missing/invalid/unreadable: FALLBACK to default `en` and REPORT the fallback in command output. - - Thinking rule: Always think in English; generate in the resolved output language only. - - - - Read existing requirements.md created by spec-init to extract the project description. - Generate an initial set of EARS-based requirements from the description, then iterate with user feedback (in later runs) to refine. - Do not focus on implementation details in this phase; concentrate on writing requirements that will inform the design. - - - - 1. 
Focus on core functionality from the user's idea. - 2. Use EARS format for all acceptance criteria. - 3. Avoid sequential questions on first pass; propose an initial version. - 4. Keep scope manageable; enable expansion through review. - 5. Choose an appropriate subject: For software projects, use the concrete system/service name (e.g., "Checkout Service"). For non-software, select a responsible subject (e.g., process/workflow, team/role, artifact/document, campaign, protocol). - - - - - - WHEN [event/condition] THEN [system/subject] SHALL [response] - - IF [precondition/state] THEN [system/subject] SHALL [response] - - WHILE [ongoing condition] THE [system/subject] SHALL [continuous behavior] - - WHERE [location/context/trigger] THE [system/subject] SHALL [contextual behavior] - - - - WHEN [event] AND [additional condition] THEN [system/subject] SHALL [response] - - IF [condition] AND [additional condition] THEN [system/subject] SHALL [response] - - - - - -Update requirements.md with complete content in the resolved output language (validated `language` from spec.json or fallback `en`). - -```markdown -# Requirements Document - -## Introduction - -[Clear introduction summarizing the feature and its business value] - -## Requirements - -### Requirement 1: [Major Objective Area] - -**Objective:** As a [role/stakeholder], I want [feature/capability/outcome], so that [benefit] - -#### Acceptance Criteria - -This section should have EARS requirements - -1. WHEN [event] THEN [system/subject] SHALL [response] -2. IF [precondition] THEN [system/subject] SHALL [response] -3. WHILE [ongoing condition] THE [system/subject] SHALL [continuous behavior] -4. WHERE [location/context/trigger] THE [system/subject] SHALL [contextual behavior] - -### Requirement 2: [Next Major Objective Area] - -**Objective:** As a [role/stakeholder], I want [feature/capability/outcome], so that [benefit] - -1. WHEN [event] THEN [system/subject] SHALL [response] -2. 
WHEN [event] AND [condition] THEN [system/subject] SHALL [response] - -### Requirement 3: [Additional Major Areas] - -[Continue pattern for all major functional areas] -``` - - - - - -Update spec.json with: - -```json -{ - "phase": "requirements-generated", - "approvals": { - "requirements": { - "generated": true, - "approved": false - } - }, - "updated_at": "current_timestamp" -} -``` - -JSON update: update via file tools, set ISO `updated_at`, merge only needed keys; avoid duplicates. - - - - Generate the requirements document content ONLY. Do not include any review or approval instructions in the actual document file. - - - - - - -- Every acceptance criterion strictly follows EARS syntax (WHEN/IF/WHILE/WHERE, with optional AND). -- Each criterion is observable and yields a single, testable outcome. -- No ambiguous or subjective wording (e.g., quickly, appropriately); quantify where necessary. -- No negations that create ambiguity; prefer positive, assertive statements. -- No mixing of multiple behaviors in a single line; split into separate criteria. -- Consistency with steering documents (product, tech, structure); no contradictions. -- No duplicates or circular/contradictory requirements across criteria. - - - -After generating requirements.md, review the requirements and choose: - -- If requirements look good: Run `/kiro/spec-design [feature-name] -y` to proceed to design phase. -- If requirements need modification: Request changes, then re-run this command after modifications. - -The `-y` flag auto-approves requirements and generates design directly, streamlining the workflow while maintaining review enforcement. - - - - 1. Validate spec.json `language` — if valid, generate strictly in that language; if missing/invalid, fall back to `en` and report the fallback. - 2. Generate initial requirements based on the feature idea WITHOUT asking sequential questions first. - 3. Apply EARS format — use proper EARS syntax patterns for all acceptance criteria. - 4. 
Focus on core functionality — start with essential features and user workflows. - 5. Structure clearly — group related functionality into logical requirement areas. - 6. Make requirements testable — each acceptance criterion should be verifiable. - 7. Update tracking metadata upon completion. - - - - -- Before output, internally verify the EARS Validation Checks above. -- If any check fails, silently revise and regenerate up to two times. -- Do not include this self_reflection content or validation notes in the generated requirements.md. - - - diff --git a/.cursor/commands/kiro/spec-status.md b/.cursor/commands/kiro/spec-status.md deleted file mode 100644 index 4182cba..0000000 --- a/.cursor/commands/kiro/spec-status.md +++ /dev/null @@ -1,110 +0,0 @@ - -description: Show specification status and progress - - -# Specification Status - -Show current status and progress for feature: **[feature-name]** - -## Spec Context - -### Spec Files - -- Spec directory: Inspect via list_dir/glob_file_search for `.kiro/specs/[feature-name]/` -- Spec metadata: `.kiro/specs/[feature-name]/spec.json` -- Requirements: `.kiro/specs/[feature-name]/requirements.md` -- Design: `.kiro/specs/[feature-name]/design.md` -- Tasks: `.kiro/specs/[feature-name]/tasks.md` - -### All Specs Overview - -- Available specs: Discover via list_dir/glob_file_search under `.kiro/specs/` -- Active specs: Filter `spec.json` files with `implementation_ready=true` by reading and parsing JSON (no shell) - -## Task: Generate Status Report - -Create comprehensive status report for the specification in the language specified in spec.json (check `.kiro/specs/[feature-name]/spec.json` for "language" field): - -### 1. Specification Overview - -Display: - -- Feature name and description -- Creation date and last update -- Current phase (requirements/design/tasks/implementation) -- Overall completion percentage - -### 2. 
Phase Status - -For each phase, show: - -- ✅ **Requirements Phase**: [completion %] - - Requirements count: [number] - - Acceptance criteria defined: [yes/no] - - Requirements coverage: [complete/partial/missing] - -- ✅ **Design Phase**: [completion %] - - Architecture documented: [yes/no] - - Components defined: [yes/no] - - Diagrams created: [yes/no] - - Integration planned: [yes/no] - -- ✅ **Tasks Phase**: [completion %] - - Total tasks: [number] - - Completed tasks: [number] - - Remaining tasks: [number] - - Blocked tasks: [number] - -### 3. Implementation Progress - -If in implementation phase: - -- Task completion breakdown -- Current blockers or issues -- Estimated time to completion -- Next actions needed - -#### Task Completion Tracking - -- Parse tasks.md checkbox status: `- [x]` (completed) vs `- [ ]` (pending) -- Count completed vs total tasks -- Show completion percentage -- Identify next uncompleted task - -### 4. Quality Metrics - -Show: - -- Requirements coverage: [percentage] -- Design completeness: [percentage] -- Task granularity: [appropriate/too large/too small] -- Dependencies resolved: [yes/no] - -### 5. Recommendations - -Based on status, provide: - -- Next steps to take -- Potential issues to address -- Suggested improvements -- Missing elements to complete - -### 6. Steering Alignment - -Check alignment with steering documents: - -- Architecture consistency: [aligned/misaligned] -- Technology stack compliance: [compliant/non-compliant] -- Product requirements alignment: [aligned/misaligned] - -## Instructions - -1. **Check spec.json for language** - Use the language specified in the metadata -2. **Parse all spec files** to understand current state -3. **Calculate completion percentages** for each phase -4. **Identify next actions** based on current progress -5. **Highlight any blockers** or issues -6. **Provide clear recommendations** for moving forward -7. 
**Check steering alignment** to ensure consistency - -Generate status report that provides clear visibility into spec progress and next steps. diff --git a/.cursor/commands/kiro/spec-tasks.md b/.cursor/commands/kiro/spec-tasks.md deleted file mode 100644 index 1f4a15c..0000000 --- a/.cursor/commands/kiro/spec-tasks.md +++ /dev/null @@ -1,174 +0,0 @@ - -description: Generate implementation tasks for a specification -argument-hint: [feature-name] [-y] - - -# Implementation Tasks - -Generate detailed implementation tasks for feature: **[feature-name]** - -## Task: Generate Implementation Tasks - -Tool policy: Use Cursor file tools (read_file/list_dir/glob_file_search/apply_patch/edit_file); no shell. - -### Prerequisites & Context Loading - -- If invoked with `-y`: Auto-approve requirements and design in `spec.json` -- Otherwise: Stop if requirements/design missing or unapproved with message: - "Run `/kiro/spec-requirements` and `/kiro/spec-design` first, or use `-y` flag to auto-approve" -- If tasks.md exists: Prompt [o]verwrite/[m]erge/[c]ancel - -**Context Loading (Full Paths)**: - -1. `.kiro/specs/[feature-name]/requirements.md` - Feature requirements (EARS format) -2. `.kiro/specs/[feature-name]/design.md` - Technical design document -3. `.kiro/steering/` - Project-wide guidelines and constraints: - - **Core files (always load)**: - - `.kiro/steering/product.md` - Business context, product vision, user needs - - `.kiro/steering/tech.md` - Technology stack, frameworks, libraries - - `.kiro/steering/structure.md` - File organization, naming conventions, code patterns - - **Custom steering files** (load all EXCEPT "Manual" mode in `AGENTS.md`): - - Any additional `*.md` files in `.kiro/steering/` directory - - Examples: `api.md`, `testing.md`, `security.md`, etc. - - (Task planning benefits from comprehensive context) -4. 
`.kiro/specs/[feature-name]/tasks.md` - Existing tasks (only if merge mode) - -### CRITICAL Task Numbering Rules (MUST FOLLOW) - -**⚠️ MANDATORY: Sequential major task numbering & hierarchy limits** - -- Major tasks: 1, 2, 3, 4, 5... (MUST increment sequentially) -- Sub-tasks: 1.1, 1.2, 2.1, 2.2... (reset per major task) -- **Maximum 2 levels of hierarchy** (no 1.1.1 or deeper) -- Format exactly as: - -```markdown -- [ ] 1. Major task description -- [ ] 1.1 Sub-task description - - Detail item 1 - - Detail item 2 - - _Requirements: X.X, Y.Y_ - -- [ ] 1.2 Sub-task description - - Detail items... - - _Requirements: X.X_ - -- [ ] 2. Next major task (NOT 1 again!) -- [ ] 2.1 Sub-task... -``` - -### Task Generation Rules - -1. **Natural language descriptions**: Focus on capabilities and outcomes, not code structure - - Describe **what functionality to achieve**, not file locations or code organization - - Specify **business logic and behavior**, not method signatures or type definitions - - Reference **features and capabilities**, not class names or API contracts - - Use **domain language**, not programming constructs - - **Avoid**: File paths, function/method names, type signatures, class/interface names, specific data structures - - **Include**: User-facing functionality, business rules, system behaviors, data relationships - - Implementation details (files, methods, types) come from design.md -2. **Task integration & progression**: - - Each task must build on previous outputs (no orphaned code) - - End with integration tasks to wire everything together - - No hanging features - every component must connect to the system - - Incremental complexity - no big jumps between tasks - - Validate core functionality early in the sequence -3. **Flexible task sizing**: - - Major tasks: As many sub-tasks as logically needed - - Sub-tasks: 1-3 hours each, 3-10 details per sub - - Group by cohesion, not arbitrary numbers - - Balance between too granular and too broad -4. 
**Requirements mapping**: End details with `_Requirements: X.X, Y.Y_` or `_Requirements: [description]_` -5. **Code-only focus**: Include ONLY coding/testing tasks, exclude deployment/docs/user testing - -### Example Structure (FORMAT REFERENCE ONLY) - -```markdown -# Implementation Plan - -- [ ] 1. Set up project foundation and infrastructure - - Initialize project with required technology stack - - Configure server infrastructure and request handling - - Establish data storage and caching layer - - Set up configuration and environment management - - _Requirements: All requirements need foundational setup_ - -- [ ] 2. Build authentication and user management system -- [ ] 2.1 Implement core authentication functionality - - Set up user data storage with validation rules - - Implement secure authentication mechanism - - Build user registration functionality - - Add login and session management features - - _Requirements: 7.1, 7.2_ - -- [ ] 2.2 Enable email service integration - - Implement secure credential storage system - - Build authentication flow for email providers - - Create email connection validation logic - - Develop email account management features - - _Requirements: 5.1, 5.2, 5.4_ -``` - -### Requirements Coverage Check - -- **MANDATORY**: Ensure ALL requirements from requirements.md are covered -- Cross-reference every requirement ID with task mappings -- If gaps found: Return to requirements or design phase -- No requirement should be left without corresponding tasks - -### Document Generation - -- Generate `.kiro/specs/[feature-name]/tasks.md` using the exact numbering format above -- **Language**: Use language from `spec.json.language` field, default to English -- **Task descriptions**: Use natural language for "what to do" (implementation details in design.md) -- Update `.kiro/specs/[feature-name]/spec.json`: - - Set `phase: "tasks-generated"` - - Set `tasks.generated: true` - - If `-y` flag used: Set `requirements.approved: true` and 
`design.approved: true` - - Preserve existing metadata (language, creation date, etc.) -- Use file tools only (no shell commands) - ---- - -## INTERACTIVE APPROVAL IMPLEMENTED (Not included in document) - -The following is for Coding Agent conversation only - NOT for the generated document: - -## Next Phase: Implementation Ready - -After generating tasks.md, review the implementation tasks: - -**If tasks look good:** -Begin implementation following the generated task sequence - -**If tasks need modification:** -Request changes and re-run this command after modifications - -Tasks represent the final planning phase - implementation can begin once tasks are approved. - -**Final approval process for implementation**: - -``` -📋 Tasks review completed. Ready for implementation. -📄 Generated: .kiro/specs/[feature-name]/tasks.md -✅ All phases approved. Implementation can now begin. -``` - -### Review Checklist (for user reference): - -- [ ] Tasks are properly sized (1-3 hours each) -- [ ] All requirements are covered by tasks -- [ ] Task dependencies are correct -- [ ] Technology choices match the design -- [ ] Testing tasks are included - -### Implementation Instructions - -When tasks are approved, the implementation phase begins: - -1. Work through tasks sequentially -2. Mark tasks as completed in tasks.md -3. Each task should produce working, tested code -4. Commit code after each major task completion - -think deeply diff --git a/.cursor/commands/kiro/steering-custom.md b/.cursor/commands/kiro/steering-custom.md deleted file mode 100644 index 8a46aeb..0000000 --- a/.cursor/commands/kiro/steering-custom.md +++ /dev/null @@ -1,162 +0,0 @@ - -description: Create custom Kiro steering documents for specialized project contexts - - -# Kiro Custom Steering Creation - -Create custom steering documents in `.kiro/steering/` for specialized contexts beyond the three foundational files (`product.md`, `tech.md`, `structure.md`). 
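The spec.json bookkeeping described for task generation (set the phase, mark `tasks.generated`, optionally auto-approve, preserve unrelated metadata) can be sketched as a small merge helper. This is an illustrative Bun/TypeScript sketch, not part of the command; the field names mirror the template, but the helper and its type shapes are assumptions:

```typescript
// Illustrative sketch: merge task-generation metadata into an existing
// spec.json object without clobbering unrelated keys (language, dates, ...).
// Field names follow the template; the helper itself is hypothetical.
type Approvals = {
  requirements?: { generated?: boolean; approved?: boolean };
  design?: { generated?: boolean; approved?: boolean };
  tasks?: { generated?: boolean; approved?: boolean };
};

type SpecMeta = {
  phase: string;
  language?: string;
  approvals?: Approvals;
  updated_at?: string;
  [key: string]: unknown;
};

function markTasksGenerated(spec: SpecMeta, autoApprove: boolean): SpecMeta {
  // Merge only the keys the command needs; everything else is preserved.
  return {
    ...spec,
    phase: "tasks-generated",
    approvals: {
      ...spec.approvals,
      ...(autoApprove
        ? {
            requirements: { ...spec.approvals?.requirements, approved: true },
            design: { ...spec.approvals?.design, approved: true },
          }
        : {}),
      tasks: { ...spec.approvals?.tasks, generated: true },
    },
    updated_at: new Date().toISOString(),
  };
}

const before: SpecMeta = {
  phase: "design-generated",
  language: "en",
  approvals: { requirements: { generated: true, approved: false } },
};
const after = markTasksGenerated(before, true);
```

Because the merge is additive, re-running the command is safe: existing metadata such as `language` survives untouched.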
- -Tool policy: Use Cursor file tools (read_file/list_dir/glob_file_search/apply_patch/edit_file); no shell. - -## Current Steering Status - -### Existing Steering Documents - -- Foundational steering files: Discover via list_dir/glob_file_search under `.kiro/steering/` -- Custom steering count: Count non-core `.md` files in `.kiro/steering` via list_dir/glob_file_search - -### Project Analysis - -- Specialized areas: Discover notable directories via glob_file_search (e.g., `**/api/**`, `**/auth/**`, `**/security/**`, `**/test*/**`, `**/spec*/**`) -- Config patterns: Discover common config files via glob_file_search (e.g., `*.config.*`, `*rc.*`, `.*rc`) - -## Task: Create Custom Steering Document - -You will create a new custom steering document based on user requirements. Common use cases include: - -### Common Custom Steering Types - -1. **API Standards** (`api-standards.md`) - - REST/GraphQL conventions - - Error handling patterns - - Authentication/authorization approaches - - API versioning strategy - -2. **Testing Approach** (`testing.md`) - - Test file organization - - Naming conventions for tests - - Mocking strategies - - Coverage requirements - - E2E vs unit vs integration testing - -3. **Code Style Guidelines** (`code-style.md`) - - Language-specific conventions - - Formatting rules beyond linters - - Comment standards - - Function/variable naming patterns - - Code organization principles - -4. **Security Policies** (`security.md`) - - Input validation requirements - - Authentication patterns - - Secrets management - - OWASP compliance guidelines - - Security review checklist - -5. **Database Conventions** (`database.md`) - - Schema design patterns - - Migration strategies - - Query optimization guidelines - - Connection pooling settings - - Backup and recovery procedures - -6. 
**Performance Standards** (`performance.md`) - - Load time requirements - - Memory usage limits - - Optimization techniques - - Caching strategies - - Monitoring and profiling - -7. **Deployment Workflow** (`deployment.md`) - - CI/CD pipeline stages - - Environment configurations - - Release procedures - - Rollback strategies - - Health check requirements - -## Inclusion Mode Selection - -Choose the inclusion mode based on how frequently and in what context this steering document should be referenced: - -### 1. Always Included (Use sparingly for custom files) - -- **When to use**: Universal standards that apply to ALL code (security policies, core conventions) -- **Impact**: Increases context size for every interaction -- **Example**: `security-standards.md` for critical security requirements -- **Recommendation**: Only use for truly universal guidelines - -### 2. Conditional Inclusion (Recommended for most custom files) - -- **When to use**: Domain-specific guidelines for particular file types or directories -- **File patterns**: `"*.test.js"`, `"src/api/**/*"`, `"**/auth/*"`, `"*.config.*"` -- **Example**: `testing-approach.md` only loads when editing test files -- **Benefits**: Relevant context without overwhelming general interactions - -### 3. Manual Inclusion (Best for specialized contexts) - -- **When to use**: Specialized knowledge needed occasionally -- **Usage**: Reference with `@filename.md` during specific conversations -- **Example**: `deployment-runbook.md` for deployment-specific tasks -- **Benefits**: Available when needed, doesn't clutter routine interactions - -## Document Structure Guidelines - -Create the custom steering document with: - -1. **Clear Title and Purpose** - - What aspect of the project this document covers - - When this guidance should be applied - -2. **Specific Guidelines** - - Concrete rules and patterns to follow - - Rationale for important decisions - -3. 
**Code Examples** - - Show correct implementation patterns - - Include counter-examples if helpful - -4. **Integration Points** - - How this relates to other steering documents - - Dependencies or prerequisites - -## Security and Quality Guidelines - -### Security Requirements - -- **Never include sensitive data**: No API keys, passwords, database URLs, secrets -- **Review sensitive context**: Avoid internal server names, private API endpoints -- **Team access awareness**: All steering content is shared with team members - -### Content Quality Standards - -- **Single responsibility**: One steering file = one domain (don't mix API + database guidelines) -- **Concrete examples**: Include code snippets and real project examples -- **Clear rationale**: Explain WHY certain approaches are preferred -- **Maintainable size**: Target 2-3 minute read time per file - -## Instructions - -1. **Ask the user** for: - - Document name (descriptive filename ending in .md) - - Topic/purpose of the custom steering - - Inclusion mode preference - - Specific patterns for conditional inclusion (if applicable) - -2. **Create the document** in `.kiro/steering/` with: - - Clear, focused content (2-3 minute read) - - Practical examples - - Consistent formatting with other steering files - -3. **Document the inclusion mode** by adding a comment at the top: - - ```markdown - - ``` - -4. **Validate** that the document: - - Doesn't duplicate existing steering content - - Provides unique value for the specified context - - Follows markdown best practices - -Remember: Custom steering documents should supplement, not replace, the foundational three files. They provide specialized context for specific aspects of your project. 
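The "doesn't duplicate existing steering content" validation step could be approximated with a filename-level check before the new document is written. A minimal sketch: the directory layout and core file names come from the template, while the normalization rule (compare names with case, extension, and separators stripped) is a simplifying assumption:

```typescript
// Illustrative sketch: flag a proposed custom steering filename that
// collides with an existing one before creating it. The similarity rule
// (normalized-name equality) is an assumption, not part of the command.
const CORE_FILES = ["product.md", "tech.md", "structure.md"];

function normalize(name: string): string {
  // "API-Standards.md" -> "apistandards"
  return name.toLowerCase().replace(/\.md$/, "").replace(/[-_.]/g, "");
}

function findCollision(proposed: string, existing: string[]): string | null {
  const target = normalize(proposed);
  for (const file of [...CORE_FILES, ...existing]) {
    if (normalize(file) === target) return file;
  }
  return null;
}

const existingCustom = ["testing.md", "api-standards.md"];
const clash = findCollision("API_Standards.md", existingCustom);
const ok = findCollision("database.md", existingCustom);
```

A real check would also compare headings or topics inside the files; name equality only catches the most obvious duplicates.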
-ultrathink diff --git a/.cursor/commands/kiro/steering.md b/.cursor/commands/kiro/steering.md deleted file mode 100644 index 9f78a5f..0000000 --- a/.cursor/commands/kiro/steering.md +++ /dev/null @@ -1,198 +0,0 @@ - -description: Create or update Kiro steering documents intelligently based on project state - - -# Kiro Steering Management - -Intelligently create or update steering documents in `.kiro/steering/` to maintain accurate project knowledge for spec-driven development. This command detects existing documents and handles them appropriately. - -Tool policy: Use Cursor file tools (read_file/list_dir/glob_file_search/apply_patch/edit_file); no shell. - -## Existing Files Check - -### Current steering documents status - -- Product overview: Check `.kiro/steering/product.md` via read_file; create if missing -- Technology stack: Check `.kiro/steering/tech.md` via read_file; create if missing -- Project structure: Check `.kiro/steering/structure.md` via read_file; create if missing -- Custom steering files: Discover via list_dir/glob_file_search in `.kiro/steering` excluding core files; preserve if present - -## Project Analysis - -### Current Project State - -- Project files: Discover via glob_file_search while excluding common vendor dirs; summarize by type -- Configuration files: Discover typical configs via glob_file_search (e.g. `package.json`, `requirements.txt`, `pyproject.toml`, `tsconfig.json`, etc.) 
-- Documentation: Discover markdown/docs via list_dir/glob_file_search (exclude vendor dirs) - -### Recent Changes (if updating) - -- Last steering update: `git log -1 --oneline -- .kiro/steering/ 2>/dev/null || echo "No previous steering commits"` -- Commits since last steering update: `LAST_COMMIT=$(git log -1 --format=%H -- .kiro/steering/ 2>/dev/null); if [ -n "$LAST_COMMIT" ]; then git log --oneline ${LAST_COMMIT}..HEAD --max-count=20 2>/dev/null || echo "Not a git repository"; else echo "No previous steering update found"; fi` -- Working tree status: `git status --porcelain 2>/dev/null || echo "Not a git repository"` - -### Existing Documentation - -- Main README: @README.md -- Package configuration: @package.json -- Python requirements: @requirements.txt -- TypeScript config: @tsconfig.json -- Project documentation: @docs/ - -## Smart Update Strategy - -Based on the existing files check above, this command will: - -### For NEW files (showing "📝 Not found"): - -Generate comprehensive initial content covering all aspects of the project. - -### For EXISTING files (showing "✅ EXISTS"): - -1. **Preserve user customizations** - Any manual edits or custom sections -2. **Update factual information** - Dependencies, file structures, commands -3. **Add new sections** - Only if significant new capabilities exist -4. **Mark deprecated content** - Rather than deleting -5. **Maintain formatting** - Keep consistent with existing style - -## Inclusion Modes for Core Steering Files - -The three core steering files (product.md, tech.md, structure.md) are designed to be **Always Included** - loaded in every AI interaction to provide consistent project context. 
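The loading rule stated above reduces to a small function: core files are always included, and custom files fall back to whatever mode they declare. A hedged sketch; the `declared` parameter (a per-file mode declaration) and the manual default are assumptions for illustration:

```typescript
// Illustrative sketch of the inclusion rule: core steering files load in
// every interaction; custom files honor their declared mode. Defaulting
// undeclared custom files to "manual" is an assumption.
type InclusionMode = "always" | "conditional" | "manual";

const CORE_STEERING = new Set(["product.md", "tech.md", "structure.md"]);

function inclusionMode(file: string, declared?: InclusionMode): InclusionMode {
  if (CORE_STEERING.has(file)) return "always"; // core: every interaction
  return declared ?? "manual"; // custom: honor declaration
}

const core = inclusionMode("tech.md");
const custom = inclusionMode("testing.md", "conditional");
const undeclared = inclusionMode("deployment-runbook.md");
```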
- -### Understanding Inclusion Modes - -- **Always Included (Default for core files)**: Loaded in every interaction - ensures consistent project knowledge -- **Conditional**: Loaded only when working with matching file patterns (mainly for custom steering) -- **Manual**: Referenced on-demand with @filename syntax (for specialized contexts) - -### Core Files Strategy - -- `product.md`: Always - Business context needed for all development decisions -- `tech.md`: Always - Technical constraints affect all code generation -- `structure.md`: Always - Architectural decisions impact all file organization - -## Task: Create or Update Steering Documents - -### 1. Product Overview (`product.md`) - -#### For NEW file: - -Generate comprehensive product overview including: - -- **Product Overview**: Brief description of what the product is -- **Core Features**: Bulleted list of main capabilities -- **Target Use Case**: Specific scenarios the product addresses -- **Key Value Proposition**: Unique benefits and differentiators - -#### For EXISTING file: - -Update only if there are: - -- **New features** added to the product -- **Removed features** or deprecated functionality -- **Changed use cases** or target audience -- **Updated value propositions** or benefits - -### 2. 
Technology Stack (`tech.md`) - -#### For NEW file: - -Document the complete technology landscape: - -- **Architecture**: High-level system design -- **Frontend**: Frameworks, libraries, build tools (if applicable) -- **Backend**: Language, framework, server technology (if applicable) -- **Development Environment**: Required tools and setup -- **Common Commands**: Frequently used development commands -- **Environment Variables**: Key configuration variables -- **Port Configuration**: Standard ports used by services - -#### For EXISTING file: - -Check for changes in: - -- **New dependencies** added via package managers -- **Removed libraries** or frameworks -- **Version upgrades** of major dependencies -- **New development tools** or build processes -- **Changed environment variables** or configuration -- **Modified port assignments** or service architecture - -### 3. Project Structure (`structure.md`) - -#### For NEW file: - -Outline the codebase organization: - -- **Root Directory Organization**: Top-level structure with descriptions -- **Subdirectory Structures**: Detailed breakdown of key directories -- **Code Organization Patterns**: How code is structured -- **File Naming Conventions**: Standards for naming files and directories -- **Import Organization**: How imports/dependencies are organized -- **Key Architectural Principles**: Core design decisions and patterns - -#### For EXISTING file: - -Look for changes in: - -- **New directories** or major reorganization -- **Changed file organization** patterns -- **New or modified naming conventions** -- **Updated architectural patterns** or principles -- **Refactored code structure** or module boundaries - -### 4. Custom Steering Files - -If custom steering files exist: - -- **Preserve them** - Do not modify unless specifically outdated -- **Check relevance** - Note if they reference removed features -- **Suggest new custom files** - If new specialized areas emerge - -## Instructions - -1. 
**Create `.kiro/steering/` directory** if it doesn't exist -2. **Check existing files** to determine create vs update mode -3. **Analyze the codebase** using native tools (Glob, Grep, LS) -4. **For NEW files**: Generate comprehensive initial documentation -5. **For EXISTING files**: - - Read current content first - - Preserve user customizations and comments - - Update only factual/technical information - - Maintain existing structure and style -6. **Use clear markdown formatting** with proper headers and sections -7. **Include concrete examples** where helpful for understanding -8. **Focus on facts over assumptions** - document what exists -9. **Follow spec-driven development principles** - -## Important Principles - -### Security Guidelines - -- **Never include sensitive data**: No API keys, passwords, database credentials, or personal information -- **Review before commit**: Always review steering content before version control -- **Team sharing consideration**: Remember steering files are shared with all project collaborators - -### Content Quality Guidelines - -- **Single domain focus**: Each steering file should cover one specific area -- **Clear, descriptive content**: Provide concrete examples and rationale for decisions -- **Regular maintenance**: Review and update steering files after major project changes -- **Actionable guidance**: Write specific, implementable guidelines rather than abstract principles - -### Preservation Strategy - -- **User sections**: Any section not in the standard template should be preserved -- **Custom examples**: User-added examples should be maintained -- **Comments**: Inline comments or notes should be kept -- **Formatting preferences**: Respect existing markdown style choices - -### Update Philosophy - -- **Additive by default**: Add new information rather than replacing -- **Mark deprecation**: Use strikethrough or [DEPRECATED] tags -- **Date significant changes**: Add update timestamps for major changes -- **Explain 
changes**: Brief notes on why something was updated - -The goal is to maintain living documentation that stays current while respecting user customizations, supporting effective spec-driven development without requiring users to worry about losing their work. -ultrathink diff --git a/.cursor/commands/kiro/validate-design.md b/.cursor/commands/kiro/validate-design.md deleted file mode 100644 index b73c7bd..0000000 --- a/.cursor/commands/kiro/validate-design.md +++ /dev/null @@ -1,207 +0,0 @@ - -description: Interactive technical design quality review and validation -argument-hint: - - -# Technical Design Validation - -Interactive design quality review for feature: **[feature-name]** - -## Context Loading - -### Prerequisites Validation - -- Design document must exist: `.kiro/specs/[feature-name]/design.md` -- If not exist, stop with message: "Run `/kiro/spec-design [feature-name]` first to generate design document" - -### Review Context - -- Spec metadata: `.kiro/specs/[feature-name]/spec.json` -- Requirements document: `.kiro/specs/[feature-name]/requirements.md` -- Design document: `.kiro/specs/[feature-name]/design.md` -- Core steering documents: - - Architecture: @.kiro/steering/structure.md - - Technology: @.kiro/steering/tech.md - - Product context: @.kiro/steering/product.md -- Custom steering: All additional `.md` files in `.kiro/steering/` directory - -## Task: Interactive Design Quality Review - -### Review Methodology - -**Focus**: Critical issues only - limit to 3 most important concerns -**Format**: Interactive dialogue with immediate feedback and improvement suggestions -**Outcome**: GO/NO-GO decision with clear rationale - -### Core Review Criteria - -#### 1. 
Existing Architecture Alignment (Critical) - -**Evaluation Points**: - -- Integration with existing system boundaries and layers -- Consistency with established architectural patterns -- Proper dependency direction and coupling management -- Alignment with current module organization and responsibilities - -**Review Questions**: - -- Does this design respect existing architectural boundaries? -- Are new components properly integrated with existing systems? -- Does the design follow established patterns and conventions? - -#### 2. Design Consistency & Standards - -**Evaluation Points**: - -- Adherence to project naming conventions and code standards -- Consistent error handling and logging strategies -- Uniform approach to configuration and dependency management -- Alignment with established data modeling patterns - -**Review Questions**: - -- Is the design consistent with existing code standards? -- Are error handling and configuration approaches unified? -- Does naming and structure follow project conventions? - -#### 3. Extensibility & Maintainability - -**Evaluation Points**: - -- Design flexibility for future requirements changes -- Clear separation of concerns and single responsibility principle -- Testability and debugging considerations -- Documentation and code clarity requirements - -**Review Questions**: - -- How well does this design handle future changes? -- Are responsibilities clearly separated and testable? -- Is the design complexity appropriate for the requirements? - -#### 4. Type Safety & Interface Design - -**Evaluation Points** (for TypeScript projects): - -- Proper type definitions and interface contracts -- Avoidance of `any` types and unsafe patterns -- Clear API boundaries and data structure definitions -- Input validation and error handling coverage - -**Review Questions**: - -- Are types properly defined and interfaces clear? -- Is the API design robust and well-defined? -- Are edge cases and error conditions handled appropriately? 
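As a concrete illustration of what the type-safety criterion above rewards, a reviewer would expect boundary code shaped like the following rather than `any`-typed throw-on-error helpers. The `Result` shape mirrors the project convention; the domain names (`Email`, `parseEmail`) and the validation rule are hypothetical:

```typescript
// Illustrative sketch of the pattern the type-safety review looks for:
// explicit contracts, a Result type instead of thrown strings, and
// validation at the boundary. Domain names are hypothetical examples.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };

type Email = string & { readonly _brand: "Email" };

function parseEmail(raw: string): Result<Email, string> {
  // Minimal boundary validation; a production rule would be stricter.
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(raw)) {
    return { ok: false, error: `invalid email: ${raw}` };
  }
  return { ok: true, value: raw as Email };
}

const good = parseEmail("dev@example.com");
const bad = parseEmail("not-an-email");
```

Callers must narrow on `ok` before touching `value`, so edge cases and error conditions are handled by construction, which is exactly what the review questions probe for.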
- -### Interactive Review Process - -#### Step 1: Design Analysis - -Thoroughly analyze the design document against all review criteria, identifying the most critical issues that could impact: - -- System integration and compatibility -- Long-term maintainability -- Implementation complexity and risks -- Requirements fulfillment accuracy - -#### Step 2: Critical Issues Identification - -**Limit to 3 most important concerns maximum**. For each critical issue: - -**Issue Format**: - -``` -🔴 **Critical Issue [1-3]**: [Brief title] -**Concern**: [Specific problem description] -**Impact**: [Why this matters for the project] -**Suggestion**: [Concrete improvement recommendation] -``` - -#### Step 3: Design Strengths Recognition - -Acknowledge 1-2 strong aspects of the design to maintain balanced feedback. - -#### Step 4: GO/NO-GO Decision - -**GO Criteria**: - -- No critical architectural misalignment -- Requirements adequately addressed -- Implementation path is clear and reasonable -- Risks are acceptable and manageable - -**NO-GO Criteria**: - -- Fundamental architectural conflicts -- Critical requirements not addressed -- Implementation approach has high failure risk -- Design complexity disproportionate to requirements - -### Output Format - -Generate review in the language specified in spec.json (check `.kiro/specs/[feature-name]/spec.json` for "language" field): - -#### Design Review Summary - -Brief overview of the design's overall quality and readiness. - -#### Critical Issues (Maximum 3) - -For each issue identified: - -- **Issue**: Clear problem statement -- **Impact**: Why it matters -- **Recommendation**: Specific improvement suggestion - -#### Design Strengths - -1-2 positive aspects worth highlighting. 
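The review contract above (at most three critical issues, one or two strengths, a definitive decision with rationale) can be modeled as data, which makes the limits mechanically checkable instead of purely conventional. A minimal sketch; the type and function names are assumptions:

```typescript
// Illustrative sketch: model the review outcome so the "max 3 critical
// issues" and "balanced feedback" rules can be validated mechanically.
type CriticalIssue = {
  title: string;
  concern: string;
  impact: string;
  suggestion: string;
};

type ReviewOutcome = {
  decision: "GO" | "NO-GO";
  issues: CriticalIssue[]; // capped at 3 by validateOutcome
  strengths: string[];
  rationale: string;
};

function validateOutcome(outcome: ReviewOutcome): string[] {
  const problems: string[] = [];
  if (outcome.issues.length > 3) problems.push("more than 3 critical issues");
  if (outcome.strengths.length < 1) problems.push("no strengths acknowledged");
  if (outcome.rationale.trim() === "") problems.push("missing rationale");
  return problems;
}

const draft: ReviewOutcome = {
  decision: "GO",
  issues: [],
  strengths: ["clear module boundaries"],
  rationale: "No critical architectural conflicts; risks are manageable.",
};
const problems = validateOutcome(draft);
```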
- -#### Final Assessment - -**Decision**: GO / NO-GO -**Rationale**: Clear reasoning for the decision -**Next Steps**: What should happen next - -#### Interactive Discussion - -Engage in dialogue about: - -- Designer's perspective on identified issues -- Alternative approaches or trade-offs -- Clarification of design decisions -- Agreement on necessary changes (if any) - -## Review Guidelines - -1. **Critical Focus**: Only flag issues that significantly impact success -2. **Constructive Tone**: Provide solutions, not just criticism -3. **Interactive Approach**: Engage in dialogue rather than one-way evaluation -4. **Balanced Assessment**: Recognize both strengths and weaknesses -5. **Clear Decision**: Make definitive GO/NO-GO recommendation -6. **Actionable Feedback**: Ensure all suggestions are implementable - -## Instructions - -1. **Load all context documents** - Understand full project scope -2. **Analyze design thoroughly** - Review against all criteria -3. **Identify critical issues only** - Focus on most important problems -4. **Engage interactively** - Discuss findings with user -5. **Make clear decision** - Provide definitive GO/NO-GO -6. **Guide next steps** - Clear direction for proceeding - -**Remember**: This is quality assurance, not perfection seeking. The goal is ensuring the design is solid enough to proceed to implementation with acceptable risk. 
- ---- - -## Next Phase: Task Generation - -After design validation: - -**If design passes validation (GO decision):** -Run `/kiro/spec-tasks [feature-name]` to generate implementation tasks - -**Auto-approve and proceed:** -Run `/kiro/spec-tasks [feature-name] -y` to auto-approve requirements and design, then generate tasks directly diff --git a/.cursor/commands/kiro/validate-gap.md b/.cursor/commands/kiro/validate-gap.md deleted file mode 100644 index d0a6d99..0000000 --- a/.cursor/commands/kiro/validate-gap.md +++ /dev/null @@ -1,178 +0,0 @@ - -description: Analyze implementation gap between requirements and existing codebase -argument-hint: - - -# Implementation Gap Validation - -Analyze implementation requirements and existing codebase for feature: **[feature-name]** - -## Context Validation - -### Steering Context - -- Architecture context: @.kiro/steering/structure.md -- Technical constraints: @.kiro/steering/tech.md -- Product context: @.kiro/steering/product.md -- Custom steering: Load all "Always" mode custom steering files from `.kiro/steering/` - -### Existing Spec Context - -- Current spec directory: !`ls -la .kiro/specs/[feature-name]/` -- Requirements document: `.kiro/specs/[feature-name]/requirements.md` -- Spec metadata: `.kiro/specs/[feature-name]/spec.json` - -## Task: Implementation Gap Analysis - -### Prerequisites - -- Requirements document must exist: `.kiro/specs/[feature-name]/requirements.md` -- If not exist, stop with message: "Run `/kiro/spec-requirements [feature-name]` first to generate requirements" - -### Analysis Process - -#### 1. 
Current State Investigation - -**Existing Codebase Analysis**: - -- Identify files and modules related to the feature domain -- Map current architecture patterns, conventions, and tech stack usage -- Document existing services, utilities, and reusable components -- Understand current data models, APIs, and integration patterns - -**Code Structure Assessment**: - -- Document file organization, naming conventions, and architectural layers -- Extract import/export patterns and module dependency structures -- Identify existing testing patterns (file placement, frameworks, mocking approaches) -- Map API client, database, and authentication implementation approaches currently used -- Note established coding standards and development practices - -#### 2. Requirements Feasibility Analysis - -**Technical Requirements Extraction**: - -- Parse EARS format requirements from requirements.md -- Identify technical components needed for each requirement -- Extract non-functional requirements (security, performance, etc.) -- Map business logic complexity and integration points - -**Gap Identification**: - -- Missing technical capabilities vs requirements -- Unknown technologies or external dependencies -- Potential integration challenges with existing systems -- Areas requiring research or proof-of-concept work - -#### 3. Implementation Approach Options - -**Multiple Strategy Evaluation**: - -- **Option A**: Extend existing components/files - - Which existing files/modules to extend - - Compatibility with current patterns - - Code complexity and maintainability impact - -- **Option B**: Create new components (when justified) - - Clear rationale for new file creation - - Integration points with existing system - - Responsibility boundaries and interfaces - -- **Option C**: Hybrid approach - - Combination of extension and new creation - - Phased implementation strategy - - Risk mitigation approach - -#### 4. 
Technical Research Requirements - -**External Dependencies Analysis** (if any): - -- Required libraries, APIs, or services not currently used -- Version compatibility with existing dependencies -- Authentication, configuration, and setup requirements -- Rate limits, usage constraints, and cost implications - -**Knowledge Gap Assessment**: - -- Technologies unfamiliar to the team -- Complex integration patterns requiring research -- Performance or security considerations needing investigation -- Best practice research requirements - -#### 5. Implementation Complexity Assessment - -**Effort Estimation**: - -- **Small (S)**: 1-3 days, mostly using existing patterns -- **Medium (M)**: 3-7 days, some new patterns or integrations -- **Large (L)**: 1-2 weeks, significant new functionality -- **Extra Large (XL)**: 2+ weeks, complex architecture changes - -**Risk Factors**: - -- High: Unknown technologies, complex integrations, architectural changes -- Medium: New patterns, external dependencies, performance requirements -- Low: Extending existing patterns, well-understood technologies - -### Output Format - -Generate analysis in the language specified in spec.json (check `.kiro/specs/[feature-name]/spec.json` for "language" field): - -#### Analysis Summary - -- Feature scope and complexity overview -- Key technical challenges identified -- Overall implementation approach recommendation - -#### Existing Codebase Insights - -- Relevant existing components and their current responsibilities -- Established patterns and conventions to follow -- Reusable utilities and services available - -#### Implementation Strategy Options - -For each viable approach: - -- **Approach**: [Extension/New/Hybrid] -- **Rationale**: Why this approach makes sense -- **Trade-offs**: Pros and cons of this approach -- **Complexity**: [S/M/L/XL] with reasoning - -#### Technical Research Needs - -- External dependencies requiring investigation -- Unknown technologies needing research -- Integration 
patterns requiring proof-of-concept -- Performance or security considerations to investigate - -#### Recommendations for Design Phase - -- Preferred implementation approach with rationale -- Key architectural decisions that need to be made -- Areas requiring further investigation during design -- Potential risks to address in design phase - -## Instructions - -1. **Check spec.json for language** - Use the language specified in the metadata -2. **Prerequisites validation** - Ensure requirements are approved -3. **Thorough investigation** - Analyze existing codebase comprehensively -4. **Multiple options** - Present viable implementation approaches -5. **Information focus** - Provide analysis, not final decisions -6. **Research identification** - Flag areas needing investigation -7. **Design preparation** - Set up design phase for success - -**CRITICAL**: This is an analysis phase. Provide information and options, not final implementation decisions. The design phase will make strategic choices based on this analysis. - ---- - -## Next Phase: Design Generation - -After validation, proceed to design phase: - -**Generate design based on analysis:** -Run `/kiro/spec-design [feature-name]` to create technical design document - -**Auto-approve and proceed:** -Run `/kiro/spec-design [feature-name] -y` to auto-approve requirements and generate design directly diff --git a/.cursor/commands/linear.md b/.cursor/commands/linear.md deleted file mode 100644 index 15afbcc..0000000 --- a/.cursor/commands/linear.md +++ /dev/null @@ -1,41 +0,0 @@ -# Linear Command - -あなたは Task管理アプLinearのスペシャリストです。 -userの要求を理解しLinearのtask管理を以下のtool callを使用して要求に忠実に答えるのがmissionです。 - -## supported MCP commands - -1. list_comments -2. create_comment -3. list_cycles -4. get_document -5. list_documents -6. get_issue -7. list_issues -8. create_issue -9. update_issue -10. list_issue_statuses -11. get_issue_status -12. list_issue_labels -13. create_issue_label -14. list_projects -15. get_project -16. 
create_project -17. update_project -18. list_project_labels -19. list_teams -20. get_team -21. list_users -22. get_user -23. search_documentation - -## Default Input - -owner: DaikoAI -repo: driftie -project: 7e8237d8-9656-4f61-8512-9db66df4c489 -team: a0237492-5549-4368-8a64-3a8bf1a5f635 - -## Output format - -- `list_`で始まるコマンドでprojectの一覧や、issueの一覧を表示する場合は、tableを使用して表で出力してください。 diff --git a/.cursor/commands/solana-airdrop.md b/.cursor/commands/solana-airdrop.md deleted file mode 100644 index b33f66f..0000000 --- a/.cursor/commands/solana-airdrop.md +++ /dev/null @@ -1,27 +0,0 @@ -# Solanaエアドロップ - -テストネットワークからSOLを受け取ります。 - -```bash -solana airdrop 2 -``` - -## パラメータ -- `2` - 受け取るSOLの量(SOL単位) - -## 前提条件 -- devnetまたはtestnetに接続していること -- 1回のリクエストで最大2SOLまで - -## ネットワークの切り替え - -```bash -# devnet -solana config set --url https://api.devnet.solana.com - -# testnet -solana config set --url https://api.testnet.solana.com -``` - -## 関連コマンド -- `solana balance` - 残高確認 diff --git a/.cursor/commands/solana-balance.md b/.cursor/commands/solana-balance.md deleted file mode 100644 index 948f709..0000000 --- a/.cursor/commands/solana-balance.md +++ /dev/null @@ -1,17 +0,0 @@ -# Solana残高の確認 - -現在のウォレットのSOL残高を確認します。 - -```bash -solana balance -``` - -## 指定したアドレスを確認 - -```bash -solana balance
-``` - -## 関連コマンド -- `solana airdrop 2` - テストSOLの取得 -- `solana transfer ` - SOL送信 diff --git a/.cursor/commands/solana-config.md b/.cursor/commands/solana-config.md deleted file mode 100644 index a895428..0000000 --- a/.cursor/commands/solana-config.md +++ /dev/null @@ -1,35 +0,0 @@ -# Solana設定の確認・変更 - -現在のSolana CLI設定を確認・変更します。 - -## 現在の設定確認 - -```bash -solana config get -``` - -## ネットワークの設定 - -```bash -# localnet -solana config set --url http://127.0.0.1:8899 - -# devnet -solana config set --url https://api.devnet.solana.com - -# mainnet -solana config set --url https://api.mainnet.solana.com -``` - -## キーペアの設定 - -```bash -solana config set --keypair ~/.config/solana/id.json -``` - -## 設定例(開発環境) - -```bash -solana config set --url http://127.0.0.1:8899 -solana config set --keypair ~/.config/solana/id.json -``` diff --git a/.cursor/commands/solana-keygen.md b/.cursor/commands/solana-keygen.md deleted file mode 100644 index 4174edb..0000000 --- a/.cursor/commands/solana-keygen.md +++ /dev/null @@ -1,21 +0,0 @@ -# Solanaキーペアの生成 - -新しいSolanaキーペアを生成します。 - -```bash -solana-keygen new --outfile ~/.config/solana/id.json -``` - -## オプション -- `--no-passphrase` - パスフレーズなし -- `--force` - 既存ファイルを上書き -- `--silent` - 詳細出力を抑制 - -## 公開鍵の確認 - -```bash -solana-keygen pubkey ~/.config/solana/id.json -``` - -## 関連コマンド -- `solana config set --keypair ~/.config/solana/id.json` - デフォルトキーペアの設定 diff --git a/.cursor/commands/understand.md b/.cursor/commands/understand.md deleted file mode 100644 index 0033048..0000000 --- a/.cursor/commands/understand.md +++ /dev/null @@ -1,10 +0,0 @@ -# Understand Command - -あなたは Bun + TypeScript のコードベースの理解を行うスペシャリストです。 -開発者のinstructionに必要なコード、情報を整理するのがmissionです。 - -## Steps - -1. 以下は開発者が必要と判断したコンテキストです。以下の与えられたコンテキストを可能な限り読み込み理解してください。 -2. コンテキストを理解した後、開発者のinstructionに必要なコード、 情報を整理してください。 -3. 
次のinstructionにそれらのコード、情報を提供してください。 diff --git a/.cursor/rules b/.cursor/rules new file mode 120000 index 0000000..2d5c9a9 --- /dev/null +++ b/.cursor/rules @@ -0,0 +1 @@ +../.agents/rules \ No newline at end of file diff --git a/.cursor/rules/anchor.mdc b/.cursor/rules/anchor.mdc deleted file mode 100644 index 67feaff..0000000 --- a/.cursor/rules/anchor.mdc +++ /dev/null @@ -1,25 +0,0 @@ -# Anchor開発ルール - -## プログラム構造 -- `declare_id!()` マクロでプログラムIDを宣言 -- `#[program]` 属性でプログラムモジュールを定義 -- 各instructionは `pub fn` で定義 - -## アカウント構造 -- `#[derive(Accounts)]` でアカウント構造体を定義 -- 必須アカウントには `#[account(mut)]` を付与 -- PDA派生には `seeds` と `bump` を使用 - -## エラーハンドリング -- `Result<()>` を戻り値型として使用 -- エラーは `error!()` マクロで定義 -- `require!()` マクロで条件チェック - -## セキュリティ -- 所有権チェックを必ず実施 -- signer権限の検証 -- PDAの正しい派生 - -## IDL生成 -- `anchor build` でIDLを自動生成 -- TypeScriptクライアントコードも同時に生成 \ No newline at end of file diff --git a/.cursor/rules/drawio.mdc b/.cursor/rules/drawio.mdc deleted file mode 100644 index 510a74f..0000000 --- a/.cursor/rules/drawio.mdc +++ /dev/null @@ -1,246 +0,0 @@ ---- -description: -globs: *.drawio -alwaysApply: false ---- -# draw.io XML文書作成ガイド (フロー図) - -この文書では、draw.ioでフロー図をXML文書として作成する方法を解説します。GUI操作に頼らず、テキストベースでダイアグラムを定義し、draw.ioにインポートする手順をステップごとに説明します。 - -## draw.io XML形式の基本 - -draw.ioのXML文書は、mxCell要素を基本として構成されます。mxCell要素は、ノード(図形)やエッジ(線)などのダイアグラムの要素を表します。 - -### mxCell (ノード): - -- id: ノードの一意なID (省略可能、draw.ioが自動生成) -- value: ノード内に表示するテキスト -- style: ノードのスタイル (図形の種類、色、フォントなど) -- vertex: ノードが頂点であることを示す属性 (常に vertex="1") -- parent: 親となるmxCellのID (通常は 1) - -### mxCell (エッジ): - -- id: エッジの一意なID (省略可能、draw.ioが自動生成) -- style: エッジのスタイル (線の種類、色、矢印など) -- edge: エッジであることを示す属性 (常に edge="1") -- parent: 親となるmxCellのID (通常は 1) -- source: 接続元のmxCellのID -- target: 接続先のmxCellのID - -### mxGeometry: エッジの形状を定義する要素 - -## XML文書作成手順 - -1. draw.io XMLの基本構造 - -draw.io XML文書は、以下の基本構造を持ちます。 - -```xml - - - - - - - - - - - -``` - -`` の部分に、フロー図の要素を記述していきます。 - -2. 
サブグラフ (mxCell - group) - -サブグラフは、mxCell要素で group スタイルを指定することで表現します。 - -例: - -```xml - - - -``` - -- style="group": サブグラフであることを指定 -- mxGeometry: サブグラフの位置とサイズを指定 - -3. ノード (mxCell - rectangle) - -フロー図の各ステップは、mxCell要素で rectangle スタイルを指定することで表現します。 - -例: - -```xml - - - -``` - -- style="rounded=1;whiteSpace=wrap;html=1;fillColor=#f9f;strokeColor=#333;strokeWidth=2;": ノードのスタイル (角丸、テキスト折り返し、HTML描画、塗りつぶし色、線色、線幅) -- parent="subgraph1": ノードがサブグラフ subgraph1 に属することを指定 - -4. エッジ (mxCell - connector) - -プロセスの流れは、mxCell要素で connector スタイルを指定することで表現します。 - -例: - -```xml - - - -``` - -- style="edgeStyle=orthogonalEdgeStyle;rounded=0;orthogonalLoop=1;jettySize=auto;html=1;": エッジのスタイル (直角線、角丸なし、直角ループ、自動調整、HTML描画) -- edge="1": エッジであることを指定 -- source="A1": 接続元ノードのID -- target="A2": 接続先ノードのID - -5. XML文書の作成 - -上記の手順を参考に、フロー図の各要素をmxCell要素としてXML文書に記述していきます。サブグラフ、ノード、エッジの順に記述すると、XML文書が構造化されて分かりやすくなります。 - -## draw.ioへのインポート - -draw.ioでXML文書からフロー図を作成するには、以下の手順に従います。 - -1. draw.ioを開く: Webブラウザで https://app.diagrams.net/ にアクセスし、draw.ioエディタを開きます。 -2. 「ファイル」メニューを開く: draw.ioエディタのメニューバーから「ファイル」をクリックします。 -3. 「開く」 > 「XMLをインポート」を選択: 「ファイル」メニューから「開く」を選択し、さらに「XMLをインポート」をクリックします。 -4. XMLコードを貼り付け: テキストエリアが表示されるので、作成したXML文書のコード全体をコピーして貼り付けます。 -5. 「インポート」をクリック: XMLコードの貼り付け後、「インポート」ボタンをクリックします。 - -draw.ioがXML文書を解析し、フロー図がエディタ上に表示されます。 - -## XML文書の構造 - -XML文書は、フロー図の要素をmxCell要素で記述します。主要な要素は以下の通りです。 - -- ``: draw.io XMLファイルのルート要素です。 - - host: ホスト名 (通常は app.diagrams.net) - - modified: 最終更新日時 - - agent: エージェント情報 (作成者など) - - etag: ETag (バージョン管理用) -- ``: ダイアグラムを定義する要素です。 - - id: ダイアグラムID - - name: ダイアグラム名 -- ``: ダイアグラムのモデルを定義する要素です。 - - dx, dy: ダイアグラムの配置位置 - - grid, gridSize, guides, tooltips, connect, arrows, fold, page, pageScale, pageWidth, pageHeight, math, shadow: 描画設定 -- ``: mxCell要素のルート要素です。 - - ``: ルートmxCell (固定) - - ``: ドキュメントmxCell (固定) -- `` (サブグラフ): サブグラフ (グループ) を定義します。 - - id: サブグラフID (例: subgraph1) - - value: サブグラフ名 (例: 1. 
プロセスグループ名) - - style="group": サブグラフスタイル - - vertex="1": 頂点属性 - - parent="1": 親mxCell (ドキュメントmxCell) -- ``: 位置とサイズ -- `` (ノード): フロー図のノード (ステップ) を定義します。 - - id: ノードID (例: A1) - - value: ノードテキスト (例: プロセスステップ名) - - style: ノードスタイル (形状、色など) - - vertex="1": 頂点属性 - - parent: 親mxCell (サブグラフまたはドキュメントmxCell) -- ``: 位置とサイズ -- `` (エッジ): フロー図のエッジ (矢印) を定義します。 - - id: エッジID (例: edge_A1_A2) - - style: エッジスタイル (線の種類、矢印の種類など) - - edge="1": エッジ属性 - - parent="1": 親mxCell (ドキュメントmxCell) - - source: 接続元ノードID - - target: 接続先ノードID -- ``: 形状 - -## XML文書の編集とレイアウト調整 - -XML文書を編集することで、フロー図をカスタマイズできます。特に、図形の配置に関する問題 (線の重なり、図形の貫通など) を解決するためには、以下の点に注意してXML文書を編集してください。 - -### ノードの配置 - -- ノード同士が重ならないように、 要素の x 属性と y 属性を調整し、適切な間隔を確保してください。 -- サブグラフ () を活用して、関連するノードをグループ化し、レイアウトを整理すると効果的です。サブグラフの 要素を調整することで、グループ全体の配置を制御できます。 -- フロー図全体のバランスを考え、各サブグラフ、ノードの の x, y 属性を調整してください。特に、階層的なフロー図の場合は、上位階層のサブグラフの配置を先に決定し、その内部のノード配置を調整すると、レイアウトがまとまりやすくなります。 - -### エッジのルーティング - -- エッジがノードを貫通したり、不必要に交差したりしないように、エッジのスタイル () を調整してください。 -- edgeStyle=orthogonalEdgeStyle: 直角線を使用するスタイルで、線の重なりを減らし、フロー図を明確にします。 -- rounded=0: 角を丸めない設定は、直角的なフロー図に適しています。 -- orthogonalLoop=1: ループ状エッジを直角に描画。自己ループノードに有効です。 -- jettySize=auto: ノードとエッジ接続点のサイズを自動調整。 -- エッジ同士が重ならないように、可能であればエッジの mxGeometry の mxPoint 要素を調整し、経路を変更してみてください。 - -### mxGeometryの微調整 - -- 各mxCell要素の 要素を細かく調整し、ノードの位置、サイズ、エッジの形状などを精密に制御します。 -- 特に、ノードの 要素の x, y, width, height 属性をフロー図全体のバランスを見て調整。 -- エッジの 要素内の mxPoint 要素で、エッジの形状を細かく調整可能です。必要に応じて追加・編集してください。 - -### レイアウトアルゴリズムの検討 (インポート後) - -- XMLインポート後、draw.ioの自動レイアウト機能も活用できます。「レイアウト」メニューから、各種アルゴリズム (例: 「階層型レイアウト」) を試し、自動調整を検討してください。 -- XML文書でレイアウトを完全に制御したい場合は、自動レイアウトに頼らず、 要素を手動調整します。 - -### グリッドとガイドの意識 - -- draw.ioエディタのグリッド線とガイド線 ([表示] > [グリッド]、[表示] > [ガイド]) を利用すると、ノードやエッジを整列しやすくなります。 -- XML文書作成時もグリッドとガイドを意識し、座標を揃えると、整ったレイアウトのフロー図になります。 - -## エッジスタイルの調整例 - -エッジのスタイル調整で、線の重なりを軽減できます。直角線スタイル (orthogonalEdgeStyle) を適用するには、エッジの 要素の style 属性に edgeStyle=orthogonalEdgeStyle を追加します。 - -```xml - - - -``` - -上記の例では、rounded=0 
(角を丸めない)、orthogonalLoop=1 (直角ループ)、jettySize=auto (接続点サイズ自動調整) も追加。これらのスタイル組み合わせで、より明確なフロー図作成が可能です。 - -## 編集後のインポート - -XML文書編集後、再度draw.ioへインポートし、変更を反映します。「ファイル」 > 「開く」 > 「XMLをインポート」から、編集後のXMLコードをインポートしてください。 - -## 高度なスタイリング - -より高度なスタイリングを行いたい場合は、以下のような属性を活用できます: - -### ノードのスタイル属性 - -- fillColor: 塗りつぶし色(例: #f9f7ed, #ff0000) -- strokeColor: 線の色 -- strokeWidth: 線の太さ -- fontColor: テキストの色 -- fontSize: フォントサイズ -- fontStyle: フォントスタイル(例: 0=通常, 1=太字, 2=斜体, 3=太字+斜体) -- align: テキスト水平位置(例: left, center, right) -- verticalAlign: テキスト垂直位置(例: top, middle, bottom) -- dashed: 破線(例: 1=有効) -- dashPattern: 破線パターン(例: 3 3) -- shadow: 影(例: 1=有効) - -### エッジのスタイル属性 - -- strokeColor: 線の色 -- strokeWidth: 線の太さ -- dashed: 破線(例: 1=有効) -- dashPattern: 破線パターン(例: 3 3) -- startArrow: 始点の矢印(例: none, classic, open, diamond) -- endArrow: 終点の矢印(例: none, classic, open, diamond) -- startSize: 始点矢印のサイズ -- endSize: 終点矢印のサイズ -- curved: 曲線(例: 1=有効) - -これらの属性を組み合わせることで、より表現力豊かなフロー図を作成できます。 - -## まとめ - -draw.ioのXML文書を使用すると、GUI操作なしでフロー図を定義できます。この方法は、特に複雑なフロー図や、同様のパターンを持つ複数のフロー図を効率的に作成する場合に役立ちます。また、バージョン管理システムとの統合も容易になります。 - -このガイドを参考に、XML文書を編集し、draw.ioで独自のフロー図を作成してみてください。 diff --git a/.cursor/rules/general.mdc b/.cursor/rules/general.mdc deleted file mode 100644 index 9a39a69..0000000 --- a/.cursor/rules/general.mdc +++ /dev/null @@ -1,99 +0,0 @@ ---- -alwaysApply: true ---- - -- folder name, file name, class name, function name, などを名前空間として認識し、適切な命名を行うようにして下さい - -- lib/pureには純粋関数として切り出すべき関数、testabiltyが重要であったり、validateionが複雑、domain logicに直結する計算などを行うmoduleにとどめて下さい - -- all comments in source code should be in english - -## 型安全性とテスタビリティ - -### 外部依存の抽象化 - -外部APIやサービスを使用する際は、依存性注入パターンを使用してテスタビリティを確保して下さい。 - -```typescript -// ✅ 良い例: 依存性注入 -export function createTavilyClient({ - apiKey, - tavilyClient, - mockClient, // テスト用のモック注入ポイント -}: CreateTavilyClientDeps = {}): TavilyClient { - if (mockClient) { - return mockClient; - } - // 実装... 
-} - -// テストで使用 -const client = createTavilyClient({ mockClient: mockTavilyClient }); -``` - -### グローバル状態の回避 - -環境変数などのグローバル状態に依存する場合、明示的なパラメータで上書きできるようにして下さい。 - -```typescript -// ✅ 良い例: パラメータで上書き可能 -async function searchToken(input: TavilyQueryInput): Promise> { - const key = apiKey ?? env.TAVILY_API_KEY; // パラメータ優先 - if (!key) { - return err({ type: "ConfigurationError", ... }); - } - // 実装... -} -``` - -### 型定義の変更管理 - -スキーマやインターフェースを変更する際は、以下を必ず実行して下さい: - -1. 関連するテストコードも同時に更新 -2. `bun run typecheck` で型エラーを確認 -3. `bun run test` ですべてのテストがパスすることを確認 -4. PR レビューで型定義とテストの整合性を確認 - -### 依存ライブラリの更新 - -依存ライブラリを更新する際は、以下の手順を踏んで下さい: - -```bash -# 1. 更新 -bun update - -# 2. 型チェック(型定義の変更を検出) -bun run typecheck - -# 3. テスト(互換性を確認) -bun run test - -# 4. ビルド(本番環境での動作を確認) -bun run build -``` - -型エラーが発生した場合は、テストのモックデータを新しい型定義に合わせて更新して下さい。 - -### 型アサーションの使用ガイドライン - -- `as any` の使用は最小限に抑える -- 必要な場合は `@ts-expect-error` または `@ts-ignore` にコメントを付けて理由を明記 -- テスト用の型互換性問題は `@ts-expect-error` で対応可能 -- `as unknown as TargetType` のパターンを使用して安全に変換 - -```typescript -// ✅ 良い例: 理由を明記 -// @ts-expect-error - Cloudflare Workers types mismatch between test and runtime -const client = createWorkersAiClient({ aiBinding: mockAiBinding }); - -// ✅ 良い例: unknown を経由 -const calls = mockGenerate.mock.calls as unknown as Array<[ImageRequest, unknown?]>; -``` - -Please refer to the following steering documents for more information: - -- @.kiro/steering/structure.md -- @.kiro/steering/product.md -- @.kiro/steering/tech.md -- @docs/test-error-analysis-2025-11.md diff --git a/.cursor/rules/git.mdc b/.cursor/rules/git.mdc deleted file mode 100644 index 302df1f..0000000 --- a/.cursor/rules/git.mdc +++ /dev/null @@ -1,180 +0,0 @@ ---- -alwaysApply: false ---- -ALways use english for the content of commit and Pull Request. - -## Git Workflow - -This document explains best practices for creating commits and pull requests. - -### Creating Commits - -Follow these steps when creating commits: - -1. 
Check changes - - ```bash - # Check untracked files and changes - git status - - # Check detailed changes - git --no-pager diff - - # Check commit message style - git --no-pager log - ``` - -2. Analyze changes - - - Identify changed or added files - - Understand the nature of changes (new feature, bug fix, refactoring, etc.) - - Evaluate impact on the project - - Check for sensitive information - -3. Create commit message - - - Focus on "why" - - Use clear and concise language - - Accurately reflect the purpose of the change - - Avoid generic expressions - -4. Execute commit - - ```bash - # Stage only related files - git add - - # Create commit message (using HEREDOC) - git commit -m "$(cat <<'EOF' - feat: Introduce Result type for user authentication - - - Make error handling more type-safe - - Force explicit handling of error cases - - Improve tests - - 🤖 Generated with ${K4} - Co-Authored-By: Claude noreply@anthropic.com - EOF - )" - ``` - -### Creating Pull Requests - -Follow these steps when creating pull requests: - -1. Check branch status - - ```bash - # Check uncommitted changes - git status - - # Check changes - git --no-pager diff - - # Check differences from main - git --no-pager diff main...HEAD - - # Check commit history - git --no-pager log - ``` - -2. Analyze changes - - - Review all commits since branching from main - - Understand the nature and purpose of changes - - Evaluate impact on the project - - Check for sensitive information - -3. Create pull request - - ```bash - # Create pull request (using HEREDOC) - gh pr create --title "feat: Improve error handling with Result type" --body "$(cat <<'EOF' - ## Overview - - Introduced Result type to make error handling more type-safe. - - ## Changes - - - Introduced Result type using neverthrow - - Explicit type definitions for error cases - - Added test cases - - ## Review Points - - - Is the Result type usage appropriate? - - Are error cases comprehensive? - - Are tests sufficient? 
- EOF - )" - ``` - -### Important Notes - -1. Commit related - - - Use `git commit -am` when possible - - Don't include unrelated files - - Don't create empty commits - - Don't change git settings - -2. Pull request related - - - Create new branch as needed - - Commit changes appropriately - - Use `-u` flag when pushing to remote - - Analyze all changes - -3. Operations to avoid - - Using interactive git commands (with -i flag) - - Pushing directly to remote repository - - Changing git settings - -### Commit Message Examples - -```bash -# Adding new features -feat: Introduce Result type for error handling - -# Improving existing features -update: Enhance cache performance - -# Bug fixes -fix: Fix expired authentication token handling - -# Refactoring -refactor: Abstract external dependencies using Adapter pattern - -# Adding tests -test: Add tests for Result type error cases - -# Updating documentation -docs: Add error handling best practices -``` - -### Pull Request Example - -```markdown -## Overview - -Introduced Result type to make TypeScript error handling more type-safe. - -## Changes - -- Introduced neverthrow library -- Used Result type in API client -- Defined error case types -- Added test cases - -## Technical Details - -- Replaced existing exception handling with Result type -- Standardized error types -- Improved mock implementations - -## Review Points - -- Is the Result type usage appropriate? -- Are error cases comprehensive? -- Are tests sufficient? -``` diff --git a/.cursor/rules/github.mdc b/.cursor/rules/github.mdc deleted file mode 100644 index 8d7fbb2..0000000 --- a/.cursor/rules/github.mdc +++ /dev/null @@ -1,30 +0,0 @@ ---- -alwaysApply: false ---- -## IMPORTANT RULES - -- PR のタイトルは `[ FEAT / FIX ]`などかっこでその task のカテゴリを示して英語でわかりやすい title をつけて下さい -- Issue もわかりやすい format で作成し、label や assignee なども考慮し適切に作成して下さい -- markdown の図表や`
`などを適切に使用し情報を整理して下さい -- Github MCP server が動かない場合は普通に terminal で`gh`で始まる Github CLI コマンドを使って同様の task を実行して下さい -- issue 作成時は、**必ず**一時ファイルを作成**せず**に`echo "# 改行コメントを入れるサンプル\nこのPRではこんなことをしました\n\n$(gll)" | gh pr create -f -b master -a swfz --title="sample" --body-file=-`の様に標準入力の形式でワンライナーコマンドで作成して下さい! - -```bash -# 標準入力を使用したワンライナーでのissue作成 - echo "## 概要\n課題の説明...\n\n## 詳細\n- 項目1\n- 項目2" | gh issue create --title "[CATEGORY] タイトル" --label "label1,label2" --body-file - - -# 複雑な内容の場合はヒアドキュメントを使用 - gh issue create --title "[RESEARCH] リサーチ内容" --label "enhancement" --body-file - << EOF - ## 概要 - 詳細な説明... - - ## 課題 - - 項目1 - - 項目2 - - ## 対応方針 - 提案内容... - EOF -``` - -- また、issue 作成時は`
`などを使っって関連のあるファイルを list-up し、下の方に書いておいて下さい diff --git a/.cursor/rules/rust.mdc b/.cursor/rules/rust.mdc deleted file mode 100644 index 2997d6d..0000000 --- a/.cursor/rules/rust.mdc +++ /dev/null @@ -1,27 +0,0 @@ -# Rust開発ルール(Solana/Anchor向け) - -## 安全性 -- `unwrap()` の使用を避け、適切なエラーハンドリング -- オーバーフロー対策: `checked_add()`, `checked_sub()` 等 -- 所有権と借用のルールを理解 - -## パフォーマンス -- ゼロコスト抽象化の活用 -- 不要なヒープアロケーションの回避 -- 効率的なデータ構造の選択 - -## Anchor固有の制約 -- スタックサイズ制限: 4KB -- ヒープサイズ制限: 32KB (プログラムあたり) -- 再帰関数の使用禁止 -- 浮動小数点演算の制限 - -## テスト -- 単体テストを `#[cfg(test)]` で記述 -- 統合テストを `tests/` ディレクトリに配置 -- エッジケースの網羅 - -## コード品質 -- `clippy` でコード品質チェック -- `rustfmt` でコードフォーマット -- 意味のある変数名と関数名 \ No newline at end of file diff --git a/.cursor/rules/solana.mdc b/.cursor/rules/solana.mdc deleted file mode 100644 index 6e7c5b3..0000000 --- a/.cursor/rules/solana.mdc +++ /dev/null @@ -1,32 +0,0 @@ -# Solana開発ルール - -## アカウントモデル -- アカウントはデータを格納するストレージ -- プログラムは実行ロジックのみ -- Cross-Program Invocation (CPI)でプログラム間連携 - -## トランザクション -- 最大サイズ: 1232 bytes -- Compute Budget: 200,000 CU (デフォルト) -- 署名者数は最大16 - -## PDA (Program Derived Address) -- プログラム所有のアドレス -- `find_program_address()` で派生 -- `create_program_address()` で検証 - -## Rent -- アカウントはrentを支払う必要あり -- rent-exempt: 2年間分のrentを一括前払い -- rent-exempt minimum: `getMinimumBalanceForRentExemption()` - -## ネットワーク -- mainnet-beta: 本番環境 -- devnet: 開発テスト環境 -- testnet: バリデーター開発環境 -- localnet: ローカル開発環境 - -## トークン標準 -- SPL Token: トークン標準 -- Associated Token Account: ユーザーごとのトークンアカウント -- Token Metadata: NFT/トークンのメタデータ \ No newline at end of file diff --git a/.cursor/rules/test.mdc b/.cursor/rules/test.mdc deleted file mode 100644 index 61ea496..0000000 --- a/.cursor/rules/test.mdc +++ /dev/null @@ -1,179 +0,0 @@ ---- -globs: tests/**/* -alwaysApply: false ---- - -# Test Rules - -## 基本方針 - -- testを通すことを目的としないで下さい。anyやunknownによってtestがpassしてもproductの品質が担保できていなかったり、specを満たしていなければ意味がありません。 -- テストの独立性を確保し、グローバル状態への依存を避けて下さい。 -- 型安全なモック実装を心がけ、必要な場合のみ型アサーションを使用して下さい。 - -## Test 
Implementation Flow - -1. 境界値などを考慮しながら必要なビジネス要件をすべて満たす様にtest caseを過不足なく書き出す。 -2. 必ず`src/`以下の実装をimportしてtestコードを実行する。 -3. 外部依存は明示的にモックし、テスト実行順序に依存しないようにする。 - -## Bun Test Mocking Best Practices - -### 型安全なモック実装 - -参考: [Bun Test Mocks](https://bun.com/docs/test/mocks), [Mock Functions Guide](https://bun.com/docs/guides/test/mock-functions) - -#### ✅ 良い例: 完全な型定義でモックを作成 - -```typescript -import { mock } from "bun:test"; - -interface UserService { - getUser(id: string): Promise; - createUser(data: CreateUserData): Promise; -} - -// 型安全なモック -const mockUserService: UserService = { - getUser: mock(async (id: string) => ({ id, name: "Test User" })), - createUser: mock(async (data: CreateUserData) => ({ id: "new-id", ...data })), -}; -``` - -#### ✅ 良い例: モックの呼び出し履歴を型安全にアクセス - -```typescript -import { mock } from "bun:test"; - -const mockGenerate = mock((request: ImageRequest) => - Promise.resolve(ok({ imageBuffer: new ArrayBuffer(8) })) -); - -// 型アサーションで安全にアクセス -const calls = mockGenerate.mock.calls as unknown as Array<[ImageRequest, unknown?]>; -const request = calls[0]![0]; - -// または安全なチェック -const call = mockGenerate.mock.calls[0]; -if (call && call.length > 0 && call[0]) { - const request = call[0] as ImageRequest; - expect(request.referenceImageUrl).toBe("https://example.com/image.png"); -} -``` - -#### ❌ 悪い例: 型アサーションなしの直接アクセス - -```typescript -// TypeScript エラーになる -const request = mockGenerate.mock.calls[0][0]; // Type error! 
-``` - -### テストの独立性 - -#### ✅ 良い例: 明示的なパラメータ渡し - -```typescript -it("should fail when API key is not set", async () => { - // 明示的に空文字列を渡してテスト - const client = createTavilyClient({ apiKey: "" }); - const result = await client.searchToken(input); - expect(result.isErr()).toBe(true); -}); -``` - -#### ❌ 悪い例: グローバル状態への依存 - -```typescript -it("should fail when API key is not set", async () => { - // グローバル状態を変更(他のテストに影響する可能性) - delete process.env.TAVILY_API_KEY; - const client = createTavilyClient(); - // env.ts のモジュールキャッシュにより期待通り動作しない -}); -``` - -### 外部ライブラリの型定義変更への対応 - -#### 依存ライブラリ更新時のチェックリスト - -```bash -# 1. 依存関係を更新 -bun update - -# 2. 型エラーを即座に検出 -bun run typecheck - -# 3. テストの互換性確認 -bun run test - -# 4. すべて成功したらコミット -git add package.json bun.lockb -git commit -m "chore: update dependencies" -``` - -#### モックデータの型定義を最新に保つ - -```typescript -// CoinGecko API の型が変更された場合 -const mockResponse: CoinsMarketsResponse = [ - { - id: "bitcoin", - symbol: "btc", - name: "Bitcoin", - // 型定義の変更に追従 - max_supply: null, // number → number | null - ath_date: new Date("2021-11-10T14:24:11.849Z"), // string → Date - atl_date: new Date("2013-07-06T00:00:00.000Z"), - last_updated: new Date("2025-11-21T00:00:00.000Z"), - }, -]; -``` - -### 型アサーションの使用ガイドライン - -#### @ts-expect-error の適切な使用 - -```typescript -// ✅ 良い例: 理由を明記 -// @ts-expect-error - Cloudflare Workers types mismatch between test and runtime -const client = createWorkersAiClient({ aiBinding: mockAiBinding }); - -// ✅ 良い例: テスト用の型互換性問題 -// @ts-expect-error - BunSQLiteDatabase type mismatch but works at runtime -repository = new MarketSnapshotsRepository(db as any); -``` - -#### as any の使用は最小限に - -```typescript -// ❌ 避ける: 理由なく as any を使用 -const result = someFunction() as any; - -// ✅ 良い例: 具体的な型を指定 -const result = someFunction() as SpecificType; - -// ✅ より良い例: unknown を経由して安全に変換 -const result = someFunction() as unknown as SpecificType; -``` - -### モックのクリーンアップ - -```typescript -import { beforeEach, afterEach, mock } from "bun:test"; - 
-beforeEach(() => { - // テストごとにモックをリセット -}); - -afterEach(() => { - // 念のためクリーンアップ - mock.restore(); -}); -``` - -## 参考リンク - -- [Bun Test Mocks Documentation](https://bun.com/docs/test/mocks) -- [Bun Mock Functions Guide](https://bun.com/docs/guides/test/mock-functions) -- [Test Error Analysis Report](../docs/test-error-analysis-2025-11.md) -- [The Art of Mocking in Backend Testing](https://medium.com/@iqzaardiansyah/the-art-of-mocking-in-backend-testing-7af23b0d5881) \ No newline at end of file diff --git a/.cursor/skills b/.cursor/skills new file mode 120000 index 0000000..2b7a412 --- /dev/null +++ b/.cursor/skills @@ -0,0 +1 @@ +../.agents/skills \ No newline at end of file diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml new file mode 100644 index 0000000..70ab23e --- /dev/null +++ b/.github/workflows/ci.yml @@ -0,0 +1,43 @@ +name: CI + +on: + push: + branches: + - main + pull_request: + +jobs: + checks: + runs-on: ubuntu-latest + + steps: + - name: Checkout + uses: actions/checkout@v4 + + - name: Setup Bun + uses: oven-sh/setup-bun@v2 + with: + bun-version: 1.3.10 + + - name: Setup Rust + uses: dtolnay/rust-toolchain@stable + with: + components: rustfmt, clippy + + - name: Cache Rust + uses: Swatinem/rust-cache@v2 + + - name: Install JavaScript dependencies + run: bun install --frozen-lockfile + + - name: Check TypeScript formatting + run: bun run format:ts:check + + - name: Check Rust formatting + run: cargo fmt --all --check + + - name: Run Clippy + run: cargo clippy --workspace --all-targets --all-features -- -D warnings + + - name: Run tests + run: cargo test --workspace diff --git a/.prettierignore b/.prettierignore new file mode 100644 index 0000000..bb5909c --- /dev/null +++ b/.prettierignore @@ -0,0 +1,15 @@ +target +node_modules +.anchor +test-ledger +dist +coverage +bun.lock +Cargo.lock +.vscode +.cursor +.claude +.codex +AGENTS.md +CLAUDE.md +*.md diff --git a/.vscode/settings.json b/.vscode/settings.json index 23ef56c..9872962 
100644 --- a/.vscode/settings.json +++ b/.vscode/settings.json @@ -1,6 +1,5 @@ { // Rust settings - "rust-analyzer.checkOnSave.command": "clippy", "rust-analyzer.check.command": "clippy", "rust-analyzer.cargo.features": "all", "rust-analyzer.procMacro.enable": true, diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 0000000..5b76471 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,75 @@ +# Agent Guidelines + +- Please do not add eslint-disable comments; fix the implementation instead +- Please use GitHub Flavored Markdown + +## Steering (Project Context) + +Load `docs/` as project memory at session start or when context is needed. + +- **Path**: `docs/` +- **Default files**: `PRODUCT.md`, `TECH.md`, `STRUCTURE.md` +- **Task memory**: `.agents/memory/todo.md`, `.agents/memory/lessons.md` +- **Other docs**: Add or manage as needed (domain-specific .md) + +Use steering to align decisions with product goals, tech stack, and structure. + +--- + +## Workflow Orchestration + +### 1. Plan Mode Default + +- Enter plan mode for ANY non-trivial task (3+ steps or architectural decisions) +- If something goes sideways, STOP and re-plan immediately - don't keep pushing +- Use plan mode for verification steps, not just building +- Write detailed specs upfront to reduce ambiguity + +### 2. Subagent Strategy + +- Use subagents liberally to keep the main context window clean +- Offload research, exploration, and parallel analysis to subagents +- For complex problems, throw more compute at it via subagents +- One task per subagent for focused execution + +### 3. Self-Improvement Loop + +- After ANY correction from the user: update `.agents/memory/lessons.md` with the pattern +- Write rules for yourself that prevent the same mistake +- Ruthlessly iterate on these lessons until the mistake rate drops +- Review lessons at session start for relevant project context + +### 4. 
Verification Before Done + +- Never mark a task complete without proving it works +- Diff behavior between main and your changes when relevant +- Ask yourself: "Would a staff engineer approve this?" +- Run tests, check logs, demonstrate correctness + +### 5. Demand Elegance (Balanced) + +- For non-trivial changes: pause and ask "is there a more elegant way?" +- If a fix feels hacky: "Knowing everything I know now, implement the elegant solution" +- Skip this for simple, obvious fixes - don't over-engineer +- Challenge your own work before presenting it + +### 6. Autonomous Bug Fixing + +- When given a bug report: just fix it. Don't ask for hand-holding +- Point at logs, errors, failing tests - then resolve them +- Zero context switching required from the user +- Go fix failing CI tests without being told how + +## Task Management + +1. **Plan First**: Write plan to `.agents/memory/todo.md` with checkable items +2. **Verify Plan**: Check in before starting implementation +3. **Track Progress**: Mark items complete as you go +4. **Explain Changes**: High-level summary at each step +5. **Document Results**: Add review section to `.agents/memory/todo.md` +6. **Capture Lessons**: Update `.agents/memory/lessons.md` after corrections + +## Core Principles +- **Simplicity First**: Make every change as simple as possible and touch minimal code. YAGNI, KISS, DRY. No backward-compat shims or fallback paths unless they come free without adding cyclomatic complexity. +- **No Laziness**: Find root causes. No temporary fixes. Senior developer standards. +- **Minimal Impact**: Changes should only touch what's necessary. Avoid introducing bugs. 
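The Task Management steps in the AGENTS.md hunk above assume a checkable-item todo file. A minimal hypothetical sketch of what `.agents/memory/todo.md` could look like (the headings and items are illustrative only; no particular layout is mandated by this change):

```markdown
# TODO: fix failing clippy warning in doom_nft_program

## Plan

- [x] Reproduce the warning with `cargo clippy --workspace --all-targets --all-features -- -D warnings`
- [ ] Identify the root cause (no temporary fixes)
- [ ] Apply the minimal change, then rerun clippy, fmt, and tests
- [ ] Check in before starting anything non-trivial

## Review

- Summary of what changed, why, and how it was verified (added on completion)
```

Keeping the plan and the review section in the same file lets a later session reload the task's full history as context.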
diff --git a/Anchor.toml b/Anchor.toml index 4cf8541..e11f494 100644 --- a/Anchor.toml +++ b/Anchor.toml @@ -1,5 +1,5 @@ [toolchain] -package_manager = "yarn" +package_manager = "bun" [features] resolution = true diff --git a/CLAUDE.md b/CLAUDE.md new file mode 120000 index 0000000..47dc3e3 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1 @@ +AGENTS.md \ No newline at end of file diff --git a/Cargo.lock b/Cargo.lock index a420bc0..e754a53 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -1416,6 +1416,7 @@ version = "0.1.0" dependencies = [ "anchor-lang", "anchor-spl", + "solana-program", ] [[package]] diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..2f2c8c3 --- /dev/null +++ b/Makefile @@ -0,0 +1,46 @@ +.PHONY: install build test lint lint-fix format format-fix help + +BUN ?= bun +CARGO ?= cargo +MAKEFLAGS += --no-builtin-rules + +install: + $(BUN) install + $(BUN) run prepare + +build: + $(CARGO) build --workspace + +test: + $(CARGO) test --workspace + +lint: + $(CARGO) clippy --workspace --all-targets --all-features -- -D warnings + +lint-fix: + $(CARGO) clippy --fix --allow-dirty --allow-staged --workspace --all-targets --all-features -- -D warnings + +format: + $(CARGO) fmt --all --check + $(BUN) run format:ts:check + +format-fix: + $(CARGO) fmt --all + $(BUN) run format:ts + +help: + @printf '%s\n' \ + 'make install' \ + 'make build' \ + 'make test' \ + 'make lint' \ + 'make lint:fix' \ + 'make format' \ + 'make format:fix' + +%: + @case "$@" in \ + lint:fix) $(MAKE) lint-fix ;; \ + format:fix) $(MAKE) format-fix ;; \ + *) printf 'Unknown target: %s\n' "$@"; exit 2 ;; \ + esac diff --git a/bun.lock b/bun.lock index 5ac70c0..43ae8e3 100644 --- a/bun.lock +++ b/bun.lock @@ -11,8 +11,9 @@ "@types/chai": "^4.3.0", "@types/mocha": "^9.0.0", "chai": "^4.3.4", + "lefthook": "2.1.3", "mocha": "^9.0.3", - "prettier": "^2.6.2", + "prettier": "3.8.1", "ts-mocha": "^10.0.0", "typescript": "^5.7.3", }, @@ -211,6 +212,28 @@ "json5": ["json5@1.0.2", "", { 
"dependencies": { "minimist": "^1.2.0" }, "bin": { "json5": "lib/cli.js" } }, "sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA=="], + "lefthook": ["lefthook@2.1.3", "", { "optionalDependencies": { "lefthook-darwin-arm64": "2.1.3", "lefthook-darwin-x64": "2.1.3", "lefthook-freebsd-arm64": "2.1.3", "lefthook-freebsd-x64": "2.1.3", "lefthook-linux-arm64": "2.1.3", "lefthook-linux-x64": "2.1.3", "lefthook-openbsd-arm64": "2.1.3", "lefthook-openbsd-x64": "2.1.3", "lefthook-windows-arm64": "2.1.3", "lefthook-windows-x64": "2.1.3" }, "bin": { "lefthook": "bin/index.js" } }, "sha512-2W8PP/EGCvyS/x+Xza0Lgvn/EM3FKnr6m6xkfzpl6RKHl8TwPvs9iYZFQL99CnWTTvO+1mtQvIxGE/bD05038Q=="], + + "lefthook-darwin-arm64": ["lefthook-darwin-arm64@2.1.3", "", { "os": "darwin", "cpu": "arm64" }, "sha512-VMSQK5ZUh66mKrEpHt5U81BxOg5xAXLoLZIK6e++4uc28tj8zGBqV9+tZqSRElXXzlnHbfdDVCMaKlTuqUy0Rg=="], + + "lefthook-darwin-x64": ["lefthook-darwin-x64@2.1.3", "", { "os": "darwin", "cpu": "x64" }, "sha512-4QhepF4cf+fa7sDow29IEuCfm/6LuV+oVyQGpnr5it1DEZIEEoa6vdH/x4tutYhAg/HH7I2jHq6FGz96HRiJEQ=="], + + "lefthook-freebsd-arm64": ["lefthook-freebsd-arm64@2.1.3", "", { "os": "freebsd", "cpu": "arm64" }, "sha512-kysx/9pjifOgcTZOj1bR0i74FAbMv3BDfrpZDKniBOo4Dp0hXhyOtUmRn4nWKL0bN+cqc4ZePAq4Qdm4fxWafA=="], + + "lefthook-freebsd-x64": ["lefthook-freebsd-x64@2.1.3", "", { "os": "freebsd", "cpu": "x64" }, "sha512-TLuPHQNg6iihShchrh5DrHvoCZO8FajZBMAEwLIKWlm6bkCcXbYNxy4dBaVK8lzHtS/Kv1bnH0D3BcK65iZFVQ=="], + + "lefthook-linux-arm64": ["lefthook-linux-arm64@2.1.3", "", { "os": "linux", "cpu": "arm64" }, "sha512-e5x4pq1aZAXc0C642V4HaUoKtcHVmGW1HBIDNfWUhtsThBKjhZBXPspecaAHIRA/8VtsXS3RnJ4VhQpgfrCbww=="], + + "lefthook-linux-x64": ["lefthook-linux-x64@2.1.3", "", { "os": "linux", "cpu": "x64" }, "sha512-yeVAiV5hoE6Qq8dQDB4XC14x4N9mhn+FetxzqDu5LVci0/sOPqyPq2b0YUtNwJ1ZUKawTz4I/oqnUsHkQrGH0w=="], + + "lefthook-openbsd-arm64": ["lefthook-openbsd-arm64@2.1.3", "", { "os": "openbsd", 
"cpu": "arm64" }, "sha512-8QVvRxIosV6NL2XrbifOPGVhMFE43h02BUNEHYhZhyad7BredfAakg9dA9J/NO0I3eMdvCYU50ubFyDGIqUJog=="], + + "lefthook-openbsd-x64": ["lefthook-openbsd-x64@2.1.3", "", { "os": "openbsd", "cpu": "x64" }, "sha512-YTS9qeW9PzzKg9Rk55mQprLIl1OdAIIjeOH8DF+MPWoAPkRqeUyq8Q2Bdlf3+Swy+kJOjoiU1pKvpjjc8upv9Q=="], + + "lefthook-windows-arm64": ["lefthook-windows-arm64@2.1.3", "", { "os": "win32", "cpu": "arm64" }, "sha512-Nlp80pWyF67GmxgM5NQmL7JTTccbJAvCNtS5QwHmKq3pJ9Xi0UegP9pGME520n06Rhp+gX7H4boXhm2D5hAghg=="], + + "lefthook-windows-x64": ["lefthook-windows-x64@2.1.3", "", { "os": "win32", "cpu": "x64" }, "sha512-KByBhvqgUNhjO/03Mr0y66D9B1ZnII7AB0x17cumwHMOYoDaPJh/AlgmEduqUpatqli3lnFzWD0DUkAY6pq/SA=="], + "locate-path": ["locate-path@6.0.0", "", { "dependencies": { "p-locate": "^5.0.0" } }, "sha512-iPZK6eYjbxRu3uB4/WZ3EsEIMJFMqAoopl3R+zuq0UjcAm/MO6KCweDgPfP3elTztoKP3KtnVHxTn2NHBSDVUw=="], "log-symbols": ["log-symbols@4.1.0", "", { "dependencies": { "chalk": "^4.1.0", "is-unicode-supported": "^0.1.0" } }, "sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg=="], @@ -253,7 +276,7 @@ "picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="], - "prettier": ["prettier@2.8.8", "", { "bin": { "prettier": "bin-prettier.js" } }, "sha512-tdN8qQGvNjw4CHbY+XXk0JgCXn9QiF21a55rBe5LJAU+kDyC4WQn4+awm2Xfk2lQMk5fKup9XgzTZtGkjBdP9Q=="], + "prettier": ["prettier@3.8.1", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-UOnG6LftzbdaHZcKoPFtOcCKztrQ57WkHDeRD9t/PTQtmT0NHSeWWepj6pS0z/N7+08BHFDQVUrfmfMRcZwbMg=="], "randombytes": ["randombytes@2.1.0", "", { "dependencies": { "safe-buffer": "^5.1.0" } }, "sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ=="], diff --git a/docs/PRODUCT.md b/docs/PRODUCT.md new file mode 100644 index 0000000..6debaab --- /dev/null +++ b/docs/PRODUCT.md @@ -0,0 +1,7 @@ +# PRODUCT + 
+DOOM INDEX is a visualization product that generates and presents a unique piece of generative art every 10 minutes, based on market data for trending tokens fetched from CoinGecko. AI visualizes each token's market sentiment as art; the generated images are stored in Cloudflare R2 and used for the Web UI and OGP. + +This project is the Solana program that turns the AI-generated paintings from doomindex.fun (https://github.com/doom-protocol/doom-index) into NFTs. + +AI-generated paintings produced by DOOM INDEX are uploaded to distributed storage such as IPFS, and their metadata is used to mint NFTs that conform to the Solana standard. \ No newline at end of file diff --git a/lefthook.yml b/lefthook.yml new file mode 100644 index 0000000..f02bee4 --- /dev/null +++ b/lefthook.yml @@ -0,0 +1,13 @@ +pre-commit: + commands: + rustfmt: + run: cargo fmt --all --check + clippy: + run: cargo clippy --workspace --all-targets --all-features -- -D warnings + prettier: + run: bun run format:ts:check + +pre-push: + commands: + test: + run: cargo test --workspace diff --git a/package.json b/package.json index 9267454..bc6b87b 100644 --- a/package.json +++ b/package.json @@ -2,23 +2,32 @@ "name": "doom-nft-program", "version": "1.0.0", "description": "Doom NFT Program", + "packageManager": "bun@1.3.10", "scripts": { - "lint:fix": "prettier */*.js \"*/**/*{.js,.ts}\" -w", - "lint": "prettier */*.js \"*/**/*{.js,.ts}\" --check", - "format": "prettier */*.js \"*/**/*{.js,.ts}\" -w" + "prepare": "lefthook install", + "format": "bun run format:rust && bun run format:ts", + "format:check": "bun run format:rust:check && bun run format:ts:check", + "format:rust": "cargo fmt --all", + "format:rust:check": "cargo fmt --all --check", + "format:ts": "prettier . --write --ignore-unknown", + "format:ts:check": "prettier . 
--check --ignore-unknown", + "lint": "cargo clippy --workspace --all-targets --all-features -- -D warnings", + "test": "cargo test --workspace", + "check": "bun run format:check && bun run lint && bun run test" }, "dependencies": { "@coral-xyz/anchor": "^0.31.1" }, "devDependencies": { - "chai": "^4.3.4", - "mocha": "^9.0.3", - "ts-mocha": "^10.0.0", "@types/bn.js": "^5.1.0", "@types/chai": "^4.3.0", "@types/mocha": "^9.0.0", - "typescript": "^5.7.3", - "prettier": "^2.6.2" + "chai": "^4.3.4", + "lefthook": "2.1.3", + "mocha": "^9.0.3", + "prettier": "3.8.1", + "ts-mocha": "^10.0.0", + "typescript": "^5.7.3" }, "prettier": { "semi": true, diff --git a/programs/doom-nft-program/Cargo.toml b/programs/doom-nft-program/Cargo.toml index a3fc1b5..b9bafad 100644 --- a/programs/doom-nft-program/Cargo.toml +++ b/programs/doom-nft-program/Cargo.toml @@ -10,13 +10,19 @@ name = "doom_nft_program" [features] default = [] +anchor-debug = [] +custom-heap = [] +custom-panic = [] cpi = ["no-entrypoint"] no-entrypoint = [] no-idl = [] no-log-ix-name = [] -idl-build = ["anchor-lang/idl-build"] +idl-build = ["anchor-lang/idl-build", "anchor-spl/idl-build"] +[lints.rust] +unexpected_cfgs = { level = "allow", check-cfg = ["cfg(target_os, values(\"solana\"))"] } [dependencies] anchor-lang = "0.29.0" anchor-spl = "0.29.0" +solana-program = "1.18.26" diff --git a/programs/doom-nft-program/src/lib.rs b/programs/doom-nft-program/src/lib.rs index 2c713c5..7b8dcf4 100644 --- a/programs/doom-nft-program/src/lib.rs +++ b/programs/doom-nft-program/src/lib.rs @@ -10,11 +10,6 @@ declare_id!("AavECgzCbVhHeBGAfcUgT1tYEC4N4B96E8XtF9H1fMGt"); pub mod doom_nft_program { use super::*; - pub fn initialize(ctx: Context) -> Result<()> { - msg!("Greetings from: {:?}", ctx.program_id); - Ok(()) - } - pub fn create_mint(ctx: Context) -> Result<()> { msg!("Creating NFT mint: {}", ctx.accounts.mint.key()); Ok(()) @@ -65,9 +60,6 @@ pub mod doom_nft_program { } } -#[derive(Accounts)] -pub struct Initialize {} - 
#[derive(Accounts)] pub struct CreateMint<'info> { #[account( diff --git a/rust-toolchain.toml b/rust-toolchain.toml new file mode 100644 index 0000000..92a57c5 --- /dev/null +++ b/rust-toolchain.toml @@ -0,0 +1,4 @@ +[toolchain] +channel = "stable" +components = ["clippy", "rustfmt"] +profile = "minimal"