## Tasks & Acceptance

<!-- Tasks: backtick-quoted file path -- action -- rationale. Prefer one task per file; group tightly-coupled changes when splitting would be artificial. -->
<!-- REQUIRED: Include test tasks for every new or modified behavior. If an I/O Matrix is present, include a task to unit-test its edge cases. -->
<!-- AC covers system-level behaviors not captured by the I/O Matrix. Do not duplicate I/O scenarios here. -->

**Execution:**

**Path formatting rule:** Any markdown links written into `{spec_file}` must use paths relative to `{spec_file}`'s directory so they are clickable in VS Code. Any file paths displayed in terminal/conversation output must use CWD-relative format with `:line` notation (e.g., `src/path/file.ts:42`) for terminal clickability. No leading `/` in either case.
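As a minimal sketch of the rule (the helper names are hypothetical, using only the Python standard library), the two path formats can be produced like this:

```python
# Hypothetical helpers illustrating the two required path formats.
import os

def spec_relative_link(spec_file: str, target: str) -> str:
    # Markdown links written into the spec must be relative to the spec's directory.
    rel = os.path.relpath(target, start=os.path.dirname(spec_file))
    return rel.replace(os.sep, "/")  # forward slashes, no leading "/"

def terminal_ref(cwd_relative_path: str, line: int) -> str:
    # Terminal/conversation output uses CWD-relative paths with :line notation.
    return f"{cwd_relative_path}:{line}"
```

For example, `spec_relative_link("docs/specs/spec.md", "docs/adr/001.md")` yields `../adr/001.md`, which stays clickable when the spec is opened in VS Code.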

### Tests

**This is mandatory, not optional.** After implementation, write tests for every new or modified behavior. Follow the project's testing conventions discovered from `{project_context}`, CLAUDE.md, the existing test suite, and any configured test tooling.

1. **Discover conventions.** Identify the project's test framework, file-naming patterns, and test directory structure in this order:
- First, inspect existing tests.
- If existing tests are missing or insufficient, inspect the repo's configured test tooling: test scripts, dependencies/devDependencies, and test config files (for example `package.json`, `pytest.ini`, `pyproject.toml`, `jest.config.*`, `vitest.config.*`, Mocha or RSpec config, or `go test` conventions).
- Only if neither existing tests nor configured tooling establish conventions should you fall back to the project's language-idiomatic defaults.
2. **Write tests.** Cover:
- Happy-path behavior for each new or changed feature.
- Edge cases and error scenarios from the I/O & Edge-Case Matrix (if present in the spec).
- Regressions — any behavior that could break due to the change.
3. **Run tests.** Execute the test suite. All new and existing tests must pass. If any test fails, fix the implementation or the test before proceeding.
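The discovery order in step 1 can be sketched as a small probe; the function name and return labels below are illustrative, and a real agent should read the matched files rather than merely check for them:

```python
# Hypothetical sketch of the three-tier discovery order:
# existing tests > configured tooling > language-idiomatic defaults.
import json
from pathlib import Path

def discover_test_conventions(root: str) -> str:
    root_p = Path(root)
    # 1. Existing tests take priority: mirror their framework and naming.
    test_globs = ("test_*.py", "*_test.py", "*.test.ts", "*.test.js", "*_test.go")
    if any(any(root_p.rglob(g)) for g in test_globs):
        return "existing-tests"
    # 2. Otherwise, consult configured tooling: scripts and config files.
    pkg = root_p / "package.json"
    if pkg.exists() and "test" in json.loads(pkg.read_text()).get("scripts", {}):
        return "configured-tooling"
    if any((root_p / name).exists()
           for name in ("pytest.ini", "pyproject.toml", "jest.config.js", "vitest.config.ts")):
        return "configured-tooling"
    # 3. Only then fall back to language-idiomatic defaults.
    return "language-defaults"
```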

If the project has no test infrastructure at all (no existing tests, no test framework, no relevant test config, no test directory, and no test scripts), note this in the spec under `## Verification` and skip — but this is the only acceptable reason to skip tests. Record this skip reason explicitly in the final summary output.

### Self-Check

Before leaving this step, verify:
1. Every task in the `## Tasks & Acceptance` section of `{spec_file}` is complete. Mark each finished task `[x]`. If any task is not done, finish it before proceeding.
2. Tests exist for every new or modified behavior (unless the project has no test infrastructure). If tests are missing, go back and write them before proceeding.

## NEXT


- **Blind hunter** — receives `{diff_output}` only. No spec, no context docs, no project access. Invoke via the `bmad-review-adversarial-general` skill.
- **Edge case hunter** — receives `{diff_output}` and read access to the project. Invoke via the `bmad-review-edge-case-hunter` skill.
- **Acceptance auditor** — receives `{diff_output}`, `{spec_file}`, and read access to the project. Must also read the docs listed in `{spec_file}` frontmatter `context`. Checks for violations of acceptance criteria, rules, and principles from the spec and context docs. **Must also verify that tests exist for every new or modified behavior** — missing test coverage for changed code is a `bad_spec` finding unless the project has no test infrastructure.

### Classify

**File:** `src/bmm-skills/4-implementation/bmad-quick-dev/step-oneshot.md`

Implement the clarified intent directly.

### Tests

**This is mandatory, not optional.** After implementation, write tests for every new or modified behavior. Follow the project's testing conventions discovered from `{project_context}`, CLAUDE.md, the existing test suite, and any configured test tooling.

1. **Discover conventions.** Identify the project's test framework, file-naming patterns, and test directory structure in this order:
- First, inspect existing tests.
- If existing tests are missing or insufficient, inspect the repo's configured test tooling: test scripts, dependencies/devDependencies, and test config files (for example `package.json`, `pytest.ini`, `pyproject.toml`, `jest.config.*`, `vitest.config.*`, Mocha or RSpec config, or `go test` conventions).
- Only if neither existing tests nor configured tooling establish conventions should you fall back to the project's language-idiomatic defaults.
2. **Write tests.** Cover happy-path behavior, edge cases, and regressions.
3. **Run tests.** All new and existing tests must pass. Fix failures before proceeding.

If the project has no test infrastructure at all (no existing tests, no test framework, no relevant test config, no test directory, and no test scripts), skip — but this is the only acceptable reason to skip tests. Record this skip reason explicitly in the final summary output.
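As a minimal illustration of the expected coverage (happy path, edge case, regression), here is a pytest-style example for a hypothetical `slugify()` helper; both the helper and its tests are invented for illustration:

```python
# Hypothetical example: tests for a newly added slugify() helper.
import re

def slugify(title: str) -> str:
    # Stand-in for the implemented behavior under test.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

def test_happy_path():
    # New feature works for ordinary input.
    assert slugify("Hello World") == "hello-world"

def test_edge_case_empty_input():
    # Edge case from the I/O matrix: empty input gets a fallback.
    assert slugify("") == "untitled"

def test_regression_repeated_separators():
    # Regression guard: runs of separators must collapse, not accumulate.
    assert slugify("a  --  b") == "a-b"
```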

### Review

Invoke the `bmad-review-adversarial-general` skill in a subagent with the changed files. The subagent gets NO conversation context — to avoid anchoring bias. If no sub-agents are available, write the changed files to a review prompt file in `{implementation_artifacts}` and HALT. Ask the human to run the review in a separate session and paste back the findings.