diff --git a/.changeset/atomic-capability-chaining.md b/.changeset/atomic-capability-chaining.md new file mode 100644 index 00000000..b48adad7 --- /dev/null +++ b/.changeset/atomic-capability-chaining.md @@ -0,0 +1,6 @@ +--- +"@ghx-dev/core": minor +--- + +Add atomic capability chaining: `executeTasks()` function that executes multiple capabilities in a single GraphQL batch with ≀2 API round-trips. New `ghx chain --steps ''` CLI command. Supersedes the unused composite capability system which has been removed. + diff --git a/.changeset/composite-capabilities-gql-integration.md b/.changeset/composite-capabilities-gql-integration.md deleted file mode 100644 index 03d62b85..00000000 --- a/.changeset/composite-capabilities-gql-integration.md +++ /dev/null @@ -1,7 +0,0 @@ ---- -"@ghx-dev/core": minor ---- - -Add composite capability execution support to the refactored GraphQL architecture, including batched alias-based composite routing and GraphQL-first `pr.review.submit` handling with inline comments. - -Also adds issue composite operation cards and aligns routing/output contracts and tests with aliased imports and the new gql domain/client layout. diff --git a/docs/README.md b/docs/README.md index b7c2962b..0ab9a736 100644 --- a/docs/README.md +++ b/docs/README.md @@ -78,12 +78,12 @@ Check: **[Benchmark Documentation](benchmark/README.md)** ## Key Facts -**69 Capabilities** organized by domain (66 atomic + 3 composite): +**66 Capabilities** organized by domain: | Domain | Count | Examples | |--------|-------|----------| -| Issues | 21 | Create, update, close, assign labels, manage relations, composite batch ops | -| Pull Requests | 22 | View, comment, approve, merge, rerun checks, composite batch ops | +| Issues | 19 | Create, update, close, assign labels, manage relations | +| Pull Requests | 21 | View, comment, approve, merge, rerun checks | | Workflows | 11 | List, dispatch, rerun, cancel, retrieve logs | | Releases | 5 | List, create, publish, update drafts | | Projects v2 | 6 | View, list items, update fields | diff --git a/docs/architecture/README.md b/docs/architecture/README.md index 584cf3cc..0f92b347 100644 --- a/docs/architecture/README.md +++ b/docs/architecture/README.md @@ -16,7 +16,7 @@ flowchart TB end subgraph Registry["Operation Registry πŸ“‹"] - Cards["69 Capability Cards"] + Cards["66 Capability Cards"] Schema["JSON Schema Validation"] end diff --git a/docs/architecture/adapters.md b/docs/architecture/adapters.md index f9f5f3d8..88bd2b88 100644 --- a/docs/architecture/adapters.md +++ b/docs/architecture/adapters.md @@ -9,11 +9,9 @@ Three adapters handle different GitHub interaction modes: | Adapter | Coverage | Route Name | Status | Purpose | |---------|----------|-----------|--------|---------| | **CLI Adapter** | 66 atomic | `cli` | Active | Execute via `gh` command-line tool | -| **GraphQL Adapter** | ~28 atomic + 3 composite | `graphql` | Active | Execute via GitHub GraphQL API | +| **GraphQL Adapter** | ~28 atomic | `graphql` | Active | Execute via GitHub GraphQL API | | **REST Adapter** | None | `rest` | Stub | Planned for future expansion | -> **Composite capabilities** (`issue.triage.composite`, `issue.update.composite`, `pr.threads.composite`) use a dedicated batch GraphQL path and do not have CLI routes. 
- ```mermaid %%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#9C27B0', 'primaryTextColor': '#fff', 'primaryBorderColor': '#7B1FA2', 'lineColor': '#666', 'secondaryColor': '#CE93D8', 'tertiaryColor': '#E1BEE7'}}}%% graph TD @@ -70,7 +68,7 @@ The CLI adapter executes capabilities through the GitHub command-line tool (`gh` ### Features -- **Full atomic coverage**: All 66 atomic capabilities have CLI routes defined (composite capabilities use GraphQL only) +- **Full atomic coverage**: All 66 capabilities have CLI routes defined where applicable - **Safe spawning**: Uses `spawn()` with `shell: false` β€” no shell interpretation - **Timeout enforcement**: Per-command timeout (default 30s) - **Output limits**: Bounded stdout/stderr size (default 10 MB) @@ -120,7 +118,7 @@ The GraphQL adapter executes capabilities through GitHub's GraphQL API. ### Features - **Typed queries**: Generated operation SDKs with type safety -- **Selective coverage**: ~28 atomic + 3 composite capabilities support GraphQL routes (typically read-heavy or batch operations) +- **Selective coverage**: ~28 capabilities support GraphQL routes (typically read-heavy or mutation operations requiring typed queries) - **Authentication**: Requires `GITHUB_TOKEN` environment variable - **Error classification**: Maps GraphQL errors (auth, rate limit, not found, etc.) to normalized error codes - **Field mapping**: Adapts GitHub's response shape to capability output schema diff --git a/docs/architecture/operation-cards.md b/docs/architecture/operation-cards.md index e9216fe9..92032c0d 100644 --- a/docs/architecture/operation-cards.md +++ b/docs/architecture/operation-cards.md @@ -32,11 +32,12 @@ classDiagram class GraphQLMetadata { operation?: string field_mapping?: Record + resolution?: ResolutionConfig } - class CompositeMetadata { - steps: CompositeStep[] - output_strategy: string + class ResolutionConfig { + lookup: LookupSpec + inject: InjectSpec[] } class Route { @@ -46,7 +47,7 @@ classDiagram OperationCard --> RoutingPolicy OperationCard --> CLIMetadata OperationCard --> GraphQLMetadata - OperationCard --> CompositeMetadata + GraphQLMetadata --> ResolutionConfig RoutingPolicy --> Route ``` @@ -64,30 +65,29 @@ Each operation card includes: | `routing.preferred` | Primary route(s) to attempt first | Yes | | `routing.fallbacks` | Secondary routes in order of preference | Yes | | `cli` | CLI command metadata and output mapping | No | -| `graphql` | GraphQL operation and field mapping | No | -| `composite` | Composite capability steps and output strategy | No | +| `graphql` | GraphQL operation, field mapping, and optional resolution block | No | ## Current Capability Surface -`ghx` currently defines **69 capabilities** across these domains (66 atomic + 3 composite): +`ghx` currently defines **66 capabilities** across these domains: | Domain | Count | Purpose | |--------|-------|---------| -| **Issues** | 21 | Create, read, update, close, link, label, milestone, assign issues; composite batch operations | -| **Pull Requests** | 22 | Read PR metadata, lists, reviews, threads, checks, mergeability, mutations; composite batch operations | +| **Issues** | 19 | Create, read, update, close, link, label, milestone, assign issues | +| **Pull Requests** | 21 | Read PR metadata, lists, reviews, threads, checks, mergeability, mutations | | **Releases** | 5 | Query and draft releases, publish, update | | **Workflows** | 11 | Query workflow runs, jobs, logs, cancel, rerun; list workflows; dispatch | | **Repositories** | 3 | Repo 
metadata, label listing, issue type listing | | **Projects (v2)** | 6 | Query and mutate projects, fields, items | | **Check Runs** | 1 | List check run annotations | -### Issue Capabilities (21) +### Issue Capabilities (19) -`issue.view`, `issue.list`, `issue.create`, `issue.update`, `issue.close`, `issue.reopen`, `issue.delete`, `issue.comments.list`, `issue.comments.create`, `issue.labels.update`, `issue.labels.add`, `issue.assignees.update`, `issue.milestone.set`, `issue.linked_prs.list`, `issue.parent.set`, `issue.parent.remove`, `issue.blocked_by.add`, `issue.blocked_by.remove`, `issue.relations.get`, `issue.triage.composite`, `issue.update.composite` +`issue.view`, `issue.list`, `issue.create`, `issue.update`, `issue.close`, `issue.reopen`, `issue.delete`, `issue.comments.list`, `issue.comments.create`, `issue.labels.set`, `issue.labels.add`, `issue.assignees.set`, `issue.milestone.set`, `issue.linked_prs.list`, `issue.parent.set`, `issue.parent.remove`, `issue.blocked_by.add`, `issue.blocked_by.remove`, `issue.relations.get` -### Pull Request Capabilities (22) +### Pull Request Capabilities (21) -`pr.view`, `pr.list`, `pr.create`, `pr.update`, `pr.thread.list`, `pr.thread.reply`, `pr.thread.resolve`, `pr.thread.unresolve`, `pr.review.list`, `pr.review.submit`, `pr.review.request`, `pr.diff.files`, `pr.diff.view`, `pr.checks.list`, `pr.checks.failed`, `pr.checks.rerun_failed`, `pr.checks.rerun_all`, `pr.merge.status`, `pr.merge`, `pr.branch.update`, `pr.assignees.update`, `pr.threads.composite` +`pr.view`, `pr.list`, `pr.create`, `pr.update`, `pr.thread.list`, `pr.thread.reply`, `pr.thread.resolve`, `pr.thread.unresolve`, `pr.review.list`, `pr.review.submit`, `pr.review.request`, `pr.diff.files`, `pr.diff.view`, `pr.checks.list`, `pr.checks.failed`, `pr.checks.rerun_failed`, `pr.checks.rerun_all`, `pr.merge.status`, `pr.merge`, `pr.branch.update`, `pr.assignees.update` ### Release Capabilities (5) @@ -109,28 +109,100 @@ Each operation card includes: `check_run.annotations.list` -## Composite Capabilities +## Card-Defined Resolution -Composite capabilities batch multiple GraphQL mutations into a single network round-trip using `gql/batch.ts`. They are declared with a `composite` field instead of `cli`/`graphql` fields: +Some GraphQL capabilities require a two-phase execution: first a **lookup query** to resolve +human-readable names (labels, assignees, milestones) into GitHub node IDs, then the actual +**mutation**. This is declared with a `graphql.resolution` block in the card. + +### Resolution Block Schema ```yaml -composite: - steps: - - capability_id: issue.labels.update - params_map: - issueId: issueId - labels: labelIds - - capability_id: issue.comments.create - params_map: +graphql: + operationName: IssueAssigneesUpdate + resolution: + lookup: + operationName: IssueAssigneesLookup # registered lookup query name + vars: + issueId: issueId # lookup variable β†’ input field + inject: + - target: assigneeIds # mutation variable to populate + source: map_array # inject variant + from_input: assignees # input field containing names + nodes_path: node.repository.assignableUsers.nodes + match_field: login # field to match against input values + extract_field: id # field to extract as the resolved ID +``` + +### Inject Source Variants + +Three `source` types control how Phase 1 lookup results (or step inputs) map into Phase 2 mutation variables: + +**`scalar`** β€” Extract a single value from the Phase 1 result at a dot-notation path. 
+ +```yaml +inject: + - target: pullRequestId # mutation variable name + source: scalar + path: repository.pullRequest.id # dot-path into Phase 1 response +``` + +**`map_array`** β€” Resolve a list of human-readable names to node IDs. Matching is case-insensitive. + +```yaml +inject: + - target: labelIds # mutation variable name + source: map_array + from_input: labels # input field containing the list of names + nodes_path: repository.labels.nodes # path to array in Phase 1 response + match_field: name # field on each node to match against input names + extract_field: id # field on each node to extract as the resolved ID +``` + +**`input`** β€” Pass a value directly from the step's `input` into the mutation variable, with no Phase 1 lookup. Use this when the caller already has the required node ID. + +```yaml +inject: + - target: labelableId # mutation variable name + source: input + from_input: issueId # the input field whose value is passed through +``` + +> **Tip:** Prefer `source: input` when the agent already has the required ID β€” it skips the Phase 1 resolution query entirely. + +| `source` | Phase 1 required | Description | +|----------|-----------------|-------------| +| `scalar` | Yes | Extracts a single value at `path` from the lookup result | +| `map_array` | Yes | Maps an array of names to IDs using the lookup result | +| `input` | No | Passes a value directly from step input (no lookup needed) | + +**Example β€” `issue.assignees.set.yaml`** (uses `map_array`): + +```yaml +graphql: + operationName: IssueAssigneesUpdate + documentPath: src/gql/operations/issue-assignees-update.graphql + resolution: + lookup: + operationName: IssueAssigneesLookup + documentPath: src/gql/operations/issue-assignees-lookup.graphql + vars: issueId: issueId - body: body - output_strategy: merge + inject: + - target: assigneeIds + source: map_array + from_input: assignees + nodes_path: node.repository.assignableUsers.nodes + match_field: login + extract_field: id ``` -Composite capabilities: -- Use `routing.preferred: graphql` exclusively (no CLI fallback) -- Are expanded by `core/execute/composite.ts` β†’ `gql/builders.ts` β†’ `gql/batch.ts` -- Merge output from all steps into a single `ResultEnvelope` +When a capability declares `graphql.resolution`, `executeTasks()` batches all resolution +lookups into a single Phase 1 GraphQL query, then batches all mutations into a single Phase 2 +GraphQL mutation β€” resulting in at most 2 HTTP round-trips regardless of chain length. + +See [Chaining Capabilities](../guides/chaining-capabilities.md) for the full two-phase +execution model. 
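+
+For illustration, here is a minimal TypeScript sketch of the `map_array` contract described
+above. The names (`MapArrayInject`, `getAtPath`, `resolveMapArray`) are illustrative only;
+the shipped inject logic lives in `gql/resolve.ts`, but the fields mirror the card schema:
+
+```ts
+// Illustrative only: resolve human-readable names from the step input into node IDs
+// using a Phase 1 lookup result, following the map_array fields described above.
+interface MapArrayInject {
+  target: string
+  source: "map_array"
+  from_input: string
+  nodes_path: string
+  match_field: string
+  extract_field: string
+}
+
+// Walk a dot-notation path (e.g. "repository.labels.nodes") into the lookup result
+function getAtPath(value: unknown, path: string): unknown {
+  let current: unknown = value
+  for (const key of path.split(".")) {
+    if (current !== null && typeof current === "object") {
+      current = (current as Record<string, unknown>)[key]
+    } else {
+      return undefined
+    }
+  }
+  return current
+}
+
+function resolveMapArray(
+  spec: MapArrayInject,
+  input: Record<string, unknown>,
+  lookupResult: unknown,
+): string[] {
+  const names = (input[spec.from_input] as string[] | undefined) ?? []
+  const nodes =
+    (getAtPath(lookupResult, spec.nodes_path) as Array<Record<string, unknown>> | undefined) ?? []
+  return names.map((name) => {
+    // Matching against match_field is case-insensitive
+    const match = nodes.find(
+      (node) => String(node[spec.match_field]).toLowerCase() === name.toLowerCase(),
+    )
+    if (!match) {
+      throw new Error(`could not resolve "${name}" via ${spec.nodes_path}`)
+    }
+    return String(match[spec.extract_field])
+  })
+}
+```
+
+With the `labelIds` example above, calling the sketch with `labels: ["bug"]` and the Phase 1
+response would return the matching label node IDs, which are then injected into the Phase 2
+mutation variable `labelIds`.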
## Card Loading & Validation diff --git a/docs/architecture/repository-structure.md b/docs/architecture/repository-structure.md index 4c03b258..bf894b79 100644 --- a/docs/architecture/repository-structure.md +++ b/docs/architecture/repository-structure.md @@ -91,7 +91,7 @@ ghx/ | `core/registry/schema-utils.ts` | Schema helper utilities | JSON schema helpers | | `core/registry/operation-card-schema.ts` | Card schema definition | `operationCardSchema` | | `core/registry/ajv-instance.ts` | Shared AJV instance | `ajv` | -| `core/registry/cards/*.yaml` | Capability definitions | 69 operation cards (66 atomic + 3 composite) | +| `core/registry/cards/*.yaml` | Capability definitions | 66 operation cards | ### Routing Engine (`core/routing`) @@ -111,7 +111,6 @@ ghx/ | Module | Purpose | Key Exports | |--------|---------|-------------| | `core/execute/execute.ts` | Route planning & retry loop | `execute()` | -| `core/execute/composite.ts` | Composite capability expansion | `expandCompositeSteps()` | | `core/execution/preflight.ts` | Route readiness checks | `preflightCheck()` | | `core/execution/normalizer.ts` | Output normalization | `normalizeResult()`, `normalizeError()` | | `core/execution/adapters/cli-capability-adapter.ts` | CLI capability adapter | `runCliCapability()`, `CliCapabilityId` | @@ -148,7 +147,9 @@ ghx/ | `gql/capability-registry.ts` | GQL handler registry by capability ID | `GraphqlHandler` map | | `gql/types.ts` | GQL input/output contracts | GraphQL domain input/data types | | `gql/assertions.ts` | GraphQL input validation helpers | `assert*` validation utilities | -| `gql/batch.ts` | Composite batch mutation builder | `buildBatchMutation()` | +| `gql/document-registry.ts` | Lookup & mutation document registry | `getLookupDocument()`, `getMutationDocument()` | +| `gql/resolve.ts` | Resolution inject helpers | `applyInject()`, `buildMutationVars()` | +| `gql/batch.ts` | Batch query/mutation builder | `buildBatchQuery()`, `buildBatchMutation()` | | `gql/builders.ts` | Per-capability mutation builders | `OPERATION_BUILDERS` | | `gql/domains/*.ts` | Domain operation modules | `run*` operation handlers | | `gql/operations/*.generated.ts` | Generated operation SDKs | Operation-specific `getSdk()` | diff --git a/docs/architecture/system-design.md b/docs/architecture/system-design.md index 4ceb3409..bba835ae 100644 --- a/docs/architecture/system-design.md +++ b/docs/architecture/system-design.md @@ -53,6 +53,14 @@ flowchart TB G --> H H --> I C --> J + + subgraph "Atomic Chaining (2+ tasks)" + EC["executeTasks()"] --> PF2["Pre-flight validation\n(all steps)"] + PF2 -->|"any invalid"| REJ["Reject whole chain"] + PF2 -->|"all valid"| P1["Phase 1 β€” batch resolution query\n≀1 HTTP round-trip"] + P1 --> P2["Phase 2 β€” batch mutation\n≀1 HTTP round-trip"] + P2 --> CR["ChainResultEnvelope\nstatus: success / partial / failed"] + end ``` ## Result Envelope @@ -73,23 +81,26 @@ Every capability returns: ## Current Scope -69 capabilities (66 atomic + 3 composite) across 7 domains: +66 capabilities across 7 domains: -- **Issues** (21): view, list, create, update, close, reopen, delete, comment, label, assign, milestone, link, parent, block, relation, and 2 composite batch capabilities -- **Pull Requests** (22): view, list, create, update, thread operations, review operations, diff, checks, merge, branch update, and 1 composite batch capability +- **Issues** (19): view, list, create, update, close, reopen, delete, comment, label, assign, milestone, link, parent, block, relation +- **Pull 
Requests** (21): view, list, create, update, thread operations, review operations, diff, checks, merge, branch update - **Workflows** (11): view, list, dispatch, run lifecycle, logs, cancel, rerun, artifacts - **Releases** (5): get, list, create draft, publish draft, update - **Repositories** (3): view, labels list, issue types list - **Projects v2** (6): get, fields list, items list, add issue, update field - **Check Runs** (1): annotations list -Composite capabilities batch multiple GraphQL mutations into a single network round-trip. Route preferences are capability-specific and defined in cards (`preferred` + `fallbacks`), with REST still outside active routing for current capabilities. +Route preferences are capability-specific and defined in cards (`preferred` + `fallbacks`), with REST still outside active routing for current capabilities. For multi-capability mutations, use `executeTasks()` β€” it batches all resolution lookups into one Phase 1 query and all mutations into one Phase 2 mutation (≀2 HTTP round-trips for any chain length). ## Source References - `packages/core/src/core/execute/execute.ts` +- `packages/core/src/core/routing/engine.ts` β€” `executeTask()`, `executeTasks()` - `packages/core/src/core/registry/cards/*.yaml` -- `packages/core/src/core/contracts/envelope.ts` +- `packages/core/src/core/contracts/envelope.ts` β€” `ChainResultEnvelope`, `ChainStepResult`, `ChainStatus` +- `packages/core/src/gql/document-registry.ts` β€” lookup & mutation document registry +- `packages/core/src/gql/resolve.ts` β€” resolution inject logic - `packages/core/src/core/execute/execute-tool.ts` - `packages/core/src/core/registry/list-capabilities.ts` - `packages/core/src/core/registry/explain-capability.ts` diff --git a/docs/benchmark/workflow-roadmap.md b/docs/benchmark/workflow-roadmap.md index 7db0e4c5..dda71b6f 100644 --- a/docs/benchmark/workflow-roadmap.md +++ b/docs/benchmark/workflow-roadmap.md @@ -33,8 +33,8 @@ Workflow scenarios differ from atomic scenarios by: **Expected Capabilities:** - `issue.view` -- `issue.labels.update` -- `issue.assignees.update` +- `issue.labels.set` +- `issue.assignees.set` - `project-v2.items.list` - `project-v2.item.add-issue` diff --git a/docs/capabilities/README.md b/docs/capabilities/README.md index f754d94b..88624a7a 100644 --- a/docs/capabilities/README.md +++ b/docs/capabilities/README.md @@ -11,13 +11,12 @@ orchestrating workflows and tracking projects. ```mermaid %%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#F5A623', 'primaryTextColor': '#fff', 'primaryBorderColor': '#D4891A', 'lineColor': '#666'}}}%% graph TB - Issues["Issues
(21 capabilities)"] - PRs["Pull Requests
(22 capabilities)"] + Issues["Issues
(23 capabilities)"] + PRs["Pull Requests
(21 capabilities)"] Releases["Releases
(5 capabilities)"] Workflows["Workflows
(11 capabilities)"] Repos["Repositories
(3 capabilities)"] - Projects["Projects V2
(6 capabilities)"] - CheckRuns["Check Runs
(1 capability)"] + Projects["Projects V2
(7 capabilities)"] style Issues fill:#F5A623 style PRs fill:#F5A623 @@ -25,14 +24,13 @@ graph TB style Workflows fill:#F5A623 style Repos fill:#F5A623 style Projects fill:#F5A623 - style CheckRuns fill:#F5A623 ``` ## Summary Table | Capability ID | Description | Routes | |---|---|---| -| **Issues (21)** | +| **Issues (23)** | | `issue.create` | Create a new issue. | graphql (preferred) | | `issue.view` | Fetch one issue by number. | cli (preferred), graphql (fallback) | | `issue.list` | List repository issues. | cli (preferred), graphql (fallback) | @@ -42,19 +40,21 @@ graph TB | `issue.delete` | Delete an issue. | graphql (preferred) | | `issue.comments.create` | Create an issue comment. | graphql (preferred) | | `issue.comments.list` | List comments for one issue. | graphql (preferred), cli (fallback) | -| `issue.labels.update` | Replace issue labels. | graphql (preferred) | +| `issue.labels.set` | Replace issue labels. | graphql (preferred) | | `issue.labels.add` | Add labels to an issue without removing existing labels. | graphql (preferred) | -| `issue.assignees.update` | Replace issue assignees. | graphql (preferred) | -| `issue.milestone.set` | Set issue milestone number or clear with null. | graphql (preferred) | -| `issue.parent.set` | Set an issue parent relation. | graphql (preferred) | -| `issue.parent.remove` | Remove an issue parent relation. | graphql (preferred) | -| `issue.blocked_by.add` | Add a blocked-by relation for an issue. | graphql (preferred) | -| `issue.blocked_by.remove` | Remove a blocked-by relation for an issue. | graphql (preferred) | -| `issue.linked_prs.list` | List pull requests linked to an issue. | graphql (preferred) | -| `issue.relations.get` | Get issue parent/children/blocking relations. | graphql (preferred) | -| `issue.triage.composite` | Set issue labels and create a comment in a single GraphQL batch call. | graphql (preferred) | -| `issue.update.composite` | Update issue fields, labels, assignees, and milestone in a single GraphQL batch call. | graphql (preferred) | -| **Pull Requests (22)** | +| `issue.labels.remove` | Remove labels from an issue. | cli (preferred) | +| `issue.assignees.set` | Replace issue assignees. | graphql (preferred) | +| `issue.assignees.add` | Add assignees to an issue. | cli (preferred) | +| `issue.assignees.remove` | Remove assignees from an issue. | cli (preferred) | +| `issue.milestone.set` | Set issue milestone by number. | graphql (preferred) | +| `issue.milestone.clear` | Clear the milestone from an issue. | cli (preferred) | +| `issue.relations.parent.set` | Set an issue parent relation. | graphql (preferred) | +| `issue.relations.parent.remove` | Remove an issue parent relation. | graphql (preferred) | +| `issue.relations.blocked_by.add` | Add a blocked-by relation for an issue. | graphql (preferred) | +| `issue.relations.blocked_by.remove` | Remove a blocked-by relation for an issue. | graphql (preferred) | +| `issue.relations.prs.list` | List pull requests linked to an issue. | graphql (preferred) | +| `issue.relations.view` | Get issue parent/children/blocking relations. | graphql (preferred) | +| **Pull Requests (21)** | | `pr.view` | Fetch one pull request by number. | graphql (preferred), cli (fallback) | | `pr.list` | List repository pull requests. | cli (preferred), graphql (fallback) | | `pr.create` | Create a pull request. | cli (preferred) | @@ -77,34 +77,35 @@ graph TB | `pr.diff.files` | List changed files in a pull request diff. 
| graphql (preferred) | | `pr.diff.view` | View the unified diff for a pull request. | cli (preferred) | | **Releases (5)** | -| `release.create_draft` | Create a draft release. | cli (preferred) | -| `release.get` | Get release details by tag name. | cli (preferred) | +| `release.create` | Create a draft release. | cli (preferred) | +| `release.view` | Get release details by tag name. | cli (preferred) | | `release.list` | List releases for a repository. | cli (preferred) | -| `release.publish_draft` | Publish an existing draft release. | cli (preferred) | +| `release.publish` | Publish an existing draft release. | cli (preferred) | | `release.update` | Update a draft release without publishing it. | cli (preferred) | | **Workflows (11)** | | `workflow.list` | List repository workflows. | cli (preferred) | -| `workflow.get` | Get one repository workflow. | cli (preferred) | -| `workflow.dispatch.run` | Trigger a workflow dispatch event. | cli (preferred) | +| `workflow.view` | Get one repository workflow. | cli (preferred) | +| `workflow.dispatch` | Trigger a workflow dispatch event. | cli (preferred) | | `workflow.runs.list` | List workflow runs for a repository. | cli (preferred) | | `workflow.run.view` | View a workflow run with its jobs. | cli (preferred) | | `workflow.run.cancel` | Cancel a workflow run. | cli (preferred) | -| `workflow.run.rerun_all` | Rerun all jobs in a workflow run. | cli (preferred) | -| `workflow.run.rerun_failed` | Rerun failed jobs for a workflow run. | cli (preferred) | +| `workflow.run.rerun.all` | Rerun all jobs in a workflow run. | cli (preferred) | +| `workflow.run.rerun.failed` | Rerun failed jobs for a workflow run. | cli (preferred) | | `workflow.run.artifacts.list` | List artifacts for a workflow run. | cli (preferred) | | `workflow.job.logs.raw` | Fetch raw (unprocessed) logs for a workflow job. | cli (preferred) | -| `workflow.job.logs.get` | Fetch and analyze workflow job logs. | cli (preferred) | +| `workflow.job.logs.view` | Fetch and analyze workflow job logs. | cli (preferred) | | **Repositories (3)** | | `repo.view` | Fetch repository metadata. | cli (preferred), graphql (fallback) | | `repo.labels.list` | List repository labels. | cli (preferred) | | `repo.issue_types.list` | List repository issue types. | cli (preferred) | -| **Projects V2 (6)** | -| `project_v2.org.get` | Get an organization Projects v2 project. | cli (preferred) | -| `project_v2.user.get` | Get a user Projects v2 project. | cli (preferred) | +| **Projects V2 (7)** | +| `project_v2.org.view` | Get an organization Projects v2 project. | cli (preferred) | +| `project_v2.user.view` | Get a user Projects v2 project. | cli (preferred) | | `project_v2.fields.list` | List fields for a Projects v2 project. | cli (preferred) | | `project_v2.items.list` | List items in a Projects v2 project. | cli (preferred) | -| `project_v2.item.add_issue` | Add an issue to a Projects v2 project. | cli (preferred) | -| `project_v2.item.field.update` | Update a field on a Projects v2 project item. | cli (preferred) | +| `project_v2.items.issue.add` | Add an issue to a Projects v2 project. | cli (preferred) | +| `project_v2.items.issue.remove` | Remove an issue from a Projects v2 project. | cli (preferred) | +| `project_v2.items.field.update` | Update a field on a Projects v2 project item. | cli (preferred) | ## Domain Documentation @@ -126,8 +127,6 @@ graph TB - [Projects V2](/docs/capabilities/projects.md) β€” Project discovery, field management, item tracking, and field updates. 
-- [Check Runs](/docs/capabilities/check-runs.md) β€” Check run annotation inspection. - ## Usage All capabilities can be invoked via the ghx CLI: diff --git a/docs/capabilities/issues.md b/docs/capabilities/issues.md index a4674929..088fccf0 100644 --- a/docs/capabilities/issues.md +++ b/docs/capabilities/issues.md @@ -319,7 +319,7 @@ npx ghx run issue.comments.list --input '{ ### Labels -#### `issue.labels.update` +#### `issue.labels.set` **Description:** Replace issue labels. @@ -342,7 +342,7 @@ npx ghx run issue.comments.list --input '{ **Example:** ```bash -npx ghx run issue.labels.update --input '{ +npx ghx run issue.labels.set --input '{ "issueId": "I_kwDODhlyV4567890", "labels": ["bug", "high-priority", "in-progress"] }' @@ -383,7 +383,7 @@ npx ghx run issue.labels.add --input '{ ### Assignees -#### `issue.assignees.update` +#### `issue.assignees.set` **Description:** Replace issue assignees. @@ -406,7 +406,7 @@ npx ghx run issue.labels.add --input '{ **Example:** ```bash -npx ghx run issue.assignees.update --input '{ +npx ghx run issue.assignees.set --input '{ "issueId": "I_kwDODhlyV4567890", "assignees": ["octocat", "hubot"] }' diff --git a/docs/getting-started/README.md b/docs/getting-started/README.md index 86dd102b..b3e58d71 100644 --- a/docs/getting-started/README.md +++ b/docs/getting-started/README.md @@ -8,7 +8,7 @@ through installation, your first capability execution, and agent setup. graph LR A["1. Install
@ghx-dev/core"] --> B["2. Verify
gh auth"] B --> C["3. Run First
Capability"] - C --> D["4. Explore
69 Operations"] + C --> D["4. Explore
66 Operations"] D --> E["5. Setup for
Agents"] style A fill:#4A90D9,color:#fff @@ -70,7 +70,7 @@ gh auth login ## Step 3: List Available Capabilities -See all 69 capabilities ghx provides: +See all 66 capabilities ghx provides: ```bash npx ghx capabilities list @@ -209,7 +209,7 @@ result=$(npx ghx run issue.create --input '{ issue_number=$(echo "$result" | jq '.data.number') # Step 2: Update labels on the issue -npx ghx run issue.labels.update --input "{ +npx ghx run issue.labels.set --input "{ \"owner\": \"aryeko\", \"repo\": \"ghx\", \"number\": $issue_number, diff --git a/docs/getting-started/first-task.md b/docs/getting-started/first-task.md index 869310a6..eb8b2e07 100644 --- a/docs/getting-started/first-task.md +++ b/docs/getting-started/first-task.md @@ -86,7 +86,7 @@ This shows all labels in the repository. Find one like `docs`, `enhancement`, or Now add a label to our issue using the number from Part 1: ```bash -npx ghx run issue.labels.update --input '{ +npx ghx run issue.labels.set --input '{ "owner": "YOUR_USERNAME", "repo": "YOUR_REPO", "number": 42, @@ -105,7 +105,7 @@ Output: }, "error": null, "meta": { - "capability_id": "issue.labels.update", + "capability_id": "issue.labels.set", "route_used": "cli", "reason": "CARD_PREFERRED" } @@ -217,7 +217,7 @@ async function main() { console.log(`Created issue #${issue.number}`) // Part 2: Add label - await runCapability("issue.labels.update", { + await runCapability("issue.labels.set", { owner: "YOUR_USERNAME", repo: "YOUR_REPO", number: issue.number, @@ -344,6 +344,37 @@ npx ghx capabilities explain workflow.dispatch.run npx ghx capabilities explain workflow.job.logs.get ``` +### Chain Mutations Atomically + +When multiple mutations must happen together (e.g., update labels and assignees on the same +issue), use `executeTasks()` instead of sequential `executeTask` calls. It executes the chain +in at most 2 HTTP round-trips regardless of how many steps you have: + +```ts +import { executeTasks, createGithubClientFromToken } from "@ghx-dev/core" + +const chain = await executeTasks( + [ + { task: "issue.labels.set", input: { issueId: "I_kwDOOx...", labels: ["docs"] } }, + { task: "issue.assignees.set", input: { issueId: "I_kwDOOx...", assignees: ["YOUR_USERNAME"] } }, + ], + { githubClient, githubToken: token }, +) + +console.log(chain.status) // "success" | "partial" | "failed" +``` + +From the CLI: + +```bash +ghx chain --steps '[ + {"task":"issue.labels.set","input":{"issueId":"I_kwDOOx...","labels":["docs"]}}, + {"task":"issue.assignees.set","input":{"issueId":"I_kwDOOx...","assignees":["YOUR_USERNAME"]}} +]' +``` + +See [Chaining Capabilities](../guides/chaining-capabilities.md) for details. + ### Build a Real Workflow Combine multiple capabilities to solve a problem: @@ -415,7 +446,7 @@ You're ready to: 1. Build more complex workflows 2. Integrate ghx into automation scripts 3. Set up ghx for coding agents -4. Explore the full 69-capability API +4. Explore the full 66-capability API ## Next Resources diff --git a/docs/getting-started/how-it-works.md b/docs/getting-started/how-it-works.md index 6347243b..10b9d4c5 100644 --- a/docs/getting-started/how-it-works.md +++ b/docs/getting-started/how-it-works.md @@ -254,6 +254,42 @@ sequenceDiagram This flow ensures **consistency**: The envelope shape is stable because ghx enforces it, regardless of what the underlying API returns. 
+## Chaining Flow (Multiple Mutations) + +When you need to execute multiple mutations atomically, `executeTasks()` uses a two-phase +approach that stays within ≀2 HTTP round-trips: + +```mermaid +%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#4A90D9', 'primaryTextColor': '#fff', 'primaryBorderColor': '#2E6BA4', 'lineColor': '#666', 'fontSize': '13px'}}}%% +sequenceDiagram + participant Caller + participant executeTasks + participant GitHub as GitHub GraphQL API + + Caller->>executeTasks: [{task, input}, ...] + Note over executeTasks: Pre-flight: validate all steps
(whole chain rejected if any step is invalid) + executeTasks->>GitHub: query BatchChain(...) { step0: ..., step1: ... } + Note right of GitHub: Phase 1 β€” resolve names to IDs
(labels, assignees, milestones) + GitHub-->>executeTasks: Phase 1 results (IDs resolved) + executeTasks->>GitHub: mutation BatchComposite(...) { step0: ..., step1: ... } + Note right of GitHub: Phase 2 β€” execute all mutations + GitHub-->>executeTasks: Phase 2 results + executeTasks-->>Caller: ChainResultEnvelope { status, results[], meta } +``` + +**Key properties:** + +- **Pre-flight validation** β€” All steps are validated before any HTTP call; invalid input + rejects the entire chain immediately +- **Phase 1 (optional)** β€” A single batch GraphQL query resolves human-readable names (e.g., + label names β†’ label IDs) for all steps that need it +- **Phase 2** β€” A single batch GraphQL mutation executes all steps +- **`ChainStatus`** β€” `"success"` (all steps OK), `"partial"` (some failed), or `"failed"` + (none succeeded) + +See [Chaining Capabilities](../guides/chaining-capabilities.md) for supported capabilities, +error handling, and full examples. + ## Why Three Routes? ghx supports three execution routes because each is optimal for different scenarios: diff --git a/docs/getting-started/setup-for-agents.md b/docs/getting-started/setup-for-agents.md index 6fae9260..a1a7742c 100644 --- a/docs/getting-started/setup-for-agents.md +++ b/docs/getting-started/setup-for-agents.md @@ -1,7 +1,7 @@ # Agent Setup Guide: Installing ghx for Coding Agents Make ghx discoverable to AI coding agents (Claude Code, Cursor, OpenCode, etc.) by installing a -skill file. Then agents automatically know how to use 69 GitHub capabilities without manual +skill file. Then agents automatically know how to use 66 GitHub capabilities without manual prompting. ## Why Agent Setup? @@ -215,7 +215,7 @@ The installed `SKILL.md` includes: Execute typed GitHub capabilities with deterministic routing and stable result envelopes. -69 capabilities across 7 domains: Issues, Pull Requests, Workflows, Releases, etc. +66 capabilities across 7 domains: Issues, Pull Requests, Workflows, Releases, etc. - No wasted tokens on discovery - Schema-validated input/output @@ -228,7 +228,7 @@ Instructions for verifying setup: ```bash gh auth status # Verify gh CLI is authenticated -ghx capabilities list # Discover all 69 capabilities +ghx capabilities list # Discover all 66 capabilities ``` ### Workflow Section diff --git a/docs/guides/README.md b/docs/guides/README.md index 7bd7482e..6b384185 100644 --- a/docs/guides/README.md +++ b/docs/guides/README.md @@ -9,7 +9,7 @@ Practical how-to documentation for using ghx in your project or AI agent. Start here if you're running ghx commands from the terminal. - `npx ghx run` β€” Execute a capability -- `npx ghx capabilities list` β€” List all 69 capabilities +- `npx ghx capabilities list` β€” List all 66 capabilities - `npx ghx capabilities explain` β€” Understand a capability's contract **[Library API](library-api.md)** β€” Programmatic access in Node.js @@ -17,9 +17,19 @@ Start here if you're running ghx commands from the terminal. Use ghx in your JavaScript or TypeScript code with the `@ghx-dev/core` package. - `executeTask()` β€” Run a capability programmatically +- `executeTasks()` β€” Run a chain of mutations atomically (≀2 HTTP round-trips) - `listOperationCards()` β€” Inspect the registry - `createGithubClientFromToken()` β€” Create a GitHub client +**[Chaining Capabilities](chaining-capabilities.md)** β€” Atomic multi-step mutations + +Run multiple mutations in a single logical operation using `executeTasks()` or `ghx chain`. 
+ +- Two-phase execution model (resolution query + mutation batch) +- `ChainResultEnvelope` / `ChainStatus` / `ChainStepResult` types +- Pre-flight validation before any HTTP call +- Which capabilities support chaining + **[Custom GraphQL Transport](custom-graphql-transport.md)** β€” Bring your own GraphQL client Override the default GraphQL implementation with your own transport. diff --git a/docs/guides/agent-integration.md b/docs/guides/agent-integration.md index b3b77e51..7fa8abba 100644 --- a/docs/guides/agent-integration.md +++ b/docs/guides/agent-integration.md @@ -29,7 +29,7 @@ ghx provides three tools for agents: ### 1. `listCapabilities()` -Discover all 69 capabilities available. +Discover all 66 capabilities available. **Usage:** @@ -77,7 +77,14 @@ const explanation = explainCapability("repo.view") > "Before executing a capability, call `explainCapability(capability_id)` to see > required inputs and output fields. This prevents errors and reduces token waste." -### 3. `createExecuteTool(deps)` +### 3. `executeTasks(requests, deps)` + +Execute a chain of mutations atomically. For multi-step mutations that must share a single +HTTP round-trip (e.g., update labels and assignees together), use `executeTasks` instead of +multiple `executeTask` calls. See [Chaining Capabilities](chaining-capabilities.md) for full +details and examples. + +### 4. `createExecuteTool(deps)` Create a tool that executes capabilities with proper error handling and result envelope parsing. @@ -89,6 +96,7 @@ import { createExecuteTool, createGithubClientFromToken, executeTask, + executeTasks, } from "@ghx-dev/core" const token = process.env.GITHUB_TOKEN! @@ -161,8 +169,7 @@ console.log(`Executed via ${route} (reason: ${reason})`) ### Agent Tool Registration -In your agent framework (Claude Code, custom LLM, etc.), register the three -tools: +In your agent framework (Claude Code, custom LLM, etc.), register the tools: ```ts const tools = { diff --git a/docs/guides/chaining-capabilities.md b/docs/guides/chaining-capabilities.md new file mode 100644 index 00000000..f7289b1f --- /dev/null +++ b/docs/guides/chaining-capabilities.md @@ -0,0 +1,267 @@ +# Chaining Capabilities + +Atomic chaining lets you execute multiple GitHub mutations in a single logical operation β€” +without needing to wire together individual `executeTask` calls. The entire chain completes in +at most **2 HTTP round-trips** regardless of how many steps you have. + +## When to Use Chaining + +Use `executeTasks()` (library) or `ghx chain` (CLI) when: + +- Two or more mutations must be applied together (e.g., update labels **and** assignees on an + issue atomically) +- You want pre-flight validation to reject the entire request before any mutation is attempted +- You want a single `ChainResultEnvelope` summarizing all steps, rather than multiple + `ResultEnvelope` objects + +For **independent read-only queries**, the standard `Promise.all(steps.map(executeTask))` or +shell `for` loop is still appropriate β€” chaining is designed for mutations. + +## Library API + +### `executeTasks(requests, deps)` + +```ts +import { executeTasks, createGithubClientFromToken } from "@ghx-dev/core" + +const token = process.env.GITHUB_TOKEN! 
+const githubClient = createGithubClientFromToken(token) + +const chain = await executeTasks( + [ + { task: "issue.labels.set", input: { issueId: "I_kwDOOx...", labels: ["bug"] } }, + { task: "issue.assignees.set", input: { issueId: "I_kwDOOx...", assignees: ["octocat"] } }, + ], + { githubClient, githubToken: token }, +) +``` + +**Parameters:** + +- `requests` β€” `Array<{ task: string; input: Record }>` β€” ordered list of + capability steps +- `deps` β€” same `ExecutionDeps` as `executeTask()` (requires a `githubClient` and token) + +**Returns:** `Promise` + +### Return Type: `ChainResultEnvelope` + +```ts +type ChainStatus = "success" | "partial" | "failed" + +interface ChainStepResult { + task: string // capability_id of the step + ok: boolean + data?: unknown // step output (when ok is true) + error?: { + code: string + message: string + retryable: boolean + } +} + +interface ChainResultEnvelope { + status: ChainStatus + results: ChainStepResult[] + meta: { + route_used: "graphql" // chains always use GraphQL + total: number + succeeded: number + failed: number + } +} +``` + +### `ChainStatus` Semantics + +| Value | Meaning | +|-------|---------| +| `"success"` | All steps completed successfully | +| `"partial"` | Some steps succeeded, some failed | +| `"failed"` | No steps succeeded (pre-flight rejection or phase-level error) | + +## CLI + +### Inline JSON + +```bash +ghx chain --steps '[ + {"task":"issue.labels.set","input":{"issueId":"I_kwDOOx...","labels":["bug"]}}, + {"task":"issue.assignees.set","input":{"issueId":"I_kwDOOx...","assignees":["octocat"]}} +]' +``` + +### Stdin Variant + +```bash +echo '[ + {"task":"issue.labels.set","input":{"issueId":"I_kwDOOx...","labels":["bug"]}}, + {"task":"issue.assignees.set","input":{"issueId":"I_kwDOOx...","assignees":["octocat"]}} +]' | ghx chain --steps - +``` + +### Exit Codes + +| Exit code | Condition | +|-----------|-----------| +| `0` | `status` is `"success"` or `"partial"` | +| `1` | `status` is `"failed"` | + +## Pre-flight Validation + +Before any HTTP call is made, `executeTasks()` validates every step: + +1. The `task` must refer to a known capability +2. The `input` must pass JSON Schema validation for that capability +3. The capability must have a GraphQL route (`card.graphql` must be defined) + +If **any** step fails pre-flight, the **entire chain is rejected** with `status: "failed"` and +no HTTP calls are made: + +```json +{ + "status": "failed", + "results": [ + { + "task": "issue.labels.set", + "ok": false, + "error": { + "code": "VALIDATION", + "message": "Input validation failed: issueId: must be string", + "retryable": false + } + } + ], + "meta": { "route_used": "graphql", "total": 1, "succeeded": 0, "failed": 1 } +} +``` + +## Two-Phase Execution Model + +```mermaid +%%{init: {'theme': 'base', 'themeVariables': {'primaryColor': '#9C27B0', 'primaryTextColor': '#fff', 'primaryBorderColor': '#7B1FA2', 'lineColor': '#666', 'fontSize': '13px'}}}%% +sequenceDiagram + participant Caller + participant executeTasks + participant GitHub as GitHub GraphQL API + + Caller->>executeTasks: [{task, input}, ...] + Note over executeTasks: Pre-flight: validate all steps
(chain rejected if any step is invalid) + executeTasks->>GitHub: query BatchChain(...) { step0: ..., step1: ... } + Note right of GitHub: Phase 1 β€” resolve names to IDs
(labels, assignees, milestones) + GitHub-->>executeTasks: Phase 1 results (IDs resolved) + executeTasks->>GitHub: mutation BatchComposite(...) { step0: ..., step1: ... } + Note right of GitHub: Phase 2 β€” execute all mutations + GitHub-->>executeTasks: Phase 2 results + executeTasks-->>Caller: ChainResultEnvelope { status, results[], meta } +``` + +**Phase 1 β€” Batch resolution query (≀1 HTTP call):** + +Some capabilities require human-readable names (label names, usernames, milestone titles) to +be resolved to GitHub node IDs before the mutation can run. For each step that declares a +`graphql.resolution` block in its operation card, `executeTasks()` collects the lookup queries +and batches them into a single GraphQL query. + +**Phase 2 β€” Batch mutation (≀1 HTTP call):** + +Using the resolved IDs from Phase 1, all mutations are combined into a single GraphQL +mutation document and sent in one request. + +**If Phase 1 fails** (network error, auth failure, etc.), the entire chain is marked +`"failed"` and Phase 2 is not attempted. + +**If Phase 2 fails** (the entire mutation batch is rejected), all steps in the chain are +marked as failed. + +## Which Capabilities Support Chaining + +Any capability with a `graphql` route can be chained. Capabilities that also declare a +`graphql.resolution` block benefit from automatic nameβ†’ID resolution in Phase 1: + +| Capability | Phase 1 (resolution) | Phase 2 (mutation) | +|------------|---------------------|-------------------| +| `issue.labels.set` | Yes β€” resolves label names to IDs | Yes | +| `issue.assignees.set` | Yes β€” resolves login names to user IDs | Yes | +| `issue.milestone.set` | No | Yes | +| `issue.close` | No | Yes | +| `pr.thread.resolve` | No | Yes | +| `pr.thread.reply` | No | Yes | + +To check whether a specific capability supports chaining, look for `graphql:` in its +operation card (`packages/core/src/core/registry/cards/.yaml`). If `graphql:` +is defined, the capability can be chained. + +## Error Handling + +### Pre-flight errors + +The chain is rejected before any HTTP call. All steps are marked failed with `retryable: false`. +Fix the input and resubmit. + +### Phase 1 errors + +A network or auth failure during the batch resolution query marks **all steps** as failed with +`retryable: true`. The mutation is not attempted. Safe to retry. + +### Phase 2 errors + +A network or auth failure during the batch mutation marks **all pending steps** as failed with +`retryable: true`. Safe to retry. + +### Step-level errors + +If individual mutations within Phase 2 return GraphQL errors, those steps are marked as +failed while other steps may succeed β€” resulting in `status: "partial"`. + +## Example: Update Assignees and Labels Atomically + +```ts +import { executeTasks, createGithubClientFromToken } from "@ghx-dev/core" + +const token = process.env.GITHUB_TOKEN! 
+const githubClient = createGithubClientFromToken(token) + +// Both mutations execute in a single logical operation (≀2 HTTP calls total) +const chain = await executeTasks( + [ + { + task: "issue.labels.set", + input: { + issueId: "I_kwDOOx...", // GitHub node ID of the issue + labels: ["bug", "priority:high"], + }, + }, + { + task: "issue.assignees.set", + input: { + issueId: "I_kwDOOx...", + assignees: ["octocat", "monalisa"], + }, + }, + ], + { githubClient, githubToken: token }, +) + +if (chain.status === "success") { + console.log("Both mutations applied atomically") + chain.results.forEach((step) => { + console.log(` ${step.task}: ok`) + }) +} else if (chain.status === "partial") { + console.warn("Some steps failed:") + chain.results.forEach((step) => { + if (!step.ok) { + console.error(` ${step.task}: [${step.error?.code}] ${step.error?.message}`) + } + }) +} else { + console.error("Chain failed:", chain.results[0]?.error?.message) +} +``` + +--- + +See [Understanding the Result Envelope](result-envelope.md) for the `ChainResultEnvelope` +type reference, and [Operation Cards](../architecture/operation-cards.md) for the +`graphql.resolution` block schema. diff --git a/docs/guides/cli-usage.md b/docs/guides/cli-usage.md index 5a6ca35e..72b314d5 100644 --- a/docs/guides/cli-usage.md +++ b/docs/guides/cli-usage.md @@ -32,7 +32,7 @@ npx @ghx-dev/core ghx capabilities list ``` -Returns a JSON array of all 69 capabilities with descriptions: +Returns a JSON array of all 66 capabilities with descriptions: ```json [ @@ -175,9 +175,50 @@ if ! echo "$RESULT" | jq '.ok' | grep -q true; then fi ``` -### Batch operations +### Atomic chains + +For mutations that must share a single HTTP round-trip, use `ghx chain`. All steps are +validated before any HTTP call is made, and the entire chain executes in at most 2 network +round-trips regardless of chain length. + +**Inline JSON:** + +```bash +ghx chain --steps '[ + {"task":"issue.labels.set","input":{"issueId":"I_kwDOOx...","labels":["bug"]}}, + {"task":"issue.assignees.set","input":{"issueId":"I_kwDOOx...","assignees":["octocat"]}} +]' +``` + +**Stdin variant:** + +```bash +echo '[ + {"task":"issue.labels.set","input":{"issueId":"I_kwDOOx...","labels":["bug"]}}, + {"task":"issue.assignees.set","input":{"issueId":"I_kwDOOx...","assignees":["octocat"]}} +]' | ghx chain --steps - +``` + +Output: + +```json +{ + "status": "success", + "results": [ + {"task": "issue.labels.set", "ok": true, "data": {"id": "I_kwDOOx...", "labels": ["bug"]}}, + {"task": "issue.assignees.set", "ok": true, "data": {"id": "I_kwDOOx...", "assignees": ["octocat"]}} + ], + "meta": {"route_used": "graphql", "total": 2, "succeeded": 2, "failed": 0} +} +``` + +Exit code is `0` when `status` is `"success"` or `"partial"`, `1` when `"failed"`. + +> **Note:** For independent read-only operations (queries), the shell `for` loop or +> `Promise.all()` pattern is still appropriate β€” atomicity is only needed for mutations. ```bash +# Parallel read-only queries (no atomicity needed) for owner_repo in "aryeko/ghx" "owner2/repo2"; do owner=$(echo $owner_repo | cut -d/ -f1) repo=$(echo $owner_repo | cut -d/ -f2) diff --git a/docs/guides/library-api.md b/docs/guides/library-api.md index fd44f5ff..d26e47e8 100644 --- a/docs/guides/library-api.md +++ b/docs/guides/library-api.md @@ -16,6 +16,7 @@ npm install @ghx-dev/core import { createGithubClientFromToken, executeTask, + executeTasks, } from "@ghx-dev/core" const token = process.env.GITHUB_TOKEN! 
@@ -186,7 +187,40 @@ See [Custom GraphQL Transport](custom-graphql-transport.md) for more examples. ## Common Patterns -### Execute Multiple Tasks +### Atomic chain (mutations) + +For mutations that must share a single HTTP round-trip β€” e.g., updating labels and assignees +on the same issue β€” use `executeTasks()`. All steps are pre-flight validated before any HTTP +call is made, and the chain executes in at most 2 network round-trips. + +> **For mutations that must share a single HTTP round-trip, use `executeTasks`.** + +```ts +import { executeTasks } from "@ghx-dev/core" + +const chain = await executeTasks( + [ + { task: "issue.labels.set", input: { issueId: "I_kwDOOx...", labels: ["bug"] } }, + { task: "issue.assignees.set", input: { issueId: "I_kwDOOx...", assignees: ["octocat"] } }, + ], + { githubClient, githubToken: token }, +) + +if (chain.status === "success") { + console.log(`All ${chain.meta.succeeded} steps succeeded`) +} else if (chain.status === "partial") { + console.log(`${chain.meta.succeeded}/${chain.meta.total} steps succeeded`) + chain.results.filter((r) => !r.ok).forEach((r) => { + console.error(`Step ${r.task} failed: ${r.error?.message}`) + }) +} else { + console.error("Chain failed:", chain.results[0]?.error?.message) +} +``` + +### Parallel queries + +For independent read-only operations (queries) that don't need atomicity, use `Promise.all`: ```ts const tasks = [ @@ -295,7 +329,8 @@ if (!result.ok) { ### Root Exports (`@ghx-dev/core`) -- `executeTask(request, deps)` β€” Execute a capability +- `executeTask(request, deps)` β€” Execute a single capability +- `executeTasks(requests, deps)` β€” Execute a chain of capabilities atomically (≀2 HTTP round-trips) - `createGithubClientFromToken(token)` β€” Create a client from a token - `createGithubClient(transport)` β€” Create a client from a custom transport - `listOperationCards()` β€” Get all capability cards @@ -305,7 +340,10 @@ if (!result.ok) { **Types:** - `TaskRequest` β€” Capability request shape -- `ResultEnvelope` β€” Response shape +- `ResultEnvelope` β€” Response shape for a single capability +- `ChainResultEnvelope` β€” Response shape for `executeTasks()` +- `ChainStepResult` β€” Per-step result in a chain +- `ChainStatus` β€” `"success"` | `"partial"` | `"failed"` - `ResultError` β€” Error shape - `ResultMeta` β€” Metadata shape - `RouteSource` β€” Route type: "cli" | "graphql" | "rest" diff --git a/docs/guides/result-envelope.md b/docs/guides/result-envelope.md index b9feda36..e477b134 100644 --- a/docs/guides/result-envelope.md +++ b/docs/guides/result-envelope.md @@ -380,6 +380,99 @@ This means agents can: --- +## Chain Result Envelope + +When using `executeTasks()` (library) or `ghx chain` (CLI), the response is a +`ChainResultEnvelope` rather than a `ResultEnvelope`. 
+ +### Type + +```ts +type ChainStatus = "success" | "partial" | "failed" + +interface ChainStepResult { + task: string // capability_id of the step + ok: boolean + data?: unknown // step output when ok is true + error?: { + code: string + message: string + retryable: boolean + } +} + +interface ChainResultEnvelope { + status: ChainStatus + results: ChainStepResult[] + meta: { + route_used: "graphql" // chains always use GraphQL + total: number + succeeded: number + failed: number + } +} +``` + +### `ChainStatus` Semantics + +| Value | Meaning | +|-------|---------| +| `"success"` | All steps completed successfully | +| `"partial"` | Some steps succeeded, some failed | +| `"failed"` | No steps succeeded (pre-flight rejection or phase-level error) | + +### Reading a Chain Result + +```ts +const chain = await executeTasks(steps, { githubClient, githubToken }) + +switch (chain.status) { + case "success": + console.log(`All ${chain.meta.succeeded} steps succeeded`) + break + case "partial": + console.log(`${chain.meta.succeeded}/${chain.meta.total} steps succeeded`) + chain.results.filter((r) => !r.ok).forEach((r) => { + console.error(` ${r.task}: [${r.error?.code}] ${r.error?.message}`) + }) + break + case "failed": + console.error("Chain failed:", chain.results[0]?.error?.message) + break +} +``` + +### Example Output + +```json +{ + "status": "success", + "results": [ + { + "task": "issue.labels.set", + "ok": true, + "data": {"id": "I_kwDOOx...", "labels": ["bug"]} + }, + { + "task": "issue.assignees.set", + "ok": true, + "data": {"id": "I_kwDOOx...", "assignees": ["octocat"]} + } + ], + "meta": { + "route_used": "graphql", + "total": 2, + "succeeded": 2, + "failed": 0 + } +} +``` + +See [Chaining Capabilities](chaining-capabilities.md) for the full two-phase execution model, +error handling, and supported capabilities. + +--- + See [Error Handling & Codes](error-handling.md) for detailed error handling strategies, and [How Routing Works](routing-explained.md) for understanding why certain routes are chosen. diff --git a/docs/plans/2026-02-20-atomic-capability-chaining-design.md b/docs/plans/2026-02-20-atomic-capability-chaining-design.md new file mode 100644 index 00000000..df96ed28 --- /dev/null +++ b/docs/plans/2026-02-20-atomic-capability-chaining-design.md @@ -0,0 +1,294 @@ +# Atomic Capability Chaining Design + +**Date:** 2026-02-20 +**Branch:** `feat/atomic-chaining` +**Status:** Approved + +## Problem + +The current execution model supports one capability per tool call. Agents that need to perform multiple related GitHub mutations (e.g., resolve a PR thread and post a comment) must make separate tool calls, each incurring a round-trip. Composite capability cards (`pr.threads.composite`, `issue.triage.composite`, `issue.update.composite`) were introduced as a workaround but require pre-authoring a YAML card for every fixed combination β€” a maintenance burden that doesn't scale. + +## Goal + +Allow callers to specify an arbitrary list of `[capabilityId, input]` pairs in a single tool call. Steps execute in a two-phase batch β€” resolution queries batched together (≀ 1 HTTP call), then mutations batched together (≀ 1 HTTP call) β€” reducing agent round-trips to at most 2 HTTP calls for any chain, regardless of length. + +## Decisions + +| Question | Decision | +|---|---| +| Dynamic or card-based? 
| Dynamic runtime API β€” no card needed at call site | +| Input data flow | All inputs specified upfront; no inter-step data flow | +| Error model | Pre-flight: whole chain rejected on any validation failure. Runtime: partial results β€” per-step ok/error | +| Routes supported | GraphQL only β€” CLI doesn't support chaining | +| Composite cards | Deleted β€” never published (0.1.2 is last release) | +| API surface | `executeTasks` (primary) + `executeTask` as 1-item wrapper | +| Resolution config | Declared in YAML operation cards via `graphql.resolution` β€” no separate TypeScript registry | +| Execution model | Two-phase batch: Phase 1 = batch resolution queries (≀ 1 call), Phase 2 = batch mutations (≀ 1 call) | +| Single-step behaviour | 1-item `executeTasks` β†’ falls through to existing routing engine; existing handlers unchanged | +| CLI interface | `ghx chain --steps '' \| --steps -` | + +## Card-Defined Resolution + +Multi-step capabilities (those that currently do an internal ID lookup before their mutation) declare their resolution requirements in `card.graphql.resolution`. This schema is added to existing cards; capabilities that don't need resolution have no `resolution` field. + +### Resolution schema (YAML) + +```yaml +graphql: + operationName: IssueLabelsUpdate + documentPath: src/gql/operations/issue-labels-update.graphql + resolution: + lookup: + operationName: IssueLabelsLookup + documentPath: src/gql/operations/issue-labels-lookup.graphql + vars: + issueId: issueId # lookupVar: inputField (same-name mapping) + inject: + - target: labelIds # mutation variable to populate + source: map_array + from_input: labels # input field β€” array of human-readable names + nodes_path: node.repository.labels.nodes # dot-path into lookup result + match_field: name # field in each node matched against input value + extract_field: id # field in each node extracted as ID +``` + +### Two inject sources + +| source | description | required fields | +|---|---|---| +| `scalar` | Single value extracted from lookup result | `path` (dot-notation into result) | +| `map_array` | Array of names mapped to IDs via a lookup table | `from_input`, `nodes_path`, `match_field`, `extract_field` | + +### Cards requiring resolution + +| Card | Lookup operation | Inject source | Target var | +|---|---|---|---| +| `issue.labels.update` | `IssueLabelsLookup(issueId)` | `map_array` | `labelIds` | +| `issue.labels.add` | `IssueLabelsLookup(issueId)` | `map_array` | `labelIds` | +| `issue.assignees.update` | `IssueAssigneesLookup(issueId)` | `map_array` | `assigneeIds` | +| `issue.milestone.set` | `IssueMilestoneLookup(issueId, milestoneNumber)` | `scalar` | `milestoneId` | +| `issue.parent.remove` | `IssueParentLookup(issueId)` | `scalar` | `parentIssueId` | +| `issue.create` | `IssueCreateRepositoryId(owner, name)` | `scalar` | `repositoryId` | + +### Variable pass-through + +Mutation variables not covered by `inject` entries are populated by matching the input field with the same name (e.g., mutation var `issueId` ← `input.issueId`). Input fields that don't correspond to a mutation variable are silently ignored (they may have been consumed by `resolution.lookup.vars`). 
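+
+As an illustrative sketch (hypothetical name; the shipped helpers are `applyInject()` and
+`buildMutationVars()` in `gql/resolve.ts`), the pass-through rule combined with injected
+values looks like:
+
+```ts
+// Illustrative only: start from input fields that share a name with a mutation
+// variable, then overlay values resolved by the inject specs in Phase 1.
+function buildVariablesForStep(
+  mutationVarNames: string[],
+  input: Record<string, unknown>,
+  injected: Record<string, unknown>, // e.g. { labelIds: [...] } resolved in Phase 1
+): Record<string, unknown> {
+  const vars: Record<string, unknown> = {}
+  for (const name of mutationVarNames) {
+    if (name in input) {
+      vars[name] = input[name] // same-name pass-through
+    }
+  }
+  // Inject targets win over pass-through; unmatched input fields are ignored
+  return { ...vars, ...injected }
+}
+```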
+ +## API Contract + +### New types + +```ts +type ChainStatus = "success" | "partial" | "failed" + +interface ChainStepResult { + task: string + ok: boolean + data?: unknown + error?: ResultError +} + +interface ChainResultEnvelope { + status: ChainStatus // success = all ok, partial = some ok, failed = none ok + results: ChainStepResult[] + meta: { + route_used: "graphql" + total: number + succeeded: number + failed: number + } +} +``` + +### Resolution types (TypeScript, internal) + +```ts +interface ScalarInject { + target: string + source: "scalar" + path: string // dot-notation into lookup result +} + +interface MapArrayInject { + target: string + source: "map_array" + from_input: string + nodes_path: string + match_field: string + extract_field: string +} + +type InjectSpec = ScalarInject | MapArrayInject + +interface ResolutionConfig { + lookup: { + operationName: string + documentPath: string + vars: Record // lookupVar: inputField + } + inject: InjectSpec[] +} +``` + +### `executeTasks` β€” new primary function + +```ts +executeTasks( + requests: Array<{ task: string; input: Record }>, + deps: ExecutionDeps, +): Promise +``` + +- **1 item:** full routing engine with CLI fallback (identical to current `executeTask` behaviour) +- **2+ items:** GraphQL-only two-phase batch; pre-flight rejects whole chain if any step has no `card.graphql` + +### `executeTask` β€” thin wrapper (unchanged signature) + +```ts +async function executeTask( + request: TaskRequest, + deps: ExecutionDeps, +): Promise { + const chain = await executeTasks([request], deps) + const step = chain.results[0] + return { + ok: step.ok, + data: step.data, + error: step.error, + meta: { capability_id: step.task, route_used: "graphql", ... }, + } +} +``` + +No existing callsites change. + +### New public exports + +```ts +export { executeTasks } from "./core/routing/engine.js" +export type { ChainResultEnvelope, ChainStepResult, ChainStatus } from "./core/contracts/envelope.js" +``` + +### CLI interface + +``` +ghx chain --steps '' +ghx chain --steps - # read from stdin +``` + +`--steps` accepts a JSON array of `{ task, input }` objects β€” identical structure to `executeTasks` requests. Mirrors `ghx run --input` ergonomics. Output is the `ChainResultEnvelope` serialised as JSON. + +## Execution Engine + +### Pre-flight (2+ items only) + +For each `{ task, input }`: +1. `getOperationCard(task)` β€” reject whole chain if not found +2. Validate `input` against `card.input_schema` (AJV) +3. Assert `card.graphql` exists β€” all chainable caps must have a GQL route + +Pre-flight failures reject the entire chain before any HTTP call. + +### Phase 1 β€” Resolution batch query (≀ 1 HTTP call) + +Collect all steps that have `card.graphql.resolution`. For each: +1. Build lookup variables by mapping `resolution.lookup.vars` (`{ lookupVar: inputField }`) against `step.input` +2. Load the lookup document string from `LOOKUP_DOCUMENTS[resolution.lookup.operationName]` (a thin TypeScript registry mapping operation name β†’ pre-imported document string) +3. Accumulate into `buildBatchQuery` call + +Execute the combined batch query once. Parse aliased results back per step. + +Also batch any pure-query steps (capabilities where the GQL operation is a `query`, not a `mutation`) into this same Phase 1 call. Pure-query steps complete in Phase 1. + +**HTTP calls in Phase 1:** 0 (no resolution or query steps) or 1. + +### Phase 2 β€” Mutation batch (≀ 1 HTTP call) + +For each mutation step: +1. 
Start with input fields that match mutation variable names (pass-through) +2. Apply `inject` specs from `card.graphql.resolution` using Phase 1 results: + - `scalar`: extract value at dot-path from aliased lookup result + - `map_array`: build nameβ†’ID map from lookup nodes; resolve `input[from_input]` array to IDs +3. Load mutation document string from `MUTATION_DOCUMENTS[card.graphql.operationName]` +4. Accumulate into `buildBatchMutation` call + +Execute combined batch mutation once. Map aliased results back per step. + +**HTTP calls in Phase 2:** 0 (no mutation steps) or 1. + +### Result assembly + +```ts +const succeeded = results.filter(r => r.ok).length +const status: ChainStatus = + succeeded === results.length ? "success" : + succeeded === 0 ? "failed" : "partial" + +return { + status, + results, + meta: { route_used: "graphql", total: results.length, succeeded, failed: results.length - succeeded }, +} +``` + +### HTTP call summary + +| Chain composition | Phase 1 | Phase 2 | Total | +|---|---|---|---| +| Pure queries only | 1 | 0 | 1 | +| Mutations, no resolution | 0 | 1 | 1 | +| Mutations with resolution | 1 | 1 | 2 | +| Mixed queries + mutations | 1 | 1 | 2 | + +### `buildBatchQuery` β€” new function in `gql/batch.ts` + +Mirrors `buildBatchMutation` but emits `query BatchChain(...)` instead of `mutation BatchComposite(...)`. `parseMutation` is generalised to `parseOperation` to handle both keywords. + +## Migration + +### Deletions + +| What | Location | +|---|---| +| `expandCompositeSteps` | `core/execute/composite.ts` β†’ delete file | +| `executeComposite` | `core/routing/engine.ts` | +| `CompositeConfig`, `CompositeStep` types | `core/registry/types.ts` | +| `composite?` field on `OperationCard` | `core/registry/types.ts` | +| `composite` property in card JSON schema | `core/registry/operation-card-schema.ts` | +| Composite IDs from `preferredOrder` | `core/registry/index.ts` | +| `pr.threads.composite.yaml` | `core/registry/cards/` | +| `issue.triage.composite.yaml` | `core/registry/cards/` | +| `issue.update.composite.yaml` | `core/registry/cards/` | + +### Additions + +| What | Location | +|---|---| +| `graphql.resolution` field on `OperationCard` | `core/registry/types.ts` | +| `resolution` property in card JSON schema | `core/registry/operation-card-schema.ts` | +| `resolution:` blocks in 6 capability cards | `core/registry/cards/*.yaml` | +| `buildBatchQuery`, `parseOperation` | `gql/batch.ts` | +| `LOOKUP_DOCUMENTS`, `MUTATION_DOCUMENTS` registries | `gql/document-registry.ts` (new file) | +| `executeTasks` | `core/routing/engine.ts` | +| `ChainResultEnvelope`, `ChainStepResult`, `ChainStatus` | `core/contracts/envelope.ts` | +| `ghx chain` subcommand | `cli/commands/chain.ts` | +| Dispatch in CLI entry | `cli/index.ts` | +| New exports | `index.ts` | + +### Changeset + +Delete `composite-capabilities-gql-integration.md`. Create new `minor` changeset β€” `executeTasks` is additive; composite removal is not breaking since composites were never published (current released version: `0.1.2`). + +## Validation + +### Runtime (this PR) + +Pre-flight rejects any step without `card.graphql` before issuing any HTTP call: + +``` +"capability 'pr.checks.rerun_failed' has no GraphQL route and cannot be chained" +``` + +### Schema enforcement (follow-up PR) + +39 of ~60 current cards lack a `graphql` config. Adding graphql support to all caps and making `graphql` required in `operation-card-schema.ts` is scoped to a follow-up PR. 
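+
+For orientation, the pre-flight rejection described under Runtime validation above would surface to callers as a `failed` chain envelope roughly like the sketch below. The error codes shown are assumptions (the real values come from the engine's error mapping); only the message for the unroutable step is taken from this document:
+
+```ts
+// Hypothetical two-step chain where the second step is CLI-only. Pre-flight
+// rejects the whole chain before any HTTP call. Error codes are assumed.
+const rejected: ChainResultEnvelope = {
+  status: "failed",
+  results: [
+    {
+      task: "issue.close",
+      ok: false,
+      error: { code: "UNKNOWN", message: "pre-flight failed", retryable: false },
+    },
+    {
+      task: "pr.checks.rerun_failed",
+      ok: false,
+      error: {
+        code: "VALIDATION", // assumed code
+        message: "capability 'pr.checks.rerun_failed' has no GraphQL route and cannot be chained",
+        retryable: false,
+      },
+    },
+  ],
+  meta: { route_used: "graphql", total: 2, succeeded: 0, failed: 2 },
+}
+```
+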
diff --git a/docs/plans/2026-02-20-atomic-capability-chaining.md b/docs/plans/2026-02-20-atomic-capability-chaining.md new file mode 100644 index 00000000..a0aca215 --- /dev/null +++ b/docs/plans/2026-02-20-atomic-capability-chaining.md @@ -0,0 +1,1362 @@ +# Atomic Capability Chaining Implementation Plan + +> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. + +**Goal:** Add `executeTasks` β€” a two-phase batch API letting callers execute multiple capabilities in ≀ 2 GitHub API calls β€” while deleting the composite card system that was never published. + +**Architecture:** Card-defined resolution (`graphql.resolution` in YAML) declares ID-lookup requirements per capability. The batch engine reads these at runtime: Phase 1 batches all resolution queries + pure-query steps into one HTTP call; Phase 2 batches all mutations into one HTTP call. Single-step `executeTask` is a thin wrapper that preserves existing behaviour exactly. + +**Tech Stack:** TypeScript strict ESM, Vitest, AJV (schema validation), `graphql-request` (GQL transport), Biome (formatter). Monorepo: `pnpm` + Nx. All work in `.worktrees/feat-atomic-chaining`. + +--- + +## Context You Need + +- **Worktree:** `.worktrees/feat-atomic-chaining` (branch `feat/atomic-chaining`) +- **Key files:** `packages/core/src/` + - `gql/batch.ts` β€” existing `buildBatchMutation`, `parseMutation` + - `gql/capability-registry.ts` β€” existing handler registry (read it, don't change it in these tasks) + - `gql/operations/*.generated.ts` β€” generated SDK files; export `*Document` strings and `getSdk` + - `core/routing/engine.ts` β€” `executeTask`, `executeComposite` (to be removed) + - `core/contracts/envelope.ts` β€” `ResultEnvelope`, `ResultError`, `ResultMeta` + - `core/registry/types.ts` β€” `OperationCard`, `CompositeConfig`, `CompositeStep` + - `core/registry/operation-card-schema.ts` β€” JSON schema for YAML cards + - `core/registry/index.ts` β€” card loading, `preferredOrder` array + - `core/execute/composite.ts` β€” `expandCompositeSteps` (to be deleted) + - `cli/index.ts` β€” main CLI dispatcher + - `cli/commands/run.ts` β€” reference for `chain.ts` patterns + - `index.ts` β€” public exports +- **Design doc:** `docs/plans/2026-02-20-atomic-capability-chaining-design.md` β€” read this for full type definitions and schemas +- **Test pattern:** existing `test/unit/*.test.ts` files for style; run single test with `pnpm --filter @ghx-dev/core exec vitest run ` +- **Format:** run `pnpm run format` after each task to let Biome auto-fix; then `pnpm run typecheck` to catch type errors +- **Commit:** after each passing task + +--- + +## Task 1: Extend `OperationCard` type and JSON schema for `graphql.resolution` + +**Files:** +- Modify: `packages/core/src/core/registry/types.ts` +- Modify: `packages/core/src/core/registry/operation-card-schema.ts` +- Test: `packages/core/test/unit/operation-card-schema.test.ts` (may not exist β€” create it) + +**Step 1: Write the failing test** + +Create `packages/core/test/unit/operation-card-schema.test.ts`: + +```ts +import Ajv from "ajv" +import { describe, expect, it } from "vitest" +import { operationCardSchema } from "@core/core/registry/operation-card-schema.js" + +const ajv = new Ajv() +const validate = ajv.compile(operationCardSchema) + +describe("operationCardSchema resolution", () => { + it("accepts a card with scalar resolution", () => { + const card = { + capability_id: "issue.milestone.set", + version: "1.0.0", + description: "Set milestone", + 
input_schema: { type: "object" }, + output_schema: { type: "object" }, + routing: { preferred: "graphql", fallbacks: [] }, + graphql: { + operationName: "IssueMilestoneSet", + documentPath: "src/gql/operations/issue-milestone-set.graphql", + resolution: { + lookup: { + operationName: "IssueMilestoneLookup", + documentPath: "src/gql/operations/issue-milestone-lookup.graphql", + vars: { issueId: "issueId", milestoneNumber: "milestoneNumber" }, + }, + inject: [{ target: "milestoneId", source: "scalar", path: "node.repository.milestone.id" }], + }, + }, + } + expect(validate(card)).toBe(true) + }) + + it("accepts a card with map_array resolution", () => { + const card = { + capability_id: "issue.labels.update", + version: "1.0.0", + description: "Update labels", + input_schema: { type: "object" }, + output_schema: { type: "object" }, + routing: { preferred: "graphql", fallbacks: [] }, + graphql: { + operationName: "IssueLabelsUpdate", + documentPath: "src/gql/operations/issue-labels-update.graphql", + resolution: { + lookup: { + operationName: "IssueLabelsLookup", + documentPath: "src/gql/operations/issue-labels-lookup.graphql", + vars: { issueId: "issueId" }, + }, + inject: [ + { + target: "labelIds", + source: "map_array", + from_input: "labels", + nodes_path: "node.repository.labels.nodes", + match_field: "name", + extract_field: "id", + }, + ], + }, + }, + } + expect(validate(card)).toBe(true) + }) + + it("rejects resolution with unknown source", () => { + const card = { + capability_id: "x", + version: "1.0.0", + description: "x", + input_schema: { type: "object" }, + output_schema: { type: "object" }, + routing: { preferred: "graphql", fallbacks: [] }, + graphql: { + operationName: "X", + documentPath: "x.graphql", + resolution: { + lookup: { operationName: "Y", documentPath: "y.graphql", vars: {} }, + inject: [{ target: "t", source: "unknown_source", path: "a.b" }], + }, + }, + } + expect(validate(card)).toBe(false) + }) +}) +``` + +**Step 2: Run test to verify it fails** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/operation-card-schema.test.ts +``` +Expected: FAIL β€” `resolution` property is not in the schema + +**Step 3: Add `ResolutionConfig` types to `types.ts`** + +In `packages/core/src/core/registry/types.ts`, add after the existing interfaces (keep `CompositeConfig` for now β€” it's deleted in Task 9): + +```ts +export interface ScalarInject { + target: string + source: "scalar" + path: string +} + +export interface MapArrayInject { + target: string + source: "map_array" + from_input: string + nodes_path: string + match_field: string + extract_field: string +} + +export type InjectSpec = ScalarInject | MapArrayInject + +export interface ResolutionConfig { + lookup: { + operationName: string + documentPath: string + vars: Record + } + inject: InjectSpec[] +} +``` + +Also extend the `graphql?` field on `OperationCard` to include `resolution?`: + +```ts +graphql?: { + operationName: string + documentPath: string + variables?: Record + limits?: { maxPageSize?: number } + resolution?: ResolutionConfig +} +``` + +**Step 4: Add `resolution` to `operation-card-schema.ts`** + +Inside the `graphql` property object, add `resolution` alongside `operationName`, `documentPath`, etc.: + +```ts +resolution: { + type: "object", + required: ["lookup", "inject"], + properties: { + lookup: { + type: "object", + required: ["operationName", "documentPath", "vars"], + properties: { + operationName: { type: "string", minLength: 1 }, + documentPath: { type: "string", minLength: 1 
}, + vars: { type: "object" }, + }, + additionalProperties: false, + }, + inject: { + type: "array", + minItems: 1, + items: { + oneOf: [ + { + type: "object", + required: ["target", "source", "path"], + properties: { + target: { type: "string", minLength: 1 }, + source: { const: "scalar" }, + path: { type: "string", minLength: 1 }, + }, + additionalProperties: false, + }, + { + type: "object", + required: ["target", "source", "from_input", "nodes_path", "match_field", "extract_field"], + properties: { + target: { type: "string", minLength: 1 }, + source: { const: "map_array" }, + from_input: { type: "string", minLength: 1 }, + nodes_path: { type: "string", minLength: 1 }, + match_field: { type: "string", minLength: 1 }, + extract_field: { type: "string", minLength: 1 }, + }, + additionalProperties: false, + }, + ], + }, + }, + }, + additionalProperties: false, +}, +``` + +Note: also remove `additionalProperties: false` from the parent `graphql` object temporarily β€” it's already there; just insert `resolution` into its `properties`. + +**Step 5: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/operation-card-schema.test.ts +``` +Expected: PASS + +**Step 6: Typecheck** + +```bash +pnpm run typecheck +``` +Expected: PASS (new optional field, backwards-compatible) + +**Step 7: Commit** + +```bash +cd .worktrees/feat-atomic-chaining +git add packages/core/src/core/registry/types.ts packages/core/src/core/registry/operation-card-schema.ts packages/core/test/unit/operation-card-schema.test.ts +git commit -m "feat(core): add graphql.resolution schema and types to OperationCard" +``` + +--- + +## Task 2: Add `resolution:` blocks to the 6 capability YAML cards + +**Files:** +- Modify: `packages/core/src/core/registry/cards/issue.labels.update.yaml` +- Modify: `packages/core/src/core/registry/cards/issue.labels.add.yaml` +- Modify: `packages/core/src/core/registry/cards/issue.assignees.update.yaml` +- Modify: `packages/core/src/core/registry/cards/issue.milestone.set.yaml` +- Modify: `packages/core/src/core/registry/cards/issue.parent.remove.yaml` +- Modify: `packages/core/src/core/registry/cards/issue.create.yaml` +- Test: `packages/core/test/unit/registry.test.ts` (existing β€” check card loading still works) + +**Step 1: Write a failing test** + +In `packages/core/test/unit/registry.test.ts` (or create if absent), add: + +```ts +import { describe, expect, it } from "vitest" +import { getOperationCard } from "@core/core/registry/index.js" + +describe("card resolution blocks", () => { + it("issue.labels.update has resolution config", () => { + const card = getOperationCard("issue.labels.update") + expect(card.graphql?.resolution).toBeDefined() + expect(card.graphql?.resolution?.lookup.operationName).toBe("IssueLabelsLookup") + expect(card.graphql?.resolution?.inject[0].source).toBe("map_array") + }) + + it("issue.milestone.set has scalar resolution", () => { + const card = getOperationCard("issue.milestone.set") + expect(card.graphql?.resolution?.inject[0].source).toBe("scalar") + }) +}) +``` + +**Step 2: Run test to verify it fails** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/registry.test.ts -t "card resolution blocks" +``` +Expected: FAIL β€” cards don't have resolution yet + +**Step 3: Add resolution blocks to all 6 cards** + +**`issue.labels.update.yaml`** β€” append to `graphql:` section: +```yaml + resolution: + lookup: + operationName: IssueLabelsLookup + documentPath: src/gql/operations/issue-labels-lookup.graphql + vars: + issueId: 
issueId + inject: + - target: labelIds + source: map_array + from_input: labels + nodes_path: node.repository.labels.nodes + match_field: name + extract_field: id +``` + +**`issue.labels.add.yaml`** β€” same resolution block as `issue.labels.update.yaml` (same lookup, same inject, since `IssueLabelsAdd` also takes `labelIds`). + +**`issue.assignees.update.yaml`** β€” append to `graphql:` section: +```yaml + resolution: + lookup: + operationName: IssueAssigneesLookup + documentPath: src/gql/operations/issue-assignees-lookup.graphql + vars: + issueId: issueId + inject: + - target: assigneeIds + source: map_array + from_input: assignees + nodes_path: node.repository.assignableUsers.nodes + match_field: login + extract_field: id +``` + +**`issue.milestone.set.yaml`** β€” append to `graphql:` section: +```yaml + resolution: + lookup: + operationName: IssueMilestoneLookup + documentPath: src/gql/operations/issue-milestone-lookup.graphql + vars: + issueId: issueId + milestoneNumber: milestoneNumber + inject: + - target: milestoneId + source: scalar + path: node.repository.milestone.id +``` + +**`issue.parent.remove.yaml`** β€” append to `graphql:` section: +```yaml + resolution: + lookup: + operationName: IssueParentLookup + documentPath: src/gql/operations/issue-parent-lookup.graphql + vars: + issueId: issueId + inject: + - target: parentIssueId + source: scalar + path: node.parent.id +``` + +**`issue.create.yaml`** β€” append to `graphql:` section: +```yaml + resolution: + lookup: + operationName: IssueCreateRepositoryId + documentPath: src/gql/operations/issue-create-repository-id.graphql + vars: + owner: owner + name: name + inject: + - target: repositoryId + source: scalar + path: repository.id +``` + +**Step 4: Run test** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/registry.test.ts -t "card resolution blocks" +``` +Expected: PASS + +**Step 5: Run all core tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run +``` +Expected: All pass (card loading validates against schema; new field passes AJV) + +**Step 6: Commit** + +```bash +git add packages/core/src/core/registry/cards/ packages/core/test/unit/registry.test.ts +git commit -m "feat(core): add graphql.resolution blocks to 6 capability cards" +``` + +--- + +## Task 3: Add `buildBatchQuery` and generalise `parseOperation` in `gql/batch.ts` + +**Files:** +- Modify: `packages/core/src/gql/batch.ts` +- Test: `packages/core/test/unit/batch.test.ts` (existing β€” extend it) + +**Step 1: Write failing tests** + +In `packages/core/test/unit/batch.test.ts`, add: + +```ts +import { buildBatchQuery } from "@core/gql/batch.js" + +describe("buildBatchQuery", () => { + it("wraps single query with alias", () => { + const result = buildBatchQuery([ + { + alias: "step0", + query: `query IssueLabelsLookup($issueId: ID!) { + node(id: $issueId) { + ... on Issue { id } + } +}`, + variables: { issueId: "I_123" }, + }, + ]) + expect(result.document).toContain("query BatchChain") + expect(result.document).toContain("step0:") + expect(result.document).toContain("$step0_issueId: ID!") + expect(result.variables).toEqual({ step0_issueId: "I_123" }) + }) + + it("merges two queries", () => { + const q = `query Foo($id: ID!) 
{ node(id: $id) { id } }` + const result = buildBatchQuery([ + { alias: "a", query: q, variables: { id: "1" } }, + { alias: "b", query: q, variables: { id: "2" } }, + ]) + expect(result.document).toContain("$a_id: ID!") + expect(result.document).toContain("$b_id: ID!") + expect(result.variables).toEqual({ a_id: "1", b_id: "2" }) + }) + + it("throws on empty array", () => { + expect(() => buildBatchQuery([])).toThrow() + }) +}) +``` + +**Step 2: Run test to verify it fails** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/batch.test.ts -t "buildBatchQuery" +``` +Expected: FAIL β€” `buildBatchQuery` not exported + +**Step 3: Generalise `parseMutation` β†’ `parseOperation` and add `buildBatchQuery`** + +In `packages/core/src/gql/batch.ts`: + +1. Rename `parseMutation` to `parseOperation` (keep it private). Change the regex to match `query|mutation` keyword instead of just `mutation`: + +```ts +function parseOperation(document: string): ParsedOperation { + const headerMatch = document.match(/(query|mutation)\s+\w+\s*\(([^)]*)\)/) + // ... rest identical, but use headerMatch[2] for var string (group 2 now) +} +``` + +2. Update `buildBatchMutation` to call `parseOperation` instead of `parseMutation`. + +3. Add `buildBatchQuery`: + +```ts +export type BatchQueryInput = { + alias: string + query: string + variables: GraphqlVariables +} + +export type BatchQueryResult = { + document: string + variables: GraphqlVariables +} + +export function buildBatchQuery(operations: BatchQueryInput[]): BatchQueryResult { + if (operations.length === 0) { + throw new Error("buildBatchQuery requires at least one operation") + } + + const allVarDeclarations: string[] = [] + const allSelections: string[] = [] + const mergedVariables: GraphqlVariables = {} + + for (const op of operations) { + const parsed = parseOperation(op.query) + + for (const varDecl of parsed.variableDeclarations) { + allVarDeclarations.push(`$${op.alias}_${varDecl.name}: ${varDecl.type}`) + } + + let body = parsed.body + const sortedDeclarations = [...parsed.variableDeclarations].sort( + (a, b) => b.name.length - a.name.length, + ) + for (const varDecl of sortedDeclarations) { + body = body.replaceAll( + new RegExp(`\\$${escapeRegex(varDecl.name)}\\b`, "g"), + `$${op.alias}_${varDecl.name}`, + ) + } + + const aliasedBody = body.replace(/^\s*(\w+)/, `${op.alias}: $1`) + allSelections.push(aliasedBody) + + for (const [key, value] of Object.entries(op.variables)) { + mergedVariables[`${op.alias}_${key}`] = value + } + } + + const varList = allVarDeclarations.length > 0 ? 
`(${allVarDeclarations.join(", ")})` : "" + const document = `query BatchChain${varList} {\n${allSelections.join("\n")}\n}` + + return { document, variables: mergedVariables } +} +``` + +**Step 4: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/batch.test.ts +``` +Expected: All pass (new tests + existing mutation tests still pass) + +**Step 5: Typecheck and commit** + +```bash +pnpm run typecheck +git add packages/core/src/gql/batch.ts packages/core/test/unit/batch.test.ts +git commit -m "feat(core): add buildBatchQuery and generalise parseOperation in batch.ts" +``` + +--- + +## Task 4: Create `gql/document-registry.ts` (lookup + mutation document strings) + +**Files:** +- Create: `packages/core/src/gql/document-registry.ts` +- Test: `packages/core/test/unit/document-registry.test.ts` + +**Step 1: Write failing test** + +Create `packages/core/test/unit/document-registry.test.ts`: + +```ts +import { describe, expect, it } from "vitest" +import { getLookupDocument, getMutationDocument } from "@core/gql/document-registry.js" + +describe("document-registry", () => { + it("getLookupDocument returns document for IssueLabelsLookup", () => { + const doc = getLookupDocument("IssueLabelsLookup") + expect(doc).toContain("query IssueLabelsLookup") + }) + + it("getLookupDocument returns document for IssueMilestoneLookup", () => { + const doc = getLookupDocument("IssueMilestoneLookup") + expect(doc).toContain("milestoneNumber") + }) + + it("getMutationDocument returns document for IssueLabelsUpdate", () => { + const doc = getMutationDocument("IssueLabelsUpdate") + expect(doc).toContain("mutation IssueLabelsUpdate") + }) + + it("getLookupDocument throws on unknown operation", () => { + expect(() => getLookupDocument("UnknownOp")).toThrow() + }) + + it("getMutationDocument throws on unknown operation", () => { + expect(() => getMutationDocument("UnknownOp")).toThrow() + }) +}) +``` + +**Step 2: Run test to verify it fails** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/document-registry.test.ts +``` + +**Step 3: Create `gql/document-registry.ts`** + +Import the `*Document` constants from generated files and build two registries: + +```ts +import { IssueAssigneesLookupDocument } from "./operations/issue-assignees-lookup.generated.js" +import { IssueCreateRepositoryIdDocument } from "./operations/issue-create-repository-id.generated.js" +import { IssueLabelsLookupDocument } from "./operations/issue-labels-lookup.generated.js" +import { IssueMilestoneLookupDocument } from "./operations/issue-milestone-lookup.generated.js" +import { IssueParentLookupDocument } from "./operations/issue-parent-lookup.generated.js" + +// Resolution lookup queries (Phase 1) +const LOOKUP_DOCUMENTS: Record = { + IssueLabelsLookup: IssueLabelsLookupDocument, + IssueAssigneesLookup: IssueAssigneesLookupDocument, + IssueMilestoneLookup: IssueMilestoneLookupDocument, + IssueParentLookup: IssueParentLookupDocument, + IssueCreateRepositoryId: IssueCreateRepositoryIdDocument, +} +``` + +For mutations, import from the relevant generated files. You need to check which generated files export a `*Document` constant β€” look at files like `issue-labels-update.generated.ts`, `issue-milestone-set.generated.ts`, etc. and import the document strings. Add entries for all capabilities that have a `graphql` config (at minimum the 6 that have resolution, plus others used in `capability-registry.ts`). 
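+
+As a starting point, the mutation registry might look roughly like the sketch below. The module paths and `*Document` export names are assumptions inferred from the generated-file naming convention; verify each against the actual exports in `gql/operations/` before adding it:
+
+```ts
+// Sketch only: confirm every generated module really exports this Document
+// constant, and extend the map to every card that has a `graphql` config.
+import { IssueLabelsUpdateDocument } from "./operations/issue-labels-update.generated.js"
+import { IssueAssigneesUpdateDocument } from "./operations/issue-assignees-update.generated.js"
+import { IssueMilestoneSetDocument } from "./operations/issue-milestone-set.generated.js"
+
+// Mutation documents (Phase 2)
+const MUTATION_DOCUMENTS: Record<string, string> = {
+  IssueLabelsUpdate: IssueLabelsUpdateDocument,
+  IssueAssigneesUpdate: IssueAssigneesUpdateDocument,
+  IssueMilestoneSet: IssueMilestoneSetDocument,
+  // ...remaining mutations referenced by capability cards
+}
+```
+
+The accessor functions below then read from these two registries:
+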
+ +```ts +export function getLookupDocument(operationName: string): string { + const doc = LOOKUP_DOCUMENTS[operationName] + if (!doc) { + throw new Error(`No lookup document registered for operation: ${operationName}`) + } + return doc +} + +export function getMutationDocument(operationName: string): string { + const doc = MUTATION_DOCUMENTS[operationName] + if (!doc) { + throw new Error(`No mutation document registered for operation: ${operationName}`) + } + return doc +} +``` + +**Step 4: Run test** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/document-registry.test.ts +``` +Expected: PASS + +**Step 5: Commit** + +```bash +git add packages/core/src/gql/document-registry.ts packages/core/test/unit/document-registry.test.ts +git commit -m "feat(core): add document-registry for lookup and mutation GQL documents" +``` + +--- + +## Task 5: Add chain types to `core/contracts/envelope.ts` + +**Files:** +- Modify: `packages/core/src/core/contracts/envelope.ts` +- Test: `packages/core/test/unit/envelope.test.ts` (may not exist β€” create) + +**Step 1: Write failing test** + +```ts +import { describe, expect, it } from "vitest" +import type { ChainResultEnvelope, ChainStatus, ChainStepResult } from "@core/core/contracts/envelope.js" + +describe("ChainResultEnvelope types", () => { + it("status type accepts valid values", () => { + const s1: ChainStatus = "success" + const s2: ChainStatus = "partial" + const s3: ChainStatus = "failed" + expect([s1, s2, s3]).toHaveLength(3) + }) + + it("ChainStepResult can be ok", () => { + const r: ChainStepResult = { task: "issue.close", ok: true, data: { id: "x" } } + expect(r.ok).toBe(true) + }) + + it("ChainResultEnvelope has expected shape", () => { + const env: ChainResultEnvelope = { + status: "success", + results: [{ task: "issue.close", ok: true }], + meta: { route_used: "graphql", total: 1, succeeded: 1, failed: 0 }, + } + expect(env.meta.total).toBe(1) + }) +}) +``` + +**Step 2: Run test to verify it fails** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/envelope.test.ts -t "ChainResultEnvelope" +``` + +**Step 3: Add to `envelope.ts`** + +```ts +export type ChainStatus = "success" | "partial" | "failed" + +export interface ChainStepResult { + task: string + ok: boolean + data?: unknown + error?: ResultError +} + +export interface ChainResultEnvelope { + status: ChainStatus + results: ChainStepResult[] + meta: { + route_used: "graphql" + total: number + succeeded: number + failed: number + } +} +``` + +**Step 4: Run test, typecheck, commit** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/envelope.test.ts +pnpm run typecheck +git add packages/core/src/core/contracts/envelope.ts packages/core/test/unit/envelope.test.ts +git commit -m "feat(core): add ChainResultEnvelope, ChainStepResult, ChainStatus types" +``` + +--- + +## Task 6: Create the resolution engine helper (`gql/resolve.ts`) + +This module contains the logic to: +- Extract a resolved value from a lookup result using an `InjectSpec` +- Build final mutation variables for a step by combining pass-through input vars with resolved vars + +**Files:** +- Create: `packages/core/src/gql/resolve.ts` +- Test: `packages/core/test/unit/resolve.test.ts` + +**Step 1: Write failing tests** + +Create `packages/core/test/unit/resolve.test.ts`: + +```ts +import { describe, expect, it } from "vitest" +import { applyInject, buildMutationVars } from "@core/gql/resolve.js" +import type { InjectSpec } from "@core/core/registry/types.js" + 
+describe("applyInject", () => { + it("scalar: extracts value at dot-path", () => { + const lookupResult = { node: { repository: { milestone: { id: "M_456" } } } } + const spec: InjectSpec = { target: "milestoneId", source: "scalar", path: "node.repository.milestone.id" } + expect(applyInject(spec, lookupResult, {})).toEqual({ milestoneId: "M_456" }) + }) + + it("scalar: throws when path not found", () => { + const spec: InjectSpec = { target: "milestoneId", source: "scalar", path: "node.repository.milestone.id" } + expect(() => applyInject(spec, {}, {})).toThrow("milestoneId") + }) + + it("map_array: maps names to ids", () => { + const lookupResult = { + node: { repository: { labels: { nodes: [{ id: "L_1", name: "bug" }, { id: "L_2", name: "feat" }] } } }, + } + const spec: InjectSpec = { + target: "labelIds", + source: "map_array", + from_input: "labels", + nodes_path: "node.repository.labels.nodes", + match_field: "name", + extract_field: "id", + } + const input = { labels: ["feat", "bug"] } + expect(applyInject(spec, lookupResult, input)).toEqual({ labelIds: ["L_2", "L_1"] }) + }) + + it("map_array: throws when name not found", () => { + const lookupResult = { node: { repository: { labels: { nodes: [] } } } } + const spec: InjectSpec = { + target: "labelIds", + source: "map_array", + from_input: "labels", + nodes_path: "node.repository.labels.nodes", + match_field: "name", + extract_field: "id", + } + const input = { labels: ["nonexistent"] } + expect(() => applyInject(spec, lookupResult, input)).toThrow("nonexistent") + }) +}) + +describe("buildMutationVars", () => { + it("passes through vars matching mutation variable names", () => { + const mutDoc = `mutation IssueClose($issueId: ID!) { closeIssue(input: {issueId: $issueId}) { issue { id } } }` + const input = { issueId: "I_123", extraField: "ignored" } + const resolved: Record = {} + const vars = buildMutationVars(mutDoc, input, resolved) + expect(vars).toEqual({ issueId: "I_123" }) + }) + + it("resolved vars override pass-through", () => { + const mutDoc = `mutation IssueLabelsUpdate($issueId: ID!, $labelIds: [ID!]!) 
{ updateIssue(input: {id: $issueId, labelIds: $labelIds}) { issue { id } } }` + const input = { issueId: "I_123", labels: ["bug"] } + const resolved = { labelIds: ["L_1"] } + const vars = buildMutationVars(mutDoc, input, resolved) + expect(vars).toEqual({ issueId: "I_123", labelIds: ["L_1"] }) + }) +}) +``` + +**Step 2: Run to verify failure** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/resolve.test.ts +``` + +**Step 3: Implement `gql/resolve.ts`** + +```ts +import type { InjectSpec } from "@core/core/registry/types.js" +import type { GraphqlVariables } from "./transport.js" + +function getAtPath(obj: unknown, path: string): unknown { + const parts = path.split(".") + let current = obj + for (const part of parts) { + if (current === null || typeof current !== "object") return undefined + current = (current as Record)[part] + } + return current +} + +export function applyInject( + spec: InjectSpec, + lookupResult: unknown, + input: Record, +): Record { + if (spec.source === "scalar") { + const value = getAtPath(lookupResult, spec.path) + if (value === undefined || value === null) { + throw new Error(`Resolution failed for '${spec.target}': no value at path '${spec.path}'`) + } + return { [spec.target]: value } + } + + // map_array + const nodes = getAtPath(lookupResult, spec.nodes_path) + if (!Array.isArray(nodes)) { + throw new Error(`Resolution failed for '${spec.target}': nodes at '${spec.nodes_path}' is not an array`) + } + + const idByName = new Map() + for (const node of nodes) { + if (node && typeof node === "object") { + const n = node as Record + const key = n[spec.match_field] + const val = n[spec.extract_field] + if (typeof key === "string") { + idByName.set(key.toLowerCase(), val) + } + } + } + + const inputValues = input[spec.from_input] + if (!Array.isArray(inputValues)) { + throw new Error(`Resolution failed for '${spec.target}': input field '${spec.from_input}' is not an array`) + } + + const resolved = inputValues.map((name: unknown) => { + if (typeof name !== "string") throw new Error(`Resolution: expected string in '${spec.from_input}'`) + const id = idByName.get(name.toLowerCase()) + if (id === undefined) throw new Error(`Resolution: '${name}' not found in lookup result`) + return id + }) + + return { [spec.target]: resolved } +} + +export function buildMutationVars( + mutationDoc: string, + input: Record, + resolved: Record, +): GraphqlVariables { + // Extract variable names declared in the mutation header + const headerMatch = mutationDoc.match(/(?:query|mutation)\s+\w+\s*\(([^)]*)\)/) + const mutVarNames = new Set() + if (headerMatch?.[1]) { + for (const match of headerMatch[1].matchAll(/\$(\w+)\s*:/g)) { + if (match[1]) mutVarNames.add(match[1]) + } + } + + const vars: GraphqlVariables = {} + // Pass through input fields whose names match mutation variables + for (const varName of mutVarNames) { + if (varName in input) { + vars[varName] = input[varName] as GraphqlVariables[string] + } + } + // Apply resolved values (may override pass-through) + for (const [key, value] of Object.entries(resolved)) { + if (mutVarNames.has(key)) { + vars[key] = value as GraphqlVariables[string] + } + } + return vars +} +``` + +**Step 4: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/resolve.test.ts +``` +Expected: PASS + +**Step 5: Typecheck and commit** + +```bash +pnpm run typecheck +git add packages/core/src/gql/resolve.ts packages/core/test/unit/resolve.test.ts +git commit -m "feat(core): add resolution engine helpers 
(applyInject, buildMutationVars)" +``` + +--- + +## Task 7: Implement `executeTasks` in `core/routing/engine.ts` + +**Files:** +- Modify: `packages/core/src/core/routing/engine.ts` +- Test: `packages/core/test/unit/engine.test.ts` (existing β€” extend) + +**Step 1: Read `engine.ts` before modifying** + +Read `packages/core/src/core/routing/engine.ts` in full. Understand how `executeTask`, `executeComposite`, and `ExecutionDeps` are structured. + +**Step 2: Write failing tests** + +In `packages/core/test/unit/engine.test.ts`, add a section for `executeTasks`. You'll need to mock: +- `getOperationCard` (from `@core/core/registry/index.js`) +- `getLookupDocument`, `getMutationDocument` (from `@core/gql/document-registry.js`) +- The HTTP transport (`githubClient`) + +Use `vi.mock` to mock those modules. Test key behaviours: + +```ts +describe("executeTasks", () => { + it("rejects whole chain pre-flight if card not found", async () => { + // mock getOperationCard to throw + // expect executeTasks to return status: "failed" with pre-flight error + }) + + it("rejects whole chain pre-flight if card has no graphql config", async () => { + // mock getOperationCard to return card without graphql + // expect status: "failed" + }) + + it("1-item chain delegates to existing executeTask path (full routing)", async () => { + // Verify that executeTasks([singleItem]) calls the same path as executeTask + // Check the returned ChainResultEnvelope has 1 result + }) + + it("2-item pure-mutation chain returns status:success after batch mutation", async () => { + // Mock two cards (no resolution), mock HTTP client to return batch results + // expect 1 HTTP call (Phase 2 only), status: "success" + }) + + it("status is partial when one step fails", async () => { + // Mock one step success, one step failure from HTTP client + // expect status: "partial", results[0].ok=true, results[1].ok=false + }) +}) +``` + +Write the tests first even if they are somewhat high-level β€” you can refine them as you implement. + +**Step 3: Run to verify failures** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts -t "executeTasks" +``` + +**Step 4: Implement `executeTasks` in `engine.ts`** + +Add after the existing `executeTask` function: + +```ts +import { buildBatchMutation, buildBatchQuery } from "../../gql/batch.js" +import { getLookupDocument, getMutationDocument } from "../../gql/document-registry.js" +import { applyInject, buildMutationVars } from "../../gql/resolve.js" +import type { ChainResultEnvelope, ChainStepResult, ChainStatus } from "../contracts/envelope.js" + +export async function executeTasks( + requests: Array<{ task: string; input: Record }>, + deps: ExecutionDeps, +): Promise { + // 1-item: delegate to existing routing engine + if (requests.length === 1) { + const req = requests[0]! + const result = await executeTask({ task: req.task, input: req.input }, deps) + const step: ChainStepResult = result.ok + ? { task: req.task, ok: true, data: result.data } + : { task: req.task, ok: false, error: result.error } + return { + status: result.ok ? "success" : "failed", + results: [step], + meta: { route_used: "graphql", total: 1, succeeded: result.ok ? 1 : 0, failed: result.ok ? 
0 : 1 }, + } + } + + // Pre-flight: validate all steps + const preflightErrors: ChainStepResult[] = [] + const cards: Array> = [] + for (const req of requests) { + try { + const card = getOperationCard(req.task) + validateInput(req.input, card.input_schema) + if (!card.graphql) throw new Error(`capability '${req.task}' has no GraphQL route and cannot be chained`) + cards.push(card) + } catch (err) { + preflightErrors.push({ task: req.task, ok: false, error: mapError(err) }) + } + } + + if (preflightErrors.length > 0) { + return { + status: "failed", + results: requests.map( + (req) => preflightErrors.find((e) => e.task === req.task) ?? { task: req.task, ok: false, error: { code: "UNKNOWN", message: "pre-flight failed" } }, + ), + meta: { route_used: "graphql", total: requests.length, succeeded: 0, failed: requests.length }, + } + } + + // Phase 1: batch resolution queries (steps with card.graphql.resolution) + // Also batch pure-query steps here (future: detect by GQL keyword; for now, assume all steps are mutations) + const lookupInputs: Array<{ alias: string; query: string; variables: Record; stepIndex: number }> = [] + for (let i = 0; i < requests.length; i++) { + const card = cards[i]! + const req = requests[i]! + if (card.graphql?.resolution) { + const { lookup } = card.graphql.resolution + const lookupVars: Record = {} + for (const [lookupVar, inputField] of Object.entries(lookup.vars)) { + lookupVars[lookupVar] = req.input[inputField] + } + lookupInputs.push({ + alias: `step${i}`, + query: getLookupDocument(lookup.operationName), + variables: lookupVars, + stepIndex: i, + }) + } + } + + const lookupResults: Record = {} + if (lookupInputs.length > 0) { + const { document, variables } = buildBatchQuery( + lookupInputs.map(({ alias, query, variables }) => ({ alias, query, variables: variables as import("../../gql/transport.js").GraphqlVariables })), + ) + const rawResult = await deps.githubClient.request(document, variables) + // Un-alias results: BatchChain result has keys like "step0", "step2", etc. + for (const { alias, stepIndex } of lookupInputs) { + lookupResults[stepIndex] = (rawResult as Record)[alias] + } + } + + // Phase 2: batch mutations + const mutationInputs: Array<{ alias: string; mutation: string; variables: import("../../gql/transport.js").GraphqlVariables; stepIndex: number }> = [] + const stepPreResults: Record = {} + + for (let i = 0; i < requests.length; i++) { + const card = cards[i]! + const req = requests[i]! + + try { + const resolved: Record = {} + if (card.graphql?.resolution && lookupResults[i] !== undefined) { + for (const spec of card.graphql.resolution.inject) { + Object.assign(resolved, applyInject(spec, lookupResults[i], req.input)) + } + } + + const mutDoc = getMutationDocument(card.graphql!.operationName) + const mutVars = buildMutationVars(mutDoc, req.input, resolved) + mutationInputs.push({ alias: `step${i}`, mutation: mutDoc, variables: mutVars, stepIndex: i }) + } catch (err) { + stepPreResults[i] = { task: req.task, ok: false, error: mapError(err) } + } + } + + let rawMutResult: Record = {} + if (mutationInputs.length > 0) { + try { + const { document, variables } = buildBatchMutation( + mutationInputs.map(({ alias, mutation, variables }) => ({ alias, mutation, variables })), + ) + rawMutResult = (await deps.githubClient.request(document, variables)) as Record + } catch (err) { + // Whole batch mutation failed β€” mark all pending steps as failed + for (const { stepIndex, alias: _alias } of mutationInputs) { + const req = requests[stepIndex]! 
+ stepPreResults[stepIndex] = { task: req.task, ok: false, error: mapError(err) } + } + } + } + + // Assemble results + const results: ChainStepResult[] = requests.map((req, i) => { + if (stepPreResults[i]) return stepPreResults[i]! + const mutInput = mutationInputs.find((m) => m.stepIndex === i) + if (!mutInput) return { task: req.task, ok: false, error: { code: "UNKNOWN" as const, message: "step skipped" } } + const data = rawMutResult[mutInput.alias] + return { task: req.task, ok: true, data } + }) + + const succeeded = results.filter((r) => r.ok).length + const status: ChainStatus = + succeeded === results.length ? "success" : succeeded === 0 ? "failed" : "partial" + + return { + status, + results, + meta: { route_used: "graphql", total: results.length, succeeded, failed: results.length - succeeded }, + } +} +``` + +Note: `mapError` and `validateInput` are existing internal helpers in `engine.ts` β€” use them as they're already there. Check the actual signatures by reading the file before implementing. Also check how `deps.githubClient.request` is typed β€” adjust imports accordingly. + +**Step 5: Update `executeTask` to be a thin wrapper (optional for now)** + +Leave `executeTask` as-is for this task. The design says it eventually wraps `executeTasks`, but since `executeTasks` delegates 1-item to `executeTask`'s existing path, this would be circular. Keep both functions independent for now β€” `executeTasks(1 item)` calls `executeTask` internally. + +**Step 6: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +``` +Fix any type errors or logic bugs. Aim for all tests passing. + +**Step 7: Typecheck and commit** + +```bash +pnpm run typecheck +git add packages/core/src/core/routing/engine.ts packages/core/test/unit/engine.test.ts +git commit -m "feat(core): implement executeTasks with two-phase batch execution" +``` + +--- + +## Task 8: Add `ghx chain` CLI subcommand + +**Files:** +- Create: `packages/core/src/cli/commands/chain.ts` +- Modify: `packages/core/src/cli/index.ts` +- Test: `packages/core/test/unit/chain-command.test.ts` + +**Step 1: Read `cli/commands/run.ts` before writing `chain.ts`** + +Read `packages/core/src/cli/commands/run.ts` fully to understand patterns: flag parsing, stdin reading, token resolution, error handling, JSON output. Mirror these patterns. 
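+
+As orientation for Steps 2 and 3, here is a minimal sketch of the `--steps` parsing the command will need. The helper names, error wording, and stdin handling are assumptions; mirror whatever `run.ts` actually does:
+
+```ts
+// Hypothetical helper: turn the --steps flag value into executeTasks requests.
+async function readSteps(
+  stepsFlag: string,
+): Promise<Array<{ task: string; input: Record<string, unknown> }>> {
+  // "-" means read the JSON array from stdin, mirroring `ghx run --input -`
+  const raw = stepsFlag === "-" ? await readStdin() : stepsFlag
+  const parsed: unknown = JSON.parse(raw)
+  if (!Array.isArray(parsed)) {
+    throw new Error("--steps must be a JSON array of { task, input } objects")
+  }
+  for (const [i, step] of parsed.entries()) {
+    if (step === null || typeof step !== "object" || typeof step.task !== "string") {
+      throw new Error(`--steps[${i}] must be an object with a string 'task' field`)
+    }
+  }
+  return parsed as Array<{ task: string; input: Record<string, unknown> }>
+}
+
+// Assumed stdin reader; the run command likely has an equivalent worth reusing.
+async function readStdin(): Promise<string> {
+  const chunks: Buffer[] = []
+  for await (const chunk of process.stdin) chunks.push(Buffer.from(chunk))
+  return Buffer.concat(chunks).toString("utf8")
+}
+```
+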
+ +**Step 2: Write failing test** + +```ts +import { describe, expect, it, vi } from "vitest" +// Test that chain command parses --steps JSON and calls executeTasks +// Mock executeTasks to verify it's called with parsed requests +``` + +**Step 3: Implement `cli/commands/chain.ts`** + +```ts +import { executeTasks } from "../../core/routing/engine.js" +// Parse --steps flag (JSON array or stdin "-") +// Validate parsed value is array of { task, input } +// Build ExecutionDeps from token +// Call executeTasks, print JSON.stringify(result, null, 2) +// Exit 0 on success/partial, non-zero on full failure +``` + +Key ergonomics: +- `--steps ''` β€” inline JSON +- `--steps -` β€” read from stdin (same as `--input -` in run command) +- Exit code: 0 for `success`/`partial`, 1 for `failed` + +**Step 4: Add dispatch in `cli/index.ts`** + +```ts +case "chain": + await runChainCommand(args.slice(1)) + break +``` + +**Step 5: Run tests, typecheck, commit** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain-command.test.ts +pnpm run typecheck +git add packages/core/src/cli/commands/chain.ts packages/core/src/cli/index.ts packages/core/test/unit/chain-command.test.ts +git commit -m "feat(cli): add ghx chain --steps subcommand" +``` + +--- + +## Task 9: Delete composite system + +**Files:** +- Delete: `packages/core/src/core/execute/composite.ts` +- Delete: `packages/core/src/core/registry/cards/pr.threads.composite.yaml` +- Delete: `packages/core/src/core/registry/cards/issue.triage.composite.yaml` +- Delete: `packages/core/src/core/registry/cards/issue.update.composite.yaml` +- Modify: `packages/core/src/core/registry/types.ts` β€” remove `CompositeConfig`, `CompositeStep`, `composite?` field +- Modify: `packages/core/src/core/registry/operation-card-schema.ts` β€” remove `composite` property +- Modify: `packages/core/src/core/registry/index.ts` β€” remove composite IDs from `preferredOrder` +- Modify: `packages/core/src/core/routing/engine.ts` β€” remove `executeComposite` + +**Step 1: Read all files being changed before touching them** + +Especially `engine.ts` (find all references to `executeComposite`/`composite`) and `registry/index.ts` (find composite IDs in `preferredOrder`). + +**Step 2: Delete the three composite YAML cards** + +```bash +git rm packages/core/src/core/registry/cards/pr.threads.composite.yaml +git rm packages/core/src/core/registry/cards/issue.triage.composite.yaml +git rm packages/core/src/core/registry/cards/issue.update.composite.yaml +``` + +**Step 3: Delete `composite.ts`** + +```bash +git rm packages/core/src/core/execute/composite.ts +``` + +**Step 4: Remove composite from `types.ts`** + +Remove `CompositeStep` interface, `CompositeConfig` interface, and `composite?` from `OperationCard`. + +**Step 5: Remove `composite` from `operation-card-schema.ts`** + +Delete the `composite:` property block. + +**Step 6: Remove composite IDs from `registry/index.ts`** + +Find the `preferredOrder` array and remove `pr.threads.composite`, `issue.triage.composite`, `issue.update.composite` entries. + +**Step 7: Remove `executeComposite` from `engine.ts`** + +Find and delete the `executeComposite` function and any code that references it in the routing logic. + +**Step 8: Run all tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run +``` +Fix any broken imports or references. The composite system should leave no traces. 
+ +**Step 9: Typecheck and commit** + +```bash +pnpm run typecheck +git add -A +git commit -m "refactor(core): delete composite capability system (never published)" +``` + +--- + +## Task 10: Export new public API from `index.ts` + +**Files:** +- Modify: `packages/core/src/index.ts` +- Test: verify exports compile + +**Step 1: Read `index.ts`** + +Understand what's currently exported. + +**Step 2: Add new exports** + +```ts +export { executeTasks } from "./core/routing/engine.js" +export type { ChainResultEnvelope, ChainStepResult, ChainStatus } from "./core/contracts/envelope.js" +``` + +**Step 3: Typecheck and run full test suite** + +```bash +pnpm run typecheck +pnpm --filter @ghx-dev/core exec vitest run +``` + +**Step 4: Commit** + +```bash +git add packages/core/src/index.ts +git commit -m "feat(core): export executeTasks and chain types from public API" +``` + +--- + +## Task 11: Add changeset + clean up + +**Files:** +- Delete: `.changeset/composite-capabilities-gql-integration.md` (if exists) +- Create: `.changeset/.md` + +**Step 1: Check for old changeset** + +```bash +ls .changeset/ +``` + +If `composite-capabilities-gql-integration.md` exists, delete it: +```bash +git rm .changeset/composite-capabilities-gql-integration.md +``` + +**Step 2: Create new changeset** + +```bash +cat > .changeset/atomic-capability-chaining.md << 'EOF' +--- +"@ghx-dev/core": minor +--- + +Add `executeTasks` for atomic capability chaining. Callers can now execute multiple capabilities in a single tool call with at most 2 GitHub API round-trips via two-phase batch execution (Phase 1: resolution queries, Phase 2: mutations). Resolution requirements are declared in YAML operation cards via `graphql.resolution`. Removes unused composite capability cards. +EOF +``` + +**Step 3: Final CI check** + +```bash +pnpm run ci --outputStyle=static +``` + +Fix any remaining lint, format, typecheck, or test failures. + +**Step 4: Final commit** + +```bash +git add .changeset/ +git commit -m "chore: add minor changeset for executeTasks" +``` + +--- + +## Done + +All tasks complete when `pnpm run ci --outputStyle=static` passes with no errors. The implementation delivers: +- `executeTasks` API (≀ 2 HTTP calls for any chain) +- Card-defined resolution in 6 capability cards +- `ghx chain --steps` CLI +- Composite system removed +- `ChainResultEnvelope` exported from public API diff --git a/docs/plans/2026-02-20-atomic-chaining-followup.md b/docs/plans/2026-02-20-atomic-chaining-followup.md new file mode 100644 index 00000000..dccf16b3 --- /dev/null +++ b/docs/plans/2026-02-20-atomic-chaining-followup.md @@ -0,0 +1,171 @@ +# Atomic Chaining β€” Follow-up Work + +> Status snapshot after `feat/atomic-chaining` implementation. +> 18 capabilities are now batchable. Items below are improvements and gaps +> discovered during implementation. + +--- + +## 1. SKILL.md β€” update for agents + +**File:** `packages/core/skills/using-ghx/SKILL.md` + +Current state: no mention of `ghx chain` or `executeTasks`. Stale count (says 69, +should be 66). Domain list still includes `check_run` (removed in PR #58). 
+ +**Required changes:** + +- Update capability count: 69 β†’ 66 +- Remove `check_run` from domain list +- Add a "Chain" section covering: + - When to prefer `ghx chain` over repeated `ghx run` (multi-mutation atomicity) + - Syntax: `ghx chain --steps '[{"task":"...","input":{...}},...]'` + - stdin variant: `ghx chain --steps -` + - Output shape: `{ status, results[], meta }` β€” check `status` first + - Which capabilities are chainable (those taking node IDs or human-readable + fields; everything with a `graphql:` block) + - Example: close + comment on an issue in one round-trip + +--- + +## 2. Benchmark β€” add chaining scenarios + +No scenarios exercise `executeTasks` / `ghx chain`. The benchmark only tests single +`ghx run` calls. + +**Required work:** + +- Add at least 2 workflow scenarios that require a multi-step mutation chain: + - `issue-triage-atomic-wf-001`: set labels + assignee + milestone on an issue in + a single chain (exercises `issue.labels.set` + `issue.assignees.set` + + `issue.milestone.set` β€” all three have resolution lookups) + - `pr-review-submit-atomic-wf-001`: submit a review with inline comments + (exercises `pr.reviews.submit` with PrNodeId resolution) +- Add to `scenario-sets.json` under a `chaining` set and include representative + scenarios in `default` +- Confirm assertion format handles `ChainResultEnvelope` (status + nested results) + +--- + +## 3. GQL β€” partial error handling in Phase 2 + +**Current behavior:** if `githubClient.query()` throws during Phase 2 (the batch +mutation), all steps are marked failed. But GitHub GraphQL returns HTTP 200 even +when individual mutations error β€” errors appear in the `errors[]` field alongside +partial `data`. The chain command (`chain.ts`) already throws on `errors[]` +presence, which means one bad mutation kills the whole batch. + +**Required work:** + +- In `chain.ts` (or `engine.ts` Phase 2), after receiving the batch mutation + response, check `data` and `errors` separately: + - Map each error back to its aliased step (errors include a `path` field like + `["step1", "createIssue"]`) + - Mark only the failed step(s) as `ok: false`; keep successful step data + - Update `ChainStatus` to `partial` if some steps succeeded +- This requires changing `GithubClient.query()` to return raw `{data, errors}` + instead of throwing, or a new lower-level method for chain use + +--- + +## 4. GQL β€” cross-step data passing + +**Current limitation:** `executeTasks` takes static inputs per step. There is no +way for step N to use output from step N-1 as its input β€” each step's input must +be fully specified by the caller upfront. + +**Example gap:** "Create an issue, then immediately set it as a child of another +issue" requires two separate round-trips today: +1. `executeTask("issue.create", ...)` β†’ get new `issueId` from result +2. `executeTask("issue.relations.parent.set", { issueId: , ... })` + +**Proposed design (for evaluation):** + +Allow step inputs to reference prior step outputs via a template syntax, e.g.: +```json +[ + { "task": "issue.create", "input": { "repositoryId": "R_x", "title": "Bug" } }, + { "task": "issue.relations.parent.set", + "input": { "issueId": "{{steps.0.data.id}}", "parentIssueId": "I_parent" } } +] +``` + +This would require a pre-processing pass in `executeTasks` after each phase to +resolve template references from accumulated results. Adds significant complexity β€” +evaluate whether the use-case frequency justifies it. + +--- + +## 5. 
GQL β€” expand chainable coverage for currently CLI-only mutations + +Several useful mutations are CLI-only and cannot be chained. The highest-value +candidates for adding a GraphQL route (and thus making them chainable): + +| Capability | Why valuable in chains | +|---|---| +| `issue.labels.remove` | complement to `issue.labels.add` in same chain | +| `issue.assignees.add` | complement to `issue.assignees.set` in same chain | +| `issue.assignees.remove` | complement to `issue.assignees.set` in same chain | +| `issue.milestone.clear` | complement to `issue.milestone.set` in same chain | + +For each: add a GraphQL mutation `.graphql` file, run codegen, add to card YAML +(`graphql:` block), register in `MUTATION_DOCUMENTS`. Milestone clear and label +remove need resolution lookups (need the current milestone/label IDs). + +--- + +## 6. Resolution β€” cache lookup results across calls + +**Current behavior:** every `executeTasks` call re-fetches all Phase 1 lookups +(label IDs, assignee IDs, milestone IDs, etc.) even if the same repo's data was +fetched in a recent prior call. + +**Opportunity:** a short-lived in-memory cache keyed by `(operationName, variables)` +with a TTL (e.g. 60s) would eliminate redundant lookups for agents issuing multiple +chains against the same repo in rapid succession. + +**Note:** Phase 1 already batches lookups within a single chain into one HTTP +request. This is about cross-call caching, not intra-chain batching. + +**Scope:** small addition to `engine.ts` or a dedicated `resolution-cache.ts` +module. Use a `Map` keyed by +`${operationName}:${stableStringify(variables)}` with explicit TTL eviction. +(Note: `WeakMap` cannot be used here β€” its keys must be objects, not strings.) + +--- + +## 7. Test coverage β€” new mutations not individually exercised + +The 11 newly registered mutations (`IssueClose`, `IssueReopen`, `IssueDelete`, +`IssueUpdate`, `IssueCommentCreate`, `IssueParentSet`, `IssueBlockedByAdd`, +`IssueBlockedByRemove`, `PrCommentReply`, `PrCommentResolve`, `PrCommentUnresolve`) +and the new `PrReviewSubmit` + `PrNodeId` are registered in `document-registry.ts` +but not individually exercised by any unit test. + +**Required work:** + +- Add a test in `registry-validation.test.ts` (or a new + `document-registry.test.ts`) that: + - Asserts `getMutationDocument` succeeds for all 18 registered mutations + - Asserts `getLookupDocument` succeeds for all 6 registered lookups +- Add an `executeTasks` integration test for at least one of the new "no-resolution" + mutations (e.g. `issue.close`) to confirm the full Phase 2 path works end-to-end + with a mocked `githubClient.query` + +--- + +## 8. `pr.reviews.submit` resolution β€” validate inject path + +The new resolution for `pr.reviews.submit` injects `pullRequestId` from +`repository.pullRequest.id` (scalar inject). This was added based on the +`PrNodeIdQuery` type signature. It should be validated with a real GitHub API call +before the PR merges, since the path must exactly match the live response structure. + +**Validation step:** add an integration test or run manually against a test repo: +```bash +ghx chain --steps '[ + {"task":"pr.reviews.submit", + "input":{"owner":"","name":"","prNumber":1,"event":"COMMENT","body":"test"}} +]' +``` +Confirm `status: "success"` and `pullRequestId` was correctly resolved. 
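+
+As a concrete starting point for item 6 above, a minimal in-memory TTL cache could look like the sketch below. The `stableStringify` helper and the 60-second TTL follow the note in item 6; none of this code exists yet:
+
+```ts
+// Sketch of a cross-call cache for Phase 1 lookup results (item 6).
+type CacheEntry = { value: unknown; expiresAt: number }
+
+const TTL_MS = 60_000
+const cache = new Map<string, CacheEntry>()
+
+// Stable stringify: sort object keys so logically equal variable objects
+// always produce the same cache key.
+function stableStringify(value: unknown): string {
+  if (value === null || typeof value !== "object") return JSON.stringify(value)
+  if (Array.isArray(value)) return `[${value.map(stableStringify).join(",")}]`
+  const entries = Object.entries(value as Record<string, unknown>)
+    .sort(([a], [b]) => a.localeCompare(b))
+    .map(([k, v]) => `${JSON.stringify(k)}:${stableStringify(v)}`)
+  return `{${entries.join(",")}}`
+}
+
+export function getCachedLookup(operationName: string, variables: unknown): unknown {
+  const key = `${operationName}:${stableStringify(variables)}`
+  const entry = cache.get(key)
+  if (!entry) return undefined
+  if (Date.now() > entry.expiresAt) {
+    cache.delete(key) // explicit TTL eviction
+    return undefined
+  }
+  return entry.value
+}
+
+export function setCachedLookup(operationName: string, variables: unknown, value: unknown): void {
+  const key = `${operationName}:${stableStringify(variables)}`
+  cache.set(key, { value, expiresAt: Date.now() + TTL_MS })
+}
+```
+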
diff --git a/docs/plans/2026-02-20-pr-review-fixes-round2.md b/docs/plans/2026-02-20-pr-review-fixes-round2.md new file mode 100644 index 00000000..d2e536c6 --- /dev/null +++ b/docs/plans/2026-02-20-pr-review-fixes-round2.md @@ -0,0 +1,546 @@ +# PR #59 Review Fixes β€” Round 2 + +> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. + +**Goal:** Address all actionable comments from the second round of CodeRabbit review on PR #59 (feat/atomic-chaining). + +**Architecture:** Seven independent fixes across code correctness, type hygiene, import style, and doc accuracy. No new features β€” each change is a targeted, minimal edit. The one large suggestion (partial-error propagation) is explicitly deferred. + +**Tech Stack:** TypeScript strict + ESM, Vitest, Biome formatter, pnpm workspaces in `.worktrees/feat-atomic-chaining/`. + +--- + +## Skipped / Deferred + +| Comment | Decision | +|---------|----------| +| `docs/plans` (39-41): change `.set` β†’ `.update` in followup plan | **Skip** β€” reviewer direction is wrong; actual card files are named `.set` | +| `docs/plans` (52-68): partial-error propagation via raw HTTP response | **Defer** β€” requires breaking change to `GithubClient` contract; tracked in follow-up | +| `docs/capabilities/README.md` (53-54): rename `blocked_by` β†’ `blocking` | **Skip** β€” YAML card files already named `issue.relations.blocked_by.*`; renaming would break existing consumers | + +--- + +## Task 1: Add `queryMock.toHaveBeenCalledTimes(2)` assertion + +**Files:** +- Modify: `packages/core/test/unit/engine.test.ts:622-627` + +**Step 1: Open the mixed-resolution chain test** (line 512 of engine.test.ts). The `queryMock` variable is defined at line 599. The test currently only checks `buildBatchQueryMock` and `buildBatchMutationMock` were called, but never asserts the underlying HTTP client was actually invoked twice (once per phase). + +**Step 2: Add the assertion** immediately after line 623 (`expect(buildBatchMutationMock).toHaveBeenCalled()`): + +```typescript + expect(buildBatchQueryMock).toHaveBeenCalled() + expect(buildBatchMutationMock).toHaveBeenCalled() + expect(queryMock).toHaveBeenCalledTimes(2) // ← add this line + expect(result.status).toBe("success") +``` + +**Step 3: Run the test** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts -t "mixed resolution" +``` +Expected: `1 test passed` + +**Step 4: Commit** + +```bash +git add packages/core/test/unit/engine.test.ts +git commit -m "test(engine): assert queryMock called twice in mixed-resolution chain test" +``` + +--- + +## Task 2: Extract `LookupSpec` named interface from `types.ts` + +**Files:** +- Modify: `packages/core/src/core/registry/types.ts:78-89` + +The docs diagram (`operation-cards.md`) references `LookupSpec` as a named type, but the TypeScript source inlines the lookup shape inside `ResolutionConfig`. Extracting it makes the two consistent and enables reuse in tests. + +**Step 1: Add the named interface** just before `ResolutionConfig` in `types.ts`. The full change: + +```typescript +// Add before ResolutionConfig: +export interface LookupSpec { + operationName: string + documentPath: string + vars: Record +} + +export interface ResolutionConfig { + lookup: LookupSpec // was: inline object type + inject: InjectSpec[] +} +``` + +No callers need to change β€” `ResolutionConfig.lookup` retains the same shape. + +**Step 2: Typecheck** + +```bash +pnpm run typecheck +``` +Expected: no errors. 
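+
+Aside: now that `LookupSpec` is exported, test fixtures can be typed against it directly. A minimal illustration follows; the import specifier assumes the package's existing `@core/` alias mapping, and the fixture values mirror the `issue.labels.set` card.
+
+```typescript
+import type { LookupSpec } from "@core/core/registry/types.js"
+
+// Reusable, type-checked lookup fixture for unit tests
+export const issueLabelsLookup: LookupSpec = {
+  operationName: "IssueLabelsLookup",
+  documentPath: "src/gql/operations/issue-labels-lookup.graphql",
+  vars: { issueId: "issueId" },
+}
+```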
+ +**Step 3: Run all core tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run +``` +Expected: all tests pass. + +**Step 4: Commit** + +```bash +git add packages/core/src/core/registry/types.ts +git commit -m "refactor(types): extract LookupSpec named interface from ResolutionConfig" +``` + +--- + +## Task 3: Align `operation-cards.md` diagram to use `ResolutionConfig` + `LookupSpec` + +**Files:** +- Modify: `docs/architecture/operation-cards.md:35-50` (the class diagram block) + +The diagram currently writes `ResolutionBlock` for what the TS type calls `ResolutionConfig`. After Task 2 we also have `LookupSpec`. Update the diagram to match. + +**Step 1: Find the class diagram block** in `operation-cards.md` (search for `class ResolutionBlock`). + +**Step 2: Replace the stale class names:** + +Current: +``` + resolution?: ResolutionBlock +... + class ResolutionBlock { + lookup: LookupSpec +... + GraphQLMetadata --> ResolutionBlock +``` + +Replace with: +``` + resolution?: ResolutionConfig +... + class ResolutionConfig { + lookup: LookupSpec +... + GraphQLMetadata --> ResolutionConfig +``` + +**Step 3: Verify no other stale `ResolutionBlock` references remain** + +```bash +grep -n "ResolutionBlock" docs/architecture/operation-cards.md +``` +Expected: no output. + +**Step 4: Commit** + +```bash +git add docs/architecture/operation-cards.md +git commit -m "docs(arch): align operation-cards diagram to ResolutionConfig + LookupSpec" +``` + +--- + +## Task 4: Fix stale capability IDs in docs (`issue.labels.update` β†’ `issue.labels.set`, `issue.assignees.update` β†’ `issue.assignees.set`) + +**Files (all docs, NOT `docs/plans/`):** +- `docs/capabilities/issues.md` +- `docs/architecture/operation-cards.md` +- `docs/getting-started/first-task.md` +- `docs/getting-started/README.md` +- `docs/guides/chaining-capabilities.md` +- `docs/guides/cli-usage.md` +- `docs/guides/library-api.md` +- `docs/guides/result-envelope.md` +- `docs/benchmark/workflow-roadmap.md` + +The actual YAML card files are `issue.labels.set.yaml` and `issue.assignees.set.yaml`. Every doc that references `issue.labels.update` or `issue.assignees.update` is wrong and will confuse users who copy-paste examples. + +**Step 1: Mass replace (run from worktree root)** + +```bash +# Replace in all docs except plans/ (cross-platform: perl works on macOS and Linux) +find docs/ -name "*.md" ! -path "docs/plans/*" \ + -exec perl -pi -e \ + 's/issue\.labels\.update/issue.labels.set/g; s/issue\.assignees\.update/issue.assignees.set/g' \ + {} + +``` + +**Step 2: Verify no stale references remain outside plans/** + +```bash +grep -rn "issue\.labels\.update\|issue\.assignees\.update" docs/ --include="*.md" \ + | grep -v "docs/plans/" +``` +Expected: no output. 
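+
+For reference, a corrected invocation in those docs should read like the following; the issue node ID and label value are illustrative, and only the capability IDs matter for this task. The spot-check in the next step can compare against this shape.
+
+```bash
+ghx chain --steps '[{"task":"issue.labels.set","input":{"issueId":"I_kwDOexample","labels":["bug"]}}]'
+```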
+ +**Step 3: Spot-check two files** to make sure the replacements look correct (not double-replaced or mangled): + +```bash +grep -n "issue\.labels\|issue\.assignees" docs/guides/chaining-capabilities.md | head -10 +grep -n "issue\.labels\|issue\.assignees" docs/capabilities/issues.md | head -10 +``` + +**Step 4: Run core tests** (no code changed, but sanity check) + +```bash +pnpm --filter @ghx-dev/core exec vitest run +``` + +**Step 5: Commit** + +```bash +git add docs/ +git commit -m "docs: fix stale capability IDs issue.labels.updateβ†’set and issue.assignees.updateβ†’set" +``` + +--- + +## Task 5: Remove `issue.comments.create` from non-batchable table in followup plan + +**Files:** +- Modify: `docs/plans/2026-02-20-atomic-chaining-followup.md:110` + +The table at line ~104 lists "CLI-only mutations that cannot be chained". `issue.comments.create` has a note "(already batchable)" and a "βœ“ done" status β€” it shouldn't be in this table at all. + +**Step 1: Delete the row** (the line that reads): +``` +| `issue.comments.create` (already batchable) | βœ“ done | +``` + +**Step 2: Commit** + +```bash +git add docs/plans/2026-02-20-atomic-chaining-followup.md +git commit -m "docs(plans): remove issue.comments.create from non-batchable candidates table" +``` + +--- + +## Task 6: Use `@core/` alias imports in `chain.ts` + +**Files:** +- Modify: `packages/core/src/cli/commands/chain.ts:1-2` + +All multi-level relative imports in this package use `@core/` aliases per CLAUDE.md. `chain.ts` currently uses raw relative paths for two deep imports. + +**Step 1: Replace the two imports** at the top of `chain.ts`: + +```typescript +// Before: +import { executeTasks } from "../../core/routing/engine.js" +import { createGithubClient } from "../../gql/github-client.js" + +// After: +import { executeTasks } from "@core/core/routing/engine.js" +import { createGithubClient } from "@core/gql/github-client.js" +``` + +Leave line 3 (`import { readStdin } from "./run.js"`) unchanged β€” that's a same-directory relative import, correct per convention. + +**Step 2: Typecheck** + +```bash +pnpm run typecheck +``` +Expected: no errors. + +**Step 3: Run chain-command tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain-command.test.ts +``` +Expected: all 10 tests pass. + +**Step 4: Commit** + +```bash +git add packages/core/src/cli/commands/chain.ts +git commit -m "refactor(chain): use @core/ path aliases instead of deep relative imports" +``` + +--- + +## Task 7: Detect missing alias key in result assembly (`engine.ts`) + +**Files:** +- Modify: `packages/core/src/core/routing/engine.ts:470-472` + +Current code at line 470: +```typescript +const data = rawMutResult[mutInput.alias] +return { task: req.task, ok: true, data } +``` + +If `rawMutResult` doesn't contain the alias key (e.g., GitHub returned a partial response without the alias), `data` is `undefined` and the step is incorrectly marked `ok: true`. Use the `in` operator to distinguish "key present with undefined value" from "key absent". + +**Step 1: Write the failing test** β€” add to the "executeTasks chaining" describe block in `engine.test.ts` (after the existing batch mutation tests). 
Use `vi.resetModules()` pattern: + +```typescript +it("marks step as failed when mutation result alias is missing from response", async () => { + vi.resetModules() + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi.fn(), + getMutationDocument: vi.fn().mockReturnValue("mutation IssueClose($issueId: ID!) { closeIssue(input: {issueId: $issueId}) { issue { id } } }"), + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchQuery: vi.fn(), + buildBatchMutation: vi.fn().mockReturnValue({ document: "mutation {}", variables: {} }), + })) + vi.doMock("@core/gql/resolve.js", () => ({ + applyInject: vi.fn(), + buildMutationVars: vi.fn().mockReturnValue({ issueId: "I1" }), + })) + + getOperationCardMock.mockReturnValue({ + ...baseCard, + graphql: { operationName: "IssueClose", documentPath: "x" }, + }) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const queryMock = vi.fn() + // Phase 1: no lookups (no resolution on either card) + // Phase 2: response is missing the alias key entirely + .mockResolvedValueOnce({ step1: { closeIssue: { issue: { id: "I2" } } } }) // step0 key absent + + const result = await executeTasks( + [ + { task: "issue.close", input: { issueId: "I1" } }, + { task: "issue.close", input: { issueId: "I2" } }, + ], + { githubClient: createGithubClient({ query: queryMock }) }, + ) + + const r0 = result.results[0] + expect(r0?.ok).toBe(false) + expect(r0?.error?.message).toContain("missing") + expect(result.results[1]?.ok).toBe(true) +}) +``` + +**Step 2: Run to confirm it fails** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts -t "missing mutation result" +``` +Expected: FAIL (step 0 currently returns `ok: true` with `data: undefined`). + +**Step 3: Fix the assembly block** in `engine.ts` (lines 469-472): + +```typescript +// Before: + const data = rawMutResult[mutInput.alias] + return { task: req.task, ok: true, data } + +// After: + if (!(mutInput.alias in rawMutResult)) { + return { + task: req.task, + ok: false, + error: { + code: errorCodes.Unknown, + message: `missing mutation result for alias ${mutInput.alias}`, + retryable: false, + }, + } + } + const data = rawMutResult[mutInput.alias] + return { task: req.task, ok: true, data } +``` + +**Step 4: Run to confirm it passes** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +``` +Expected: all 14 tests pass. + +**Step 5: Commit** + +```bash +git add packages/core/src/core/routing/engine.ts packages/core/test/unit/engine.test.ts +git commit -m "fix(engine): detect missing alias key in result assembly via 'in' operator" +``` + +--- + +## Task 8: Derive `retryable` from error code in batch failure paths + +**Files:** +- Modify: `packages/core/src/core/routing/engine.ts:360-381` (Phase 1 catch) and `:438-454` (Phase 2 catch) + +Both catch blocks currently hardcode `retryable: true`. Auth and validation errors are not retryable β€” hardcoding `true` could cause agents to retry permanently on permanent failures. 
+ +**Step 1: Add a local helper** at the top of the `executeTasks` function body (before the pre-flight loop), or as a module-level private function: + +```typescript +function isRetryableCode(code: string): boolean { + return ( + code === errorCodes.RateLimit || + code === errorCodes.Network || + code === errorCodes.Server + ) +} +``` + +**Step 2: Update Phase 1 catch block** (lines 360-381). Change: + +```typescript + } catch (err) { + // Phase 1 failure: mark all steps as failed + const errorMsg = err instanceof Error ? err.message : String(err) + return { + status: "failed", + results: requests.map((req) => ({ + task: req.task, + ok: false, + error: { + code: mapErrorToCode(err), + message: `Phase 1 (resolution) failed: ${errorMsg}`, + retryable: true, // ← hardcoded +``` + +To: + +```typescript + } catch (err) { + // Phase 1 failure: mark all steps as failed + const errorMsg = err instanceof Error ? err.message : String(err) + const code = mapErrorToCode(err) + return { + status: "failed", + results: requests.map((req) => ({ + task: req.task, + ok: false, + error: { + code, + message: `Phase 1 (resolution) failed: ${errorMsg}`, + retryable: isRetryableCode(code), // ← derived +``` + +**Step 3: Update Phase 2 catch block** (lines 438-454). Change: + +```typescript + stepPreResults[stepIndex] = { + task: reqAtIndex.task, + ok: false, + error: { + code: mapErrorToCode(err), + message: err instanceof Error ? err.message : String(err), + retryable: true, // ← hardcoded +``` + +To: + +```typescript + const code = mapErrorToCode(err) // compute once outside inner loop + for (const { stepIndex } of mutationInputs) { + const reqAtIndex = requests[stepIndex] + if (reqAtIndex !== undefined) { + stepPreResults[stepIndex] = { + task: reqAtIndex.task, + ok: false, + error: { + code, + message: err instanceof Error ? 
err.message : String(err), + retryable: isRetryableCode(code), // ← derived +``` + +**Step 4: Add a regression test** to the "executeTasks chaining" describe block, verifying that an auth error is NOT retryable: + +```typescript +it("marks batch mutation failure as non-retryable for auth errors", async () => { + vi.resetModules() + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi.fn(), + getMutationDocument: vi.fn().mockReturnValue("mutation IssueClose { ok }"), + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchQuery: vi.fn(), + buildBatchMutation: vi.fn().mockReturnValue({ document: "mutation {}", variables: {} }), + })) + vi.doMock("@core/gql/resolve.js", () => ({ + applyInject: vi.fn(), + buildMutationVars: vi.fn().mockReturnValue({}), + })) + + getOperationCardMock.mockReturnValue({ + ...baseCard, + graphql: { operationName: "IssueClose", documentPath: "x" }, + }) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const authError = new Error("Bad credentials") + ;(authError as NodeJS.ErrnoException).code = "AUTH" + const queryMock = vi.fn().mockRejectedValue(authError) + + const result = await executeTasks( + [ + { task: "issue.close", input: { issueId: "I1" } }, + { task: "issue.close", input: { issueId: "I2" } }, + ], + { githubClient: createGithubClient({ query: queryMock }) }, + ) + + expect(result.status).toBe("failed") + for (const r of result.results) { + expect(r.ok).toBe(false) + if (!r.ok) { + expect(r.error.retryable).toBe(false) + } + } +}) +``` + +**Step 5: Run all engine tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +``` +Expected: all tests pass. + +**Step 6: Typecheck** + +```bash +pnpm run typecheck +``` + +**Step 7: Commit** + +```bash +git add packages/core/src/core/routing/engine.ts packages/core/test/unit/engine.test.ts +git commit -m "fix(engine): derive retryable from error code instead of hardcoding true in batch failure paths" +``` + +--- + +## Final verification + +```bash +pnpm run ci --outputStyle=static +``` + +Expected: +- All tests pass (should be 15+ in engine.test.ts) +- Coverage β‰₯ 90% branch +- Typecheck, lint, format all green diff --git a/docs/plans/2026-02-20-pr-review-fixes.md b/docs/plans/2026-02-20-pr-review-fixes.md new file mode 100644 index 00000000..10f4e2a8 --- /dev/null +++ b/docs/plans/2026-02-20-pr-review-fixes.md @@ -0,0 +1,963 @@ +# PR Review Fixes β€” Atomic Chaining Implementation Plan + +> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task. + +**Goal:** Address 8 issues identified in the CodeRabbit + Claude review of PR #59 (feat/atomic-chaining). + +**Architecture:** Four independent fix tracks that can be executed in parallel β€” chain.ts robustness, engine.ts correctness, test infrastructure repair, and documentation. All work is confined to `packages/core/` inside the `feat-atomic-chaining` worktree at `.worktrees/feat-atomic-chaining/`. + +**Tech Stack:** TypeScript strict ESM, Vitest, Biome (formatter), Node >=22. + +--- + +## Worktree Context + +All file paths are relative to `.worktrees/feat-atomic-chaining/`. 
Always run commands from that directory: + +```bash +cd /Users/aryekogan/repos/ghx/.worktrees/feat-atomic-chaining +``` + +Run tests with: +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain.test.ts +pnpm --filter @ghx-dev/core exec vitest run test/unit/document-registry.test.ts +pnpm run typecheck +pnpm run lint +``` + +Format (auto-fix) with: +```bash +pnpm run format +``` + +--- + +## Track A β€” `chain.ts` Robustness (CR-1 + CR-2) + +**Files:** +- Modify: `packages/core/src/cli/commands/chain.ts` +- Test: `packages/core/test/unit/chain.test.ts` (create if not exists) + +### Task A1: Add fetch timeout and safe JSON parsing + +**Context:** `executeGraphqlRequest` in `chain.ts` calls `response.json()` without a try-catch. If GitHub returns an HTML error page (e.g. 503 gateway timeout), `response.json()` throws a `SyntaxError` with a confusing internal stack trace. Also, there is no network timeout β€” the CLI can hang indefinitely. + +**Step 1: Write failing tests** + +Create (or add to) `packages/core/test/unit/chain.test.ts`: + +```ts +import { describe, expect, it, vi, beforeEach } from "vitest" + +describe("executeGraphqlRequest (via chainCommand)", () => { + beforeEach(() => { + vi.unstubAllGlobals() + process.env.GITHUB_TOKEN = "test-token" + }) + + it("throws a user-friendly error when response is non-JSON", async () => { + vi.stubGlobal("fetch", vi.fn().mockResolvedValue({ + ok: false, + status: 503, + json: vi.fn().mockRejectedValue(new SyntaxError("Unexpected token")), + })) + const { chainCommand } = await import("@core/cli/commands/chain.js") + // chainCommand must not throw β€” it must return exit code 1 + const code = await chainCommand(["--steps", "[{\"task\":\"issue.create\",\"input\":{}}]"]) + expect(code).toBe(1) + }) + + it("aborts request after 30s timeout", async () => { + // Simulate a fetch that never resolves + vi.stubGlobal("fetch", vi.fn().mockImplementation((_url, opts) => { + // Verify the signal is attached + expect(opts?.signal).toBeDefined() + return new Promise(() => {}) // never resolves + })) + // Just verify signal is passed β€” we can't wait 30s in tests + const { chainCommand } = await import("@core/cli/commands/chain.js") + // Fire and forget β€” just check fetch was called with a signal + chainCommand(["--steps", "[{\"task\":\"issue.create\",\"input\":{}}]"]) + await new Promise((r) => setTimeout(r, 0)) + expect(fetch).toHaveBeenCalledWith( + expect.any(String), + expect.objectContaining({ signal: expect.any(AbortSignal) }), + ) + }) +}) +``` + +**Step 2: Run tests to verify they fail** + +```bash +cd /Users/aryekogan/repos/ghx/.worktrees/feat-atomic-chaining +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain.test.ts +``` + +Expected: FAIL (non-JSON test likely throws instead of returning 1; signal test fails because no signal is passed). + +**Step 3: Implement fix in `chain.ts`** + +In `executeGraphqlRequest`, make two changes: + +1. Add `signal: AbortSignal.timeout(30_000)` to the `fetch` call options. +2. Wrap `response.json()` in a try-catch: + +```ts +async function executeGraphqlRequest( + token: string, + query: string, + variables?: Record, +): Promise { + const response = await fetch(GITHUB_GRAPHQL_ENDPOINT, { + method: "POST", + headers: { + "content-type": "application/json", + accept: "application/json", + authorization: `Bearer ${token}`, + "user-agent": "ghx", + }, + body: JSON.stringify({ query, variables: variables ?? 
{} }), + signal: AbortSignal.timeout(30_000), + }) + + let payload: { data?: TData; errors?: Array<{ message?: string }>; message?: string } + try { + payload = (await response.json()) as typeof payload + } catch { + throw new Error( + `GitHub GraphQL returned non-JSON response (status ${response.status})`, + ) + } + + // ... rest unchanged +} +``` + +**Step 4: Run tests to verify they pass** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain.test.ts +``` + +Expected: PASS + +**Step 5: Format and typecheck** + +```bash +pnpm run format +pnpm run typecheck +``` + +**Step 6: Commit** + +```bash +git add packages/core/src/cli/commands/chain.ts packages/core/test/unit/chain.test.ts +git commit -m "fix(chain): safe json parse + 30s abort timeout on graphql fetch" +``` + +--- + +### Task A2: Wrap sync throws inside chainCommand to produce clean exit code 1 + +**Context:** `chainCommand` calls `parseChainFlags`, `parseJsonSteps`, `resolveGithubToken` synchronously. These throw plain `Error`s. Inside an `async` function this rejects the returned promise. `index.ts` catches rejections via `.then(_, err => process.exit(1))`, but the pattern doesn't give the user a clean error path vs unexpected failures. Wrap with try-catch to explicitly return code 1 with stderr output. + +**Step 1: Write failing test** + +Add to `packages/core/test/unit/chain.test.ts`: + +```ts +it("returns exit code 1 and writes to stderr when token is missing", async () => { + delete process.env.GITHUB_TOKEN + delete process.env.GH_TOKEN + const stderrSpy = vi.spyOn(process.stderr, "write").mockImplementation(() => true) + const { chainCommand } = await import("@core/cli/commands/chain.js") + const code = await chainCommand(["--steps", "[{\"task\":\"t\",\"input\":{}}]"]) + expect(code).toBe(1) + expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining("GITHUB_TOKEN")) + stderrSpy.mockRestore() +}) + +it("returns exit code 1 and writes to stderr when steps JSON is invalid", async () => { + process.env.GITHUB_TOKEN = "tok" + const stderrSpy = vi.spyOn(process.stderr, "write").mockImplementation(() => true) + const { chainCommand } = await import("@core/cli/commands/chain.js") + const code = await chainCommand(["--steps", "not-json"]) + expect(code).toBe(1) + expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining("Invalid JSON")) + stderrSpy.mockRestore() +}) +``` + +**Step 2: Run tests to verify they fail** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain.test.ts -t "returns exit code 1" +``` + +Expected: FAIL (currently throws instead of returning 1). + +**Step 3: Wrap `chainCommand` body in try-catch** + +In `chain.ts`, change `chainCommand` to: + +```ts +export async function chainCommand(argv: string[] = []): Promise { + if (argv.length === 0) { + process.stdout.write( + "Usage: ghx chain --steps '' | --steps - [--check-gh-preflight]\n", + ) + return 1 + } + + try { + const { stepsSource, skipGhPreflight } = parseChainFlags(argv) + const steps = + stepsSource === "stdin" + ? 
parseJsonSteps(await readStdin()) + : parseJsonSteps(stepsSource.raw) + const githubToken = resolveGithubToken() + + const githubClient = createGithubClient({ + async execute(query: string, variables?: Record): Promise { + return executeGraphqlRequest(githubToken, query, variables) + }, + }) + + const result = await executeTasks(steps, { + githubClient, + githubToken, + skipGhPreflight, + }) + + process.stdout.write(`${JSON.stringify(result, null, 2)}\n`) + return result.status === "success" || result.status === "partial" ? 0 : 1 + } catch (err) { + const message = err instanceof Error ? err.message : String(err) + process.stderr.write(`${message}\n`) + return 1 + } +} +``` + +**Step 4: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/chain.test.ts +``` + +Expected: PASS + +**Step 5: Format, typecheck, commit** + +```bash +pnpm run format +pnpm run typecheck +git add packages/core/src/cli/commands/chain.ts packages/core/test/unit/chain.test.ts +git commit -m "fix(chain): catch sync throws inside chainCommand, return exit 1 with stderr" +``` + +--- + +## Track B β€” `engine.ts` Correctness (CR-3 + CL-4) + +**Files:** +- Modify: `packages/core/src/core/routing/engine.ts` +- Test: `packages/core/test/unit/engine.test.ts` + +### Task B1: Fix preflight error mis-attribution on duplicate task names (CR-3) + +**Context:** `executeTasks` builds `preflightErrors: ChainStepResult[]` by task name string. Then it maps results back with: + +```ts +preflightErrors.find((e) => e.task === req.task) +``` + +If a chain has two steps with the same capability (e.g. `issue.close` twice) and only the second fails, the `.find()` matches the task name and returns the error for both, mis-attributing it to the first step too. + +**Step 1: Write failing test** + +Add to `packages/core/test/unit/engine.test.ts` inside `describe("executeTasks", ...)`: + +```ts +it("pre-flight correctly attributes errors when same capability appears twice", async () => { + // First call returns a valid card, second call returns undefined (unknown task) + getOperationCardMock + .mockReturnValueOnce(undefined) // step 0 fails: unknown task + .mockReturnValueOnce({ ...baseCard, graphql: { operationName: "IssueClose", documentPath: "x" } }) // step 1 ok + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "issue.close", input: { issueId: "I1" } }, + { task: "issue.close", input: { issueId: "I2" } }, + ], + { githubClient: createGithubClient() }, + ) + + expect(result.status).toBe("failed") + // Only step 0 should have a specific error; step 1 gets "pre-flight failed" fallback + expect(result.results[0]?.ok).toBe(false) + expect(result.results[0]?.error?.message).toContain("Invalid task") + expect(result.results[1]?.ok).toBe(false) + // step 1 should get the generic fallback, NOT the same error as step 0 + expect(result.results[1]?.error?.message).not.toContain("Invalid task") +}) +``` + +**Step 2: Run to confirm failure** + +```bash +cd /Users/aryekogan/repos/ghx/.worktrees/feat-atomic-chaining +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts -t "correctly attributes errors" +``` + +Expected: FAIL (both get the same error currently). + +**Step 3: Fix the error attribution in `engine.ts`** + +In `executeTasks`, change the `preflightErrors` array to track by step index. 
Locate the pre-flight loop (around line 253) and modify: + +```ts +// Change type of preflightErrors to include stepIndex +const preflightErrors: Array = [] + +// In the loop (add index): +for (let i = 0; i < requests.length; i += 1) { + const req = requests[i] + if (req === undefined) continue + try { + const card = getOperationCard(req.task) + if (!card) { + throw new Error(`Invalid task: ${req.task}`) + } + // ... validation unchanged ... + cards.push(card) + } catch (err) { + preflightErrors.push({ + stepIndex: i, + task: req.task, + ok: false, + error: { + code: mapErrorToCode(err), + message: err instanceof Error ? err.message : String(err), + retryable: false, + }, + }) + } +} + +// Change the result mapping: +results: requests.map( + (req, i) => + preflightErrors.find((e) => e.stepIndex === i) ?? { + task: req.task, + ok: false, + error: { code: errorCodes.Unknown, message: "pre-flight failed", retryable: false }, + }, +), +``` + +Note: the `for...of` loop over `requests` must become a `for (let i = 0; ...)` loop. The `cards.push` logic needs adjustment too β€” currently `cards` is built in the same loop but only when preflight succeeds. Keep that: only push to `cards` in the non-error branch. + +**Step 4: Run test** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +``` + +Expected: all engine tests PASS. + +**Step 5: Format + typecheck + commit** + +```bash +pnpm run format +pnpm run typecheck +git add packages/core/src/core/routing/engine.ts packages/core/test/unit/engine.test.ts +git commit -m "fix(engine): track preflight errors by step index, not task name" +``` + +--- + +### Task B2: Validate lookup vars exist in input before Phase 1 (CL-4) + +**Context:** Phase 1 resolution (lookup) reads vars from `req.input` via `lookup.vars` mapping. If a required lookup var is missing from input (e.g. because the YAML card was misconfigured), GitHub receives `undefined` as a variable and returns a confusing error. Add an explicit check. + +**Step 1: Write failing test** + +Add to `packages/core/test/unit/engine.test.ts`: + +```ts +it("pre-flight rejects chain when resolution lookup var is missing from input", async () => { + const cardWithResolution = { + ...baseCard, + graphql: { + operationName: "IssueLabelsSet", + documentPath: "src/gql/operations/issue-labels-set.graphql", + resolution: { + lookup: { + operationName: "IssueLabelsLookup", + documentPath: "src/gql/operations/issue-labels-lookup.graphql", + // requires "owner" and "name" from input + vars: { owner: "owner", name: "name", repoName: "name" }, + }, + inject: [], + }, + }, + } + getOperationCardMock.mockReturnValue(cardWithResolution) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + // input is missing "name" + [{ task: "issue.labels.set", input: { owner: "acme" } }], + { githubClient: createGithubClient() }, + ) + + expect(result.status).toBe("failed") + expect(result.results[0]?.ok).toBe(false) + expect(result.results[0]?.error?.message).toContain("name") +}) +``` + +**Step 2: Run to confirm failure** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts -t "resolution lookup var is missing" +``` + +Expected: FAIL (currently no such check). 
+ +**Step 3: Add lookup var validation in `engine.ts`** + +In the pre-flight loop, after `cards.push(card)` succeeds (the card is valid), add: + +```ts +// Validate resolution lookup vars exist in input +if (card.graphql?.resolution) { + const { lookup } = card.graphql.resolution + for (const [, inputField] of Object.entries(lookup.vars)) { + if (req.input[inputField] === undefined) { + throw new Error( + `Resolution pre-flight failed for '${req.task}': lookup var '${inputField}' is missing from input`, + ) + } + } +} +``` + +Place this check INSIDE the try block, BEFORE `cards.push(card)` (so the card isn't pushed if vars are missing). + +**Step 4: Run all engine tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +``` + +Expected: PASS + +**Step 5: Format + typecheck + commit** + +```bash +pnpm run format +pnpm run typecheck +git add packages/core/src/core/routing/engine.ts packages/core/test/unit/engine.test.ts +git commit -m "fix(engine): validate resolution lookup vars exist in input at pre-flight" +``` + +--- + +## Track C β€” Test Infrastructure (CR-5 + CL-7 + CL-6) + +**Files:** +- Modify: `packages/core/test/unit/engine.test.ts` +- Create: `packages/core/test/unit/document-registry.test.ts` + +### Task C1: Fix ineffective `vi.doMock` calls (CR-5) + +**Context:** Two tests in `engine.test.ts` ("2-item pure-mutation chain" and "status is partial when one step fails") call `vi.doMock` for `@core/gql/document-registry.js`, `@core/gql/batch.js`, and `@core/gql/resolve.js`. These calls are ineffective because those modules are already statically imported by `engine.ts` at the time `engine.ts` was first imported. The mocks never take effect β€” the tests exercise the real implementations. + +**The correct pattern** (already used in `safe-runner.test.ts`): +1. `vi.resetModules()` β€” clears module cache +2. `vi.doMock(...)` β€” registers factory for next import +3. `await import(...)` β€” re-imports with fresh mocks + +However: `engine.test.ts` uses `vi.mock()` at module level for `@core/core/execute/execute.js` and `@core/core/registry/index.js`. These module-level mocks ARE effective because Vitest hoists them. The `vi.doMock` calls in individual tests are the problem. + +**Step 1: Understand what the two broken tests actually need** + +The "2-item pure-mutation chain" test wants to: +- Control what `getMutationDocument` returns (a raw GQL string) +- Control what `buildBatchMutation` returns (a composed document + variables) +- Assert that `query` is called with the composed document + +The "status is partial" test wants the same but simulates the `query` call rejecting. 
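+
+The skeleton of that pattern, stripped to its three moves (factory contents trimmed; this runs inside an async test body):
+
+```ts
+vi.resetModules()                                    // 1. clear the module cache
+vi.doMock("@core/gql/batch.js", () => ({             // 2. register factories before importing
+  buildBatchQuery: vi.fn(),
+  buildBatchMutation: vi.fn(),
+}))
+const { executeTasks } = await import("@core/core/routing/engine.js") // 3. fresh import picks up the mocks
+```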
+ +**Step 2: Rewrite the two broken tests using `vi.resetModules` + dynamic import** + +Replace the two tests (starting at "2-item pure-mutation chain") with: + +```ts +describe("executeTasks β€” batch mutation integration", () => { + it("2-item pure-mutation chain returns success after batch mutation", async () => { + vi.resetModules() + + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + + const cardWithGql = { + ...baseCard, + graphql: { + operationName: "IssueCreate", + documentPath: "src/gql/operations/issue-create.graphql", + }, + } + getOperationCardMock.mockReturnValue(cardWithGql) + + const getMutationDocumentMock = vi.fn().mockReturnValue( + `mutation IssueCreate($repositoryId: ID!, $title: String!) { createIssue(input: {repositoryId: $repositoryId, title: $title}) { issue { id } } }`, + ) + const buildBatchMutationMock = vi.fn().mockReturnValue({ + document: `mutation BatchComposite { step0: createIssue { issue { id } } step1: createIssue { issue { id } } }`, + variables: { step0_repositoryId: "R1", step0_title: "Issue 1", step1_repositoryId: "R2", step1_title: "Issue 2" }, + }) + + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi.fn(), + getMutationDocument: getMutationDocumentMock, + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchMutation: buildBatchMutationMock, + buildBatchQuery: vi.fn(), + })) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const queryMock = vi.fn().mockResolvedValue({ + step0: { issue: { id: "I1" } }, + step1: { issue: { id: "I2" } }, + }) + + const result = await executeTasks( + [ + { task: "issue.create", input: { repositoryId: "R1", title: "Issue 1" } }, + { task: "issue.create", input: { repositoryId: "R2", title: "Issue 2" } }, + ], + { githubClient: createGithubClient({ query: queryMock }) }, + ) + + expect(getMutationDocumentMock).toHaveBeenCalledWith("IssueCreate") + expect(buildBatchMutationMock).toHaveBeenCalled() + expect(result.status).toBe("success") + expect(result.results).toHaveLength(2) + expect(result.results[0]).toMatchObject({ task: "issue.create", ok: true }) + expect(result.results[1]).toMatchObject({ task: "issue.create", ok: true }) + }) + + it("status is failed when batch mutation query rejects", async () => { + vi.resetModules() + + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + + const cardWithGql = { + ...baseCard, + graphql: { + operationName: "IssueCreate", + documentPath: "src/gql/operations/issue-create.graphql", + }, + } + getOperationCardMock.mockReturnValue(cardWithGql) + + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi.fn(), + getMutationDocument: vi.fn().mockReturnValue( + `mutation IssueCreate($repositoryId: ID!, $title: String!) 
{ createIssue(input: {repositoryId: $repositoryId, title: $title}) { issue { id } } }`, + ), + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchMutation: vi.fn().mockReturnValue({ + document: `mutation BatchComposite { step0: createIssue { issue { id } } }`, + variables: {}, + }), + buildBatchQuery: vi.fn(), + })) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "issue.create", input: { repositoryId: "R1", title: "Issue 1" } }, + { task: "issue.create", input: { repositoryId: "R2", title: "Issue 2" } }, + ], + { + githubClient: createGithubClient({ + query: vi.fn().mockRejectedValue(new Error("network error")), + }), + }, + ) + + expect(result.status).toBe("failed") + expect(result.results[0]?.ok).toBe(false) + expect(result.results[1]?.ok).toBe(false) + }) +}) +``` + +**Step 3: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts +``` + +Expected: PASS, and now `getMutationDocumentMock` and `buildBatchMutationMock` are actually called (verifiable via `.toHaveBeenCalledWith`). + +**Step 4: Format + typecheck + commit** + +```bash +pnpm run format +pnpm run typecheck +git add packages/core/test/unit/engine.test.ts +git commit -m "fix(test): replace ineffective vi.doMock with resetModules+dynamic import pattern" +``` + +--- + +### Task C2: Add mixed-resolution chain test (CL-7) + +**Context:** No test covers a chain where some steps need Phase 1 resolution (lookup) and some don't. This is the most common real-world case (e.g., step 0 is `issue.close` with no resolution, step 1 is `issue.labels.set` needing label ID lookup). + +**Step 1: Write the test** + +Add a new `describe` block in `engine.test.ts`: + +```ts +describe("executeTasks β€” mixed resolution chain", () => { + it("correctly handles a chain where only one step requires Phase 1 resolution", async () => { + vi.resetModules() + + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + + const cardNoResolution = { + ...baseCard, + graphql: { + operationName: "IssueClose", + documentPath: "src/gql/operations/issue-close.graphql", + }, + } + const cardWithResolution = { + ...baseCard, + graphql: { + operationName: "IssueLabelsSet", + documentPath: "src/gql/operations/issue-labels-set.graphql", + resolution: { + lookup: { + operationName: "IssueLabelsLookup", + documentPath: "src/gql/operations/issue-labels-lookup.graphql", + vars: { owner: "owner", name: "name" }, + }, + inject: [ + { target: "labelIds", source: "map_array", from_input: "labels", + nodes_path: "repository.labels.nodes", match_field: "name", extract_field: "id" }, + ], + }, + }, + } + + getOperationCardMock + .mockReturnValueOnce(cardNoResolution) // step 0 + .mockReturnValueOnce(cardWithResolution) // step 1 + + const lookupDocMock = vi.fn().mockReturnValue( + `query IssueLabelsLookup($owner: String!, $name: String!) { repository(owner: $owner, name: $name) { labels(first: 100) { nodes { id name } } } }`, + ) + const mutDocMock = vi.fn().mockImplementation((op: string) => { + if (op === "IssueClose") return `mutation IssueClose($issueId: ID!) { closeIssue(input: {issueId: $issueId}) { issue { id } } }` + return `mutation IssueLabelsSet($issueId: ID!, $labelIds: [ID!]!) 
{ updateIssue(input: {id: $issueId, labelIds: $labelIds}) { issue { id } } }` + }) + const buildBatchQueryMock = vi.fn().mockReturnValue({ + document: `query BatchQuery { step1: repository { labels { nodes { id name } } } }`, + variables: { step1_owner: "acme", step1_name: "repo" }, + }) + const buildBatchMutMock = vi.fn().mockReturnValue({ + document: `mutation BatchMut { step0: closeIssue { issue { id } } step1: updateIssue { issue { id } } }`, + variables: {}, + }) + + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: lookupDocMock, + getMutationDocument: mutDocMock, + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchQuery: buildBatchQueryMock, + buildBatchMutation: buildBatchMutMock, + })) + + // Phase 1 returns label nodes; Phase 2 returns mutation result + const queryMock = vi.fn() + .mockResolvedValueOnce({ + // Phase 1 result: step1 lookup + step1: { repository: { labels: { nodes: [{ id: "L1", name: "bug" }] } } }, + }) + .mockResolvedValueOnce({ + // Phase 2 result: mutations + step0: { closeIssue: { issue: { id: "I1" } } }, + step1: { updateIssue: { issue: { id: "I2" } } }, + }) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "issue.close", input: { issueId: "I1" } }, + { task: "issue.labels.set", input: { issueId: "I2", owner: "acme", name: "repo", labels: ["bug"] } }, + ], + { githubClient: createGithubClient({ query: queryMock }) }, + ) + + // Phase 1 should have been called (lookup for step 1) + expect(buildBatchQueryMock).toHaveBeenCalled() + // Phase 2 should have been called + expect(buildBatchMutMock).toHaveBeenCalled() + expect(result.status).toBe("success") + expect(result.results[0]).toMatchObject({ task: "issue.close", ok: true }) + expect(result.results[1]).toMatchObject({ task: "issue.labels.set", ok: true }) + }) +}) +``` + +**Step 2: Run test** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/engine.test.ts -t "mixed resolution" +``` + +Investigate failures carefully β€” if Phase 1 or Phase 2 is wired incorrectly in the engine, this test will expose it. + +**Step 3: Format + typecheck + commit** + +```bash +pnpm run format +pnpm run typecheck +git add packages/core/test/unit/engine.test.ts +git commit -m "test(engine): add mixed-resolution chain coverage (step with + without Phase 1)" +``` + +--- + +### Task C3: Add document registry validation test (CL-6) + +**Context:** `document-registry.ts` manually maps operation names to generated GraphQL documents. If a generated file is renamed or removed, the registry silently breaks. A simple test asserting all registered operations resolve prevents regressions. 
+ +**Step 1: Create test file** + +Create `packages/core/test/unit/document-registry.test.ts`: + +```ts +import { describe, expect, it } from "vitest" +import { getLookupDocument, getMutationDocument } from "@core/gql/document-registry.js" + +const EXPECTED_LOOKUP_OPERATIONS = [ + "IssueLabelsLookup", + "IssueAssigneesLookup", + "IssueMilestoneLookup", + "IssueParentLookup", + "IssueCreateRepositoryId", + "PrNodeId", +] + +const EXPECTED_MUTATION_OPERATIONS = [ + "IssueAssigneesUpdate", + "IssueBlockedByAdd", + "IssueBlockedByRemove", + "IssueClose", + "IssueCommentCreate", + "IssueCreate", + "IssueDelete", + "IssueLabelsAdd", + "IssueLabelsUpdate", + "IssueMilestoneSet", + "IssueParentRemove", + "IssueParentSet", + "IssueReopen", + "IssueUpdate", + "PrCommentReply", + "PrCommentResolve", + "PrCommentUnresolve", + "PrReviewSubmit", +] + +describe("document-registry", () => { + it.each(EXPECTED_LOOKUP_OPERATIONS)( + "getLookupDocument resolves '%s' to a non-empty string", + (op) => { + const doc = getLookupDocument(op) + expect(typeof doc).toBe("string") + expect(doc.length).toBeGreaterThan(0) + }, + ) + + it.each(EXPECTED_MUTATION_OPERATIONS)( + "getMutationDocument resolves '%s' to a non-empty string", + (op) => { + const doc = getMutationDocument(op) + expect(typeof doc).toBe("string") + expect(doc.length).toBeGreaterThan(0) + }, + ) + + it("getLookupDocument throws for unknown operation", () => { + expect(() => getLookupDocument("UnknownOp")).toThrow(/UnknownOp/) + }) + + it("getMutationDocument throws for unknown operation", () => { + expect(() => getMutationDocument("UnknownOp")).toThrow(/UnknownOp/) + }) +}) +``` + +**Step 2: Run tests** + +```bash +pnpm --filter @ghx-dev/core exec vitest run test/unit/document-registry.test.ts +``` + +Expected: PASS (all operations should resolve if registry is intact). + +**Step 3: Format + typecheck + commit** + +```bash +pnpm run format +pnpm run typecheck +git add packages/core/test/unit/document-registry.test.ts +git commit -m "test(registry): validate all registered lookup and mutation operations resolve" +``` + +--- + +## Track D β€” Documentation (CL-1) + +**Files:** +- Modify: `packages/core/src/core/registry/types.ts` +- Modify: `docs/architecture/operation-cards.md` + +### Task D1: Document `input` inject source type + +**Context:** `InputPassthroughInject` (`source: "input"`) is implemented in `types.ts` and `resolve.ts`, used in `issue.labels.add.yaml`, but not documented anywhere card authors would look. Future card authors will create unnecessary Phase 1 lookups instead of using this simpler form. + +**Step 1: Add JSDoc to `types.ts`** + +In `packages/core/src/core/registry/types.ts`, add comments to each inject interface: + +```ts +/** + * Injects a scalar value extracted from a Phase 1 lookup result. + * `path` is a dot-notation path into the lookup response data. + * Example: `{ source: "scalar", target: "pullRequestId", path: "repository.pullRequest.id" }` + */ +export interface ScalarInject { + target: string + source: "scalar" + path: string +} + +/** + * Resolves a list of human-readable names to their corresponding node IDs using a Phase 1 lookup. + * Uses case-insensitive matching on `match_field`. + * Example: resolves label names β†’ label IDs via `repository.labels.nodes`. 
+ */ +export interface MapArrayInject { + target: string + source: "map_array" + from_input: string + nodes_path: string + match_field: string + extract_field: string +} + +/** + * Passes a value directly from the step's `input` into the mutation variable β€” no Phase 1 lookup needed. + * Use when the caller already has the node ID (e.g. `issueId` is passed directly by the agent). + * Example: `{ source: "input", target: "labelableId", from_input: "issueId" }` + */ +export interface InputPassthroughInject { + target: string + source: "input" + from_input: string +} +``` + +**Step 2: Update architecture doc** + +Open `docs/architecture/operation-cards.md`. Find the section describing the `inject` array in resolution blocks. Add documentation for the three `source` types. Look for a YAML example block showing `inject:` β€” expand it to include all three types: + +```yaml +# inject source types: + +# 1. scalar β€” extract a single value from Phase 1 lookup result using dot-path +inject: + - target: pullRequestId + source: scalar + path: repository.pullRequest.id + +# 2. map_array β€” resolve a list of names β†’ IDs via Phase 1 lookup nodes +inject: + - target: labelIds + source: map_array + from_input: labels + nodes_path: repository.labels.nodes + match_field: name + extract_field: id + +# 3. input β€” pass value directly from step input, no Phase 1 lookup needed +inject: + - target: labelableId + source: input + from_input: issueId +``` + +If the doc doesn't have a resolution section, add one under the `graphql:` block documentation. + +**Step 3: Typecheck (no new code, just comments)** + +```bash +pnpm run typecheck +``` + +**Step 4: Format + commit** + +```bash +pnpm run format +git add packages/core/src/core/registry/types.ts docs/architecture/operation-cards.md +git commit -m "docs(types): document all three InjectSpec source types with JSDoc and arch doc examples" +``` + +--- + +## Final Integration + +After all four tracks complete, run the full CI suite from the worktree: + +```bash +cd /Users/aryekogan/repos/ghx/.worktrees/feat-atomic-chaining +pnpm run ci --outputStyle=static +``` + +Expected: all checks pass. If typecheck fails, fix types before declaring done. diff --git a/packages/core/src/cli/commands/chain.ts b/packages/core/src/cli/commands/chain.ts new file mode 100644 index 00000000..1c93fda2 --- /dev/null +++ b/packages/core/src/cli/commands/chain.ts @@ -0,0 +1,149 @@ +import { executeTasks } from "@core/core/routing/engine.js" +import { createGithubClient } from "@core/gql/github-client.js" +import { resolveGraphqlUrl } from "@core/gql/transport.js" +import { readStdin } from "./run.js" + +const GITHUB_GRAPHQL_ENDPOINT = resolveGraphqlUrl() + +interface ParsedChainFlags { + stepsSource: "stdin" | { raw: string } + skipGhPreflight: boolean +} + +export function parseChainFlags(argv: string[]): ParsedChainFlags { + const stepsIndex = argv.findIndex((arg) => arg === "--steps") + const inlineSteps = argv.find((arg) => arg.startsWith("--steps=")) + const stepsCandidate = stepsIndex >= 0 ? argv[stepsIndex + 1] : undefined + + if (stepsCandidate === "-") { + const skipGhPreflight = !argv.includes("--check-gh-preflight") + return { stepsSource: "stdin", skipGhPreflight } + } + + const stepsRaw = + stepsCandidate && !stepsCandidate.startsWith("--") + ? stepsCandidate + : inlineSteps + ? 
inlineSteps.slice("--steps=".length) + : undefined + + if (!stepsRaw) { + throw new Error("Missing --steps JSON") + } + + const skipGhPreflight = !argv.includes("--check-gh-preflight") + return { stepsSource: { raw: stepsRaw }, skipGhPreflight } +} + +function parseJsonSteps(raw: string): Array<{ task: string; input: Record }> { + let parsed: unknown + try { + parsed = JSON.parse(raw) + } catch { + throw new Error("Invalid JSON for --steps") + } + + if (!Array.isArray(parsed)) { + throw new Error("--steps must be a JSON array") + } + + for (const item of parsed) { + if ( + typeof item !== "object" || + item === null || + typeof (item as Record).task !== "string" || + typeof (item as Record).input !== "object" || + (item as Record).input === null + ) { + throw new Error('Each step must have "task" (string) and "input" (object) fields') + } + } + + return parsed as Array<{ task: string; input: Record }> +} + +function resolveGithubToken(): string { + const token = process.env.GITHUB_TOKEN ?? process.env.GH_TOKEN + if (!token || token.trim().length === 0) { + throw new Error("Missing GITHUB_TOKEN or GH_TOKEN for GraphQL transport") + } + + return token +} + +async function executeGraphqlRequest( + token: string, + query: string, + variables?: Record, +): Promise { + const response = await fetch(GITHUB_GRAPHQL_ENDPOINT, { + method: "POST", + headers: { + "content-type": "application/json", + accept: "application/json", + authorization: `Bearer ${token}`, + "user-agent": "ghx", + }, + body: JSON.stringify({ query, variables: variables ?? {} }), + signal: AbortSignal.timeout(30_000), + }) + + let payload: { data?: TData; errors?: Array<{ message?: string }>; message?: string } + try { + payload = (await response.json()) as typeof payload + } catch { + throw new Error(`GitHub GraphQL returned non-JSON response (status ${response.status})`) + } + + if (!response.ok) { + const message = + payload.message ?? `GitHub GraphQL request failed with status ${response.status}` + throw new Error(message) + } + + if (Array.isArray(payload.errors) && payload.errors.length > 0) { + const message = payload.errors[0]?.message ?? "GitHub GraphQL returned errors" + throw new Error(message) + } + + if (payload.data === undefined) { + throw new Error("GitHub GraphQL response missing data") + } + + return payload.data +} + +export async function chainCommand(argv: string[] = []): Promise { + if (argv.length === 0) { + process.stdout.write( + "Usage: ghx chain --steps '' | --steps - [--check-gh-preflight]\n", + ) + return 1 + } + + try { + const { stepsSource, skipGhPreflight } = parseChainFlags(argv) + const steps = + stepsSource === "stdin" ? parseJsonSteps(await readStdin()) : parseJsonSteps(stepsSource.raw) + const githubToken = resolveGithubToken() + + const githubClient = createGithubClient({ + async execute(query: string, variables?: Record): Promise { + return executeGraphqlRequest(githubToken, query, variables) + }, + }) + + const result = await executeTasks(steps, { + githubClient, + githubToken, + skipGhPreflight, + }) + + process.stdout.write(`${JSON.stringify(result, null, 2)}\n`) + return result.status === "success" || result.status === "partial" ? 0 : 1 + } catch (err) { + const message = err instanceof Error ? 
err.message : String(err) + process.stderr.write(`${message}\n`) + return 1 + } +} diff --git a/packages/core/src/cli/index.ts b/packages/core/src/cli/index.ts index 6bb14190..2ad9c692 100644 --- a/packages/core/src/cli/index.ts +++ b/packages/core/src/cli/index.ts @@ -5,6 +5,7 @@ import { pathToFileURL } from "node:url" import { capabilitiesExplainCommand } from "./commands/capabilities-explain.js" import { capabilitiesListCommand } from "./commands/capabilities-list.js" +import { chainCommand } from "./commands/chain.js" import { runCommand } from "./commands/run.js" import { setupCommand } from "./commands/setup.js" @@ -12,6 +13,7 @@ function usage(): string { return [ "Usage:", " ghx run --input '' | --input - [--check-gh-preflight]", + " ghx chain --steps '' | --steps - [--check-gh-preflight]", " ghx setup --scope [--yes] [--dry-run] [--verify] [--track]", " ghx capabilities list", " ghx capabilities explain ", @@ -30,6 +32,10 @@ export async function main(argv: string[] = process.argv.slice(2)): Promise { error?: ResultError meta: ResultMeta } + +export type ChainStatus = "success" | "partial" | "failed" + +export interface ChainStepResult { + task: string + ok: boolean + data?: unknown + error?: ResultError +} + +export interface ChainResultEnvelope { + status: ChainStatus + results: ChainStepResult[] + meta: { + route_used: RouteSource + total: number + succeeded: number + failed: number + } +} diff --git a/packages/core/src/core/registry/cards/issue.assignees.set.yaml b/packages/core/src/core/registry/cards/issue.assignees.set.yaml index 92be5b22..e05bd316 100644 --- a/packages/core/src/core/registry/cards/issue.assignees.set.yaml +++ b/packages/core/src/core/registry/cards/issue.assignees.set.yaml @@ -25,3 +25,16 @@ routing: graphql: operationName: IssueAssigneesUpdate documentPath: src/gql/operations/issue-assignees-update.graphql + resolution: + lookup: + operationName: IssueAssigneesLookup + documentPath: src/gql/operations/issue-assignees-lookup.graphql + vars: + issueId: issueId + inject: + - target: assigneeIds + source: map_array + from_input: assignees + nodes_path: node.repository.assignableUsers.nodes + match_field: login + extract_field: id diff --git a/packages/core/src/core/registry/cards/issue.create.yaml b/packages/core/src/core/registry/cards/issue.create.yaml index 0bbdf370..aa1becf5 100644 --- a/packages/core/src/core/registry/cards/issue.create.yaml +++ b/packages/core/src/core/registry/cards/issue.create.yaml @@ -26,3 +26,14 @@ routing: graphql: operationName: IssueCreate documentPath: src/gql/operations/issue-create.graphql + resolution: + lookup: + operationName: IssueCreateRepositoryId + documentPath: src/gql/operations/issue-create-repository-id.graphql + vars: + owner: owner + name: name + inject: + - target: repositoryId + source: scalar + path: repository.id diff --git a/packages/core/src/core/registry/cards/issue.labels.add.yaml b/packages/core/src/core/registry/cards/issue.labels.add.yaml index 4911c5d6..982dc3a0 100644 --- a/packages/core/src/core/registry/cards/issue.labels.add.yaml +++ b/packages/core/src/core/registry/cards/issue.labels.add.yaml @@ -23,3 +23,22 @@ output_schema: routing: preferred: graphql fallbacks: [] +graphql: + operationName: IssueLabelsAdd + documentPath: src/gql/operations/issue-labels-add.graphql + resolution: + lookup: + operationName: IssueLabelsLookup + documentPath: src/gql/operations/issue-labels-lookup.graphql + vars: + issueId: issueId + inject: + - target: labelableId + source: input + from_input: issueId + - 
target: labelIds + source: map_array + from_input: labels + nodes_path: node.repository.labels.nodes + match_field: name + extract_field: id diff --git a/packages/core/src/core/registry/cards/issue.labels.set.yaml b/packages/core/src/core/registry/cards/issue.labels.set.yaml index eee24ba1..3da4dd08 100644 --- a/packages/core/src/core/registry/cards/issue.labels.set.yaml +++ b/packages/core/src/core/registry/cards/issue.labels.set.yaml @@ -25,3 +25,16 @@ routing: graphql: operationName: IssueLabelsUpdate documentPath: src/gql/operations/issue-labels-update.graphql + resolution: + lookup: + operationName: IssueLabelsLookup + documentPath: src/gql/operations/issue-labels-lookup.graphql + vars: + issueId: issueId + inject: + - target: labelIds + source: map_array + from_input: labels + nodes_path: node.repository.labels.nodes + match_field: name + extract_field: id diff --git a/packages/core/src/core/registry/cards/issue.milestone.set.yaml b/packages/core/src/core/registry/cards/issue.milestone.set.yaml index 45c3b1ad..de778783 100644 --- a/packages/core/src/core/registry/cards/issue.milestone.set.yaml +++ b/packages/core/src/core/registry/cards/issue.milestone.set.yaml @@ -22,3 +22,14 @@ routing: graphql: operationName: IssueMilestoneSet documentPath: src/gql/operations/issue-milestone-set.graphql + resolution: + lookup: + operationName: IssueMilestoneLookup + documentPath: src/gql/operations/issue-milestone-lookup.graphql + vars: + issueId: issueId + milestoneNumber: milestoneNumber + inject: + - target: milestoneId + source: scalar + path: node.repository.milestone.id diff --git a/packages/core/src/core/registry/cards/issue.relations.parent.remove.yaml b/packages/core/src/core/registry/cards/issue.relations.parent.remove.yaml index ba5a98e3..7026b61e 100644 --- a/packages/core/src/core/registry/cards/issue.relations.parent.remove.yaml +++ b/packages/core/src/core/registry/cards/issue.relations.parent.remove.yaml @@ -20,3 +20,13 @@ routing: graphql: operationName: IssueParentRemove documentPath: src/gql/operations/issue-parent-remove.graphql + resolution: + lookup: + operationName: IssueParentLookup + documentPath: src/gql/operations/issue-parent-lookup.graphql + vars: + issueId: issueId + inject: + - target: parentIssueId + source: scalar + path: node.parent.id diff --git a/packages/core/src/core/registry/cards/pr.reviews.submit.yaml b/packages/core/src/core/registry/cards/pr.reviews.submit.yaml index 323bfd5b..ff6bfb16 100644 --- a/packages/core/src/core/registry/cards/pr.reviews.submit.yaml +++ b/packages/core/src/core/registry/cards/pr.reviews.submit.yaml @@ -39,3 +39,15 @@ routing: graphql: operationName: PrReviewSubmit documentPath: src/gql/operations/pr-review-submit.graphql + resolution: + lookup: + operationName: PrNodeId + documentPath: src/gql/operations/pr-node-id.graphql + vars: + owner: owner + name: name + prNumber: prNumber + inject: + - target: pullRequestId + source: scalar + path: repository.pullRequest.id diff --git a/packages/core/src/core/registry/operation-card-schema.ts b/packages/core/src/core/registry/operation-card-schema.ts index d058e098..03aee2c9 100644 --- a/packages/core/src/core/registry/operation-card-schema.ts +++ b/packages/core/src/core/registry/operation-card-schema.ts @@ -51,6 +51,71 @@ export const operationCardSchema = { }, additionalProperties: false, }, + resolution: { + type: "object", + required: ["lookup", "inject"], + properties: { + lookup: { + type: "object", + required: ["operationName", "documentPath", "vars"], + properties: { + 
operationName: { type: "string", minLength: 1 }, + documentPath: { type: "string", minLength: 1 }, + vars: { type: "object" }, + }, + additionalProperties: false, + }, + inject: { + type: "array", + minItems: 1, + items: { + oneOf: [ + { + type: "object", + required: ["target", "source", "path"], + properties: { + target: { type: "string", minLength: 1 }, + source: { const: "scalar" }, + path: { type: "string", minLength: 1 }, + }, + additionalProperties: false, + }, + { + type: "object", + required: [ + "target", + "source", + "from_input", + "nodes_path", + "match_field", + "extract_field", + ], + properties: { + target: { type: "string", minLength: 1 }, + source: { const: "map_array" }, + from_input: { type: "string", minLength: 1 }, + nodes_path: { type: "string", minLength: 1 }, + match_field: { type: "string", minLength: 1 }, + extract_field: { type: "string", minLength: 1 }, + }, + additionalProperties: false, + }, + { + type: "object", + required: ["target", "source", "from_input"], + properties: { + target: { type: "string", minLength: 1 }, + source: { const: "input" }, + from_input: { type: "string", minLength: 1 }, + }, + additionalProperties: false, + }, + ], + }, + }, + }, + additionalProperties: false, + }, }, additionalProperties: false, }, diff --git a/packages/core/src/core/registry/types.ts b/packages/core/src/core/registry/types.ts index 335a58b3..7ff9f003 100644 --- a/packages/core/src/core/registry/types.ts +++ b/packages/core/src/core/registry/types.ts @@ -8,6 +8,84 @@ export interface SuitabilityRule { reason: string } +/** + * Extracts a single value from a Phase 1 lookup result using a dot-notation path. + * + * Use when the mutation needs one node ID that can be resolved via a lookup query. + * + * @example + * ```yaml + * inject: + * - target: pullRequestId + * source: scalar + * path: repository.pullRequest.id + * ``` + */ +export interface ScalarInject { + target: string + source: "scalar" + path: string +} + +/** + * Resolves a list of human-readable names to node IDs using a Phase 1 lookup result. + * + * Matching is case-insensitive. Use when the mutation needs an array of IDs + * (e.g. label IDs, assignee IDs) that must be looked up by name. + * + * @example + * ```yaml + * inject: + * - target: labelIds + * source: map_array + * from_input: labels # input field containing list of names + * nodes_path: repository.labels.nodes + * match_field: name # field on each node to match against input names + * extract_field: id # field on each node to extract as the resolved value + * ``` + */ +export interface MapArrayInject { + target: string + source: "map_array" + from_input: string + nodes_path: string + match_field: string + extract_field: string +} + +/** + * Passes a value directly from the step's `input` into a mutation variable. + * + * No Phase 1 lookup is required. Use when the caller already has the required node ID + * (e.g. the agent passes `issueId` directly), avoiding an unnecessary resolution round-trip. 
+ * + * @example + * ```yaml + * inject: + * - target: labelableId + * source: input + * from_input: issueId # the input field whose value is passed through + * ``` + */ +export interface InputPassthroughInject { + target: string + source: "input" + from_input: string +} + +export type InjectSpec = ScalarInject | MapArrayInject | InputPassthroughInject + +export interface LookupSpec { + operationName: string + documentPath: string + vars: Record +} + +export interface ResolutionConfig { + lookup: LookupSpec + inject: InjectSpec[] +} + export interface OperationCard> { capability_id: string version: string @@ -25,6 +103,7 @@ export interface OperationCard> { documentPath: string variables?: Record limits?: { maxPageSize?: number } + resolution?: ResolutionConfig } cli?: { command: string diff --git a/packages/core/src/core/routing/engine.ts b/packages/core/src/core/routing/engine.ts index 54417661..d4b9657f 100644 --- a/packages/core/src/core/routing/engine.ts +++ b/packages/core/src/core/routing/engine.ts @@ -1,6 +1,13 @@ -import type { ResultEnvelope, RouteSource } from "@core/core/contracts/envelope.js" +import type { + ChainResultEnvelope, + ChainStatus, + ChainStepResult, + ResultEnvelope, + RouteSource, +} from "@core/core/contracts/envelope.js" import type { TaskRequest } from "@core/core/contracts/task.js" import { errorCodes } from "@core/core/errors/codes.js" +import { mapErrorToCode } from "@core/core/errors/map-error.js" import { execute } from "@core/core/execute/execute.js" import { type CliCapabilityId, @@ -12,11 +19,15 @@ import { createSafeCliCommandRunner } from "@core/core/execution/cli/safe-runner import { normalizeError } from "@core/core/execution/normalizer.js" import { preflightCheck } from "@core/core/execution/preflight.js" import { getOperationCard } from "@core/core/registry/index.js" +import { validateInput } from "@core/core/registry/schema-validator.js" import { routePreferenceOrder } from "@core/core/routing/policy.js" import type { RouteReasonCode } from "@core/core/routing/reason-codes.js" +import { buildBatchMutation, buildBatchQuery } from "@core/gql/batch.js" +import { getLookupDocument, getMutationDocument } from "@core/gql/document-registry.js" import type { GithubClient } from "@core/gql/github-client.js" +import { applyInject, buildMutationVars } from "@core/gql/resolve.js" -type ExecutionDeps = { +export type ExecutionDeps = { githubClient: GithubClient githubToken?: string | null cliRunner?: CliCommandRunner @@ -94,6 +105,10 @@ async function detectCliEnvironmentCached(runner: CliCommandRunner): Promise }>, + deps: ExecutionDeps, +): Promise { + // 1-item: delegate to existing routing engine + if (requests.length === 1) { + const [req] = requests + if (req === undefined) { + // This should never happen, but TypeScript needs it + return { + status: "failed", + results: [], + meta: { route_used: "graphql", total: 0, succeeded: 0, failed: 0 }, + } + } + + const result = await executeTask({ task: req.task, input: req.input }, deps) + const step: ChainStepResult = result.ok + ? { task: req.task, ok: true, data: result.data } + : { + task: req.task, + ok: false, + error: result.error || { + code: errorCodes.Unknown, + message: "Unknown error", + retryable: false, + }, + } + return { + status: result.ok ? "success" : "failed", + results: [step], + meta: { + route_used: result.meta?.route_used ?? "graphql", + total: 1, + succeeded: result.ok ? 1 : 0, + failed: result.ok ? 
0 : 1, + }, + } + } + + // Pre-flight: validate all steps + const preflightErrorByIndex = new Map() + const cards: NonNullable>[] = [] + for (let i = 0; i < requests.length; i += 1) { + const req = requests[i] + if (req === undefined) continue + try { + const card = getOperationCard(req.task) + if (!card) { + throw new Error(`Invalid task: ${req.task}`) + } + + const inputValidation = validateInput(card.input_schema, req.input) + if (!inputValidation.ok) { + const details = inputValidation.errors + .map((e) => `${e.instancePath || "root"}: ${e.message}`) + .join("; ") + throw new Error(`Input validation failed: ${details}`) + } + + if (!card.graphql) { + throw new Error(`capability '${req.task}' has no GraphQL route and cannot be chained`) + } + + // Validate that all resolution lookup vars are present in input + if (card.graphql.resolution) { + const { lookup } = card.graphql.resolution + for (const [, inputField] of Object.entries(lookup.vars)) { + if (req.input[inputField] === undefined) { + throw new Error( + `Resolution pre-flight failed for '${req.task}': lookup var '${inputField}' is missing from input`, + ) + } + } + } + + cards.push(card) + } catch (err) { + preflightErrorByIndex.set(i, { + task: req.task, + ok: false, + error: { + code: mapErrorToCode(err), + message: err instanceof Error ? err.message : String(err), + retryable: false, + }, + }) + } + } + + if (preflightErrorByIndex.size > 0) { + return { + status: "failed", + results: requests.map( + (req, i) => + preflightErrorByIndex.get(i) ?? { + task: req.task, + ok: false, + error: { code: errorCodes.Unknown, message: "pre-flight failed", retryable: false }, + }, + ), + meta: { + route_used: "graphql", + total: requests.length, + succeeded: 0, + failed: requests.length, + }, + } + } + + // Phase 1: batch resolution queries (steps with card.graphql.resolution) + const lookupInputs: Array<{ + alias: string + query: string + variables: Record + stepIndex: number + }> = [] + for (let i = 0; i < requests.length; i += 1) { + const card = cards[i] + const req = requests[i] + if (card === undefined || req === undefined) continue + if (!card.graphql?.resolution) continue + + const { lookup } = card.graphql.resolution + const lookupVars: Record = {} + for (const [lookupVar, inputField] of Object.entries(lookup.vars)) { + lookupVars[lookupVar] = req.input[inputField] + } + lookupInputs.push({ + alias: `step${i}`, + query: getLookupDocument(lookup.operationName), + variables: lookupVars, + stepIndex: i, + }) + } + + const lookupResults: Record = {} + if (lookupInputs.length > 0) { + try { + const { document, variables } = buildBatchQuery( + lookupInputs.map(({ alias, query, variables }) => ({ alias, query, variables })), + ) + const rawResult = await deps.githubClient.query(document, variables) + // Un-alias results: BatchChain result has keys like "step0", "step2", etc. + for (const { alias, stepIndex } of lookupInputs) { + lookupResults[stepIndex] = (rawResult as Record)[alias] + } + } catch (err) { + // Phase 1 failure: mark all steps as failed + const errorMsg = err instanceof Error ? 
err.message : String(err) + const code = mapErrorToCode(err) + return { + status: "failed", + results: requests.map((req) => ({ + task: req.task, + ok: false, + error: { + code, + message: `Phase 1 (resolution) failed: ${errorMsg}`, + retryable: isRetryableCode(code), + }, + })), + meta: { + route_used: "graphql", + total: requests.length, + succeeded: 0, + failed: requests.length, + }, + } + } + } + + // Phase 2: batch mutations + const mutationInputs: Array<{ + alias: string + mutation: string + variables: Record + stepIndex: number + }> = [] + const stepPreResults: Record = {} + + for (let i = 0; i < requests.length; i += 1) { + const card = cards[i] + const req = requests[i] + if (card === undefined || req === undefined) continue + + try { + const resolved: Record = {} + if (card.graphql?.resolution && lookupResults[i] !== undefined) { + for (const spec of card.graphql.resolution.inject) { + Object.assign(resolved, applyInject(spec, lookupResults[i], req.input)) + } + } + + if (card.graphql === undefined) { + throw new Error("card.graphql is unexpectedly undefined") + } + + const mutDoc = getMutationDocument(card.graphql.operationName) + const mutVars = buildMutationVars(mutDoc, req.input, resolved) + mutationInputs.push({ + alias: `step${i}`, + mutation: mutDoc, + variables: mutVars, + stepIndex: i, + }) + } catch (err) { + stepPreResults[i] = { + task: req.task, + ok: false, + error: { + code: mapErrorToCode(err), + message: err instanceof Error ? err.message : String(err), + retryable: false, + }, + } + } + } + + let rawMutResult: Record = {} + if (mutationInputs.length > 0) { + try { + const { document, variables } = buildBatchMutation( + mutationInputs.map(({ alias, mutation, variables }) => ({ alias, mutation, variables })), + ) + rawMutResult = (await deps.githubClient.query(document, variables)) as Record + } catch (err) { + // Whole batch mutation failed β€” mark all pending steps as failed + const code = mapErrorToCode(err) + for (const { stepIndex } of mutationInputs) { + const reqAtIndex = requests[stepIndex] + if (reqAtIndex !== undefined) { + stepPreResults[stepIndex] = { + task: reqAtIndex.task, + ok: false, + error: { + code, + message: err instanceof Error ? err.message : String(err), + retryable: isRetryableCode(code), + }, + } + } + } + } + } + + // Assemble results + const results: ChainStepResult[] = requests.map((req, stepIndex) => { + const preResult = stepPreResults[stepIndex] + if (preResult !== undefined) return preResult + + const mutInput = mutationInputs.find((m) => m.stepIndex === stepIndex) + if (mutInput === undefined) { + return { + task: req.task, + ok: false, + error: { code: errorCodes.Unknown, message: "step skipped", retryable: false }, + } + } + if (rawMutResult == null || typeof rawMutResult !== "object") { + return { + task: req.task, + ok: false, + error: { + code: errorCodes.Unknown, + message: `unexpected mutation response shape for alias ${mutInput.alias}`, + retryable: false, + }, + } + } + if (!(mutInput.alias in rawMutResult)) { + return { + task: req.task, + ok: false, + error: { + code: errorCodes.Unknown, + message: `missing mutation result for alias ${mutInput.alias}`, + retryable: false, + }, + } + } + const data = rawMutResult[mutInput.alias] + return { task: req.task, ok: true, data } + }) + + const succeeded = results.filter((r) => r.ok).length + const status: ChainStatus = + succeeded === results.length ? "success" : succeeded === 0 ? 
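+      // none succeeded -> "failed"; a mix of successes and failures -> "partial"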
"failed" : "partial" + + return { + status, + results, + meta: { + route_used: "graphql", + total: results.length, + succeeded, + failed: results.length - succeeded, + }, + } +} diff --git a/packages/core/src/gql/batch.ts b/packages/core/src/gql/batch.ts index d812fe7a..565021c9 100644 --- a/packages/core/src/gql/batch.ts +++ b/packages/core/src/gql/batch.ts @@ -11,6 +11,17 @@ export type BatchMutationResult = { variables: GraphqlVariables } +export type BatchQueryInput = { + alias: string + query: string + variables: GraphqlVariables +} + +export type BatchQueryResult = { + document: string + variables: GraphqlVariables +} + export function buildBatchMutation(operations: BatchOperationInput[]): BatchMutationResult { if (operations.length === 0) { throw new Error("buildBatchMutation requires at least one operation") @@ -19,9 +30,17 @@ export function buildBatchMutation(operations: BatchOperationInput[]): BatchMuta const allVarDeclarations: string[] = [] const allSelections: string[] = [] const mergedVariables: GraphqlVariables = {} + const allFragments = new Map() for (const op of operations) { - const parsed = parseMutation(op.mutation) + const parsed = parseOperation(op.mutation) + + // Collect unique fragment definitions + for (const [name, text] of parsed.fragments) { + if (!allFragments.has(name)) { + allFragments.set(name, text) + } + } // Prefix variable declarations for (const varDecl of parsed.variableDeclarations) { @@ -50,25 +69,30 @@ export function buildBatchMutation(operations: BatchOperationInput[]): BatchMuta } } - const document = `mutation BatchComposite(${allVarDeclarations.join(", ")}) {\n${allSelections.join("\n")}\n}` + const fragmentBlock = allFragments.size > 0 ? "\n" + [...allFragments.values()].join("\n") : "" + const document = `mutation BatchComposite(${allVarDeclarations.join(", ")}) {\n${allSelections.join("\n")}\n}${fragmentBlock}` return { document, variables: mergedVariables } } type VariableDeclaration = { name: string; type: string } -type ParsedMutation = { variableDeclarations: VariableDeclaration[]; body: string } +type ParsedOperation = { + variableDeclarations: VariableDeclaration[] + body: string + fragments: Map +} function escapeRegex(value: string): string { return value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&") } -function parseMutation(mutation: string): ParsedMutation { - // Extract variable declarations from header: mutation Name($var1: Type!, $var2: Type!) - const headerMatch = mutation.match(/mutation\s+\w+\s*\(([^)]*)\)/) +function parseOperation(document: string): ParsedOperation { + // Extract variable declarations from header: query|mutation Name($var1: Type!, $var2: Type!) 
+ const headerMatch = document.match(/(query|mutation)\s+\w+\s*\(([^)]*)\)/) const variableDeclarations: VariableDeclaration[] = [] - if (headerMatch && headerMatch[1]) { - const varString = headerMatch[1] + if (headerMatch && headerMatch[2]) { + const varString = headerMatch[2] const varMatches = varString.matchAll(/\$(\w+)\s*:\s*([^,)]+)/g) for (const match of varMatches) { const name = match[1] @@ -83,7 +107,7 @@ function parseMutation(mutation: string): ParsedMutation { } // Extract body: everything between the outermost { } after the header - const headerEnd = mutation.indexOf("{") + const headerEnd = document.indexOf("{") if (headerEnd === -1) { throw new Error("Invalid mutation: no opening brace found") } @@ -91,11 +115,11 @@ function parseMutation(mutation: string): ParsedMutation { let depth = 0 let bodyStart = -1 let bodyEnd = -1 - for (let i = headerEnd; i < mutation.length; i++) { - if (mutation[i] === "{") { + for (let i = headerEnd; i < document.length; i++) { + if (document[i] === "{") { if (depth === 0) bodyStart = i + 1 depth++ - } else if (mutation[i] === "}") { + } else if (document[i] === "}") { depth-- if (depth === 0) { bodyEnd = i @@ -108,6 +132,75 @@ function parseMutation(mutation: string): ParsedMutation { throw new Error("Invalid mutation: unbalanced braces") } - const body = mutation.slice(bodyStart, bodyEnd).trim() - return { variableDeclarations, body } + const body = document.slice(bodyStart, bodyEnd).trim() + + // Extract fragment definitions that appear after the operation's closing brace. + // Use brace-depth counting to correctly handle nested selections like `labels { nodes { id } }`. + const fragments = new Map() + const remainder = document.slice(bodyEnd + 1) + const fragHeaderRe = /fragment\s+(\w+)\s+on\s+\w+\s*\{/g + let fragMatch: RegExpExecArray | null + while ((fragMatch = fragHeaderRe.exec(remainder)) !== null) { + const fragName = fragMatch[1] + if (!fragName || fragments.has(fragName)) continue + const openIdx = fragMatch.index + fragMatch[0].length - 1 + let d = 0 + let fragEnd = -1 + for (let i = openIdx; i < remainder.length; i++) { + if (remainder[i] === "{") d++ + else if (remainder[i] === "}") { + d-- + if (d === 0) { + fragEnd = i + break + } + } + } + if (fragEnd !== -1) { + fragments.set(fragName, remainder.slice(fragMatch.index, fragEnd + 1).trim()) + } + } + + return { variableDeclarations, body, fragments } +} + +export function buildBatchQuery(operations: BatchQueryInput[]): BatchQueryResult { + if (operations.length === 0) { + throw new Error("buildBatchQuery requires at least one operation") + } + + const allVarDeclarations: string[] = [] + const allSelections: string[] = [] + const mergedVariables: GraphqlVariables = {} + + for (const op of operations) { + const parsed = parseOperation(op.query) + + for (const varDecl of parsed.variableDeclarations) { + allVarDeclarations.push(`$${op.alias}_${varDecl.name}: ${varDecl.type}`) + } + + let body = parsed.body + const sortedDeclarations = [...parsed.variableDeclarations].sort( + (a, b) => b.name.length - a.name.length, + ) + for (const varDecl of sortedDeclarations) { + body = body.replaceAll( + new RegExp(`\\$${escapeRegex(varDecl.name)}\\b`, "g"), + `$${op.alias}_${varDecl.name}`, + ) + } + + const aliasedBody = body.replace(/^\s*(\w+)/, `${op.alias}: $1`) + allSelections.push(aliasedBody) + + for (const [key, value] of Object.entries(op.variables)) { + mergedVariables[`${op.alias}_${key}`] = value + } + } + + const varList = allVarDeclarations.length > 0 ? 
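+    // omit the parentheses entirely when no variables are declared (an empty `()` is invalid GraphQL)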
`(${allVarDeclarations.join(", ")})` : "" + const document = `query BatchChain${varList} {\n${allSelections.join("\n")}\n}` + + return { document, variables: mergedVariables } } diff --git a/packages/core/src/gql/document-registry.ts b/packages/core/src/gql/document-registry.ts new file mode 100644 index 00000000..5c965aab --- /dev/null +++ b/packages/core/src/gql/document-registry.ts @@ -0,0 +1,72 @@ +import { IssueAssigneesLookupDocument } from "./operations/issue-assignees-lookup.generated.js" +import { IssueAssigneesUpdateDocument } from "./operations/issue-assignees-update.generated.js" +import { IssueBlockedByAddDocument } from "./operations/issue-blocked-by-add.generated.js" +import { IssueBlockedByRemoveDocument } from "./operations/issue-blocked-by-remove.generated.js" +import { IssueCloseDocument } from "./operations/issue-close.generated.js" +import { IssueCommentCreateDocument } from "./operations/issue-comment-create.generated.js" +import { IssueCreateDocument } from "./operations/issue-create.generated.js" +import { IssueCreateRepositoryIdDocument } from "./operations/issue-create-repository-id.generated.js" +import { IssueDeleteDocument } from "./operations/issue-delete.generated.js" +import { IssueLabelsAddDocument } from "./operations/issue-labels-add.generated.js" +import { IssueLabelsLookupDocument } from "./operations/issue-labels-lookup.generated.js" +import { IssueLabelsUpdateDocument } from "./operations/issue-labels-update.generated.js" +import { IssueMilestoneLookupDocument } from "./operations/issue-milestone-lookup.generated.js" +import { IssueMilestoneSetDocument } from "./operations/issue-milestone-set.generated.js" +import { IssueParentLookupDocument } from "./operations/issue-parent-lookup.generated.js" +import { IssueParentRemoveDocument } from "./operations/issue-parent-remove.generated.js" +import { IssueParentSetDocument } from "./operations/issue-parent-set.generated.js" +import { IssueReopenDocument } from "./operations/issue-reopen.generated.js" +import { IssueUpdateDocument } from "./operations/issue-update.generated.js" +import { PrCommentReplyDocument } from "./operations/pr-comment-reply.generated.js" +import { PrCommentResolveDocument } from "./operations/pr-comment-resolve.generated.js" +import { PrCommentUnresolveDocument } from "./operations/pr-comment-unresolve.generated.js" +import { PrNodeIdDocument } from "./operations/pr-node-id.generated.js" +import { PrReviewSubmitDocument } from "./operations/pr-review-submit.generated.js" + +// Resolution lookup queries (Phase 1) +const LOOKUP_DOCUMENTS: Record = { + IssueLabelsLookup: IssueLabelsLookupDocument, + IssueAssigneesLookup: IssueAssigneesLookupDocument, + IssueMilestoneLookup: IssueMilestoneLookupDocument, + IssueParentLookup: IssueParentLookupDocument, + IssueCreateRepositoryId: IssueCreateRepositoryIdDocument, + PrNodeId: PrNodeIdDocument, +} + +// Mutation documents for chaining (Phase 2) +const MUTATION_DOCUMENTS: Record = { + IssueAssigneesUpdate: IssueAssigneesUpdateDocument, + IssueBlockedByAdd: IssueBlockedByAddDocument, + IssueBlockedByRemove: IssueBlockedByRemoveDocument, + IssueClose: IssueCloseDocument, + IssueCommentCreate: IssueCommentCreateDocument, + IssueCreate: IssueCreateDocument, + IssueDelete: IssueDeleteDocument, + IssueLabelsAdd: IssueLabelsAddDocument, + IssueLabelsUpdate: IssueLabelsUpdateDocument, + IssueMilestoneSet: IssueMilestoneSetDocument, + IssueParentRemove: IssueParentRemoveDocument, + IssueParentSet: IssueParentSetDocument, + IssueReopen: 
IssueReopenDocument, + IssueUpdate: IssueUpdateDocument, + PrCommentReply: PrCommentReplyDocument, + PrCommentResolve: PrCommentResolveDocument, + PrCommentUnresolve: PrCommentUnresolveDocument, + PrReviewSubmit: PrReviewSubmitDocument, +} + +export function getLookupDocument(operationName: string): string { + const doc = LOOKUP_DOCUMENTS[operationName] + if (!doc) { + throw new Error(`No lookup document registered for operation: ${operationName}`) + } + return doc +} + +export function getMutationDocument(operationName: string): string { + const doc = MUTATION_DOCUMENTS[operationName] + if (!doc) { + throw new Error(`No mutation document registered for operation: ${operationName}`) + } + return doc +} diff --git a/packages/core/src/gql/resolve.ts b/packages/core/src/gql/resolve.ts new file mode 100644 index 00000000..62528975 --- /dev/null +++ b/packages/core/src/gql/resolve.ts @@ -0,0 +1,106 @@ +import type { InjectSpec } from "@core/core/registry/types.js" +import type { GraphqlVariables } from "./transport.js" + +function getAtPath(obj: unknown, path: string): unknown { + const parts = path.split(".") + let current = obj + for (const part of parts) { + if (current === null || typeof current !== "object") return undefined + current = (current as Record)[part] + } + return current +} + +export function applyInject( + spec: InjectSpec, + lookupResult: unknown, + input: Record, +): Record { + if (spec.source === "scalar") { + const value = getAtPath(lookupResult, spec.path) + if (value === undefined || value === null) { + throw new Error(`Resolution failed for '${spec.target}': no value at path '${spec.path}'`) + } + return { [spec.target]: value } + } + + if (spec.source === "input") { + const value = input[spec.from_input] + if (value === undefined || value === null) { + throw new Error( + `Resolution failed for '${spec.target}': no value at input field '${spec.from_input}'`, + ) + } + return { [spec.target]: value } + } + + // map_array + if (spec.source !== "map_array") { + throw new Error(`Unknown inject source: '${(spec as InjectSpec).source}'`) + } + const nodes = getAtPath(lookupResult, spec.nodes_path) + if (!Array.isArray(nodes)) { + throw new Error( + `Resolution failed for '${spec.target}': nodes at '${spec.nodes_path}' is not an array`, + ) + } + + const idByName = new Map() + for (const node of nodes) { + if (node && typeof node === "object") { + const n = node as Record + const key = n[spec.match_field] + const val = n[spec.extract_field] + if (typeof key === "string") { + idByName.set(key.toLowerCase(), val) + } + } + } + + const inputValues = input[spec.from_input] + if (!Array.isArray(inputValues)) { + throw new Error( + `Resolution failed for '${spec.target}': input field '${spec.from_input}' is not an array`, + ) + } + + const resolved = inputValues.map((name: unknown) => { + if (typeof name !== "string") + throw new Error(`Resolution: expected string in '${spec.from_input}'`) + const id = idByName.get(name.toLowerCase()) + if (id === undefined) throw new Error(`Resolution: '${name}' not found in lookup result`) + return id + }) + + return { [spec.target]: resolved } +} + +export function buildMutationVars( + mutationDoc: string, + input: Record, + resolved: Record, +): GraphqlVariables { + // Extract variable names declared in the mutation header + const headerMatch = mutationDoc.match(/(?:query|mutation)\s+\w+\s*\(([^)]*)\)/) + const mutVarNames = new Set() + if (headerMatch?.[1]) { + for (const match of headerMatch[1].matchAll(/\$(\w+)\s*:/g)) { + if (match[1]) 
mutVarNames.add(match[1]) + } + } + + const vars: GraphqlVariables = {} + // Pass through input fields whose names match mutation variables + for (const varName of mutVarNames) { + if (varName in input) { + vars[varName] = input[varName] as GraphqlVariables[string] + } + } + // Apply resolved values (may override pass-through) + for (const [key, value] of Object.entries(resolved)) { + if (mutVarNames.has(key)) { + vars[key] = value as GraphqlVariables[string] + } + } + return vars +} diff --git a/packages/core/src/gql/transport.ts b/packages/core/src/gql/transport.ts index 7f396631..215facfa 100644 --- a/packages/core/src/gql/transport.ts +++ b/packages/core/src/gql/transport.ts @@ -80,7 +80,7 @@ export function createGraphqlRequestClient(transport: GraphqlTransport): GraphQL const DEFAULT_GRAPHQL_URL = "https://api.github.com/graphql" -function resolveGraphqlUrl(): string { +export function resolveGraphqlUrl(): string { if (process.env.GITHUB_GRAPHQL_URL) { return process.env.GITHUB_GRAPHQL_URL } diff --git a/packages/core/src/index.ts b/packages/core/src/index.ts index 16cce6e6..fc1c3095 100644 --- a/packages/core/src/index.ts +++ b/packages/core/src/index.ts @@ -1,5 +1,8 @@ export type { AttemptMeta, + ChainResultEnvelope, + ChainStatus, + ChainStepResult, ResultEnvelope, ResultError, ResultMeta, @@ -19,7 +22,7 @@ export { listCapabilities, } from "./core/registry/list-capabilities.js" export type { OperationCard } from "./core/registry/types.js" -export { executeTask } from "./core/routing/engine.js" +export { executeTask, executeTasks } from "./core/routing/engine.js" export type { RouteReasonCode } from "./core/routing/reason-codes.js" export type { GithubClient } from "./gql/github-client.js" export { diff --git a/packages/core/test/unit/batch.test.ts b/packages/core/test/unit/batch.test.ts new file mode 100644 index 00000000..605aeae0 --- /dev/null +++ b/packages/core/test/unit/batch.test.ts @@ -0,0 +1,114 @@ +import { buildBatchMutation, buildBatchQuery } from "@core/gql/batch.js" +import { describe, expect, it } from "vitest" + +describe("buildBatchQuery", () => { + it("wraps single query with alias", () => { + const result = buildBatchQuery([ + { + alias: "step0", + query: `query IssueLabelsLookup($issueId: ID!) { + node(id: $issueId) { + ... on Issue { id } + } +}`, + variables: { issueId: "I_123" }, + }, + ]) + expect(result.document).toContain("query BatchChain") + expect(result.document).toContain("step0:") + expect(result.document).toContain("$step0_issueId: ID!") + expect(result.variables).toEqual({ step0_issueId: "I_123" }) + }) + + it("merges two queries", () => { + const q = `query Foo($id: ID!) { node(id: $id) { id } }` + const result = buildBatchQuery([ + { alias: "a", query: q, variables: { id: "1" } }, + { alias: "b", query: q, variables: { id: "2" } }, + ]) + expect(result.document).toContain("$a_id: ID!") + expect(result.document).toContain("$b_id: ID!") + expect(result.variables).toEqual({ a_id: "1", b_id: "2" }) + }) + + it("throws on empty array", () => { + expect(() => buildBatchQuery([])).toThrow() + }) +}) + +describe("buildBatchMutation", () => { + it("wraps single mutation with alias", () => { + const result = buildBatchMutation([ + { + alias: "step0", + mutation: `mutation CloseIssue($issueId: ID!) 
{ + closeIssue(input: {issueId: $issueId}) { + issue { id } + } +}`, + variables: { issueId: "I_123" }, + }, + ]) + expect(result.document).toContain("mutation BatchComposite") + expect(result.document).toContain("step0:") + expect(result.variables).toEqual({ step0_issueId: "I_123" }) + }) + + it("merges two mutations", () => { + const m = `mutation UpdateIssue($issueId: ID!, $title: String!) { updateIssue(input: {id: $issueId, title: $title}) { issue { id } } }` + const result = buildBatchMutation([ + { alias: "a", mutation: m, variables: { issueId: "I_1", title: "Title 1" } }, + { alias: "b", mutation: m, variables: { issueId: "I_2", title: "Title 2" } }, + ]) + expect(result.document).toContain("$a_issueId: ID!") + expect(result.document).toContain("$b_title: String!") + expect(result.variables).toEqual({ + a_issueId: "I_1", + a_title: "Title 1", + b_issueId: "I_2", + b_title: "Title 2", + }) + }) + + it("preserves fragment definitions appended to mutation document", () => { + const mutWithFragment = `mutation CreateIssue($repositoryId: ID!, $title: String!) { + createIssue(input: {repositoryId: $repositoryId, title: $title}) { + issue { + ...IssueCoreFields + } + } +} +fragment IssueCoreFields on Issue { + id + number + title +}` + const result = buildBatchMutation([ + { alias: "step0", mutation: mutWithFragment, variables: { repositoryId: "R_1", title: "T" } }, + ]) + expect(result.document).toContain("...IssueCoreFields") + expect(result.document).toContain("fragment IssueCoreFields on Issue") + }) + + it("deduplicates shared fragment definitions across operations", () => { + const mutWithFragment = `mutation CreateIssue($repositoryId: ID!, $title: String!) { + createIssue(input: {repositoryId: $repositoryId, title: $title}) { + issue { ...IssueCoreFields } + } +} +fragment IssueCoreFields on Issue { + id + number +}` + const result = buildBatchMutation([ + { alias: "a", mutation: mutWithFragment, variables: { repositoryId: "R_1", title: "A" } }, + { alias: "b", mutation: mutWithFragment, variables: { repositoryId: "R_2", title: "B" } }, + ]) + const count = (result.document.match(/fragment IssueCoreFields/g) ?? 
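+    // the spreads (`...IssueCoreFields`) do not contain the word "fragment", so only definition copies are counted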
[]).length + expect(count).toBe(1) + }) + + it("throws on empty array", () => { + expect(() => buildBatchMutation([])).toThrow() + }) +}) diff --git a/packages/core/test/unit/chain-command.test.ts b/packages/core/test/unit/chain-command.test.ts new file mode 100644 index 00000000..175eb185 --- /dev/null +++ b/packages/core/test/unit/chain-command.test.ts @@ -0,0 +1,230 @@ +import { describe, expect, it, vi } from "vitest" + +const executeTasksMock = vi.fn() + +vi.mock("@core/core/routing/engine.js", () => ({ + executeTasks: (...args: unknown[]) => executeTasksMock(...args), +})) + +vi.mock("@core/gql/github-client.js", () => ({ + createGithubClient: (transport: unknown) => transport, +})) + +describe("chainCommand parsing", () => { + it("parseChainFlags extracts inline --steps", async () => { + const { parseChainFlags } = await import("@core/cli/commands/chain.js") + + const flags = parseChainFlags(["--steps", '[{"task":"issue.close","input":{"issueId":"I_1"}}]']) + expect(flags.stepsSource).toEqual({ + raw: '[{"task":"issue.close","input":{"issueId":"I_1"}}]', + }) + expect(flags.skipGhPreflight).toBe(true) + }) + + it("parseChainFlags detects --steps - for stdin", async () => { + const { parseChainFlags } = await import("@core/cli/commands/chain.js") + + const flags = parseChainFlags(["--steps", "-"]) + expect(flags.stepsSource).toBe("stdin") + expect(flags.skipGhPreflight).toBe(true) + }) + + it("parseChainFlags respects --check-gh-preflight", async () => { + const { parseChainFlags } = await import("@core/cli/commands/chain.js") + + const flags = parseChainFlags([ + "--steps", + '[{"task":"issue.close","input":{"issueId":"I_1"}}]', + "--check-gh-preflight", + ]) + expect(flags.skipGhPreflight).toBe(false) + }) + + it("chainCommand returns 0 on success status", async () => { + executeTasksMock.mockResolvedValue({ + status: "success", + results: [{ task: "issue.close", ok: true }], + meta: { route_used: "graphql", total: 1, succeeded: 1, failed: 0 }, + }) + + const { chainCommand } = await import("@core/cli/commands/chain.js") + + vi.stubEnv("GITHUB_TOKEN", "test-token") + + const exitCode = await chainCommand([ + "--steps", + '[{"task":"issue.close","input":{"issueId":"I_1"}}]', + ]) + + expect(exitCode).toBe(0) + }) + + it("chainCommand returns 0 on partial status", async () => { + executeTasksMock.mockResolvedValue({ + status: "partial", + results: [ + { task: "issue.close", ok: true }, + { + task: "issue.close", + ok: false, + error: { code: "UNKNOWN", message: "failed", retryable: false }, + }, + ], + meta: { route_used: "graphql", total: 2, succeeded: 1, failed: 1 }, + }) + + const { chainCommand } = await import("@core/cli/commands/chain.js") + + vi.stubEnv("GITHUB_TOKEN", "test-token") + + const exitCode = await chainCommand([ + "--steps", + '[{"task":"issue.close","input":{"issueId":"I_1"}},{"task":"issue.close","input":{"issueId":"I_2"}}]', + ]) + + expect(exitCode).toBe(0) + }) + + it("chainCommand returns 1 on failed status", async () => { + executeTasksMock.mockResolvedValue({ + status: "failed", + results: [ + { + task: "issue.close", + ok: false, + error: { code: "VALIDATION", message: "invalid", retryable: false }, + }, + ], + meta: { route_used: "graphql", total: 1, succeeded: 0, failed: 1 }, + }) + + const { chainCommand } = await import("@core/cli/commands/chain.js") + + vi.stubEnv("GITHUB_TOKEN", "test-token") + + const exitCode = await chainCommand([ + "--steps", + '[{"task":"issue.close","input":{"issueId":"I_1"}}]', + ]) + + expect(exitCode).toBe(1) + }) + + 
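+  // Error paths: a missing GITHUB_TOKEN or malformed --steps JSON should exit 1 with a message on stderr.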
it("chainCommand returns 1 and writes to stderr when GITHUB_TOKEN missing", async () => { + const { chainCommand } = await import("@core/cli/commands/chain.js") + + vi.stubEnv("GITHUB_TOKEN", undefined) + vi.stubEnv("GH_TOKEN", undefined) + + const stderrSpy = vi.spyOn(process.stderr, "write").mockImplementation(() => true) + const exitCode = await chainCommand([ + "--steps", + '[{"task":"issue.close","input":{"issueId":"I_1"}}]', + ]) + expect(exitCode).toBe(1) + expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining("GITHUB_TOKEN")) + stderrSpy.mockRestore() + }) + + it("chainCommand returns 1 and writes to stderr when --steps JSON is invalid", async () => { + const { chainCommand } = await import("@core/cli/commands/chain.js") + + vi.stubEnv("GITHUB_TOKEN", "tok") + + const stderrSpy = vi.spyOn(process.stderr, "write").mockImplementation(() => true) + const exitCode = await chainCommand(["--steps", "not-valid-json"]) + expect(exitCode).toBe(1) + expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining("Invalid JSON")) + stderrSpy.mockRestore() + }) +}) + +describe("chainCommand β€” executeGraphqlRequest fetch behaviour", () => { + it("passes AbortSignal to fetch", async () => { + vi.resetModules() + + vi.doMock("@core/core/routing/engine.js", () => ({ + executeTasks: vi + .fn() + .mockImplementation( + async ( + _steps: unknown, + opts: { githubClient: { execute: (q: string) => Promise } }, + ) => { + await opts.githubClient.execute("query {}") + return { + status: "success", + results: [], + meta: { route_used: "graphql", total: 0, succeeded: 0, failed: 0 }, + } + }, + ), + })) + vi.doMock("@core/gql/github-client.js", () => ({ + createGithubClient: (transport: unknown) => transport, + })) + + const fetchMock = vi.fn().mockResolvedValue({ + ok: true, + status: 200, + json: vi.fn().mockResolvedValue({ data: {} }), + }) + vi.stubGlobal("fetch", fetchMock) + vi.stubEnv("GITHUB_TOKEN", "tok") + + const { chainCommand } = await import("@core/cli/commands/chain.js") + await chainCommand(["--steps", '[{"task":"issue.close","input":{"issueId":"I1"}}]']) + + expect(fetchMock).toHaveBeenCalledWith( + expect.any(String), + expect.objectContaining({ signal: expect.any(AbortSignal) }), + ) + vi.unstubAllGlobals() + }) + + it("returns exit code 1 with message when response.json() throws (non-JSON body)", async () => { + vi.resetModules() + + vi.doMock("@core/core/routing/engine.js", () => ({ + executeTasks: vi + .fn() + .mockImplementation( + async ( + _steps: unknown, + opts: { githubClient: { execute: (q: string) => Promise } }, + ) => { + await opts.githubClient.execute("query {}") + return { + status: "success", + results: [], + meta: { route_used: "graphql", total: 0, succeeded: 0, failed: 0 }, + } + }, + ), + })) + vi.doMock("@core/gql/github-client.js", () => ({ + createGithubClient: (transport: unknown) => transport, + })) + + vi.stubGlobal( + "fetch", + vi.fn().mockResolvedValue({ + ok: false, + status: 503, + json: vi.fn().mockRejectedValue(new SyntaxError("Unexpected token < in JSON")), + }), + ) + vi.stubEnv("GITHUB_TOKEN", "tok") + + const stderrSpy = vi.spyOn(process.stderr, "write").mockImplementation(() => true) + const { chainCommand } = await import("@core/cli/commands/chain.js") + const code = await chainCommand([ + "--steps", + '[{"task":"issue.close","input":{"issueId":"I1"}}]', + ]) + expect(code).toBe(1) + expect(stderrSpy).toHaveBeenCalledWith(expect.stringContaining("non-JSON")) + stderrSpy.mockRestore() + vi.unstubAllGlobals() + }) +}) diff --git 
a/packages/core/test/unit/cli-index-entrypoint.test.ts b/packages/core/test/unit/cli-index-entrypoint.test.ts index 1d176f9b..c873fae7 100644 --- a/packages/core/test/unit/cli-index-entrypoint.test.ts +++ b/packages/core/test/unit/cli-index-entrypoint.test.ts @@ -32,7 +32,7 @@ describe("cli index entrypoint", () => { expect(process.exitCode).toBe(0) expect(stdout).toHaveBeenCalledWith( - "Usage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", + "Usage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx chain --steps '' | --steps - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", ) }) diff --git a/packages/core/test/unit/cli-index.test.ts b/packages/core/test/unit/cli-index.test.ts index 24a339f8..84348a39 100644 --- a/packages/core/test/unit/cli-index.test.ts +++ b/packages/core/test/unit/cli-index.test.ts @@ -2,6 +2,7 @@ import { afterEach, beforeEach, describe, expect, it, vi } from "vitest" const runCommandMock = vi.fn() const setupCommandMock = vi.fn() +const chainCommandMock = vi.fn() const capabilitiesListCommandMock = vi.fn() const capabilitiesExplainCommandMock = vi.fn() @@ -13,6 +14,10 @@ vi.mock("@core/cli/commands/setup.js", () => ({ setupCommand: (...args: unknown[]) => setupCommandMock(...args), })) +vi.mock("@core/cli/commands/chain.js", () => ({ + chainCommand: (...args: unknown[]) => chainCommandMock(...args), +})) + vi.mock("@core/cli/commands/capabilities-list.js", () => ({ capabilitiesListCommand: (...args: unknown[]) => capabilitiesListCommandMock(...args), })) @@ -28,6 +33,7 @@ describe("cli index main", () => { vi.clearAllMocks() runCommandMock.mockResolvedValue(0) setupCommandMock.mockResolvedValue(0) + chainCommandMock.mockResolvedValue(0) capabilitiesListCommandMock.mockResolvedValue(0) capabilitiesExplainCommandMock.mockResolvedValue(0) }) @@ -43,7 +49,7 @@ describe("cli index main", () => { expect(code).toBe(0) expect(stdout).toHaveBeenCalledWith( - "Usage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", + "Usage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx chain --steps '' | --steps - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", ) }) @@ -54,7 +60,7 @@ describe("cli index main", () => { expect(code).toBe(0) expect(stdout).toHaveBeenCalledWith( - "Usage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", + "Usage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx chain --steps '' | --steps - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", ) }) @@ -101,7 +107,7 @@ describe("cli index main", () => { expect(code).toBe(1) expect(stderr).toHaveBeenCalledWith( - "Unknown capabilities subcommand: nope\nUsage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", + "Unknown capabilities subcommand: nope\nUsage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx chain 
--steps '' | --steps - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", ) }) @@ -112,7 +118,7 @@ describe("cli index main", () => { expect(code).toBe(1) expect(stderr).toHaveBeenCalledWith( - "Missing capabilities subcommand.\nUsage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", + "Missing capabilities subcommand.\nUsage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx chain --steps '' | --steps - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", ) }) @@ -123,7 +129,7 @@ describe("cli index main", () => { expect(code).toBe(1) expect(stderr).toHaveBeenCalledWith( - "Unknown command: nope\nUsage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", + "Unknown command: nope\nUsage:\n ghx run --input '' | --input - [--check-gh-preflight]\n ghx chain --steps '' | --steps - [--check-gh-preflight]\n ghx setup --scope [--yes] [--dry-run] [--verify] [--track]\n ghx capabilities list\n ghx capabilities explain \n", ) }) }) diff --git a/packages/core/test/unit/document-registry.test.ts b/packages/core/test/unit/document-registry.test.ts new file mode 100644 index 00000000..887badb1 --- /dev/null +++ b/packages/core/test/unit/document-registry.test.ts @@ -0,0 +1,27 @@ +import { getLookupDocument, getMutationDocument } from "@core/gql/document-registry.js" +import { describe, expect, it } from "vitest" + +describe("document-registry", () => { + it("getLookupDocument returns document for IssueLabelsLookup", () => { + const doc = getLookupDocument("IssueLabelsLookup") + expect(doc).toContain("query IssueLabelsLookup") + }) + + it("getLookupDocument returns document for IssueMilestoneLookup", () => { + const doc = getLookupDocument("IssueMilestoneLookup") + expect(doc).toContain("milestoneNumber") + }) + + it("getMutationDocument returns document for IssueLabelsUpdate", () => { + const doc = getMutationDocument("IssueLabelsUpdate") + expect(doc).toContain("mutation IssueLabelsUpdate") + }) + + it("getLookupDocument throws on unknown operation", () => { + expect(() => getLookupDocument("UnknownOp")).toThrow() + }) + + it("getMutationDocument throws on unknown operation", () => { + expect(() => getMutationDocument("UnknownOp")).toThrow() + }) +}) diff --git a/packages/core/test/unit/engine.test.ts b/packages/core/test/unit/engine.test.ts index d96227b3..529dc8cd 100644 --- a/packages/core/test/unit/engine.test.ts +++ b/packages/core/test/unit/engine.test.ts @@ -244,3 +244,386 @@ describe("executeTask engine wiring", () => { nowSpy.mockRestore() }) }) + +describe("executeTasks chaining", () => { + beforeEach(() => { + executeMock.mockReset() + getOperationCardMock.mockReset() + }) + + it("1-item chain delegates to executeTask path", async () => { + getOperationCardMock.mockReturnValue(baseCard) + executeMock.mockResolvedValue({ ok: true, data: { id: "test" } }) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [{ task: "repo.view", input: { owner: "acme", name: "modkit" } }], + { + githubClient: createGithubClient(), + }, + ) + + expect(executeMock).toHaveBeenCalledTimes(1) + 
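+    // a single step must reuse the existing executeTask/execute path instead of building a batch document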
expect(result.status).toBe("success") + expect(result.results).toHaveLength(1) + expect(result.results[0]).toMatchObject({ task: "repo.view", ok: true, data: { id: "test" } }) + }) + + it("pre-flight rejects whole chain if card not found", async () => { + getOperationCardMock.mockReturnValue(null) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "unknown.task", input: {} }, + { task: "repo.view", input: {} }, + ], + { + githubClient: createGithubClient(), + }, + ) + + expect(result.status).toBe("failed") + expect(result.results).toHaveLength(2) + const firstResult = result.results[0] + expect(firstResult).toBeDefined() + expect(firstResult?.ok).toBe(false) + expect(firstResult?.error?.code).toBe("VALIDATION") + }) + + it("pre-flight rejects whole chain if card has no graphql config", async () => { + const cardWithoutGql = { + ...baseCard, + routing: { preferred: "cli", fallbacks: [] }, + graphql: undefined, + } + getOperationCardMock.mockReturnValue(cardWithoutGql) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "repo.view", input: { owner: "acme", name: "modkit" } }, + { task: "repo.view", input: { owner: "acme", name: "modkit" } }, + ], + { + githubClient: createGithubClient(), + }, + ) + + expect(result.status).toBe("failed") + expect(result.results.every((r) => !r.ok)).toBe(true) + }) + + it("pre-flight correctly attributes errors when same capability appears twice and only one fails", async () => { + // step 0: card not found (preflight fails); step 1: valid card + getOperationCardMock.mockReturnValueOnce(undefined).mockReturnValueOnce({ + ...baseCard, + graphql: { operationName: "IssueClose", documentPath: "x" }, + }) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "issue.close", input: { issueId: "I1" } }, + { task: "issue.close", input: { issueId: "I2" } }, + ], + { githubClient: createGithubClient() }, + ) + + expect(result.status).toBe("failed") + const r0 = result.results[0] + const r1 = result.results[1] + // step 0 gets the real "Invalid task" error + expect(r0?.ok).toBe(false) + expect(r0?.error?.message).toContain("Invalid task") + // step 1 should NOT inherit the same error β€” it gets the generic "pre-flight failed" fallback + expect(r1?.ok).toBe(false) + expect(r1?.error?.message).not.toContain("Invalid task") + }) + + it("pre-flight rejects step when resolution lookup var is missing from input", async () => { + const cardNoResolution = { + ...baseCard, + graphql: { operationName: "IssueClose", documentPath: "x" }, + } + const cardWithResolution = { + ...baseCard, + graphql: { + operationName: "IssueLabelsSet", + documentPath: "x", + resolution: { + lookup: { + operationName: "IssueLabelsLookup", + documentPath: "y", + vars: { owner: "owner", name: "name" }, + }, + inject: [], + }, + }, + } + getOperationCardMock + .mockReturnValueOnce(cardNoResolution) + .mockReturnValueOnce(cardWithResolution) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + // step 1 is missing "name" from input + [ + { task: "issue.close", input: { issueId: "I1" } }, + { task: "issue.labels.set", input: { owner: "acme" } }, + ], + { githubClient: createGithubClient() }, + ) + + expect(result.status).toBe("failed") + const r1 = result.results[1] + expect(r1?.ok).toBe(false) + 
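+    // the error should name the missing lookup input field ("name")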
expect(r1?.error?.message).toMatch(/name/) + }) + + it("2-item pure-mutation chain returns success after batch mutation", async () => { + // vi.resetModules + vi.doMock pattern required: engine.ts is already loaded, + // so module-level doMock calls are ineffective without resetting the cache first. + vi.resetModules() + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + + const cardWithGql = { + ...baseCard, + graphql: { + operationName: "IssueCreate", + documentPath: "src/gql/operations/issue-create.graphql", + }, + } + getOperationCardMock.mockReturnValue(cardWithGql) + + const getMutationDocumentMock = vi + .fn() + .mockReturnValue( + `mutation IssueCreate($repositoryId: ID!, $title: String!) { createIssue(input: {repositoryId: $repositoryId, title: $title}) { issue { id } } }`, + ) + const buildBatchMutationMock = vi.fn().mockReturnValue({ + document: `mutation BatchComposite { step0: createIssue { issue { id } } step1: createIssue { issue { id } } }`, + variables: { + step0_repositoryId: "R1", + step0_title: "Issue 1", + step1_repositoryId: "R2", + step1_title: "Issue 2", + }, + }) + + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi.fn(), + getMutationDocument: getMutationDocumentMock, + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchMutation: buildBatchMutationMock, + buildBatchQuery: vi.fn(), + })) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "issue.create", input: { repositoryId: "R1", title: "Issue 1" } }, + { task: "issue.create", input: { repositoryId: "R2", title: "Issue 2" } }, + ], + { + githubClient: createGithubClient({ + query: vi + .fn() + .mockResolvedValue({ step0: { issue: { id: "I1" } }, step1: { issue: { id: "I2" } } }), + }), + }, + ) + + // Mocks are now effective: verify they were actually called + expect(getMutationDocumentMock).toHaveBeenCalledWith("IssueCreate") + expect(buildBatchMutationMock).toHaveBeenCalled() + expect(result.status).toBe("success") + expect(result.results).toHaveLength(2) + expect(result.results[0]).toMatchObject({ task: "issue.create", ok: true }) + expect(result.results[1]).toMatchObject({ task: "issue.create", ok: true }) + }) + + it("status is failed when batch mutation query rejects", async () => { + vi.resetModules() + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + + const cardWithGql = { + ...baseCard, + graphql: { + operationName: "IssueCreate", + documentPath: "src/gql/operations/issue-create.graphql", + }, + } + getOperationCardMock.mockReturnValue(cardWithGql) + + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi.fn(), + getMutationDocument: vi + .fn() + .mockReturnValue( + `mutation IssueCreate($repositoryId: ID!, $title: String!) 
{ createIssue(input: {repositoryId: $repositoryId, title: $title}) { issue { id } } }`, + ), + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchMutation: vi.fn().mockReturnValue({ + document: `mutation BatchComposite { step0: createIssue { issue { id } } }`, + variables: { step0_repositoryId: "R1", step0_title: "Issue 1" }, + }), + buildBatchQuery: vi.fn(), + })) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const result = await executeTasks( + [ + { task: "issue.create", input: { repositoryId: "R1", title: "Issue 1" } }, + { task: "issue.create", input: { repositoryId: "R2", title: "Issue 2" } }, + ], + { + githubClient: createGithubClient({ + query: vi.fn().mockRejectedValue(new Error("network error")), + }), + }, + ) + + expect(result.status).toBe("failed") + expect(result.results[0]?.ok).toBe(false) + expect(result.results[1]?.ok).toBe(false) + }) +}) + +describe("executeTasks β€” mixed resolution chain", () => { + it("handles chain where step 0 has no resolution and step 1 requires Phase 1 lookup", async () => { + vi.resetModules() + vi.doMock("@core/core/execute/execute.js", () => ({ + execute: (...args: unknown[]) => executeMock(...args), + })) + vi.doMock("@core/core/registry/index.js", () => ({ + getOperationCard: (...args: unknown[]) => getOperationCardMock(...args), + })) + + const cardNoResolution = { + ...baseCard, + graphql: { + operationName: "IssueClose", + documentPath: "src/gql/operations/issue-close.graphql", + }, + } + const cardWithResolution = { + ...baseCard, + graphql: { + operationName: "IssueLabelsSet", + documentPath: "src/gql/operations/issue-labels-set.graphql", + resolution: { + lookup: { + operationName: "IssueLabelsLookup", + documentPath: "src/gql/operations/issue-labels-lookup.graphql", + vars: { owner: "owner", name: "name" }, + }, + inject: [ + { + target: "labelIds", + source: "map_array" as const, + from_input: "labels", + nodes_path: "repository.labels.nodes", + match_field: "name", + extract_field: "id", + }, + ], + }, + }, + } + + getOperationCardMock + .mockReturnValueOnce(cardNoResolution) + .mockReturnValueOnce(cardWithResolution) + + const buildBatchQueryMock = vi.fn().mockReturnValue({ + document: `query BatchQuery { step1: repository { labels { nodes { id name } } } }`, + variables: { step1_owner: "acme", step1_name: "repo" }, + }) + const buildBatchMutationMock = vi.fn().mockReturnValue({ + document: `mutation BatchMut { step0: closeIssue { issue { id } } step1: updateIssue { issue { id } } }`, + variables: {}, + }) + + vi.doMock("@core/gql/document-registry.js", () => ({ + getLookupDocument: vi + .fn() + .mockReturnValue( + `query IssueLabelsLookup($owner: String!, $name: String!) { repository(owner: $owner, name: $name) { labels(first: 100) { nodes { id name } } } }`, + ), + getMutationDocument: vi + .fn() + .mockImplementation((op: string) => + op === "IssueClose" + ? `mutation IssueClose($issueId: ID!) { closeIssue(input: {issueId: $issueId}) { issue { id } } }` + : `mutation IssueLabelsSet($issueId: ID!, $labelIds: [ID!]!) 
{ updateIssue(input: {id: $issueId, labelIds: $labelIds}) { issue { id } } }`, + ), + })) + vi.doMock("@core/gql/batch.js", () => ({ + buildBatchQuery: buildBatchQueryMock, + buildBatchMutation: buildBatchMutationMock, + })) + vi.doMock("@core/gql/resolve.js", () => ({ + applyInject: vi.fn().mockReturnValue({ labelIds: ["L1"] }), + buildMutationVars: vi + .fn() + .mockImplementation( + (_doc: string, input: Record, resolved: Record) => ({ + ...input, + ...resolved, + }), + ), + })) + + const { executeTasks } = await import("@core/core/routing/engine.js") + + const queryMock = vi + .fn() + // Phase 1: label lookup for step 1 + .mockResolvedValueOnce({ + step1: { repository: { labels: { nodes: [{ id: "L1", name: "bug" }] } } }, + }) + // Phase 2: both mutations + .mockResolvedValueOnce({ + step0: { closeIssue: { issue: { id: "I1" } } }, + step1: { updateIssue: { issue: { id: "I2" } } }, + }) + + const result = await executeTasks( + [ + { task: "issue.close", input: { issueId: "I1" } }, + { + task: "issue.labels.set", + input: { issueId: "I2", owner: "acme", name: "repo", labels: ["bug"] }, + }, + ], + { githubClient: createGithubClient({ query: queryMock }) }, + ) + + expect(buildBatchQueryMock).toHaveBeenCalled() + expect(buildBatchMutationMock).toHaveBeenCalled() + expect(queryMock).toHaveBeenCalledTimes(2) + expect(result.status).toBe("success") + expect(result.results[0]).toMatchObject({ task: "issue.close", ok: true }) + expect(result.results[1]).toMatchObject({ task: "issue.labels.set", ok: true }) + }) +}) diff --git a/packages/core/test/unit/envelope.test.ts b/packages/core/test/unit/envelope.test.ts new file mode 100644 index 00000000..16f29429 --- /dev/null +++ b/packages/core/test/unit/envelope.test.ts @@ -0,0 +1,29 @@ +import type { + ChainResultEnvelope, + ChainStatus, + ChainStepResult, +} from "@core/core/contracts/envelope.js" +import { describe, expect, it } from "vitest" + +describe("ChainResultEnvelope types", () => { + it("status type accepts valid values", () => { + const s1: ChainStatus = "success" + const s2: ChainStatus = "partial" + const s3: ChainStatus = "failed" + expect([s1, s2, s3]).toHaveLength(3) + }) + + it("ChainStepResult can be ok", () => { + const r: ChainStepResult = { task: "issue.close", ok: true, data: { id: "x" } } + expect(r.ok).toBe(true) + }) + + it("ChainResultEnvelope has expected shape", () => { + const env: ChainResultEnvelope = { + status: "success", + results: [{ task: "issue.close", ok: true }], + meta: { route_used: "graphql", total: 1, succeeded: 1, failed: 0 }, + } + expect(env.meta.total).toBe(1) + }) +}) diff --git a/packages/core/test/unit/operation-card-schema.test.ts b/packages/core/test/unit/operation-card-schema.test.ts new file mode 100644 index 00000000..fadabb3b --- /dev/null +++ b/packages/core/test/unit/operation-card-schema.test.ts @@ -0,0 +1,87 @@ +import { operationCardSchema } from "@core/core/registry/operation-card-schema.js" +import { Ajv } from "ajv" +import { describe, expect, it } from "vitest" + +const ajv = new Ajv() +const validate = ajv.compile(operationCardSchema) + +describe("operationCardSchema resolution", () => { + it("accepts a card with scalar resolution", () => { + const card = { + capability_id: "issue.milestone.set", + version: "1.0.0", + description: "Set milestone", + input_schema: { type: "object" }, + output_schema: { type: "object" }, + routing: { preferred: "graphql", fallbacks: [] }, + graphql: { + operationName: "IssueMilestoneSet", + documentPath: 
"src/gql/operations/issue-milestone-set.graphql", + resolution: { + lookup: { + operationName: "IssueMilestoneLookup", + documentPath: "src/gql/operations/issue-milestone-lookup.graphql", + vars: { issueId: "issueId", milestoneNumber: "milestoneNumber" }, + }, + inject: [ + { target: "milestoneId", source: "scalar", path: "node.repository.milestone.id" }, + ], + }, + }, + } + expect(validate(card)).toBe(true) + }) + + it("accepts a card with map_array resolution", () => { + const card = { + capability_id: "issue.labels.update", + version: "1.0.0", + description: "Update labels", + input_schema: { type: "object" }, + output_schema: { type: "object" }, + routing: { preferred: "graphql", fallbacks: [] }, + graphql: { + operationName: "IssueLabelsUpdate", + documentPath: "src/gql/operations/issue-labels-update.graphql", + resolution: { + lookup: { + operationName: "IssueLabelsLookup", + documentPath: "src/gql/operations/issue-labels-lookup.graphql", + vars: { issueId: "issueId" }, + }, + inject: [ + { + target: "labelIds", + source: "map_array", + from_input: "labels", + nodes_path: "node.repository.labels.nodes", + match_field: "name", + extract_field: "id", + }, + ], + }, + }, + } + expect(validate(card)).toBe(true) + }) + + it("rejects resolution with unknown source", () => { + const card = { + capability_id: "x", + version: "1.0.0", + description: "x", + input_schema: { type: "object" }, + output_schema: { type: "object" }, + routing: { preferred: "graphql", fallbacks: [] }, + graphql: { + operationName: "X", + documentPath: "x.graphql", + resolution: { + lookup: { operationName: "Y", documentPath: "y.graphql", vars: {} }, + inject: [{ target: "t", source: "unknown_source", path: "a.b" }], + }, + }, + } + expect(validate(card)).toBe(false) + }) +}) diff --git a/packages/core/test/unit/registry-validation.test.ts b/packages/core/test/unit/registry-validation.test.ts index 8263cd61..c0e5f687 100644 --- a/packages/core/test/unit/registry-validation.test.ts +++ b/packages/core/test/unit/registry-validation.test.ts @@ -2,18 +2,6 @@ import { validateOperationCard } from "@core/core/registry/index.js" import { describe, expect, it } from "vitest" describe("validateOperationCard", () => { - const validBaseCard = { - capability_id: "test.card", - version: "1.0.0", - description: "Test", - input_schema: { type: "object" }, - output_schema: { type: "object" }, - routing: { - preferred: "graphql" as const, - fallbacks: [] as readonly string[], - }, - } - it("rejects malformed operation cards", () => { expect(validateOperationCard(null).ok).toBe(false) expect( @@ -53,28 +41,25 @@ describe("validateOperationCard", () => { expect(result.ok).toBe(true) }) +}) - it("rejects card with invalid output_strategy", () => { - const card = { - ...validBaseCard, - composite: { - steps: [{ capability_id: "pr.threads.reply", params_map: {} }], - output_strategy: "invalid", - }, - } - const result = validateOperationCard(card) - expect(result.ok).toBe(false) +describe("card resolution blocks", () => { + it("issue.labels.set has resolution config", async () => { + const { getOperationCard } = await import("@core/core/registry/index.js") + const card = getOperationCard("issue.labels.set") + expect(card).toBeDefined() + if (!card) return + expect(card.graphql?.resolution).toBeDefined() + expect(card.graphql?.resolution?.lookup.operationName).toBe("IssueLabelsLookup") + expect(card.graphql?.resolution?.inject[0]?.source).toBe("map_array") }) - it("rejects composite with empty steps array", () => { - const card = { - 
...validBaseCard, - composite: { - steps: [], - output_strategy: "array", - }, - } - const result = validateOperationCard(card) - expect(result.ok).toBe(false) + it("issue.milestone.set has scalar resolution", async () => { + const { getOperationCard } = await import("@core/core/registry/index.js") + const card = getOperationCard("issue.milestone.set") + expect(card).toBeDefined() + if (!card) return + expect(card.graphql?.resolution).toBeDefined() + expect(card.graphql?.resolution?.inject[0]?.source).toBe("scalar") }) }) diff --git a/packages/core/test/unit/resolve.test.ts b/packages/core/test/unit/resolve.test.ts new file mode 100644 index 00000000..34d5a15f --- /dev/null +++ b/packages/core/test/unit/resolve.test.ts @@ -0,0 +1,100 @@ +import type { InjectSpec } from "@core/core/registry/types.js" +import { applyInject, buildMutationVars } from "@core/gql/resolve.js" +import { describe, expect, it } from "vitest" + +describe("applyInject", () => { + it("scalar: extracts value at dot-path", () => { + const lookupResult = { node: { repository: { milestone: { id: "M_456" } } } } + const spec: InjectSpec = { + target: "milestoneId", + source: "scalar", + path: "node.repository.milestone.id", + } + expect(applyInject(spec, lookupResult, {})).toEqual({ milestoneId: "M_456" }) + }) + + it("scalar: throws when path not found", () => { + const spec: InjectSpec = { + target: "milestoneId", + source: "scalar", + path: "node.repository.milestone.id", + } + expect(() => applyInject(spec, {}, {})).toThrow("milestoneId") + }) + + it("map_array: maps names to ids", () => { + const lookupResult = { + node: { + repository: { + labels: { + nodes: [ + { id: "L_1", name: "bug" }, + { id: "L_2", name: "feat" }, + ], + }, + }, + }, + } + const spec: InjectSpec = { + target: "labelIds", + source: "map_array", + from_input: "labels", + nodes_path: "node.repository.labels.nodes", + match_field: "name", + extract_field: "id", + } + const input = { labels: ["feat", "bug"] } + expect(applyInject(spec, lookupResult, input)).toEqual({ labelIds: ["L_2", "L_1"] }) + }) + + it("input: passes value from input field directly", () => { + const spec: InjectSpec = { + target: "labelableId", + source: "input", + from_input: "issueId", + } + const input = { issueId: "I_123" } + expect(applyInject(spec, {}, input)).toEqual({ labelableId: "I_123" }) + }) + + it("input: throws when input field is missing", () => { + const spec: InjectSpec = { + target: "labelableId", + source: "input", + from_input: "issueId", + } + expect(() => applyInject(spec, {}, {})).toThrow("labelableId") + }) + + it("map_array: throws when name not found", () => { + const lookupResult = { node: { repository: { labels: { nodes: [] } } } } + const spec: InjectSpec = { + target: "labelIds", + source: "map_array", + from_input: "labels", + nodes_path: "node.repository.labels.nodes", + match_field: "name", + extract_field: "id", + } + const input = { labels: ["nonexistent"] } + expect(() => applyInject(spec, lookupResult, input)).toThrow("nonexistent") + }) +}) + +describe("buildMutationVars", () => { + it("passes through vars matching mutation variable names", () => { + const mutDoc = `mutation IssueClose($issueId: ID!) 
{ closeIssue(input: {issueId: $issueId}) { issue { id } } }`
+    const input = { issueId: "I_123", extraField: "ignored" }
+    const resolved: Record<string, unknown> = {}
+    const vars = buildMutationVars(mutDoc, input, resolved)
+    expect(vars).toEqual({ issueId: "I_123" })
+  })
+
+  it("resolved vars override pass-through", () => {
+    const mutDoc = `mutation IssueLabelsUpdate($issueId: ID!, $labelIds: [ID!]!) { updateIssue(input: {id: $issueId, labelIds: $labelIds}) { issue { id } } }`
+    const input = { issueId: "I_123", labels: ["bug"] }
+    const resolved = { labelIds: ["L_1"] }
+    const vars = buildMutationVars(mutDoc, input, resolved)
+    expect(vars).toEqual({ issueId: "I_123", labelIds: ["L_1"] })
+  })
+})