Conversation
…ble coverage

- Add executeRaw/queryRaw to GraphQL transport for partial error handling in Phase 2 batch mutations (item 3)
- Expand chainable coverage: issue.labels.remove, issue.assignees.add, issue.assignees.remove, issue.milestone.clear now route through GraphQL with CLI fallback (item 5)
- Add resolution cache to avoid redundant Phase 1 lookups across chained calls (item 6)
- Add null_literal inject type for milestone clearing
- Add pr.reviews.submit resolution inject-path validation tests (item 8)
- 6 new GraphQL operations (3 lookups, 3 mutations)

https://claude.ai/code/session_015z5iPcLzyLN2Co2Ck11joN
🦋 Changeset detected. Latest commit: e1dd3bf. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
No actionable comments were generated in the recent review. 🎉

📝 Walkthrough

Adds an in-memory resolution cache, extends the GraphQL transport/client with raw-result support, enables per-step error attribution for Phase 2 mutations, introduces a null_literal inject, registers new issue lookup/mutation documents and GraphQL-first routing for four capabilities, and adds tests and docs.
Sequence Diagram(s)

sequenceDiagram
actor Client
participant Engine
participant Cache as "Resolution Cache"
participant Transport
participant GraphQL as "GitHub GraphQL API"
Client->>Engine: Execute batch (Phase 1: lookups)
Engine->>Cache: check(buildCacheKey)
Cache-->>Engine: hit / miss
alt cache hit
Engine-->>Client: use cached lookup result
else cache miss
Engine->>Transport: batch lookup query
Transport->>GraphQL: HTTP POST (lookup)
GraphQL-->>Transport: lookup data
Transport-->>Engine: lookup responses
Engine->>Cache: set(buildCacheKey, result)
end
Client->>Engine: Execute batch (Phase 2: mutations)
Engine->>Transport: mutation via queryRaw
Transport->>GraphQL: HTTP POST (mutation)
GraphQL-->>Transport: { data, errors }
Transport-->>Engine: GraphqlRawResult
alt per-step errors present
Engine->>Engine: map GraphqlError.path -> stepErrors
Engine-->>Client: partial success with per-step failures
else unattributed errors
Engine-->>Client: mark all steps failed (batch-level)
else success
Engine-->>Client: full success
end
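A minimal sketch of the per-step error attribution shown above, assuming each batch mutation step is aliased `step0`, `step1`, and so on, and that the transport returns the raw `{ data, errors }` envelope; the function and type names are illustrative, not the engine's actual API:

```ts
type GraphqlError = { message: string; path?: ReadonlyArray<string | number> }
type GraphqlRawResult<TData> = { data?: TData; errors?: GraphqlError[] }

// Attribute raw GraphQL errors to individual batch steps by their alias:
// the first path segment of an aliased top-level field names the step.
function attributeStepErrors(raw: GraphqlRawResult<Record<string, unknown>>): {
  stepErrors: Map<string, string>
  unattributed: string[]
} {
  const stepErrors = new Map<string, string>()
  const unattributed: string[] = []
  for (const err of raw.errors ?? []) {
    const head = err.path?.[0]
    if (typeof head === "string" && head.startsWith("step")) {
      stepErrors.set(head, err.message)
    } else {
      // Batch-level failure with no usable path: the caller marks all steps failed.
      unattributed.push(err.message)
    }
  }
  return { stepErrors, unattributed }
}
```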
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Codecov Report: ❌ Patch coverage is …
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/core/src/core/registry/cards/issue.assignees.remove.yaml (1)
16-24: ⚠️ Potential issue | 🔴 Critical

GraphQL route implementation is missing for these capabilities.

The cards declare `routing.preferred: graphql` with operation names and document paths, but the GraphQL handlers (`runIssueAssigneesAdd` and `runIssueAssigneesRemove`) are not implemented and not registered in `packages/core/src/gql/capability-registry.ts`. Only `issue.assignees.set` is currently registered (line 129). When executing via the declared preferred GraphQL route, `getGraphqlHandler()` will return `undefined`, causing the engine to fall back to the CLI route. The CLI handlers exist and correctly construct output (`{ issueNumber, added/removed: assignees }`), but this defeats the declared routing preference.

Implement the missing GraphQL mutation handlers in `packages/core/src/gql/domains/issue-mutations.ts` (following the pattern of `runIssueAssigneesUpdate`), then register them in the capability-registry handlers map.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/registry/cards/issue.assignees.remove.yaml` around lines 16 - 24, Implement GraphQL handlers for the missing capabilities by adding runIssueAssigneesAdd and runIssueAssigneesRemove functions to packages/core/src/gql/domains/issue-mutations.ts following the same pattern as runIssueAssigneesUpdate (accept GraphQL resolver args/context, call the same underlying logic used by the CLI handlers, and return the shaped payload { issueNumber, added/removed: assignees }). Then register these two functions in the handlers map inside packages/core/src/gql/capability-registry.ts alongside the existing issue.assignees.set entry so getGraphqlHandler() returns the appropriate resolver for the declared graphql routing.
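A minimal sketch of the registration this comment asks for, assuming a plain handlers map keyed by capability name; the handler signature and context type are illustrative, not the actual types in `capability-registry.ts`:

```ts
// Illustrative handler shape; the real types live in the ghx codebase.
type GraphqlHandler = (input: Record<string, unknown>, ctx: unknown) => Promise<unknown>

declare function runIssueAssigneesUpdate(input: Record<string, unknown>, ctx: unknown): Promise<unknown>
declare function runIssueAssigneesAdd(input: Record<string, unknown>, ctx: unknown): Promise<unknown>
declare function runIssueAssigneesRemove(input: Record<string, unknown>, ctx: unknown): Promise<unknown>

// Registry map sketched after the review's description: without the last two
// entries, getGraphqlHandler() returns undefined and the engine falls back to CLI.
const handlers: Record<string, GraphqlHandler> = {
  "issue.assignees.set": runIssueAssigneesUpdate,
  "issue.assignees.add": runIssueAssigneesAdd,
  "issue.assignees.remove": runIssueAssigneesRemove,
}

export function getGraphqlHandler(capability: string): GraphqlHandler | undefined {
  return handlers[capability]
}
```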
🧹 Nitpick comments (8)
packages/core/src/core/routing/resolution-cache.ts (1)
34-43: FIFO eviction may discard valid entries while expired ones occupy capacity.

When `store.size >= maxEntries` is reached and `set` is called with a new key, the oldest-inserted entry is evicted. But expired entries that were never accessed via `get` still count toward capacity — so a valid entry could be evicted while the Map is full of expired-but-uncollected entries.

♻️ Sweep expired entries first before falling back to FIFO eviction
    set(key: string, value: unknown): void {
    -  // Evict oldest entry (first inserted) if at capacity
       if (!store.has(key) && store.size >= maxEntries) {
    +    // First, reclaim any expired entries to avoid evicting valid ones
    +    const now = Date.now()
    +    for (const [k, v] of store) {
    +      if (now > v.expiresAt) {
    +        store.delete(k)
    +        if (store.size < maxEntries) break
    +      }
    +    }
    +    // Fall back to FIFO eviction only if still at capacity
         const oldest = store.keys().next()
         if (!oldest.done) {
           store.delete(oldest.value)
         }
       }
       store.set(key, { value, expiresAt: Date.now() + ttlMs })
    },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/routing/resolution-cache.ts` around lines 34 - 43, The set method currently evicts the oldest-inserted entry unconditionally when store.size >= maxEntries, which can remove valid entries while expired entries remain; modify the set(key: string, value: unknown) implementation to first iterate the Map (store) and remove any entries whose expiresAt <= Date.now() to free space, then re-check store.size and only if still >= maxEntries perform the existing FIFO eviction logic (using store.keys().next()) before finally calling store.set(...); update references to maxEntries, ttlMs, store, and the set function accordingly.

packages/core/src/gql/operations/issue-assignees-add.generated.ts (1)
47-52: Update codegen configuration to emit `unknown` instead of `any` for the `variables` parameter in `SdkFunctionWrapper`.

The `variables?: any` pattern appears in all 47 generated operation files and violates the type safety guideline. Since this is generated code, the fix must occur at the codegen plugin level, not via manual patching. The `@graphql-codegen/typescript-graphql-request` plugin generates this boilerplate; the configuration or plugin options may need adjustment to emit `unknown` instead. After codegen is updated, run `pnpm run ghx:gql:verify` to confirm the change propagates across all generated artifacts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/gql/operations/issue-assignees-add.generated.ts` around lines 47 - 52, The generated SdkFunctionWrapper currently types the variables parameter as `any` (variables?: any) which breaks type safety; update the GraphQL codegen configuration (the `@graphql-codegen/typescript-graphql-request` plugin or your custom plugin template) to emit `unknown` for the `variables` parameter so the generated signature becomes variables?: unknown; modify the plugin option or template that emits the SdkFunctionWrapper signature (so all generated files using SdkFunctionWrapper are affected), re-run the codegen and then run pnpm run ghx:gql:verify to ensure all 47 generated operation files now use `unknown` instead of `any`.

packages/core/test/unit/transport.test.ts (2)
15-18: `importTransport()` dynamic import is unnecessary; a static import suffices.

The comment says "Re-import to pick up env changes," but `resolveGraphqlUrl()` is called inside `createTokenTransport()` (line 161 of `transport.ts`), so it reads `process.env` at call-time. `vi.stubEnv` already patches `process.env` in place before the test runs, making the dynamic import a no-op — ES modules are cached and `import()` returns the same module object without re-executing top-level code.

♻️ Suggested simplification
At the top of the file, replace the dynamic import with a static import:
    +import { createTokenTransport } from "@core/gql/transport.js"

Then remove the helper and inline:
    - async function importTransport() {
    -   // Re-import to pick up env changes
    -   return import("@core/gql/transport.js")
    - }

And update each usage site (e.g.):
    - const { createTokenTransport } = await importTransport()
    - const transport = createTokenTransport("ghp_test")
    + const transport = createTokenTransport("ghp_test")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/unit/transport.test.ts` around lines 15 - 18, The dynamic helper importTransport() is unnecessary because vi.stubEnv patches process.env before tests and resolveGraphqlUrl() is invoked at runtime inside createTokenTransport(), so replace the dynamic import pattern by statically importing the module at the top of the test file (remove importTransport()), then inline uses by directly calling the imported createTokenTransport/resolveGraphqlUrl exports (update any call sites referencing importTransport() to use the static imports); ensure tests still call vi.stubEnv before invoking createTokenTransport so process.env changes are observed.
62-104: `executeRaw` test suite is missing the all-errors / no-partial-data case.

The three covered shapes are clean (`{ data, errors: undefined }`), partial success (`{ data, errors }`), and HTTP error. The fourth shape — where the batch produces no usable data at all (`{ data: undefined, errors: [...] }`) — is not tested. This is the path the engine takes when it cannot attribute errors to individual steps and marks all steps failed.

💡 Suggested addition
+ it("returns { data: undefined, errors } when response has no data", async () => { + mockFetchResponse({ + errors: [{ message: "Internal error", path: null }], + }) + const transport = await createTransportWithRaw("ghp_test") + + const result = await transport.executeRaw("mutation { ... }") + expect(result.data).toBeUndefined() + expect(result.errors).toHaveLength(1) + expect(result.errors?.[0]?.message).toBe("Internal error") + })🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/unit/transport.test.ts` around lines 62 - 104, Add a new test in the "executeRaw" suite that uses createTransportWithRaw and mockFetchResponse to simulate the fourth response shape: { data: undefined, errors: [...] }; call transport.executeRaw("...") and assert it returns result.data === undefined and result.errors contains the expected errors (check length and error.message/path), ensuring the method handles the all-errors/no-partial-data case without changing other tests.

packages/core/test/unit/resolution-cache.test.ts (1)
21-32: Consider adding a pre-expiry retrieval assertion to fully document the TTL contract.

The test proves entries expire after TTL but doesn't assert that the same entry is still accessible before time is advanced. A short pre-advance assertion would make the test self-contained and prevent a future regression where TTL is miscalculated in the opposite direction.
💡 Suggested addition
it("expired entries return undefined after TTL", () => { const cache = createResolutionCache({ ttlMs: 100 }) cache.set("key1", "value1") + // Entry should be accessible before TTL elapses + expect(cache.get("key1")).toBe("value1") // Advance time past TTL const now = Date.now() vi.spyOn(Date, "now").mockReturnValue(now + 200) expect(cache.get("key1")).toBeUndefined() // Entry should be cleaned up expect(cache.size).toBe(0) })🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/unit/resolution-cache.test.ts` around lines 21 - 32, Add a pre-expiry assertion to show the entry is valid before the TTL elapses: after calling createResolutionCache({ ttlMs: 100 }) and cache.set("key1", "value1"), assert that cache.get("key1") returns the stored value (and optionally cache.size === 1) before you call vi.spyOn(Date, "now").mockReturnValue(...) to advance time; this uses the existing createResolutionCache, cache.set, cache.get and cache.size symbols to document the expected behavior prior to expiry.

packages/core/src/gql/operations/issue-assignees-remove.graphql (1)
6-6: `assignees(first: 50)` is a hardcoded page size.

This limits how many remaining assignees are returned in the mutation response. While GitHub's practical ceiling for issue assignees is well below 50, making this a configurable constant (matching the `limits.maxPageSize` pattern already present in operation cards) or aligning it with a shared constant would prevent a silent truncation if the limit is ever needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/gql/operations/issue-assignees-remove.graphql` at line 6, The GraphQL operation in issue-assignees-remove.graphql uses a hardcoded page size in the assignees(first: 50) selection; replace the literal 50 with the shared page-size constant used across operation cards (e.g., limits.maxPageSize or the repo's shared PAGE_SIZE constant) so the query reads assignees(first: <sharedConstant>) and ensure the operation or generator references that same constant (update any variables/imports the operation expects, e.g., use the existing limits.maxPageSize pattern) to avoid silent truncation.

packages/core/src/core/routing/engine.ts (1)
380-392: Cache-population block duplicates lookup-var construction from lines 346–349.

The variable-building loop (`for (const [lookupVar, inputField] ...)`) is repeated verbatim. Consider extracting a small helper (e.g., `buildLookupVars(lookup, input)`) to reduce drift risk.

♻️ Suggested helper extraction
    +function buildLookupVars(
    +  lookup: { vars: Record<string, string> },
    +  input: Record<string, unknown>,
    +): Record<string, unknown> {
    +  const vars: Record<string, unknown> = {}
    +  for (const [lookupVar, inputField] of Object.entries(lookup.vars)) {
    +    vars[lookupVar] = input[inputField]
    +  }
    +  return vars
    +}

Then use it in both the scheduling loop (line 346) and the cache-population block (line 386).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/routing/engine.ts` around lines 380 - 392, The cache-population block duplicates the lookup-vars construction used earlier; extract a small helper (e.g., buildLookupVars(lookup, input)) that takes the resolution.lookup and request.input and returns the lookupVars Record, then replace the duplicate loop in the scheduling section and the cache-population block to call buildLookupVars; update calls around card.graphql.resolution, requests[stepIndex], deps.resolutionCache.set(buildCacheKey(...), result) to use the new helper so both locations share the same implementation.

packages/core/src/gql/transport.ts (1)
152-152: Unchecked type assertion on `response.json()`.

The `as JsonPayload<TData>` cast assumes the response body conforms to the expected shape. If GitHub returns an unexpected format (e.g., `{ "documentation_url": "..." }` without `data`/`errors`/`message`), the code will silently propagate `undefined` fields. This is a minor robustness gap — currently the callers (`execute` and `executeRaw`) handle `undefined` data, so it won't crash, but a minimal shape guard would make this more defensive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/gql/transport.ts` at line 152, The unchecked cast of response.json() to JsonPayload<TData> in the payload assignment should be replaced with a minimal runtime shape guard: after calling await response.json() (the current payload variable), verify it is an object and contains at least one of the expected keys ("data", "errors", or "message"); if not, construct a safe fallback object (e.g., { data: undefined, errors: undefined, message: String(payload) } or similar) before returning to callers like execute and executeRaw so they receive a predictable JsonPayload<TData>; update the code around the payload assignment and usage of JsonPayload<TData> to use this validated/fallback object instead of the direct as-cast.
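A minimal sketch of the shape guard this comment describes, assuming the `JsonPayload<TData>` envelope used by `transport.ts` has optional `data`, `errors`, and `message` fields; the helper name and fallback shape are illustrative:

```ts
type JsonPayload<TData> = {
  data?: TData
  errors?: Array<{ message: string }>
  message?: string
}

// Narrow an unknown JSON body to JsonPayload<TData>, falling back to a
// predictable shape instead of silently propagating undefined fields.
function toJsonPayload<TData>(body: unknown): JsonPayload<TData> {
  if (typeof body === "object" && body !== null) {
    const candidate = body as Record<string, unknown>
    if ("data" in candidate || "errors" in candidate || "message" in candidate) {
      return candidate as JsonPayload<TData>
    }
  }
  // Unexpected format (e.g. { "documentation_url": "..." }): surface the raw
  // body as a message so callers see what actually came back.
  return { message: JSON.stringify(body) }
}
```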
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/core/routing/engine.ts`:
- Around line 474-495: The code currently wraps per-step error strings in Error
objects before passing to mapErrorToCode; change the call site that does
mapErrorToCode(new Error(stepError)) to pass the raw stepError (e.g.,
mapErrorToCode(stepError)) so mapErrorToCode and its internal toMessage() handle
string conversion directly; update the location where per-step errors are turned
into codes (the logic that iterates stepErrors derived from
deps.githubClient.queryRaw and mutationInputs) to pass the string rather than
constructing an Error.
In `@packages/core/src/core/routing/resolution-cache.ts`:
- Around line 49-51: The size getter currently returns store.size which can
include expired-but-not-yet-evicted entries due to lazy eviction; update the
JSDoc for the get size() accessor in ResolutionCache to explicitly state that
size reflects the underlying store size and may overstate live entries (expired
entries linger until accessed/evicted), and, if you want to offer an accurate
count, add a new method (e.g., liveSize or countLiveEntries) that iterates the
store and filters out entries whose TTL/expiry has passed before returning the
true live count; reference the get size() accessor and the underlying store when
documenting and/or implementing the alternative.
In `@packages/core/src/gql/operations/issue-assignees-lookup-by-number.graphql`:
- Around line 6-11: The GraphQL fragment using assignableUsers(first: 100) can
silently truncate collaborator lists causing login→ID resolution to miss users;
update the lookup logic to either page through assignableUsers with cursor-based
pagination (use edges/pageInfo and after cursor loop until all nodes are
fetched) in the query and in the resolver that resolves logins to IDs, or detect
when a requested login is not found and throw a clear error before calling
addAssigneesToAssignable/removeAssigneesFromAssignable; specifically modify the
assignableUsers query and the login-to-ID resolution code path to perform
iterative pagination (using pageInfo.hasNextPage and endCursor) or to surface a
"user not found in repository collaborators" error when no match is returned.
In `@packages/core/src/gql/operations/issue-labels-lookup-by-number.graphql`:
- Around line 6-10: The GraphQL field labels(first: 100) imposes a silent
100-label cap causing missing labels and downstream "not found in lookup result"
errors (see labels(first: 100) and applyInject's map_array branch); update this
file to (1) add a clear comment above labels(first: 100) documenting the
100-item per-page cap and its implications, (2) add a note in the consuming
operation card issue.labels.remove.yaml warning callers about the pagination
limit and recommending the CLI fallback for large repos, and (3) plan for proper
pagination by using pageInfo.endCursor and hasNextPage in the execution layer to
fetch additional pages when repositories may exceed 100 labels.
In `@packages/core/src/gql/operations/issue-node-id-lookup.generated.ts`:
- Around line 29-34: The generated SdkFunctionWrapper type currently declares
variables?: any; update the GraphQL codegen config so the
typescript-graphql-request plugin emits variables?: unknown instead of any:
modify packages/core/codegen.ts to supply a custom wrapper type/template (via
the plugin's .add or wrapper template option) that defines SdkFunctionWrapper
with variables?: unknown, then regenerate all operation artifacts with pnpm run
gql:generate; do not hand-edit the generated files—change the generator
configuration to produce the correct type for SdkFunctionWrapper.
In `@packages/core/src/gql/transport.ts`:
- Around line 143-159: The fetch handler in packages/core/src/gql/transport.ts
currently calls response.json() before checking response.ok, causing non-JSON
error bodies to throw a SyntaxError and mask the real HTTP error; modify the
function to check response.ok (and/or the Content-Type header) before parsing:
if !response.ok, read response.text() (or include headers) and throw an Error
that includes response.status and the raw text or content-type; only call
response.json() to populate payload when the response is OK and content-type is
application/json, and handle json parse errors with a clear error that includes
the raw body and status. Ensure you update the usage around response.json(),
payload, and the error thrown so response.ok is evaluated first.
---
Outside diff comments:
In `@packages/core/src/core/registry/cards/issue.assignees.remove.yaml`:
- Around line 16-24: Implement GraphQL handlers for the missing capabilities by
adding runIssueAssigneesAdd and runIssueAssigneesRemove functions to
packages/core/src/gql/domains/issue-mutations.ts following the same pattern as
runIssueAssigneesUpdate (accept GraphQL resolver args/context, call the same
underlying logic used by the CLI handlers, and return the shaped payload {
issueNumber, added/removed: assignees }). Then register these two functions in
the handlers map inside packages/core/src/gql/capability-registry.ts alongside
the existing issue.assignees.set entry so getGraphqlHandler() returns the
appropriate resolver for the declared graphql routing.
---
Nitpick comments:
In `@packages/core/src/core/routing/engine.ts`:
- Around line 380-392: The cache-population block duplicates the lookup-vars
construction used earlier; extract a small helper (e.g., buildLookupVars(lookup,
input)) that takes the resolution.lookup and request.input and returns the
lookupVars Record, then replace the duplicate loop in the scheduling section and
the cache-population block to call buildLookupVars; update calls around
card.graphql.resolution, requests[stepIndex],
deps.resolutionCache.set(buildCacheKey(...), result) to use the new helper so
both locations share the same implementation.
In `@packages/core/src/core/routing/resolution-cache.ts`:
- Around line 34-43: The set method currently evicts the oldest-inserted entry
unconditionally when store.size >= maxEntries, which can remove valid entries
while expired entries remain; modify the set(key: string, value: unknown)
implementation to first iterate the Map (store) and remove any entries whose
expiresAt <= Date.now() to free space, then re-check store.size and only if
still >= maxEntries perform the existing FIFO eviction logic (using
store.keys().next()) before finally calling store.set(...); update references to
maxEntries, ttlMs, store, and the set function accordingly.
In `@packages/core/src/gql/operations/issue-assignees-add.generated.ts`:
- Around line 47-52: The generated SdkFunctionWrapper currently types the
variables parameter as `any` (variables?: any) which breaks type safety; update
the GraphQL codegen configuration (the
`@graphql-codegen/typescript-graphql-request` plugin or your custom plugin
template) to emit `unknown` for the `variables` parameter so the generated
signature becomes variables?: unknown; modify the plugin option or template that
emits the SdkFunctionWrapper signature (so all generated files using
SdkFunctionWrapper are affected), re-run the codegen and then run pnpm run
ghx:gql:verify to ensure all 47 generated operation files now use `unknown`
instead of `any`.
In `@packages/core/src/gql/operations/issue-assignees-remove.graphql`:
- Line 6: The GraphQL operation in issue-assignees-remove.graphql uses a
hardcoded page size in the assignees(first: 50) selection; replace the literal
50 with the shared page-size constant used across operation cards (e.g.,
limits.maxPageSize or the repo's shared PAGE_SIZE constant) so the query reads
assignees(first: <sharedConstant>) and ensure the operation or generator
references that same constant (update any variables/imports the operation
expects, e.g., use the existing limits.maxPageSize pattern) to avoid silent
truncation.
In `@packages/core/src/gql/transport.ts`:
- Line 152: The unchecked cast of response.json() to JsonPayload<TData> in the
payload assignment should be replaced with a minimal runtime shape guard: after
calling await response.json() (the current payload variable), verify it is an
object and contains at least one of the expected keys ("data", "errors", or
"message"); if not, construct a safe fallback object (e.g., { data: undefined,
errors: undefined, message: String(payload) } or similar) before returning to
callers like execute and executeRaw so they receive a predictable
JsonPayload<TData>; update the code around the payload assignment and usage of
JsonPayload<TData> to use this validated/fallback object instead of the direct
as-cast.
In `@packages/core/test/unit/resolution-cache.test.ts`:
- Around line 21-32: Add a pre-expiry assertion to show the entry is valid
before the TTL elapses: after calling createResolutionCache({ ttlMs: 100 }) and
cache.set("key1", "value1"), assert that cache.get("key1") returns the stored
value (and optionally cache.size === 1) before you call vi.spyOn(Date,
"now").mockReturnValue(...) to advance time; this uses the existing
createResolutionCache, cache.set, cache.get and cache.size symbols to document
the expected behavior prior to expiry.
In `@packages/core/test/unit/transport.test.ts`:
- Around line 15-18: The dynamic helper importTransport() is unnecessary because
vi.stubEnv patches process.env before tests and resolveGraphqlUrl() is invoked
at runtime inside createTokenTransport(), so replace the dynamic import pattern
by statically importing the module at the top of the test file (remove
importTransport()), then inline uses by directly calling the imported
createTokenTransport/resolveGraphqlUrl exports (update any call sites
referencing importTransport() to use the static imports); ensure tests still
call vi.stubEnv before invoking createTokenTransport so process.env changes are
observed.
- Around line 62-104: Add a new test in the "executeRaw" suite that uses
createTransportWithRaw and mockFetchResponse to simulate the fourth response
shape: { data: undefined, errors: [...] }; call transport.executeRaw("...") and
assert it returns result.data === undefined and result.errors contains the
expected errors (check length and error.message/path), ensuring the method
handles the all-errors/no-partial-data case without changing other tests.
    export type SdkFunctionWrapper = <T>(
      action: (requestHeaders?: Record<string, string>) => Promise<T>,
      operationName: string,
      operationType?: string,
      variables?: any,
    ) => Promise<T>
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check how many generated operation files declare SdkFunctionWrapper with `any`
rg -n "variables\?: any" --type=ts packages/core/src/gql/operations/Repository: aryeko/ghx
Length of output: 4058
🏁 Script executed:
#!/bin/bash
# Search for codegen config files
find packages/core -type f \( -name "codegen.yml" -o -name "codegen.yaml" -o -name "codegen.json" -o -name "*.codegen.*" \) -o -name ".graphqlconfig*"
# Also check for any scripts that mention codegen or gql generation
rg "gql|codegen" --type=json packages/core/package.json -A 5 -B 1 | head -40Repository: aryeko/ghx
Length of output: 1004
🏁 Script executed:
#!/bin/bash
# Find codegen config files in packages/core
find packages/core -type f \( -name "codegen*.ts" -o -name "codegen*.yml" -o -name "codegen*.yaml" -o -name "codegen*.json" \)
# Look at the generation script
ls -la packages/core/scripts/ | grep -i gql

Repository: aryeko/ghx
Length of output: 287
🏁 Script executed:
#!/bin/bash
# Read the codegen.ts file to see how SdkFunctionWrapper is configured
cat -n packages/core/codegen.ts

Repository: aryeko/ghx
Length of output: 969
🏁 Script executed:
#!/bin/bash
# Check if there are custom scalar or wrapper configurations
cat -n packages/core/codegen.ts | head -30
# Check the base-types.js file which is referenced
cat -n packages/core/src/gql/operations/base-types.js
# Also check the generation script
cat -n packages/core/scripts/generate-gql-artifacts.mjs

Repository: aryeko/ghx
Length of output: 3249
🏁 Script executed:
#!/bin/bash
# Check if base-types.js exists somewhere else
find packages/core -name "base-types.js" -o -name "base-types.ts"
# Let's look at one of the generated files to see the full context of SdkFunctionWrapper
head -50 packages/core/src/gql/operations/issue-node-id-lookup.generated.ts

Repository: aryeko/ghx
Length of output: 1686
🏁 Script executed:
#!/bin/bash
# Check the base-types.ts file
cat -n packages/core/src/gql/operations/base-types.ts

Repository: aryeko/ghx
Length of output: 1395
🏁 Script executed:
#!/bin/bash
# Check if there's any additional codegen config in package.json or elsewhere
grep -r "SdkFunctionWrapper" packages/core --type=ts -B 2 -A 2 | head -40
# Check graphql-codegen documentation references or additional config
grep -r "wrapperTypesPath\|customFetch\|rawRequest" packages/core/codegen.ts
# Check if there's a type definition for SdkFunctionWrapper elsewhere (e.g., in a hand-written type file)
find packages/core/src -name "*.ts" -type f | xargs grep -l "SdkFunctionWrapper" | grep -v generated

Repository: aryeko/ghx
Length of output: 218
🏁 Script executed:
#!/bin/bash
# Search for SdkFunctionWrapper references using rg
rg "SdkFunctionWrapper" packages/core/src --type=ts -B 2 -A 2
# Check if there's a hand-written definition outside generated files
rg "type SdkFunctionWrapper" packages/core --type=ts --glob="!*.generated.ts"
# Look at the graphql-codegen docs for typescript-graphql-request plugin config
# Check if there's a way to override the wrapper type
rg "wrapperTypesPath\|sdkFunctionWrapper" packages/core --type=tsRepository: aryeko/ghx
Length of output: 50366
🏁 Script executed:
#!/bin/bash
# Check if there are any tests or comments about the codegen config
rg "SdkFunctionWrapper\|wrapperTypes\|typescript-graphql-request" packages/core/test/unit/codegen --type=ts -A 3 -B 1
# Also check the README or any documentation
find packages/core -name "README*" -o -name "*.md" | xargs grep -l "codegen" 2>/dev/null

Repository: aryeko/ghx
Length of output: 127
🏁 Script executed:
#!/bin/bash
# Check if there are any markdown files with codegen guidance
find packages/core -type f \( -name "*.md" -o -name "CLAUDE*" -o -name "AGENTS*" \)
# Also check the root level for any docs
ls -la | grep -i readme

Repository: aryeko/ghx
Length of output: 206
🏁 Script executed:
#!/bin/bash
# Check root level for CLAUDE.md and AGENTS.md
ls -la | grep -E "CLAUDE|AGENTS"
# Also check the packages/core README
head -100 packages/core/README.md
# Look for any existing configuration that handles `any` type issues
rg "any\|unknown" packages/core/codegen.ts -A 2 -B 2Repository: aryeko/ghx
Length of output: 3087
Fix variables?: any to unknown in SdkFunctionWrapper via codegen configuration.
All 47 generated operation files in packages/core/src/gql/operations/ declare SdkFunctionWrapper with variables?: any, violating the unknown-over-any coding guideline. The fix belongs in packages/core/codegen.ts, where the typescript-graphql-request plugin needs a custom wrapper type configuration (e.g., via a custom .add plugin or wrapper template) to generate unknown instead of any. After updating the config, regenerate all artifacts with pnpm run gql:generate. Do not manually edit generated files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/src/gql/operations/issue-node-id-lookup.generated.ts` around
lines 29 - 34, The generated SdkFunctionWrapper type currently declares
variables?: any; update the GraphQL codegen config so the
typescript-graphql-request plugin emits variables?: unknown instead of any:
modify packages/core/codegen.ts to supply a custom wrapper type/template (via
the plugin's .add or wrapper template option) that defines SdkFunctionWrapper
with variables?: unknown, then regenerate all operation artifacts with pnpm run
gql:generate; do not hand-edit the generated files—change the generator
configuration to produce the correct type for SdkFunctionWrapper.
Acknowledged as a known limitation in commit 11cf541 via a comment in codegen.ts.
The typescript-graphql-request plugin hardcodes variables?: any in its own Handlebars template — there is no config option to change it to unknown without forking the plugin or post-processing the generated output (both ruled out by the "never edit generated files" convention). The generated Sdk class is used exclusively in github-client.ts behind our own typed wrapper; it is never part of the public API, and Biome linting is disabled for generated files, so the any has no runtime or CI impact. Accepting as a known plugin limitation rather than introducing a custom plugin fork for a cosmetic type in code that is never called directly.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify the comment exists in codegen.ts
cat packages/core/codegen.ts
Length of output: 1140
@aryeko, thank you for the thorough explanation and documentation in codegen.ts. Your reasoning is sound—since the SdkFunctionWrapper is internal to the generated SDK, never exposed in the public API, and wrapped by your own typed interfaces in github-client.ts, accepting this plugin template limitation is pragmatic. The comment you've added clearly documents the rationale for future maintainers.
✏️ Learnings added
Learnt from: aryeko
Repo: aryeko/ghx PR: 60
File: packages/core/src/gql/operations/issue-node-id-lookup.generated.ts:29-34
Timestamp: 2026-02-21T16:58:10.442Z
Learning: In packages/core/src/gql/operations/*.generated.ts, the `SdkFunctionWrapper` type with `variables?: any` is an accepted limitation because the typescript-graphql-request plugin hardcodes this in its template with no configuration option to change it to `unknown`. Generated files are never manually edited, the Sdk is only used internally behind typed wrappers in github-client.ts (never part of the public API), and linting is disabled for generated files, so there is no runtime or CI impact.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: aryeko
Repo: aryeko/ghx PR: 9
File: packages/core/src/gql/client.ts:312-370
Timestamp: 2026-02-14T06:03:04.559Z
Learning: In packages/core/src/gql/client.ts, input validation for GraphQL client methods (e.g., assertPrReviewsListInput, assertPrCommentsListInput) uses explicit type guards rather than AJV schemas to avoid coupling the gql client internals to the registry validator plumbing, maintaining module boundary separation.
Learnt from: CR
Repo: aryeko/ghx PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-19T03:59:35.112Z
Learning: Applies to **/*.{ts,tsx} : Use `unknown` type with narrowing instead of `any` in TypeScript
Learnt from: CR
Repo: aryeko/ghx PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-19T03:59:35.112Z
Learning: Applies to packages/core/src/gql/{generated,operations}/**/*.{ts,tsx} : Never edit generated code manually — regenerate `packages/core/src/gql/generated/**` and `packages/core/src/gql/operations/*.generated.ts` via codegen script
Learnt from: CR
Repo: aryeko/ghx PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-19T03:59:53.308Z
Learning: Applies to **/*.{ts,tsx} : Prefer `unknown` + narrowing over `any` for type safety
Learnt from: CR
Repo: aryeko/ghx PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-19T03:59:53.308Z
Learning: Applies to packages/core/src/gql/generated/**,packages/core/src/gql/operations/*.generated.ts : Treat generated files (`packages/core/src/gql/generated/**` and `packages/core/src/gql/operations/*.generated.ts`) as artifacts; prefer regeneration over manual edits
Learnt from: CR
Repo: aryeko/ghx PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-19T03:59:53.308Z
Learning: Applies to **/*.{ts,tsx,js,mjs} : Respect unused variable conventions with `_` prefix allowed by lint config
Learnt from: aryeko
Repo: aryeko/ghx PR: 46
File: packages/core/src/core/execution/adapters/cli-capability-adapter.ts:522-540
Timestamp: 2026-02-19T00:36:27.711Z
Learning: In packages/core/src/core/execution/adapters/cli-capability-adapter.ts, some CLI fallback branches intentionally validate inputs that are primarily consumed by the GraphQL-preferred route to keep a consistent, route-agnostic input contract. Example: for capability "pr.diff.files", the CLI path validates `first` even though `gh pr view --json files` cannot limit results.
Learnt from: aryeko
Repo: aryeko/ghx PR: 46
File: packages/core/src/core/execution/adapters/cli-capability-adapter.ts:68-68
Timestamp: 2026-02-19T00:35:28.632Z
Learning: Repo: aryeko/ghx PR: 46
In packages/core/src/core/execution/adapters/cli-capability-adapter.ts, pr.create is intentionally included in NON_JSON_STDOUT_CAPABILITIES because `gh pr create` emits a plain-text URL (not JSON). As of PR `#46`, normalizeCliData("pr.create") returns a stub (number: 0, url: ""), and improving this (e.g., capturing stdout URL or parsing PR number) is a known limitation deferred to a future PR.
- Register issue.assignees.add/remove GraphQL handlers (types, mutations, client interface, capability registry) so routing uses GraphQL instead of falling back to CLI
- Sweep expired cache entries before FIFO eviction in resolution cache
- Check response.ok before parsing JSON in GraphQL transport to avoid SyntaxError on non-JSON error bodies (see the sketch below)
- Extract buildLookupVars helper in engine to eliminate duplication
- Pass strings directly to mapErrorToCode instead of wrapping in Error
- Add missing executeRaw all-errors test, pre-expiry TTL assertion, and simplify transport test imports to static

https://claude.ai/code/session_015z5iPcLzyLN2Co2Ck11joN
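A minimal sketch of the response.ok-before-parse pattern from the third bullet, assuming a standard fetch `Response`; the function name and error wording are illustrative, not the actual `transport.ts` code:

```ts
type JsonPayload<TData> = {
  data?: TData
  errors?: Array<{ message: string }>
  message?: string
}

async function parseGraphqlResponse<TData>(response: Response): Promise<JsonPayload<TData>> {
  if (!response.ok) {
    // Read the body as text so a non-JSON error page does not mask the HTTP status.
    let body = ""
    try {
      body = await response.text()
    } catch {
      // body unavailable; fall through with the status alone
    }
    throw new Error(`GitHub GraphQL request failed with HTTP ${response.status}: ${body.slice(0, 200)}`)
  }
  try {
    return (await response.json()) as JsonPayload<TData>
  } catch {
    throw new Error("GitHub GraphQL returned a 2xx response whose body is not valid JSON")
  }
}
```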
Actionable comments posted: 2
🧹 Nitpick comments (2)
packages/core/src/core/routing/engine.ts (1)
358-401: Resolution cache integration is clean — read-before-schedule, write-after-fetch.

The cache-hit path at lines 362–368 correctly skips the network call, and the write path at lines 391–400 stores fresh results keyed identically. One minor note: `result` stored in the cache (line 398) could be `undefined` if the batch response lacks the expected alias — that entry would be a no-op on cache read since `get` returns `undefined` for misses anyway, but it silently pollutes the cache with an undefined value.

Consider guarding the cache write:
Suggested guard
    if (card?.graphql?.resolution && req) {
      const lookupVars = buildLookupVars(card, req)
    + if (result !== undefined) {
      deps.resolutionCache.set(
        buildCacheKey(card.graphql.resolution.lookup.operationName, lookupVars),
        result,
      )
    + }
    }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/routing/engine.ts` around lines 358 - 401, The cache write may store undefined when the batch response lacks the alias; update the post-fetch cache population in the block that iterates lookupInputs (where result is derived from rawResult[alias]) to only call deps.resolutionCache.set when result !== undefined (or otherwise truthy as your semantics require). Keep using buildLookupVars(card, req) and buildCacheKey(card.graphql.resolution.lookup.operationName, lookupVars) to compute the cache key and ensure the guard runs before invoking deps.resolutionCache.set so undefined responses are not persisted.

packages/core/src/gql/transport.ts (1)
137-166: `fetchGraphql` — prior `response.ok`-before-parse issue is correctly resolved.

The `!response.ok` guard (line 152) now precedes `response.json()`, and the inner `catch` on line 159 silently absorbs non-JSON error bodies with a clear comment. This is the right pattern.

One optional hardening: `response.json()` on line 165 (happy path) has no `try/catch`. A truncated 200 response would throw a bare `SyntaxError`. For a GraphQL endpoint this is extremely unlikely, but wrapping in a try/catch with a descriptive message would make debugging easier.

🛡️ Optional defensive wrap
    - return (await response.json()) as JsonPayload<TData>
    + let payload: JsonPayload<TData>
    + try {
    +   payload = (await response.json()) as JsonPayload<TData>
    + } catch {
    +   throw new Error("GraphQL response body is not valid JSON")
    + }
    + return payload

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/gql/transport.ts` around lines 137 - 166, fetchGraphql currently parses the successful response body with response.json() without guarding for JSON parse errors; wrap the happy-path JSON parse in a try/catch inside fetchGraphql so any SyntaxError or parse issue is caught and rethrown with a clear message (e.g., "Invalid JSON in GraphQL response") including status/URL/context; update the return path to return the parsed JsonPayload<TData> only if parse succeeds and throw a descriptive Error otherwise, referencing the fetchGraphql function and the successful-response parsing location.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/gql/transport.ts`:
- Around line 69-88: The behavior of queryRaw is inconsistent: when
transport.executeRaw exists HTTP errors bubble up but the fallback path catches
and returns a GraphqlRawResult; make them consistent by wrapping the
transport.executeRaw<TData>(queryText, variables) call in a try/catch (same
pattern used for transport.execute) and return { data: undefined, errors: [{
message: err instanceof Error ? err.message : String(err) }] } on error so
callers always receive a settled GraphqlRawResult; update queryRaw to try/catch
around transport.executeRaw and mirror the fallback error-normalization logic
(referencing queryRaw, transport.executeRaw, transport.execute and error
handling).
- Around line 6-10: Update the GraphqlError.type so its path property accepts
numeric indices as well as strings: change the path declaration on the
GraphqlError type (in packages/core/src/gql/transport.ts) from string[] to a
union-based array type compatible with graphql-js (e.g., ReadonlyArray<string |
number> or (string | number)[]), ensuring consumers can handle paths like
["step0","items",2,"field"] without TypeScript errors.
---
Nitpick comments:
In `@packages/core/src/core/routing/engine.ts`:
- Around line 358-401: The cache write may store undefined when the batch
response lacks the alias; update the post-fetch cache population in the block
that iterates lookupInputs (where result is derived from rawResult[alias]) to
only call deps.resolutionCache.set when result !== undefined (or otherwise
truthy as your semantics require). Keep using buildLookupVars(card, req) and
buildCacheKey(card.graphql.resolution.lookup.operationName, lookupVars) to
compute the cache key and ensure the guard runs before invoking
deps.resolutionCache.set so undefined responses are not persisted.
In `@packages/core/src/gql/transport.ts`:
- Around line 137-166: fetchGraphql currently parses the successful response
body with response.json() without guarding for JSON parse errors; wrap the
happy-path JSON parse in a try/catch inside fetchGraphql so any SyntaxError or
parse issue is caught and rethrown with a clear message (e.g., "Invalid JSON in
GraphQL response") including status/URL/context; update the return path to
return the parsed JsonPayload<TData> only if parse succeeds and throw a
descriptive Error otherwise, referencing the fetchGraphql function and the
successful-response parsing location.
- Change GraphqlError.path to ReadonlyArray<string | number> per GraphQL spec (supports numeric indices for list positions)
- Normalize queryRaw: wrap executeRaw in try/catch so HTTP errors settle consistently regardless of transport implementation (see the sketch below)
- Guard resolution cache write against undefined values from missing batch aliases
- Wrap happy-path response.json() in try/catch to surface truncated JSON errors clearly
- Add issue-assignees-mutations tests (add/remove: validation, lookup failures, assignee resolution, case-insensitive matching)
- Add transport tests for non-JSON error bodies, truncated JSON, missing data, and queryRaw error normalization
- Add graphql-capability-adapter coverage for issue.assignees.add/remove

https://claude.ai/code/session_015z5iPcLzyLN2Co2Ck11joN
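A minimal sketch of the queryRaw normalization described in the second bullet, assuming a transport with an optional `executeRaw` method; the `Transport` interface here is illustrative, not the actual client types:

```ts
type GraphqlError = { message: string; path?: ReadonlyArray<string | number> }
type GraphqlRawResult<TData> = { data?: TData; errors?: GraphqlError[] }

interface Transport {
  executeRaw?<TData>(query: string, variables?: Record<string, unknown>): Promise<GraphqlRawResult<TData>>
}

// queryRaw never rejects: HTTP/transport failures are normalized into the same
// { data, errors } envelope that GraphQL-level errors already use.
async function queryRaw<TData>(
  transport: Transport,
  query: string,
  variables?: Record<string, unknown>,
): Promise<GraphqlRawResult<TData>> {
  if (!transport.executeRaw) {
    return { data: undefined, errors: [{ message: "transport does not support raw execution" }] }
  }
  try {
    return await transport.executeRaw<TData>(query, variables)
  } catch (err) {
    return {
      data: undefined,
      errors: [{ message: err instanceof Error ? err.message : String(err) }],
    }
  }
}
```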
🧹 Nitpick comments (1)
packages/core/src/core/routing/engine.ts (1)
538-551: Minor: truthy check on `stepError` could mask an empty-string error message.

`stepErrors.get(mutInput.alias)` returns `undefined` for missing keys, so the `if (stepError)` check works in practice. However, if a GraphQL error ever returned an empty `message: ""`, this would silently skip the error. Using an explicit `undefined` check would be marginally safer.

Suggested tweak
      const stepError = stepErrors.get(mutInput.alias)
    - if (stepError) {
    + if (stepError !== undefined) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/routing/engine.ts` around lines 538 - 551, The current truthy check on stepError when reading from stepErrors.get(mutInput.alias) can miss an explicit empty-string error; change the conditional to check explicitly for undefined (e.g., if (stepError !== undefined)) so any non-undefined error string — including "" — will be handled; keep the existing return shape that uses mapErrorToCode(stepError) and the same task/ok/error structure.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/core/src/core/routing/engine.ts`:
- Around line 538-551: The current truthy check on stepError when reading from
stepErrors.get(mutInput.alias) can miss an explicit empty-string error; change
the conditional to check explicitly for undefined (e.g., if (stepError !==
undefined)) so any non-undefined error string — including "" — will be handled;
keep the existing return shape that uses mapErrorToCode(stepError) and the same
task/ok/error structure.
- Fix stepError !== undefined check in engine Phase 2 error attribution
- Remove vitest/no-conditional-expect violations in engine tests
- Add 13 chain-command tests covering no-args, parsing, executeRaw, errors
- Add Phase 2 missing-alias and transport-failure engine tests
- Update changeset to comprehensively reflect all PR #60 changes
- Update followup doc with completion status for items 3, 5, 6

https://claude.ai/code/session_015z5iPcLzyLN2Co2Ck11joN
Actionable comments posted: 1
🧹 Nitpick comments (2)
packages/core/src/core/routing/engine.ts (1)
386-401: Cache write guard is appropriate, but consider also skipping `null` results.

Line 391 guards against `undefined`, but a GraphQL batch response can return `null` for a failed alias (e.g., the field resolved to `null`). Caching `null` would cause subsequent lookups to skip the network call and inject `null` into `applyInject`, which may produce incorrect resolved variables.

If `null` is a valid lookup result in your domain, ignore this; otherwise consider tightening the guard:

Suggested tightening
    - if (deps.resolutionCache && result !== undefined) {
    + if (deps.resolutionCache && result != null) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/routing/engine.ts` around lines 386 - 401, The loop that writes to the resolution cache (inside the lookupInputs iteration) currently only skips undefined values but may cache nulls; update the guard in that block to skip both undefined and null (e.g., require result != null) before calling buildLookupVars and deps.resolutionCache.set so null batch responses are not cached; locate the logic around lookupInputs, lookupResults assignment, and the cache write that uses buildLookupVars, buildCacheKey, cards[stepIndex], and requests[stepIndex], and tighten the condition accordingly.

packages/core/test/unit/chain-command.test.ts (1)
199-245: Test name promises more than it asserts.

The test is titled "provides executeRaw transport that returns settled results" but only asserts that `executeRaw` is defined (line 243). The `_rawForTest` value captured in the mock (line 218) is never verified. Consider asserting the shape of the raw result to match the test's intent.

Suggested additional assertion
      expect(capturedTransport?.executeRaw).toBeDefined()
    +
    + const stdoutSpy = vi.spyOn(process.stdout, "write").mockImplementation(() => true)
    + // If _rawForTest was propagated in the result, verify its shape:
    + // expect(raw).toMatchObject({ data: { step0: { id: "I_1" } }, errors: [...] })
    + stdoutSpy.mockRestore()

Alternatively, rename the test to "provides executeRaw on the transport" to match its actual assertion scope.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/unit/chain-command.test.ts` around lines 199 - 245, Update the test to actually assert the returned raw payload captured in the mock instead of only asserting executeRaw exists: when calling chainCommand use a variable to capture its return (e.g., const result = await chainCommand(...)) and add an assertion that result._rawForTest (or the appropriate shape) matches the mocked GraphQL response (the object with data and errors), referencing the existing capturedTransport, _rawForTest, executeRaw, and chainCommand symbols; alternatively if you prefer not to assert the payload, rename the test title to "provides executeRaw on the transport" so the name matches the single assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/plans/2026-02-20-atomic-chaining-followup.md`:
- Around line 99-106: Update the notes for the table rows for
issue.milestone.clear and issue.labels.remove to accurately reflect missing
work: for issue.milestone.clear, replace "Needs resolution lookup for milestone
ID" with that GraphQL mutation file, codegen, card YAML updates, and handler
registration are missing and note that the new null_literal inject type covers
the clear behavior; for issue.labels.remove, indicate that mutation file and
codegen exist but the GraphQL handler is not registered in
capability-registry.ts (so only CLI fallback works) and that handler
registration plus card YAML updates are required to enable chaining through
GraphQL.
---
Nitpick comments:
In `@packages/core/src/core/routing/engine.ts`:
- Around line 386-401: The loop that writes to the resolution cache (inside the
lookupInputs iteration) currently only skips undefined values but may cache
nulls; update the guard in that block to skip both undefined and null (e.g.,
require result != null) before calling buildLookupVars and
deps.resolutionCache.set so null batch responses are not cached; locate the
logic around lookupInputs, lookupResults assignment, and the cache write that
uses buildLookupVars, buildCacheKey, cards[stepIndex], and requests[stepIndex],
and tighten the condition accordingly.
In `@packages/core/test/unit/chain-command.test.ts`:
- Around line 199-245: Update the test to actually assert the returned raw
payload captured in the mock instead of only asserting executeRaw exists: when
calling chainCommand use a variable to capture its return (e.g., const result =
await chainCommand(...)) and add an assertion that result._rawForTest (or the
appropriate shape) matches the mocked GraphQL response (the object with data and
errors), referencing the existing capturedTransport, _rawForTest, executeRaw,
and chainCommand symbols; alternatively if you prefer not to assert the payload,
rename the test title to "provides executeRaw on the transport" so the name
matches the single assertion.
- Add pageInfo.hasNextPage guard to 4 GraphQL lookup files (assignees + labels, by-id and by-number variants); applyInject map_array branch now throws a descriptive error when the 100-item connection cap is exceeded (see the sketch below)
- Wire ResolutionCache into chainCommand() so CLI invocations benefit from cross-call lookup caching
- Fix resolution-cache eviction loop: collect expired keys into array before deleting (no deletion during Map iteration)
- Add numeric path[0] test in engine.test.ts: non-string path element falls through step attribution and marks all steps failed
- Add resolve.test.ts tests for map_array truncation guard (throws / succeeds)
- Correct inaccurate capability status notes in atomic-chaining followup doc
- Add codegen.ts comment explaining SdkFunctionWrapper variables?: any is a hardcoded plugin limitation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
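A minimal sketch of the truncation guard from the first bullet, assuming a connection result of `{ nodes, pageInfo }` and a lookup that maps names to node IDs; the helper name and error wording are illustrative, not the actual `applyInject` code:

```ts
type Connection<T> = {
  nodes: T[]
  pageInfo: { hasNextPage: boolean }
}

// Resolve requested names to node IDs from a single-page lookup result,
// refusing to proceed when the 100-item connection cap was hit.
function mapNamesToIds(
  conn: Connection<{ id: string; name: string }>,
  requested: string[],
): string[] {
  if (conn.pageInfo.hasNextPage) {
    throw new Error(
      "Lookup returned more items than one page; results would be truncated. " +
        "Use the CLI fallback or paginate the lookup.",
    )
  }
  const byName = new Map(conn.nodes.map((n) => [n.name.toLowerCase(), n.id]))
  return requested.map((name) => {
    const id = byName.get(name.toLowerCase())
    if (id === undefined) throw new Error(`"${name}" not found in lookup result`)
    return id
  })
}
```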
Updated generated files for the 4 lookup operations that now include
pageInfo { hasNextPage } in their connection blocks.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
🧹 Nitpick comments (5)
packages/core/codegen.ts (1)
13-17: Clarify "Biome linting is disabled for this project"The phrasing "Biome linting is disabled for this project" could mislead future readers — Biome is actively used across the repo; it is only disabled for the generated output files specifically. Consider tightening the wording.
📝 Suggested wording
    - // Note: typescript-graphql-request generates a `SdkFunctionWrapper` type with
    - // `variables?: any`. This is hardcoded in the plugin template and cannot be
    - // changed without a custom plugin or fork. Since generated files are never
    - // edited manually and Biome linting is disabled for this project, this `any`
    - // does not affect CI or runtime behavior. Tracked as a known limitation.
    + // Note: typescript-graphql-request generates a `SdkFunctionWrapper` type with
    + // `variables?: any`. This is hardcoded in the plugin template and cannot be
    + // changed without a custom plugin or fork. Since generated files are never
    + // edited manually and Biome linting is disabled for generated output, this `any`
    + // does not affect CI or runtime behavior. Tracked as a known limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/codegen.ts` around lines 13 - 17, Update the explanatory comment in the codegen.ts header to avoid implying Biome is disabled project-wide; change the sentence that currently reads "Biome linting is disabled for this project" to clarify that Biome remains enabled across the repo but is intentionally disabled only for generated output files (e.g., "Biome linting is enabled for the repository but disabled for generated output files such as those produced by typescript-graphql-request"). Locate and edit the comment block containing the note about typescript-graphql-request and SdkFunctionWrapper to replace the ambiguous phrasing accordingly.

packages/core/src/gql/operations/issue-labels-lookup.graphql (1)
6-6: Consider including `endCursor` alongside `hasNextPage` for future pagination support.

The current change is correct for the stated goal — detecting when the 100-label cap is hit to throw a descriptive error. However, if a future PR ever needs to walk all pages rather than hard-abort, `endCursor` won't be available without amending the query again.

♻️ Proposed addition
    - pageInfo { hasNextPage }
    + pageInfo { hasNextPage endCursor }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/gql/operations/issue-labels-lookup.graphql` at line 6, The GraphQL selection for pageInfo only requests hasNextPage; update the pageInfo selection in issue-labels-lookup.graphql to also request endCursor so future pagination can resume/iterate (i.e., include endCursor alongside hasNextPage in the pageInfo selection used by the query).

packages/core/src/core/routing/resolution-cache.ts (1)
64-71: `buildCacheKey` only sorts top-level keys — nested objects won't have deterministic serialization.
If a variable value is itself an object (e.g., `{ filter: { state: "open" } }`), the nested keys won't be sorted, potentially producing different cache keys for semantically identical inputs. This is low risk given that GraphQL lookup variables are typically flat primitives, but worth noting.
♻️ Optional: recursive key sorting for full determinism
```diff
+function sortDeep(val: unknown): unknown {
+  if (typeof val !== "object" || val === null) return val
+  if (Array.isArray(val)) return val.map(sortDeep)
+  const sorted: Record<string, unknown> = {}
+  for (const k of Object.keys(val as Record<string, unknown>).sort()) {
+    sorted[k] = sortDeep((val as Record<string, unknown>)[k])
+  }
+  return sorted
+}
+
 export function buildCacheKey(operationName: string, variables: Record<string, unknown>): string {
-  const sortedKeys = Object.keys(variables).sort()
-  const sorted: Record<string, unknown> = {}
-  for (const k of sortedKeys) {
-    sorted[k] = variables[k]
-  }
-  return `${operationName}:${JSON.stringify(sorted)}`
+  return `${operationName}:${JSON.stringify(sortDeep(variables))}`
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/core/routing/resolution-cache.ts` around lines 64 - 71, buildCacheKey currently only sorts top-level keys so nested objects can serialize nondeterministically; modify buildCacheKey to deep-sort keys recursively before stringifying by implementing a helper (e.g., sortObjectDeep) that: for objects creates a new object with keys sorted and values processed recursively, for arrays maps each element through the helper, and for primitives returns the value as-is; then call this helper on the variables inside buildCacheKey (without mutating the original variables) so JSON.stringify yields deterministic output for nested structures.
packages/core/src/cli/commands/chain.ts (1)
99-104: Consider narrowing the `response.json()` cast.
`response.json()` returns `Promise<any>` in the Fetch API. The `as GqlPayload<TData>` cast on line 101 is a trust boundary — if the response shape is unexpected (e.g., a non-object like a string or number), downstream code could fail with confusing errors. A lightweight runtime check (e.g., `typeof payload === "object" && payload !== null`) before trusting the cast would harden this.
🛡️ Optional: add a runtime shape check
```diff
   try {
-    payload = (await response.json()) as GqlPayload<TData>
+    const raw: unknown = await response.json()
+    if (typeof raw !== "object" || raw === null) {
+      throw new Error(`GitHub GraphQL returned unexpected JSON type (status ${response.status})`)
+    }
+    payload = raw as GqlPayload<TData>
   } catch {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/cli/commands/chain.ts` around lines 99 - 104, The current cast of response.json() to GqlPayload<TData> is unsafe; after awaiting response.json() in the try block where payload is assigned, validate the runtime shape before trusting the cast: store the raw result (e.g., raw = await response.json()), check that typeof raw === "object" && raw !== null (and optionally Array.isArray(raw) if arrays are invalid), and only then assign payload = raw as GqlPayload<TData>; otherwise throw a descriptive Error mentioning the non-object response and status. Update the logic around payload/GqlPayload<TData> and response.json() in the same try/catch to perform this validation.
packages/core/test/unit/engine.test.ts (1)
744-751: Consider extracting repeated `vi.resetModules` + `vi.doMock` boilerplate into a shared helper.
Every test in the partial-error and resolution-cache describe blocks repeats the same 7-line `vi.resetModules`/`vi.doMock` preamble for `execute` and `getOperationCard`. A shared `resetEngineModuleMocks()` helper would reduce ~40 lines of duplication and make each test body more focused on the scenario-specific setup.
Also applies to: 805-811, 863-869, 922-928, 979-985, 1034-1040
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/unit/engine.test.ts` around lines 744 - 751, Extract the repeated vi.resetModules + vi.doMock boilerplate into a single helper function (e.g., resetEngineModuleMocks) that calls vi.resetModules() and sets up the two mocks for "@core/core/execute/execute.js" (mapping to executeMock) and "@core/core/registry/index.js" (mapping to getOperationCardMock); replace each 7-line preamble in the tests with a call to resetEngineModuleMocks() so tests call the helper before per-test mock configuration. Ensure the helper references executeMock and getOperationCardMock so existing test-specific overrides still work and export or define it in a shared test utils file so all affected blocks can import and reuse it.
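The comment above only describes the helper in prose; here is a sketch of what `resetEngineModuleMocks()` could look like, assuming the two module specifiers quoted in the review and Vitest's `vi.resetModules`/`vi.doMock`. This is purely illustrative, not code from the repository.

```typescript
import { vi, type Mock } from "vitest"

// Hypothetical shared helper for the repeated preamble; the mocks are created once
// in the test file and passed in so each test can still override their behavior.
export function resetEngineModuleMocks(executeMock: Mock, getOperationCardMock: Mock): void {
  vi.resetModules()
  vi.doMock("@core/core/execute/execute.js", () => ({ execute: executeMock }))
  vi.doMock("@core/core/registry/index.js", () => ({ getOperationCard: getOperationCardMock }))
}
```

Each test would then call `resetEngineModuleMocks(executeMock, getOperationCardMock)` before importing the module under test.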
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/plans/2026-02-20-atomic-chaining-followup.md`:
- Around line 105-106: The summary paragraph incorrectly states that both
issue.labels.remove and issue.milestone.clear "still need GraphQL mutation
files, codegen"; update the summary to reflect the table: keep
issue.milestone.clear listed as needing mutation files and codegen, but change
issue.labels.remove to indicate only the GraphQL handler registration is missing
(mutation file and codegen already exist), and ensure the wording matches the
detail at line ~100 so the summary and table are consistent.
In `@packages/core/src/gql/operations/issue-labels-lookup-by-number.graphql`:
- Around line 6-12: Add an inline comment above the labels(first: 100) field
documenting that the query intentionally caps results at 100 items and that the
caller (applyInject.map_array) will throw if hasNextPage is true (i.e.,
truncation is a hard failure), and also extend the pageInfo selection to include
endCursor so future pagination can be enabled without changing the schema;
reference the labels(first: 100) selection and pageInfo { hasNextPage } / nodes
{ id name } in your edit.
In `@packages/core/src/gql/resolve.ts`:
- Around line 52-58: Add validation to the operation-card schema so map_array
specs cannot register a nodes_path that doesn't end with ".nodes": in
operation-card-schema.ts, update the nodes_path field definition (used by
map_array) to be a non-empty string with a regex pattern enforcing the suffix
(e.g. pattern "\\.nodes$"); this ensures the guard in resolve.ts that computes
pageInfoPath from spec.nodes_path always finds the ".nodes" suffix and cannot be
silently bypassed by malformed specs.
- Fix misleading summary paragraph in followup doc: issue.labels.remove only needs handler registration (mutation + codegen exist); milestone.clear needs the full stack
- Add JSDoc to ResolutionCache.size noting lazy-eviction overcount
- Add nodes_path pattern constraint (\\.nodes$) in operation-card-schema so malformed map_array specs cannot silently bypass the truncation guard
- Add inline cap comments and endCursor to all 4 lookup graphql files for forward-compatibility with pagination; regenerate codegen

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
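To make the nodes_path constraint from this commit concrete, here is a small sketch of the kind of check a `\.nodes$` pattern enforces (illustrative only; the real constraint is declared in operation-card-schema.ts, not hand-rolled like this):

```typescript
// map_array specs must point at a GraphQL connection's `.nodes` list so the
// truncation guard can derive the sibling pageInfo path by swapping the suffix.
const NODES_PATH_PATTERN = /\.nodes$/

function validateNodesPath(nodesPath: string): string {
  if (nodesPath.length === 0 || !NODES_PATH_PATTERN.test(nodesPath)) {
    throw new Error(`map_array nodes_path must end with ".nodes", got "${nodesPath}"`)
  }
  return nodesPath
}

// "repository.issue.labels.nodes" passes; "repository.issue.labels" throws.
// The sibling pageInfo path would then be "repository.issue.labels.pageInfo".
```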
@coderabbitai review

✅ Actions performed
Review triggered.
@coderabbitai please resolve addressed comments and approve pr if it's ready
✅ All review comments have been addressed:
✅ All CI checks are passing:
✅ Comprehensive improvements delivered:
The PR is ready for approval. I'll resolve all review comments and approve now. [resolve]
Summary
Hardens atomic chaining: partial error handling, resolution cache, expanded chainable coverage, and follow-up fixes from PR review.
Phase 2 partial error handling
- `executeRaw`/`queryRaw` added to GraphQL transport for partial error handling in batch mutations
- Step errors attributed via the `path[0]` alias (sketched below); unattributed errors mark all steps failed
- `GraphqlError.path` typed as `ReadonlyArray<string | number>` per spec
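A minimal sketch of the attribution rule those bullets compress, assuming Phase 2 mutation steps are aliased in the GraphQL document so a returned error's `path[0]` names its step; the types and function names are illustrative, not the engine's actual API.

```typescript
interface GraphqlError {
  message: string
  path?: ReadonlyArray<string | number>
}

// Attribute each error to the step named by path[0]; anything unattributed fails every step.
function attributeStepErrors(
  stepAliases: readonly string[],
  errors: readonly GraphqlError[],
): Map<string, string[]> {
  const byStep = new Map<string, string[]>(stepAliases.map((alias): [string, string[]] => [alias, []]))
  for (const error of errors) {
    const head = error.path?.[0]
    if (typeof head === "string" && byStep.has(head)) {
      byStep.get(head)?.push(error.message)
    } else {
      // Missing path or a numeric head: no safe attribution, so mark all steps failed.
      for (const alias of stepAliases) byStep.get(alias)?.push(error.message)
    }
  }
  return byStep
}
```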
Resolution cache
- `resolution-cache.ts` — TTL-based cache keyed by `operationName:stableStringify(variables)` (sketched below)
- Wired into `chainCommand()` so CLI invocations benefit from cross-call lookup caching
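A rough sketch of the cache shape described above: a TTL map keyed by the operation name plus a stable stringification of the lookup variables. Method names here are assumptions rather than the actual resolution-cache.ts API.

```typescript
class ResolutionCache {
  private readonly entries = new Map<string, { value: unknown; expiresAt: number }>()

  constructor(private readonly ttlMs: number) {}

  // Key = operationName + variables with top-level keys sorted, so logically equal
  // lookups collide regardless of property order.
  key(operationName: string, variables: Record<string, unknown>): string {
    const sorted: Record<string, unknown> = {}
    for (const k of Object.keys(variables).sort()) sorted[k] = variables[k]
    return `${operationName}:${JSON.stringify(sorted)}`
  }

  get(key: string): unknown {
    const entry = this.entries.get(key)
    if (!entry) return undefined
    if (entry.expiresAt <= Date.now()) {
      this.entries.delete(key) // lazy eviction on read
      return undefined
    }
    return entry.value
  }

  set(key: string, value: unknown): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs })
  }
}
```

Chained calls that repeat the same Phase 1 lookup (same operation, same variables) would then hit the cache instead of issuing another GraphQL request.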
Pagination truncation guard
- `pageInfo { hasNextPage }` added to 4 GraphQL lookup files (assignees + labels, by-id and by-number variants)
- `applyInject` `map_array` branch now throws a descriptive error when the 100-item connection cap is exceeded (sketched below)
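To make the guard concrete, here is a sketch of the check the second bullet describes, under the assumption that the inject reads a connection object carrying `pageInfo` and `nodes`; names are illustrative.

```typescript
interface Connection<T> {
  pageInfo: { hasNextPage: boolean }
  nodes: T[]
}

// Refuse to operate on a truncated connection instead of silently dropping items
// beyond the 100-item page that the lookup queries request.
function mapConnectionNodes<T, R>(path: string, connection: Connection<T>, pick: (node: T) => R): R[] {
  if (connection.pageInfo.hasNextPage) {
    throw new Error(
      `Connection at "${path}" has more than 100 items; refusing to act on a truncated list`,
    )
  }
  return connection.nodes.map(pick)
}
```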
Expanded chainable coverage
- `issue.assignees.add`, `issue.assignees.remove` — full GraphQL handlers registered
- `issue.labels.remove` — mutation file + codegen exist; GraphQL handler registration pending
- `issue.milestone.clear` — uses `null_literal` inject type (sketched below); GraphQL mutation registration pending
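As an illustration of why a dedicated `null_literal` inject is needed for milestone clearing (the mutation has to send an explicit `null` rather than omit the field), here is a hedged sketch; the spec shape and helper are hypothetical, and only the inject-type name comes from this PR.

```typescript
type InjectSpec =
  | { kind: "from_lookup"; path: string }
  | { kind: "null_literal" }

function resolveInjectValue(spec: InjectSpec, lookup: Record<string, unknown>): unknown {
  if (spec.kind === "from_lookup") {
    return lookup[spec.path]
  }
  // null_literal: an explicit null must survive into the mutation variables, e.g.
  // updateIssue(input: { ..., milestoneId: null }) clears the milestone.
  return null
}
```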
Other fixes
- `null_literal` inject type for milestone clearing
- `codegen.ts` comment documenting `SdkFunctionWrapper` `variables?: any` as a known plugin limitation
Tests
- `path[0]` fallback test: non-string path element → all steps failed
- `resolve.test.ts`: map_array truncation throws / succeeds

https://claude.ai/code/session_015z5iPcLzyLN2Co2Ck11joN
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests