
fix(discovery): raise maxScanCount default and sort before slice#253

Merged
unifiedh merged 1 commit into Conway-Research:main from uibeka:fix/discovery-scan-limit-and-ordering
Mar 3, 2026

Conversation

@uibeka
Contributor

@uibeka uibeka commented Mar 2, 2026

Problem

discover_agents(limit=50) silently returns only 20 agents despite 22,000+ agents registered on-chain. Additionally, when more agents are scanned than the limit, the returned subset may not contain the most recently registered agents.

Root Cause

1. maxScanCount silently clamps the user's limit (types.ts)

DEFAULT_DISCOVERY_CONFIG.maxScanCount is set to 20. In discoverAgents(), the limit is clamped via Math.min(limit, cfg.maxScanCount) before being passed to the event scanner. This makes the user-specified limit parameter ineffective above 20 — discover_agents(limit=50) passes 20 to the scanner, which breaks early at 20 logs and slices to 20 results.
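A minimal sketch of the clamping behavior described above (the real `DEFAULT_DISCOVERY_CONFIG` lives in `types.ts`; the shape and helper name here are illustrative):

```typescript
// Hypothetical reproduction of the clamp, not the library code.
const DEFAULT_DISCOVERY_CONFIG = { maxScanCount: 20 };

function effectiveLimit(
  limit: number,
  cfg = DEFAULT_DISCOVERY_CONFIG
): number {
  // Math.min silently caps any user-specified limit above maxScanCount,
  // so the caller's limit=50 never reaches the event scanner.
  return Math.min(limit, cfg.maxScanCount);
}

console.log(effectiveLimit(50)); // 20 despite limit=50
```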

2. .reverse().slice() selects wrong agents after paginated collection (erc8004.ts)

PR #228 introduced paginated backward scanning — chunks are collected newest-first, but eth_getLogs returns ascending order within each chunk. .reverse() on the concatenated array doesn't produce true descending order across chunk boundaries. .slice(0, limit) then selects from the wrong end. The existing post-slice .sort() corrects final ordering but can't recover agents discarded by the slice.
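The ordering failure can be seen with two toy chunks (illustrative only; tokenIds stand in for the log entries collected in `erc8004.ts`):

```typescript
// Chunks are collected newest-first, but within each chunk
// eth_getLogs returns entries in ascending order.
const chunkNewest = [8, 9, 10]; // newer block range, ascending
const chunkOlder = [1, 2, 3];   // older block range, ascending
const collected = [...chunkNewest, ...chunkOlder]; // [8, 9, 10, 1, 2, 3]

// A flat reverse yields [3, 2, 1, 10, 9, 8] -- not descending
// across the chunk boundary.
const reversed = [...collected].reverse();

// Slicing then keeps the OLDEST agents and discards 8, 9, 10.
const sliced = reversed.slice(0, 3); // [3, 2, 1]
```

A later `.sort()` can reorder `sliced`, but agents 8, 9, and 10 are already gone.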

Fix

1. Raise maxScanCount from 20 to 100

The scan loop already has robust safeguards: per-chunk timeouts (PER_CHUNK_TIMEOUT_MS), consecutive failure limits (MAX_CONSECUTIVE_FAILURES), and an overall 60-second discovery timeout. These make a low scan count cap redundant. Raising to 100 allows meaningful discovery calls while retaining the safety net.

2. Sort by tokenId descending before .slice(0, limit)

Replace the .reverse().slice().sort() chain with .sort().slice(). Since ERC-8004 tokenIds are monotonically increasing on mint, sorting by tokenId descending before slicing ensures the newest agents are selected and correctly ordered regardless of chunk collection order.
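The corrected selection might look like the following sketch (field and function names are assumptions, not the actual `erc8004.ts` code; it only assumes each entry carries its tokenId as a bigint):

```typescript
interface AgentEntry {
  tokenId: bigint;
}

function newestAgents(entries: AgentEntry[], limit: number): AgentEntry[] {
  // Sort descending by tokenId BEFORE slicing: tokenIds increase
  // monotonically on mint, so the first `limit` entries after the sort
  // are the newest agents regardless of chunk collection order.
  return [...entries]
    .sort((a, b) =>
      b.tokenId > a.tokenId ? 1 : b.tokenId < a.tokenId ? -1 : 0
    )
    .slice(0, limit);
}
```

For example, `newestAgents([{ tokenId: 8n }, { tokenId: 1n }, { tokenId: 10n }], 2)` returns the entries with tokenIds 10 and 8, in that order.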

Impact

  • discover_agents(limit=50) returns up to 50 agents (previously silently capped at 20)
  • Returned agents are the most recently registered (previously the slice could keep older agents because of garbled cross-chunk ordering)
  • No performance regression — scan loop safeguards unchanged
  • Backward compatible — callers passing limit ≤ 20 see identical behavior

Testing

  • pnpm build — zero errors
  • pnpm test — all existing tests pass


Two related fixes to the agent discovery pipeline:

1. Raise DEFAULT_DISCOVERY_CONFIG.maxScanCount from 20 to 100 — the
   previous default silently clamped the user-specified limit parameter,
   making discover_agents(limit=50) return only 20 results. The scan
   loop already has proper safeguards (chunk timeouts, consecutive
   failure limits, overall discovery timeout) making a low cap redundant.

2. Sort agents by tokenId descending before .slice(0, limit) instead of
   after — the paginated backward scan collects chunks newest-first but
   logs within each chunk are ascending. A flat .reverse() on
   concatenated chunks produces garbled order, causing .slice() to
   select agents from the wrong end. Sorting by tokenId (monotonically
   increasing on mint) before slicing ensures the newest agents are
   correctly selected and returned.

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.


@unifiedh unifiedh merged commit 2d29156 into Conway-Research:main Mar 3, 2026
3 checks passed