
perf: optimize explorer + fix: deterministic CI/CD + feat: consensus result column#1544

Merged
danieljrc888 merged 21 commits into main from perf/improve-view-loading-time
Mar 20, 2026

Conversation

Contributor

@danieljrc888 danieljrc888 commented Mar 19, 2026

Summary

This PR contains three bodies of work:

1. Explorer Performance Optimization

  • Backend: Replace correlated subqueries in contracts list query with a separate lightweight count query and batch stats fetch for only the current page. Default page load goes from O(N×contracts) to O(page_size).
  • Frontend: Remove prefetch _rsc requests, remove "State Fields" column from contracts table, stop returning data JSONB blob in contracts list API response.
  • New /api/explorer/stats/counts and /api/explorer/address/{address} endpoints.
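
The query-shape change in the first bullet can be sketched in plain Python (hypothetical names: fetch_stats_batch stands in for the PR's _batch_contract_stats; the real code operates on SQLAlchemy queries, not lists):

```python
def fetch_contract_page(all_contracts, fetch_stats_batch, page, limit):
    """Page first, then fetch stats for only the page's ids in one
    batched query, instead of running a correlated subquery per row."""
    start = (page - 1) * limit
    page_rows = all_contracts[start:start + limit]
    # One round-trip for the whole page: O(page_size) instead of O(contracts).
    stats_map = fetch_stats_batch([c["id"] for c in page_rows])
    return [{**c, "tx_count": stats_map.get(c["id"], 0)} for c in page_rows]
```

The key point is that the stats lookup is keyed by the ids of the current page only, so total query cost no longer grows with the number of contracts in the table.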

2. CI/CD Flakiness Fixes

  • Docker healthchecks: New docker-compose.ci.yml with CI-tuned healthcheck overrides, replacing fragile shell polling loops with docker compose up --wait.
  • Advisory lock NULL bug: pg_try_advisory_xact_lock(hashtext(t.to_address)) returns NULL when to_address is NULL (burn/deploy txs), silently skipping them forever. Fixed with COALESCE(t.to_address, t.hash) in all 3 claim functions.
  • NULL = NULL in NOT EXISTS: t2.to_address = t.to_address evaluates to unknown when both are NULL, bypassing concurrent-processing guards. Fixed with IS NOT DISTINCT FROM.
  • Validators manager resilience: _cached_snapshot was set to None before start_module() — if it failed, snapshot stayed None permanently. Now preserves and restores previous snapshot on failure.
  • Load test fixes: Proper wait_for_tx_finalized polling, bigint storage type in counter contract, retry loops for validator creation.
  • Integration test stabilization: Reduced validator test to smoke test, added rate-limit guard fixture, increased retry defaults.
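
The two SQL NULL pitfalls above come down to three-valued logic, which can be modeled in Python as a minimal sketch (None stands in for SQL NULL; these helpers are illustrative, not the PR's actual code):

```python
def sql_eq(a, b):
    """SQL '=': yields unknown (None) when either operand is NULL, so
    'NULL = NULL' is never TRUE. Inside NOT EXISTS, unknown counts as
    no match, which is what bypassed the concurrent-processing guard."""
    if a is None or b is None:
        return None
    return a == b

def is_not_distinct_from(a, b):
    """NULL-safe comparison: 'NULL IS NOT DISTINCT FROM NULL' is TRUE."""
    if a is None or b is None:
        return a is None and b is None
    return a == b

def advisory_lock_key(to_address, tx_hash):
    """COALESCE(t.to_address, t.hash): burn/deploy txs (NULL to_address)
    get a non-NULL lock key instead of being skipped forever."""
    return to_address if to_address is not None else tx_hash
```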

3. Explorer UI: Consensus Result Column & Tooltips

  • Consensus Result column: Added to TransactionTable, AddressTransactionTable, and transaction detail OverviewTab. Shows the last consensus round outcome (Accepted, Leader Rotation, Majority Disagree, etc.) with color-coded badges.
  • ConsensusResultBadge component: Shared badge with inline and badge variants, color logic extracted into single source of truth (green=Accepted, red=failure, amber=other).
  • ColumnHeaderWithTooltip component: Help icon with tooltip for GenLayer protocol-specific columns (Status, GenVM Result, Consensus Result). Tooltip texts centralized in COLUMN_TOOLTIPS constant.
  • Tooltip style update: Switched from high-contrast bg-primary to bg-popover with border and shadow to match the card/dropdown design system.
  • Removed Value column and Direction header text from address transaction table.

Test plan

  • All db-sqlalchemy tests pass (98 tests)
  • All unit tests pass
  • Integration tests pass (including test_accounts_burn)
  • Load test passes (state integrity test: 5/5 increments preserved)
  • All 5 CI workflows green
  • TypeScript compiles with no errors
  • Verify contracts page loads faster on staging
  • Verify Consensus Result column displays correctly in all table views
  • Verify tooltip popups appear on hover for Status, GenVM Result, Consensus Result columns

🤖 Generated with Claude Code

- Replace correlated subqueries with separate count query and batch stats
  fetch, reducing contracts list from O(N*contracts) to O(page_size)
- Disable Next.js link prefetching app-wide via AppLink wrapper to prevent
  dozens of _rsc requests on pages with many links
- Remove unused state data field from contracts list API response and UI

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai bot commented Mar 19, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Refactored backend explorer queries: removed the transaction contract_snapshot, added optional include_data to state serialization, added batched contract-stats fetching and new pagination helpers, and added address filtering for transactions. Frontend: added AppLink (prefetch=false) and migrated Link imports, removed the contract snapshot UI and TS field, removed the "State Fields" column, and added address-filtering UI.

Changes

Cohort / File(s) Summary
Backend: queries & router
backend/protocol_rpc/explorer/queries.py, backend/protocol_rpc/explorer/router.py
Removed contract_snapshot emission; _serialize_state(include_data: bool) added; rewrote get_all_states() with _batch_contract_stats(), _pagination(), _empty_page() and aggregate paths for sorting by tx_count/created_at; added address filter to get_all_transactions_paginated(...) and forwarded from get_transactions(...).
Frontend: AppLink + Link migrations
explorer/src/components/AppLink.tsx, explorer/src/app/.../D*, explorer/src/app/address/.../AddressContent.tsx, explorer/src/app/address/.../page.tsx, explorer/src/app/transactions/.../OverviewTab.tsx, explorer/src/app/transactions/.../RelatedTab.tsx, explorer/src/app/transactions/.../page.tsx, explorer/src/components/AddressDisplay.tsx, explorer/src/components/Navigation.tsx, explorer/src/components/StatCard.tsx, explorer/src/components/TransactionTable.tsx
Added AppLink wrapper (default prefetch=false) and replaced next/link imports across UI components; usage sites unchanged.
Transactions UI / types
explorer/src/app/transactions/[hash]/components/DataTab.tsx, explorer/src/lib/types.ts
Removed UI block showing tx.contract_snapshot and removed contract_snapshot from Transaction TypeScript interface.
Contracts table UI
explorer/src/app/contracts/page.tsx
Deleted "State Fields" column and per-row rendering; adjusted empty-state colSpan.
Address page UX
explorer/src/app/address/[addr]/AddressContent.tsx
Switched to AppLink; compute txCount = data.tx_count ?? txs.length and use localized display; added header "Latest … from a total of …" and conditional "VIEW ALL TRANSACTIONS →" link; removed conditional State tab rendering in contract view.
Transactions listing filter
explorer/src/app/transactions/page.tsx
Added address query param handling (addressFilter) into fetch, UI subtitle, clear-filters behavior, and callback dependencies.
Method type rendering
explorer/src/components/MethodForm.tsx
Render non-string parameter/return types via JSON.stringify when needed.
Tests & helpers
tests/common/request.py, tests/load/test_state_integrity.py
Added _TERMINAL_FAILURE_STATUSES and TransactionTerminalError (stop polling early on terminal failure) and replaced fixed sleeps with bounded retry loops for contract discovery and initial reads.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant API as Explorer API
    participant DB as Database
    rect rgba(200,200,255,0.5)
    Client->>API: GET /contracts?page,limit,sort_by
    API->>API: validate params, choose path (aggregate vs default)
    alt aggregate path (tx_count / created_at)
        API->>DB: run grouped aggregated subqueries (from/to/overlap)
        DB-->>API: aggregated stats (tx_count, created_at)
        API->>API: paginate aggregates, serialize states (include_data=false)
    else default path
        API->>DB: query CurrentState page ordered by updated_at
        DB-->>API: page of CurrentState rows
        API->>DB: _batch_contract_stats(contract_ids_of_page)
        DB-->>API: tx counts & created_at for page ids
        API->>API: assemble states and serialize (include_data=false)
    end
    API-->>Client: JSON page (states list, pagination metadata)
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • cristiam86
  • dohernandez

Poem

🐇 I hop through code with a twitch and a wink,
Links wear AppLink now — no prefetch to blink,
Snapshots trimmed, states batched in a row,
Counts and pages hum steady and slow,
Rabbit applauds — clean explorer, let's go!

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 34.78%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Title check (❓ Inconclusive): The title only partially captures the work, in particular the consensus result column UI changes and CI/CD fixes that make up substantial parts of this PR. Consider a more complete title like 'perf: optimize explorer contracts + fix: deterministic CI/CD + feat: consensus result column' or splitting into multiple PRs with focused titles.

✅ Passed checks (1 passed)

  • Description check (✅ Passed): The PR description provides a detailed summary of changes across three main areas (performance optimization, CI/CD fixes, and UI features) with a test plan checklist, though it does not follow the provided template sections ('What', 'Why', 'Testing done', and 'Decisions made').

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



The contract_snapshot JSONB blob is never needed by the explorer frontend.
Stop serializing it and defer loading it on all explorer queries.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
backend/protocol_rpc/explorer/queries.py (1)

514-527: Repeated dictionary lookups can be simplified.

The code calls stats_map.get(state.id, (0, None)) three times per iteration. Extract it once to improve readability and avoid redundant lookups.

♻️ Proposed refactor
     return {
         "states": [
             {
-                **_serialize_state(state, tx_count=stats_map.get(state.id, (0, None))[0], include_data=False),
-                "created_at": (
-                    stats_map.get(state.id, (0, None))[1].isoformat()
-                    if stats_map.get(state.id, (0, None))[1]
-                    else None
-                ),
+                **_serialize_state(state, tx_count=(stats := stats_map.get(state.id, (0, None)))[0], include_data=False),
+                "created_at": stats[1].isoformat() if stats[1] else None,
             }
             for state in states
         ],
         "pagination": _pagination(page, limit, total),
     }

Or, for pre-3.8 Python (no walrus operator) or simply for clarity:

def _build_state_row(state, stats_map):
    tx_count, created_at = stats_map.get(state.id, (0, None))
    return {
        **_serialize_state(state, tx_count=tx_count, include_data=False),
        "created_at": created_at.isoformat() if created_at else None,
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/protocol_rpc/explorer/queries.py` around lines 514 - 527, The loop
repeatedly calls stats_map.get(state.id, (0, None)) for each state; extract that
tuple once into local names (e.g. tx_count, created_at) to avoid redundant
lookups and improve readability. Modify the list comprehension in the return
block (or add a small helper like _build_state_row(state, stats_map)) to
retrieve tx_count, created_at = stats_map.get(state.id, (0, None)) once, pass
tx_count into _serialize_state(state, tx_count=tx_count, include_data=False),
and use created_at.isoformat() if created_at else None for "created_at". Ensure
you update the reference to stats_map and _serialize_state accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 423-433: The file has Black formatting violations; run the
formatter and commit the updated file so CI passes. Run `black` on
backend/protocol_rpc/explorer/queries.py (or your repo root) to reformat
occurrences around base_filter, count_q, CurrentState and deploy_addresses,
verify the resulting changes (e.g., spacing/line breaks in the count_q/query
chain), and commit the reformatted file.

---

Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 514-527: The loop repeatedly calls stats_map.get(state.id, (0,
None)) for each state; extract that tuple once into local names (e.g. tx_count,
created_at) to avoid redundant lookups and improve readability. Modify the list
comprehension in the return block (or add a small helper like
_build_state_row(state, stats_map)) to retrieve tx_count, created_at =
stats_map.get(state.id, (0, None)) once, pass tx_count into
_serialize_state(state, tx_count=tx_count, include_data=False), and use
created_at.isoformat() if created_at else None for "created_at". Ensure you
update the reference to stats_map and _serialize_state accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 634f2dee-17b4-45c9-a1ae-c857eca4a7e7

📥 Commits

Reviewing files that changed from the base of the PR and between 385169d and 288bddc.

📒 Files selected for processing (13)
  • backend/protocol_rpc/explorer/queries.py
  • explorer/src/app/DashboardSections.tsx
  • explorer/src/app/address/[addr]/AddressContent.tsx
  • explorer/src/app/address/[addr]/page.tsx
  • explorer/src/app/contracts/page.tsx
  • explorer/src/app/transactions/[hash]/components/OverviewTab.tsx
  • explorer/src/app/transactions/[hash]/components/RelatedTab.tsx
  • explorer/src/app/transactions/[hash]/page.tsx
  • explorer/src/components/AddressDisplay.tsx
  • explorer/src/components/AppLink.tsx
  • explorer/src/components/Navigation.tsx
  • explorer/src/components/StatCard.tsx
  • explorer/src/components/TransactionTable.tsx


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
backend/protocol_rpc/explorer/queries.py (1)

509-519: Simplify repetitive dictionary lookups.

stats_map.get(state.id, (0, None)) is called three times per iteration. Consider extracting to a variable for clarity and minor efficiency.

♻️ Proposed refactor
     return {
         "states": [
             {
-                **_serialize_state(state, tx_count=stats_map.get(state.id, (0, None))[0], include_data=False),
-                "created_at": (
-                    stats_map.get(state.id, (0, None))[1].isoformat()
-                    if stats_map.get(state.id, (0, None))[1]
-                    else None
-                ),
+                **_serialize_state(
+                    state,
+                    tx_count=(stats := stats_map.get(state.id, (0, None)))[0],
+                    include_data=False,
+                ),
+                "created_at": stats[1].isoformat() if stats[1] else None,
             }
             for state in states
         ],
         "pagination": _pagination(page, limit, total),
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/protocol_rpc/explorer/queries.py` around lines 509 - 519, The
comprehension repeatedly calls stats_map.get(state.id, (0, None)); assign that
result to a local variable inside the loop to avoid three identical lookups.
Update the list comprehension that builds "states" to first retrieve stats =
stats_map.get(state.id, (0, None)) and then use stats[0] for tx_count passed to
_serialize_state(state, tx_count=...) and stats[1] (checked for None) for
created_at.isoformat(); keep include_data=False and the surrounding structure
unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Line 247: Several calls to _serialize_tx include an extraneous trailing comma
inside the call (e.g. _serialize_tx(tx, )), which breaks Black formatting;
update each call to remove the dangling comma so they read _serialize_tx(tx).
Search for all occurrences of _serialize_tx( with a trailing comma in this file
(the ones reported around the recent/recently-used query logic and similar list
comprehensions) and change them to the normal call form _serialize_tx(arg) so
pre-commit/Black passes.

---

Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 509-519: The comprehension repeatedly calls
stats_map.get(state.id, (0, None)); assign that result to a local variable
inside the loop to avoid three identical lookups. Update the list comprehension
that builds "states" to first retrieve stats = stats_map.get(state.id, (0,
None)) and then use stats[0] for tx_count passed to _serialize_state(state,
tx_count=...) and stats[1] (checked for None) for created_at.isoformat(); keep
include_data=False and the surrounding structure unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5782a6f2-54fe-4afb-9f3f-66bbb06aa805

📥 Commits

Reviewing files that changed from the base of the PR and between 288bddc and cccfdc6.

📒 Files selected for processing (3)
  • backend/protocol_rpc/explorer/queries.py
  • explorer/src/app/transactions/[hash]/components/DataTab.tsx
  • explorer/src/lib/types.ts
💤 Files with no reviewable changes (2)
  • explorer/src/lib/types.ts
  • explorer/src/app/transactions/[hash]/components/DataTab.tsx

danieljrc888 and others added 5 commits March 18, 2026 23:48
…" link

- Backend: return tx_count in contract detail response, add address
  filter param to transactions list endpoint
- Frontend: show "Latest N from a total of X transactions" on address
  pages when transactions exceed the 50-item limit, with a
  "VIEW ALL TRANSACTIONS" link to /transactions?address=...
- Transactions page: support ?address= query param for filtering

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…scan

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rendering in MethodForm

- Remove State tab from contract detail view
- Backend: exclude state data JSONB from contract detail response
- Fix MethodForm crashing when method params/return types are objects
  (e.g. {$dict: ...}) instead of plain strings
- Center "Latest N from total" text vertically with icon

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@danieljrc888 danieljrc888 self-assigned this Mar 19, 2026
- wait_for_transaction now fails fast on terminal statuses (CANCELED,
  UNDETERMINED, LEADER_TIMEOUT, VALIDATORS_TIMEOUT) instead of polling
  for 300s, and reports last status on timeout
- test_state_integrity replaces fragile sleep(3) with retry loops for
  contract address lookup and initial state read
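
A sketch of the fail-fast polling described above (simplified: the real helper in tests/common/request.py takes different parameters, and the status set mirrors _TERMINAL_FAILURE_STATUSES):

```python
import time

_TERMINAL_FAILURE_STATUSES = {
    "CANCELED", "UNDETERMINED", "LEADER_TIMEOUT", "VALIDATORS_TIMEOUT",
}

class TransactionTerminalError(TimeoutError):
    """Subclasses TimeoutError so existing `except TimeoutError`
    handlers keep working (see the backward-compatibility note below)."""

def wait_for_transaction(get_status, timeout=300.0, interval=1.0, sleep=time.sleep):
    """Poll until FINALIZED; fail fast on terminal statuses instead of
    polling out the full timeout. Reports the last status on timeout."""
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        last = get_status()
        if last == "FINALIZED":
            return last
        if last in _TERMINAL_FAILURE_STATUSES:
            raise TransactionTerminalError(f"terminal status: {last}")
        sleep(interval)
    raise TimeoutError(f"timed out waiting for tx; last status: {last}")
```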

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
20 retries with 5s delay (100s total) instead of 10 retries with 3s (30s).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
backend/protocol_rpc/explorer/queries.py (1)

454-470: Filter these stats subqueries to deployed contract ids.

This branch still groups every from_address/to_address in Transactions, then discards non-contract rows in the outer query. Restricting both subqueries to deploy_addresses will keep sort_by=tx_count/created_at much closer to the performance win this PR is targeting.

♻️ Suggested patch
+        contract_ids_q = select(deploy_addresses.c.to_address)
         to_stats = (
             session.query(
                 Transactions.to_address.label("addr"),
                 func.count().label("cnt"),
                 func.min(Transactions.created_at).label("min_ts"),
             )
+            .filter(Transactions.to_address.in_(contract_ids_q))
             .group_by(Transactions.to_address)
             .subquery()
         )
         from_stats = (
             session.query(
                 Transactions.from_address.label("addr"),
                 func.count().label("cnt"),
                 func.min(Transactions.created_at).label("min_ts"),
             )
+            .filter(Transactions.from_address.in_(contract_ids_q))
             .group_by(Transactions.from_address)
             .subquery()
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/protocol_rpc/explorer/queries.py` around lines 454 - 470, The
to_stats and from_stats subqueries currently aggregate over all Transactions;
restrict them to deployed contract addresses by adding a filter that limits
Transactions.to_address and Transactions.from_address to deploy_addresses (e.g.,
.filter(Transactions.to_address.in_(deploy_addresses)) and
.filter(Transactions.from_address.in_(deploy_addresses))) or by joining against
the deployments table/Deployments.contract_id if that is the canonical source;
update the to_stats and from_stats query definitions accordingly so only
transactions involving deployed contracts are grouped.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 473-475: The tx_count_col currently sums to_stats.c.cnt and
from_stats.c.cnt which double-counts rows where to_address == from_address;
change the aggregation to subtract the overlap by computing an overlap count
(e.g., both_stats = select count(*) where to_address = from_address grouped by
contract_id) and set tx_count_col = coalesce(to_stats.c.cnt,0) +
coalesce(from_stats.c.cnt,0) - coalesce(both_stats.c.cnt,0); apply the same fix
to the analogous aggregation around lines 564-574 so totals match
get_state_with_transactions() and get_address_info() which use an OR-based
de-duplicated count.
- Around line 494-496: Pagination currently orders only by a single derived
column (sort_col) so ties cause unstable page boundaries; update the order_by
calls (the q.order_by(order_dir(sort_col)) lines used around the
sort_col/tx_count_col/created_at_col logic and the equivalent branch at lines
~514-515) to append a secondary deterministic tiebreaker on CurrentState.id
(e.g. pass both order_dir(sort_col) and a stable order for CurrentState.id to
q.order_by) so rows with equal primary keys are consistently ordered before
applying q.offset and q.limit.

In `@tests/common/request.py`:
- Around line 176-178: Terminal failure handling was changed to raise
RuntimeError which breaks existing except handlers; restore backward
compatibility by introducing a custom exception class (e.g.,
TerminalFailureError) that inherits from TimeoutError and use it when terminal
states in _TERMINAL_FAILURE_STATUSES are detected, or alternatively update the
callers (scripts/upgrade_contract.py and
tests/load/script_archive_unused/deploy_with_genlayer_sdk.py) to catch
RuntimeError as well; prefer adding TerminalFailureError (subclass of
TimeoutError) and raise that so existing except TimeoutError handlers continue
to work, and update any raise sites to use TerminalFailureError.

In `@tests/load/test_state_integrity.py`:
- Around line 135-146: The retry loop unnecessarily sleeps before the first call
to get_contract_address; change the loop in which contract_address is resolved
(using api_url and deploy_hash) so that time.sleep(5) is executed only after a
failed get_contract_address attempt (i.e., call get_contract_address first
inside the for attempt in range(20) loop, catch RuntimeError, print the waiting
message, then sleep), preserving the break when contract_address is found and
the existing retry count/timeout behavior.

---

Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 454-470: The to_stats and from_stats subqueries currently
aggregate over all Transactions; restrict them to deployed contract addresses by
adding a filter that limits Transactions.to_address and
Transactions.from_address to deploy_addresses (e.g.,
.filter(Transactions.to_address.in_(deploy_addresses)) and
.filter(Transactions.from_address.in_(deploy_addresses))) or by joining against
the deployments table/Deployments.contract_id if that is the canonical source;
update the to_stats and from_stats query definitions accordingly so only
transactions involving deployed contracts are grouped.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5b453a3d-e2fb-42e9-ab5a-bed4df7abc4f

📥 Commits

Reviewing files that changed from the base of the PR and between cccfdc6 and 6a7f963.

📒 Files selected for processing (7)
  • backend/protocol_rpc/explorer/queries.py
  • backend/protocol_rpc/explorer/router.py
  • explorer/src/app/address/[addr]/AddressContent.tsx
  • explorer/src/app/transactions/page.tsx
  • explorer/src/components/MethodForm.tsx
  • tests/common/request.py
  • tests/load/test_state_integrity.py
✅ Files skipped from review due to trivial changes (1)
  • explorer/src/components/MethodForm.tsx

danieljrc888 and others added 2 commits March 19, 2026 10:59
Debugging CI where time.sleep appears to not execute between retries.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix double-counting self-addressed txs in contract stats (union instead
  of union_all, overlap subtraction in JOIN branch)
- Add deterministic tiebreaker (CurrentState.id) to pagination ORDER BY
- Use TransactionTerminalError(TimeoutError) for backward compatibility
  with callers that catch TimeoutError
- Move sleep after failed attempt in load test retry loop
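
The double-counting fix is plain inclusion-exclusion, sketched here over an in-memory list (hypothetical dict-shaped txs; the real fix operates on SQL aggregates):

```python
def contract_tx_count(txs, contract):
    """Count txs touching `contract`, counting a self-addressed tx
    (from == to == contract) once rather than twice."""
    to_cnt = sum(1 for t in txs if t["to"] == contract)
    from_cnt = sum(1 for t in txs if t["from"] == contract)
    # Overlap subtraction: txs where the contract is both sender and receiver.
    overlap = sum(1 for t in txs if t["to"] == contract and t["from"] == contract)
    return to_cnt + from_cnt - overlap
```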

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
tests/load/test_state_integrity.py (1)

135-146: ⚠️ Potential issue | 🟡 Minor

Move retry sleep to after a failed lookup.

At Line 138, time.sleep(5) runs before the first get_contract_address attempt, adding an unconditional delay even when indexing is already ready.

Proposed fix
     # Wait for indexing with retry instead of a fixed sleep.
     contract_address = None
     for attempt in range(20):
-        time.sleep(5)
         try:
             contract_address = get_contract_address(api_url, deploy_hash)
             break
         except RuntimeError:
             print(f"  Waiting for contract indexing (attempt {attempt + 1}/20)...")
+            if attempt < 19:
+                time.sleep(5)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/load/test_state_integrity.py` around lines 135 - 146, The loop
unconditionally sleeps before the first get_contract_address call causing an
unnecessary 5s delay; change the retry logic in the block that sets
contract_address so you call get_contract_address(api_url, deploy_hash)
immediately on each iteration and only sleep after a failed attempt (i.e., move
time.sleep(5) into the except RuntimeError branch or after a failure check),
keeping the same max attempts (for attempt in range(20)) and preserving the
break when contract_address is found.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/load/test_state_integrity.py`:
- Around line 135-146: The loop unconditionally sleeps before the first
get_contract_address call causing an unnecessary 5s delay; change the retry
logic in the block that sets contract_address so you call
get_contract_address(api_url, deploy_hash) immediately on each iteration and
only sleep after a failed attempt (i.e., move time.sleep(5) into the except
RuntimeError branch or after a failure check), keeping the same max attempts
(for attempt in range(20)) and preserving the break when contract_address is
found.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: affd4433-4bb9-471f-b547-4f7ab39d3f02

📥 Commits

Reviewing files that changed from the base of the PR and between 6a7f963 and 0d82c2e.

📒 Files selected for processing (1)
  • tests/load/test_state_integrity.py


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/load/test_state_integrity.py (1)

137-146: Skip sleep after the last failed indexing attempt.

Line 143 still sleeps on the final retry, adding an unnecessary delay before returning failure.

Proposed tweak
     for attempt in range(20):
         try:
             contract_address = get_contract_address(api_url, deploy_hash)
             break
         except RuntimeError:
             print(f"  Waiting for contract indexing (attempt {attempt + 1}/20)...")
-            time.sleep(5)
+            if attempt < 19:
+                time.sleep(5)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/load/test_state_integrity.py` around lines 137 - 146, The retry loop
around get_contract_address sleeps even after the final attempt which causes an
unnecessary delay; modify the for-loop in the test_state_integrity retry block
(where contract_address is set and get_contract_address is called) so that
time.sleep(5) is only invoked if the current attempt is not the last (e.g.,
check attempt < 19 or move the sleep into an else that runs only when attempt <
max_attempts-1), and ensure contract_address is still checked after the loop to
return 1 on failure.
backend/protocol_rpc/explorer/queries.py (1)

629-632: Consider deferring the data column to avoid loading unused data.

Since the state is serialized with include_data=False, the data column is loaded from the database but never used. Adding a defer option would prevent loading this potentially large JSONB blob.

♻️ Suggested optimization
 def get_state_with_transactions(session: Session, state_id: str) -> Optional[dict]:
-    state = session.query(CurrentState).filter(CurrentState.id == state_id).first()
+    state = (
+        session.query(CurrentState)
+        .options(defer(CurrentState.data))
+        .filter(CurrentState.id == state_id)
+        .first()
+    )
     if not state:
         return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/protocol_rpc/explorer/queries.py` around lines 629 - 632, The query
in get_state_with_transactions loads the CurrentState.data JSONB column but
never uses it (serialization is done with include_data=False); update the query
to defer that column to avoid pulling the large blob. Specifically, modify the
session.query(CurrentState) call in get_state_with_transactions to apply a
SQLAlchemy loader option such as options(defer(CurrentState.data)) or
options(load_only(...)) that excludes the data attribute before calling
filter(CurrentState.id == state_id).first(), so the data column isn't loaded
unnecessarily.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/load/test_state_integrity.py`:
- Around line 159-169: The retry loop currently uses a broad "except Exception
as exc" which swallows unexpected errors from read_contract(); update it to only
catch transient/network/RPC errors (e.g., ConnectionError, TimeoutError,
requests.exceptions.RequestException, or the genlayer RPC error class if
available) and re-raise any other exceptions so non-transient issues surface;
locate the retry around read_contract() in tests/load/test_state_integrity.py,
replace the generic except with an explicit tuple of transient exception types
(or a helper like TransientRPCError), and add an else branch to re-raise
unexpected exceptions.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e8a021a0-5e5d-4ab8-a607-25a7f1042ee7

📥 Commits

Reviewing files that changed from the base of the PR and between 0d82c2e and 50db1c7.

📒 Files selected for processing (3)
  • backend/protocol_rpc/explorer/queries.py
  • tests/common/request.py
  • tests/load/test_state_integrity.py

danieljrc888 and others added 8 commits March 19, 2026 11:33
Add CI-tuned healthcheck overrides to docker-compose.ci.yml so
`docker compose up --wait` finishes in ~30-60s. Replace shell
curl/sleep loops with --wait in both workflows, and add per-validator
retry logic with python3 JSON parsing instead of brittle grep regex.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The jsonrpc /ready endpoint needs more time for GenVM startup: bump start_period
to 30s and retries to 36 (total ~210s). The load test timeout-minutes
was 3min but includes the full Docker build, so increase it to 10min.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The CI overlay mounts /tmp/genvm-cache into containers, but the load
test workflow never created this directory. GenVM precompile fails with
permission denied when writing to the mount.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- wait_for_transaction: increase default retries from 30 to 140 to
  match the --default-wait-retries value passed to gltest in CI. The
  old default caused test_accounts_burn to fail when workers were
  temporarily saturated (3 workers, max_parallel_txs=1, 8 parallel
  test processes).

- test_state_integrity: increase contract indexing and state retries
  from 20 to 40. Contract registration happens asynchronously after
  the deploy tx reaches ACCEPTED, so under CI load the 100s window
  was insufficient.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1. ValidatorsManager: preserve previous snapshot on LLM module restart
   failure. Previously, _cached_snapshot was set to None before calling
   genvm_manager.start_module(). If that call failed, the snapshot
   stayed None permanently, causing the worker to silently fail on
   every transaction claim (RuntimeError caught and released). Now the
   previous snapshot is restored on failure and the error is re-raised.

2. State integrity test: replace wait_for_transaction_receipt with
   proper FINALIZED status polling. The eth_getTransactionReceipt
   endpoint returns status=0x1 for any transaction state (including
   PENDING) because the status field is always truthy. This meant
   wait_for_transaction_receipt returned immediately without waiting.
   Now the test polls eth_getTransactionByHash for FINALIZED status,
   ensuring contract state is committed before querying.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
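The restore-on-failure idea in point 1 can be sketched like this (class and method names are hypothetical, not the project's actual code):

```python
class ValidatorsManager:
    """Sketch: keep the last good snapshot if the LLM module restart fails."""

    def __init__(self, genvm_manager):
        self.genvm_manager = genvm_manager
        self._cached_snapshot = None

    def restart_llm_module(self, config):
        previous = self._cached_snapshot
        self._cached_snapshot = None  # invalidate while the module restarts
        try:
            self._cached_snapshot = self.genvm_manager.start_module(config)
        except Exception:
            # Restore the last good snapshot so workers keep claiming
            # transactions, and re-raise instead of silently failing forever.
            self._cached_snapshot = previous
            raise
```

The key difference from the buggy version is that the `None` assignment is no longer the final word: failure puts the old snapshot back and surfaces the error.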
pg_try_advisory_xact_lock(hashtext(NULL)) returns NULL, which a WHERE
clause treats as not-true. This meant any transaction with to_address=NULL
(burn transactions, native transfers) could never be claimed by any
consensus worker; they stayed in PENDING forever.

Fix: use COALESCE(t.to_address, t.hash) so each NULL-address tx gets
its own advisory lock keyed on its hash. Applied to all three claim
queries: claim_next_transaction, claim_next_appeal, and
claim_next_finalization.

Also fix counter.py test contract to use bigint instead of int per
GenVM storage requirements.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
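The NULL-propagation behavior behind this bug is standard SQL and can be shown with the stdlib sqlite3 module (the values below are illustrative, not the production query):

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# A function applied to NULL yields NULL, which WHERE treats as not-true,
# so rows with a NULL to_address never pass the advisory-lock predicate.
assert cur.execute("SELECT length(NULL)").fetchone()[0] is None

# COALESCE substitutes a non-NULL fallback, giving every row a lock key.
assert cur.execute("SELECT COALESCE(NULL, 'txhash')").fetchone()[0] == "txhash"
assert cur.execute("SELECT COALESCE('0xabc', 'txhash')").fetchone()[0] == "0xabc"
```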
- Add missing COALESCE in claim_next_finalization advisory lock for NULL to_address
- Use IS NOT DISTINCT FROM in all NOT EXISTS guards to handle NULL = NULL correctly
- Remove duplicate COMPOSE_RPC_MEM_RESERVATION sed lines in CI workflow

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
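SQLite spells Postgres's null-safe comparison (IS NOT DISTINCT FROM) as the IS operator, which makes the NULL = NULL pitfall easy to demonstrate with stdlib sqlite3:

```python
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# Plain equality is three-valued: NULL = NULL is unknown, so a NOT EXISTS
# guard built on it silently stops matching rows with NULL addresses.
assert cur.execute("SELECT NULL = NULL").fetchone()[0] is None

# Null-safe comparison (IS here; IS NOT DISTINCT FROM in Postgres)
# returns true or false for every pair, including two NULLs.
assert cur.execute("SELECT NULL IS NULL").fetchone()[0] == 1
assert cur.execute("SELECT 'a' IS NULL").fetchone()[0] == 0
```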
@danieljrc888 danieljrc888 changed the title perf: optimize explorer contracts page load time perf: optimize explorer + fix: deterministic CI/CD service readiness Mar 19, 2026
- Add Consensus Result column to TransactionTable, AddressTransactionTable, and transaction detail OverviewTab
- Extract ConsensusResultBadge component for DRY color logic across all views
- Add ColumnHeaderWithTooltip with shared COLUMN_TOOLTIPS for Status, GenVM Result, and Consensus Result
- Update tooltip style to match popover/card design system
- Remove Value column and Direction header text from address transaction table

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@danieljrc888 danieljrc888 changed the title perf: optimize explorer + fix: deterministic CI/CD service readiness perf: optimize explorer + fix: deterministic CI/CD + feat: consensus result column Mar 20, 2026
The branch ruleset requires a check named "Load Tests", but the job
reported itself as "load-test", leaving the required check stuck as pending.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@danieljrc888 danieljrc888 merged commit dd0fdb6 into main Mar 20, 2026
18 checks passed
@github-actions
Contributor

🎉 This PR is included in version 0.111.5 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

