perf: optimize explorer + fix: deterministic CI/CD + feat: consensus result column #1544
danieljrc888 merged 21 commits into `main`
Conversation
- Replace correlated subqueries with separate count query and batch stats fetch, reducing contracts list from O(N*contracts) to O(page_size) - Disable Next.js link prefetching app-wide via AppLink wrapper to prevent dozens of _rsc requests on pages with many links - Remove unused state data field from contracts list API response and UI Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
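The batch-stats pattern this commit describes can be sketched against a toy schema (table and column names here are illustrative, not the project's actual models): one grouped query for the whole page replaces a correlated subquery per row.

```python
import sqlite3

# Toy stand-in for the explorer's transactions table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (to_address TEXT, created_at TEXT);
    INSERT INTO transactions VALUES
        ('0xA', '2024-01-01'), ('0xA', '2024-01-03'), ('0xB', '2024-01-02');
""")

def batch_contract_stats(page_ids):
    """One grouped query for the whole page instead of a correlated
    subquery per row: O(page_size) work rather than O(N * contracts)."""
    placeholders = ",".join("?" * len(page_ids))
    rows = conn.execute(
        f"SELECT to_address, COUNT(*), MIN(created_at) FROM transactions"
        f" WHERE to_address IN ({placeholders}) GROUP BY to_address",
        page_ids,
    )
    return {addr: (count, first_seen) for addr, count, first_seen in rows}

stats_map = batch_contract_stats(["0xA", "0xB", "0xC"])
# Contracts with no transactions fall back to a default tuple.
tx_count, created_at = stats_map.get("0xC", (0, None))
```

The page of contract ids is fetched first, then stats are looked up in one pass, so per-row database round-trips disappear.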
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough
Refactored backend explorer queries; removed transaction …

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client as Client
participant API as Explorer API
participant DB as Database
rect rgba(200,200,255,0.5)
Client->>API: GET /contracts?page,limit,sort_by
API->>API: validate params, choose path (aggregate vs default)
alt aggregate path (tx_count / created_at)
API->>DB: run grouped aggregated subqueries (from/to/overlap)
DB-->>API: aggregated stats (tx_count, created_at)
API->>API: paginate aggregates, serialize states (include_data=false)
else default path
API->>DB: query CurrentState page ordered by updated_at
DB-->>API: page of CurrentState rows
API->>DB: _batch_contract_stats(contract_ids_of_page)
DB-->>API: tx counts & created_at for page ids
API->>API: assemble states and serialize (include_data=false)
end
API-->>Client: JSON page (states list, pagination metadata)
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
The contract_snapshot JSONB blob is never needed by the explorer frontend. Stop serializing it and defer loading it on all explorer queries. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/protocol_rpc/explorer/queries.py (1)
514-527: Repeated dictionary lookups can be simplified. The code calls `stats_map.get(state.id, (0, None))` three times per iteration. Extract it once to improve readability and avoid redundant lookups.

♻️ Proposed refactor
      return {
          "states": [
              {
    -             **_serialize_state(state, tx_count=stats_map.get(state.id, (0, None))[0], include_data=False),
    -             "created_at": (
    -                 stats_map.get(state.id, (0, None))[1].isoformat()
    -                 if stats_map.get(state.id, (0, None))[1]
    -                 else None
    -             ),
    +             **_serialize_state(state, tx_count=(stats := stats_map.get(state.id, (0, None)))[0], include_data=False),
    +             "created_at": stats[1].isoformat() if stats[1] else None,
              }
              for state in states
          ],
          "pagination": _pagination(page, limit, total),
      }

Or, for Python < 3.8 or for clarity:
    def _build_state_row(state, stats_map):
        tx_count, created_at = stats_map.get(state.id, (0, None))
        return {
            **_serialize_state(state, tx_count=tx_count, include_data=False),
            "created_at": created_at.isoformat() if created_at else None,
        }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/protocol_rpc/explorer/queries.py` around lines 514 - 527, The loop repeatedly calls stats_map.get(state.id, (0, None)) for each state; extract that tuple once into local names (e.g. tx_count, created_at) to avoid redundant lookups and improve readability. Modify the list comprehension in the return block (or add a small helper like _build_state_row(state, stats_map)) to retrieve tx_count, created_at = stats_map.get(state.id, (0, None)) once, pass tx_count into _serialize_state(state, tx_count=tx_count, include_data=False), and use created_at.isoformat() if created_at else None for "created_at". Ensure you update the reference to stats_map and _serialize_state accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 423-433: The file has Black formatting violations; run the
formatter and commit the updated file so CI passes. Run `black` on
backend/protocol_rpc/explorer/queries.py (or your repo root) to reformat
occurrences around base_filter, count_q, CurrentState and deploy_addresses,
verify the resulting changes (e.g., spacing/line breaks in the count_q/query
chain), and commit the reformatted file.
---
Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 514-527: The loop repeatedly calls stats_map.get(state.id, (0,
None)) for each state; extract that tuple once into local names (e.g. tx_count,
created_at) to avoid redundant lookups and improve readability. Modify the list
comprehension in the return block (or add a small helper like
_build_state_row(state, stats_map)) to retrieve tx_count, created_at =
stats_map.get(state.id, (0, None)) once, pass tx_count into
_serialize_state(state, tx_count=tx_count, include_data=False), and use
created_at.isoformat() if created_at else None for "created_at". Ensure you
update the reference to stats_map and _serialize_state accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 634f2dee-17b4-45c9-a1ae-c857eca4a7e7
📒 Files selected for processing (13)
- backend/protocol_rpc/explorer/queries.py
- explorer/src/app/DashboardSections.tsx
- explorer/src/app/address/[addr]/AddressContent.tsx
- explorer/src/app/address/[addr]/page.tsx
- explorer/src/app/contracts/page.tsx
- explorer/src/app/transactions/[hash]/components/OverviewTab.tsx
- explorer/src/app/transactions/[hash]/components/RelatedTab.tsx
- explorer/src/app/transactions/[hash]/page.tsx
- explorer/src/components/AddressDisplay.tsx
- explorer/src/components/AppLink.tsx
- explorer/src/components/Navigation.tsx
- explorer/src/components/StatCard.tsx
- explorer/src/components/TransactionTable.tsx
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/protocol_rpc/explorer/queries.py (1)
509-519: Simplify repetitive dictionary lookups. `stats_map.get(state.id, (0, None))` is called three times per iteration. Consider extracting it to a variable for clarity and minor efficiency.

♻️ Proposed refactor
      return {
          "states": [
              {
    -             **_serialize_state(state, tx_count=stats_map.get(state.id, (0, None))[0], include_data=False),
    -             "created_at": (
    -                 stats_map.get(state.id, (0, None))[1].isoformat()
    -                 if stats_map.get(state.id, (0, None))[1]
    -                 else None
    -             ),
    +             **_serialize_state(
    +                 state,
    +                 tx_count=(stats := stats_map.get(state.id, (0, None)))[0],
    +                 include_data=False,
    +             ),
    +             "created_at": stats[1].isoformat() if stats[1] else None,
              }
              for state in states
          ],
          "pagination": _pagination(page, limit, total),
      }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/protocol_rpc/explorer/queries.py` around lines 509 - 519, The comprehension repeatedly calls stats_map.get(state.id, (0, None)); assign that result to a local variable inside the loop to avoid three identical lookups. Update the list comprehension that builds "states" to first retrieve stats = stats_map.get(state.id, (0, None)) and then use stats[0] for tx_count passed to _serialize_state(state, tx_count=...) and stats[1] (checked for None) for created_at.isoformat(); keep include_data=False and the surrounding structure unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Line 247: Several calls to _serialize_tx include an extraneous trailing comma
inside the call (e.g. _serialize_tx(tx, )), which breaks Black formatting;
update each call to remove the dangling comma so they read _serialize_tx(tx).
Search for all occurrences of _serialize_tx( with a trailing comma in this file
(the ones reported around the recent/recently-used query logic and similar list
comprehensions) and change them to the normal call form _serialize_tx(arg) so
pre-commit/Black passes.
---
Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 509-519: The comprehension repeatedly calls
stats_map.get(state.id, (0, None)); assign that result to a local variable
inside the loop to avoid three identical lookups. Update the list comprehension
that builds "states" to first retrieve stats = stats_map.get(state.id, (0,
None)) and then use stats[0] for tx_count passed to _serialize_state(state,
tx_count=...) and stats[1] (checked for None) for created_at.isoformat(); keep
include_data=False and the surrounding structure unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5782a6f2-54fe-4afb-9f3f-66bbb06aa805
📒 Files selected for processing (3)
- backend/protocol_rpc/explorer/queries.py
- explorer/src/app/transactions/[hash]/components/DataTab.tsx
- explorer/src/lib/types.ts
💤 Files with no reviewable changes (2)
- explorer/src/lib/types.ts
- explorer/src/app/transactions/[hash]/components/DataTab.tsx
…" link - Backend: return tx_count in contract detail response, add address filter param to transactions list endpoint - Frontend: show "Latest N from a total of X transactions" on address pages when transactions exceed the 50-item limit, with a "VIEW ALL TRANSACTIONS" link to /transactions?address=... - Transactions page: support ?address= query param for filtering Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…scan Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rendering in MethodForm
- Remove State tab from contract detail view
- Backend: exclude state data JSONB from contract detail response
- Fix MethodForm crashing when method params/return types are objects
(e.g. {$dict: ...}) instead of plain strings
- Center "Latest N from total" text vertically with icon
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- wait_for_transaction now fails fast on terminal statuses (CANCELED, UNDETERMINED, LEADER_TIMEOUT, VALIDATORS_TIMEOUT) instead of polling for 300s, and reports last status on timeout - test_state_integrity replaces fragile sleep(3) with retry loops for contract address lookup and initial state read Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
20 retries with 5s delay (100s total) instead of 10 retries with 3s (30s). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
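The fail-fast polling described two commits up can be sketched as follows. The terminal status names come from that commit message; the helper's signature and the injectable `sleep` parameter are assumptions made for testability, not the test suite's actual API.

```python
import time

# Statuses the commit names as terminal failures.
TERMINAL_FAILURE_STATUSES = {
    "CANCELED", "UNDETERMINED", "LEADER_TIMEOUT", "VALIDATORS_TIMEOUT",
}

def wait_for_transaction(get_status, retries=20, delay=5, sleep=time.sleep):
    """Poll until FINALIZED, but fail fast on terminal statuses instead of
    burning the whole retry budget; report the last status on timeout."""
    last_status = None
    for _ in range(retries):
        last_status = get_status()
        if last_status == "FINALIZED":
            return last_status
        if last_status in TERMINAL_FAILURE_STATUSES:
            raise RuntimeError(f"transaction reached terminal status {last_status}")
        sleep(delay)
    raise TimeoutError(f"timed out waiting; last status was {last_status}")
```

A transaction that reaches CANCELED on the second poll raises immediately instead of polling for the full 300 s window.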
Actionable comments posted: 4
🧹 Nitpick comments (1)
backend/protocol_rpc/explorer/queries.py (1)
454-470: Filter these stats subqueries to deployed contract ids. This branch still groups every `from_address`/`to_address` in `Transactions`, then discards non-contract rows in the outer query. Restricting both subqueries to `deploy_addresses` will keep `sort_by=tx_count`/`created_at` much closer to the performance win this PR is targeting.

♻️ Suggested patch
    + contract_ids_q = select(deploy_addresses.c.to_address)
      to_stats = (
          session.query(
              Transactions.to_address.label("addr"),
              func.count().label("cnt"),
              func.min(Transactions.created_at).label("min_ts"),
          )
    +     .filter(Transactions.to_address.in_(contract_ids_q))
          .group_by(Transactions.to_address)
          .subquery()
      )
      from_stats = (
          session.query(
              Transactions.from_address.label("addr"),
              func.count().label("cnt"),
              func.min(Transactions.created_at).label("min_ts"),
          )
    +     .filter(Transactions.from_address.in_(contract_ids_q))
          .group_by(Transactions.from_address)
          .subquery()
      )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/protocol_rpc/explorer/queries.py` around lines 454 - 470, The to_stats and from_stats subqueries currently aggregate over all Transactions; restrict them to deployed contract addresses by adding a filter that limits Transactions.to_address and Transactions.from_address to deploy_addresses (e.g., .filter(Transactions.to_address.in_(deploy_addresses)) and .filter(Transactions.from_address.in_(deploy_addresses))) or by joining against the deployments table/Deployments.contract_id if that is the canonical source; update the to_stats and from_stats query definitions accordingly so only transactions involving deployed contracts are grouped.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 473-475: The tx_count_col currently sums to_stats.c.cnt and
from_stats.c.cnt which double-counts rows where to_address == from_address;
change the aggregation to subtract the overlap by computing an overlap count
(e.g., both_stats = select count(*) where to_address = from_address grouped by
contract_id) and set tx_count_col = coalesce(to_stats.c.cnt,0) +
coalesce(from_stats.c.cnt,0) - coalesce(both_stats.c.cnt,0); apply the same fix
to the analogous aggregation around lines 564-574 so totals match
get_state_with_transactions() and get_address_info() which use an OR-based
de-duplicated count.
- Around line 494-496: Pagination currently orders only by a single derived
column (sort_col) so ties cause unstable page boundaries; update the order_by
calls (the q.order_by(order_dir(sort_col)) lines used around the
sort_col/tx_count_col/created_at_col logic and the equivalent branch at lines
~514-515) to append a secondary deterministic tiebreaker on CurrentState.id
(e.g. pass both order_dir(sort_col) and a stable order for CurrentState.id to
q.order_by) so rows with equal primary keys are consistently ordered before
applying q.offset and q.limit.
In `@tests/common/request.py`:
- Around line 176-178: Terminal failure handling was changed to raise
RuntimeError which breaks existing except handlers; restore backward
compatibility by introducing a custom exception class (e.g.,
TerminalFailureError) that inherits from TimeoutError and use it when terminal
states in _TERMINAL_FAILURE_STATUSES are detected, or alternatively update the
callers (scripts/upgrade_contract.py and
tests/load/script_archive_unused/deploy_with_genlayer_sdk.py) to catch
RuntimeError as well; prefer adding TerminalFailureError (subclass of
TimeoutError) and raise that so existing except TimeoutError handlers continue
to work, and update any raise sites to use TerminalFailureError.
In `@tests/load/test_state_integrity.py`:
- Around line 135-146: The retry loop unnecessarily sleeps before the first call
to get_contract_address; change the loop in which contract_address is resolved
(using api_url and deploy_hash) so that time.sleep(5) is executed only after a
failed get_contract_address attempt (i.e., call get_contract_address first
inside the for attempt in range(20) loop, catch RuntimeError, print the waiting
message, then sleep), preserving the break when contract_address is found and
the existing retry count/timeout behavior.
---
Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 454-470: The to_stats and from_stats subqueries currently
aggregate over all Transactions; restrict them to deployed contract addresses by
adding a filter that limits Transactions.to_address and
Transactions.from_address to deploy_addresses (e.g.,
.filter(Transactions.to_address.in_(deploy_addresses)) and
.filter(Transactions.from_address.in_(deploy_addresses))) or by joining against
the deployments table/Deployments.contract_id if that is the canonical source;
update the to_stats and from_stats query definitions accordingly so only
transactions involving deployed contracts are grouped.
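The overlap subtraction recommended in the inline comments above is plain inclusion-exclusion. A toy sqlite reproduction (schema illustrative, not the project's) shows why summing per-direction counts double-counts self-addressed transactions, and why subtracting the overlap matches an OR-based count:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE transactions (from_address TEXT, to_address TEXT);
    INSERT INTO transactions VALUES
        ('0xA', '0xB'),  -- outgoing for 0xA
        ('0xA', '0xA'),  -- self-addressed: appears in both directions
        ('0xC', '0xA');  -- incoming for 0xA
""")

def count_one(sql, params):
    (n,) = conn.execute(sql, params).fetchone()
    return n

addr = "0xA"
to_cnt = count_one("SELECT COUNT(*) FROM transactions WHERE to_address = ?", (addr,))
from_cnt = count_one("SELECT COUNT(*) FROM transactions WHERE from_address = ?", (addr,))
overlap = count_one(
    "SELECT COUNT(*) FROM transactions WHERE from_address = ? AND to_address = ?",
    (addr, addr),
)
# De-duplicated total via inclusion-exclusion, matching an OR-based count.
dedup_total = to_cnt + from_cnt - overlap
or_total = count_one(
    "SELECT COUNT(*) FROM transactions WHERE from_address = ? OR to_address = ?",
    (addr, addr),
)
```

The naive sum counts the self-addressed row once per direction; subtracting the overlap restores agreement with `get_address_info()`-style OR counts.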
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5b453a3d-e2fb-42e9-ab5a-bed4df7abc4f
📒 Files selected for processing (7)
- backend/protocol_rpc/explorer/queries.py
- backend/protocol_rpc/explorer/router.py
- explorer/src/app/address/[addr]/AddressContent.tsx
- explorer/src/app/transactions/page.tsx
- explorer/src/components/MethodForm.tsx
- tests/common/request.py
- tests/load/test_state_integrity.py
✅ Files skipped from review due to trivial changes (1)
- explorer/src/components/MethodForm.tsx
Debugging CI where time.sleep appears to not execute between retries. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix double-counting self-addressed txs in contract stats (union instead of union_all, overlap subtraction in JOIN branch) - Add deterministic tiebreaker (CurrentState.id) to pagination ORDER BY - Use TransactionTerminalError(TimeoutError) for backward compatibility with callers that catch TimeoutError - Move sleep after failed attempt in load test retry loop Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
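A minimal sketch of the `TransactionTerminalError(TimeoutError)` compatibility trick named in this commit; the surrounding functions are hypothetical stand-ins, only the exception class mirrors the commit text.

```python
class TransactionTerminalError(TimeoutError):
    """Raised when a transaction hits a terminal failure status.

    Subclassing TimeoutError keeps callers that do `except TimeoutError`
    working unchanged."""

def wait_stub():
    # Hypothetical stand-in for wait_for_transaction hitting CANCELED.
    raise TransactionTerminalError("transaction reached status CANCELED")

def legacy_caller():
    try:
        wait_stub()
    except TimeoutError as exc:  # pre-existing handler still catches it
        return f"handled: {exc}"

result = legacy_caller()
```

Because of the subclassing, pre-existing scripts that only catch `TimeoutError` keep working, while new code can catch the more specific class.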
♻️ Duplicate comments (1)
tests/load/test_state_integrity.py (1)
135-146: ⚠️ Potential issue | 🟡 Minor: Move retry sleep to after a failed lookup. At line 138, `time.sleep(5)` runs before the first `get_contract_address` attempt, adding an unconditional delay even when indexing is already ready.

Proposed fix
      # Wait for indexing with retry instead of a fixed sleep.
      contract_address = None
      for attempt in range(20):
    -     time.sleep(5)
          try:
              contract_address = get_contract_address(api_url, deploy_hash)
              break
          except RuntimeError:
              print(f"  Waiting for contract indexing (attempt {attempt + 1}/20)...")
    +         if attempt < 19:
    +             time.sleep(5)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/load/test_state_integrity.py` around lines 135 - 146, The loop unconditionally sleeps before the first get_contract_address call causing an unnecessary 5s delay; change the retry logic in the block that sets contract_address so you call get_contract_address(api_url, deploy_hash) immediately on each iteration and only sleep after a failed attempt (i.e., move time.sleep(5) into the except RuntimeError branch or after a failure check), keeping the same max attempts (for attempt in range(20)) and preserving the break when contract_address is found.
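A runnable version of the suggested shape, with an injectable `sleep` so the behavior can be observed; `resolve_with_retry` and `flaky_lookup` are illustrative names, not the test's actual code.

```python
import time

def resolve_with_retry(lookup, attempts=20, delay=5, sleep=time.sleep):
    """Call lookup() immediately; sleep only after a failed attempt,
    and skip the sleep after the final attempt."""
    for attempt in range(attempts):
        try:
            return lookup()
        except RuntimeError:
            if attempt < attempts - 1:
                sleep(delay)
    return None

# Example: a lookup that fails twice before the index catches up.
calls = {"n": 0}
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("not indexed yet")
    return "0xCONTRACT"

naps = []
address = resolve_with_retry(flaky_lookup, attempts=20, delay=5, sleep=naps.append)
```

When the service is already ready, the first call returns with zero sleeps; the unconditional up-front delay of the original loop disappears.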
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/load/test_state_integrity.py`:
- Around line 135-146: The loop unconditionally sleeps before the first
get_contract_address call causing an unnecessary 5s delay; change the retry
logic in the block that sets contract_address so you call
get_contract_address(api_url, deploy_hash) immediately on each iteration and
only sleep after a failed attempt (i.e., move time.sleep(5) into the except
RuntimeError branch or after a failure check), keeping the same max attempts
(for attempt in range(20)) and preserving the break when contract_address is
found.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: affd4433-4bb9-471f-b547-4f7ab39d3f02
📒 Files selected for processing (1)
tests/load/test_state_integrity.py
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/load/test_state_integrity.py (1)
137-146: Skip sleep after the last failed indexing attempt. Line 143 still sleeps on the final retry, adding an unnecessary delay before returning failure.
Proposed tweak
      for attempt in range(20):
          try:
              contract_address = get_contract_address(api_url, deploy_hash)
              break
          except RuntimeError:
              print(f"  Waiting for contract indexing (attempt {attempt + 1}/20)...")
    -         time.sleep(5)
    +         if attempt < 19:
    +             time.sleep(5)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/load/test_state_integrity.py` around lines 137 - 146, The retry loop around get_contract_address sleeps even after the final attempt which causes an unnecessary delay; modify the for-loop in the test_state_integrity retry block (where contract_address is set and get_contract_address is called) so that time.sleep(5) is only invoked if the current attempt is not the last (e.g., check attempt < 19 or move the sleep into an else that runs only when attempt < max_attempts-1), and ensure contract_address is still checked after the loop to return 1 on failure.

backend/protocol_rpc/explorer/queries.py (1)
629-632: Consider deferring the `data` column to avoid loading unused data. Since the state is serialized with `include_data=False`, the `data` column is loaded from the database but never used. Adding a defer option would prevent loading this potentially large JSONB blob.

♻️ Suggested optimization
      def get_state_with_transactions(session: Session, state_id: str) -> Optional[dict]:
    -     state = session.query(CurrentState).filter(CurrentState.id == state_id).first()
    +     state = (
    +         session.query(CurrentState)
    +         .options(defer(CurrentState.data))
    +         .filter(CurrentState.id == state_id)
    +         .first()
    +     )
          if not state:
              return None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/protocol_rpc/explorer/queries.py` around lines 629 - 632, The query in get_state_with_transactions loads the CurrentState.data JSONB column but never uses it (serialization is done with include_data=False); update the query to defer that column to avoid pulling the large blob. Specifically, modify the session.query(CurrentState) call in get_state_with_transactions to apply a SQLAlchemy loader option such as options(defer(CurrentState.data)) or options(load_only(...)) that excludes the data attribute before calling filter(CurrentState.id == state_id).first(), so the data column isn't loaded unnecessarily.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/load/test_state_integrity.py`:
- Around line 159-169: The retry loop currently uses a broad "except Exception
as exc" which swallows unexpected errors from read_contract(); update it to only
catch transient/network/RPC errors (e.g., ConnectionError, TimeoutError,
requests.exceptions.RequestException, or the genlayer RPC error class if
available) and re-raise any other exceptions so non-transient issues surface;
locate the retry around read_contract() in tests/load/test_state_integrity.py,
replace the generic except with an explicit tuple of transient exception types
(or a helper like TransientRPCError), and add an else branch to re-raise
unexpected exceptions.
---
Nitpick comments:
In `@backend/protocol_rpc/explorer/queries.py`:
- Around line 629-632: The query in get_state_with_transactions loads the
CurrentState.data JSONB column but never uses it (serialization is done with
include_data=False); update the query to defer that column to avoid pulling the
large blob. Specifically, modify the session.query(CurrentState) call in
get_state_with_transactions to apply a SQLAlchemy loader option such as
options(defer(CurrentState.data)) or options(load_only(...)) that excludes the
data attribute before calling filter(CurrentState.id == state_id).first(), so
the data column isn't loaded unnecessarily.
In `@tests/load/test_state_integrity.py`:
- Around line 137-146: The retry loop around get_contract_address sleeps even
after the final attempt which causes an unnecessary delay; modify the for-loop
in the test_state_integrity retry block (where contract_address is set and
get_contract_address is called) so that time.sleep(5) is only invoked if the
current attempt is not the last (e.g., check attempt < 19 or move the sleep into
an else that runs only when attempt < max_attempts-1), and ensure
contract_address is still checked after the loop to return 1 on failure.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e8a021a0-5e5d-4ab8-a607-25a7f1042ee7
📒 Files selected for processing (3)
- backend/protocol_rpc/explorer/queries.py
- tests/common/request.py
- tests/load/test_state_integrity.py
Add CI-tuned healthcheck overrides to docker-compose.ci.yml so `docker compose up --wait` finishes in ~30-60s. Replace shell curl/sleep loops with --wait in both workflows, and add per-validator retry logic with python3 JSON parsing instead of brittle grep regex. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
jsonrpc /ready needs more time for GenVM startup: bump start_period to 30s and retries to 36 (total ~210s). Load test timeout-minutes was 3min but includes full Docker build — increase to 10min. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The CI overlay mounts /tmp/genvm-cache into containers, but the load test workflow never created this directory. GenVM precompile fails with permission denied when writing to the mount. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- wait_for_transaction: increase default retries from 30 to 140 to match the --default-wait-retries value passed to gltest in CI. The old default caused test_accounts_burn to fail when workers were temporarily saturated (3 workers, max_parallel_txs=1, 8 parallel test processes). - test_state_integrity: increase contract indexing and state retries from 20 to 40. Contract registration happens asynchronously after the deploy tx reaches ACCEPTED, so under CI load the 100s window was insufficient. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1. ValidatorsManager: preserve previous snapshot on LLM module restart failure. Previously, _cached_snapshot was set to None before calling genvm_manager.start_module(). If that call failed, the snapshot stayed None permanently, causing the worker to silently fail on every transaction claim (RuntimeError caught and released). Now the previous snapshot is restored on failure and the error is re-raised. 2. State integrity test: replace wait_for_transaction_receipt with proper FINALIZED status polling. The eth_getTransactionReceipt endpoint returns status=0x1 for any transaction state (including PENDING) because the status field is always truthy. This meant wait_for_transaction_receipt returned immediately without waiting. Now the test polls eth_getTransactionByHash for FINALIZED status, ensuring contract state is committed before querying. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
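The snapshot-preservation fix in point 1 can be sketched as follows; the class is a simplified stand-in that keeps only the attributes named in the commit (`_cached_snapshot`, `genvm_manager.start_module()`), with everything else invented for the example.

```python
class ValidatorsManager:
    """Simplified sketch of the snapshot-preserving module restart."""

    def __init__(self, genvm_manager):
        self.genvm_manager = genvm_manager
        self._cached_snapshot = {"validators": ["v1", "v2"]}

    def restart_llm_module(self):
        previous = self._cached_snapshot
        self._cached_snapshot = None  # invalidate while restarting
        try:
            self.genvm_manager.start_module()
        except Exception:
            # Restore the old snapshot so the worker keeps claiming
            # transactions, and surface the error instead of leaving
            # the snapshot permanently None.
            self._cached_snapshot = previous
            raise

class FailingGenVM:
    def start_module(self):
        raise RuntimeError("LLM module failed to start")

manager = ValidatorsManager(FailingGenVM())
error_surfaced = False
try:
    manager.restart_llm_module()
except RuntimeError:
    error_surfaced = True
```

Without the restore in the `except` branch, a single failed restart would leave the worker silently failing every subsequent transaction claim.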
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
pg_try_advisory_xact_lock(hashtext(NULL)) returns NULL, which is falsy in a WHERE clause. This meant any transaction with to_address=NULL (burn transactions, native transfers) could never be claimed by any consensus worker — they stayed in PENDING forever. Fix: use COALESCE(t.to_address, t.hash) so each NULL-address tx gets its own advisory lock keyed on its hash. Applied to all three claim queries: claim_next_transaction, claim_next_appeal, and claim_next_finalization. Also fix counter.py test contract to use bigint instead of int per GenVM storage requirements. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
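The NULL behavior described here is standard SQL three-valued logic, reproducible in sqlite. There are no advisory locks here; `length()` stands in for PostgreSQL's `hashtext()` as a NULL-propagating function, and the table is a toy schema, not the project's.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txs (hash TEXT, to_address TEXT)")
conn.execute("INSERT INTO txs VALUES ('0xBURN', NULL), ('0xCALL', '0xC')")

# A NULL-propagating function of to_address yields NULL, and NULL is
# falsy in WHERE, so rows with to_address = NULL silently drop out of
# the claim query.
claimable = conn.execute(
    "SELECT hash FROM txs WHERE length(to_address) >= 0 ORDER BY hash"
).fetchall()

# COALESCE(to_address, hash) gives every row a non-NULL lock key, so
# burn/transfer transactions become claimable again.
fixed = conn.execute(
    "SELECT hash FROM txs WHERE length(COALESCE(to_address, hash)) >= 0 ORDER BY hash"
).fetchall()
```

The burn transaction vanishes from the first result set exactly as described: the predicate evaluates to unknown rather than true.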
- Add missing COALESCE in claim_next_finalization advisory lock for NULL to_address - Use IS NOT DISTINCT FROM in all NOT EXISTS guards to handle NULL = NULL correctly - Remove duplicate COMPOSE_RPC_MEM_RESERVATION sed lines in CI workflow Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
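The `IS NOT DISTINCT FROM` fix rests on how plain `=` treats NULL: `NULL = NULL` is unknown, so a `NOT EXISTS` guard comparing two NULL `to_address` values never matches. SQLite spells the null-safe comparison `IS`, but the semantics are the same three-valued logic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Plain equality: NULL = NULL evaluates to NULL (unknown), not true.
eq_null, = conn.execute("SELECT NULL = NULL").fetchone()

# Null-safe comparison treats two NULLs as equal: SQLite spells it IS,
# PostgreSQL spells it IS NOT DISTINCT FROM.
nulls_equal, = conn.execute("SELECT NULL IS NULL").fetchone()
values_differ, = conn.execute("SELECT 'a' IS 'b'").fetchone()
```

This is why the concurrent-processing guards were bypassed for NULL-address transactions until the comparison was made null-safe.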
- Add Consensus Result column to TransactionTable, AddressTransactionTable, and transaction detail OverviewTab - Extract ConsensusResultBadge component for DRY color logic across all views - Add ColumnHeaderWithTooltip with shared COLUMN_TOOLTIPS for Status, GenVM Result, and Consensus Result - Update tooltip style to match popover/card design system - Remove Value column and Direction header text from address transaction table Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The branch ruleset requires a check named "Load Tests" but the job reported as "load-test", causing the check to be stuck as pending. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🎉 This PR is included in version 0.111.5 🎉 The release is available on GitHub. Your semantic-release bot 📦🚀
Summary
This PR contains three bodies of work:
1. Explorer Performance Optimization

- Disable Next.js link prefetching to prevent dozens of `_rsc` requests, remove the "State Fields" column from the contracts table, and stop returning the `data` JSONB blob in the contracts list API response.
- Optimize the `/api/explorer/stats/counts` and `/api/explorer/address/{address}` endpoints.

2. CI/CD Flakiness Fixes
- Replace fragile shell polling loops with `docker compose up --wait`, backed by a `docker-compose.ci.yml` with CI-tuned healthcheck overrides.
- `pg_try_advisory_xact_lock(hashtext(t.to_address))` returns NULL when `to_address` is NULL (burn/deploy txs), silently skipping them forever. Fixed with `COALESCE(t.to_address, t.hash)` in all 3 claim functions.
- `t2.to_address = t.to_address` evaluates to unknown when both are NULL, bypassing concurrent-processing guards. Fixed with `IS NOT DISTINCT FROM`.
- `_cached_snapshot` was set to None before `start_module()`; if that call failed, the snapshot stayed None permanently. Now the previous snapshot is preserved and restored on failure.
- Test hardening: `wait_for_tx_finalized` polling, `bigint` storage type in the counter contract, retry loops for validator creation.

3. Explorer UI: Consensus Result Column & Tooltips
- Add a Consensus Result column to `TransactionTable`, `AddressTransactionTable`, and the transaction detail `OverviewTab`. Shows the last consensus round outcome (Accepted, Leader Rotation, Majority Disagree, etc.) with color-coded badges.
- `ConsensusResultBadge` component: shared badge with `inline` and `badge` variants, with color logic extracted into a single source of truth (green = Accepted, red = failure, amber = other).
- `ColumnHeaderWithTooltip` component: help icon with tooltip for GenLayer protocol-specific columns (Status, GenVM Result, Consensus Result). Tooltip texts centralized in a `COLUMN_TOOLTIPS` constant.
- Tooltip style changed from `bg-primary` to `bg-popover` with border and shadow to match the card/dropdown design system.

Test plan
- … (`test_accounts_burn`)

🤖 Generated with Claude Code