
feat(devnet): optimize Docker builds, enable MPT mode, bump go-ethereum #925

Open
panos-xyz wants to merge 2 commits into main from feat/anvil-l1-devnet

Conversation


panos-xyz (Contributor) commented Apr 2, 2026

Summary

  • Upgrade all Ubuntu base images from 20.04 to 22.04 across Dockerfiles
  • Upgrade Rust base image to 1.88-slim for gas-oracle (preparing for reth integration)
  • Expand .dockerignore to reduce Docker build context by ~1.6GB
  • Enable MPT mode in L2 genesis (UseZktrie=false, JadeForkTime=0)
  • Update go-ethereum submodule to latest main (b3c55522a)
  • Fix missing MORPH_NODE_ROLLUP_ADDRESS env var in node-0~3 and sentry-node-0

Test plan

  • make devnet-up succeeds with all 20 containers running
  • L1 and L2 blocks are being produced
  • Verify L2 genesis uses MPT mode (not zkTrie)
  • Verify Docker images build successfully with updated base images

Summary by CodeRabbit

  • Chores

    • Updated Docker build environments to Ubuntu 22.04 across multiple builders
    • Upgraded Rust builder base to 1.88-slim and added pkg-config/libssl-dev
    • Extended Docker ignore to exclude additional paths from build context
    • Made Makefile devnet cleanup more thorough and adjusted target prerequisites
  • Configuration

    • Added rollup address environment variable for node containers with a default fallback
    • Updated genesis settings: disabled Zktrie, set JadeForkTime, and initialized base fee default

panos-xyz requested a review from a team as a code owner April 2, 2026 02:53
panos-xyz requested review from twcctop and removed request for a team April 2, 2026 02:53

coderabbitai bot commented Apr 2, 2026

📝 Walkthrough


Updated multiple Docker base images and expanded .dockerignore entries; added an environment variable to the 4-node docker-compose services; adjusted Makefile cleanup behavior; and changed genesis configuration (disabled UseZktrie, initialized JadeForkTime, and set a default baseFee).

Changes

Cohort / File(s) Summary
Docker Base Image Updates
ops/docker/Dockerfile.l1-beacon, ops/docker/go-rust-builder.Dockerfile, ops/docker/intermediate/go-rust-builder.Dockerfile, ops/docker/intermediate/go-ubuntu-builder.Dockerfile
Upgraded Ubuntu base images from 20.04 to 22.04.
Rust Builder Update
ops/docker/Dockerfile.oracle
Switched Rust base from rust:1.70 to rust:1.88-slim; added pkg-config and libssl-dev installation and apt cleanup.
Docker Build Context
.dockerignore
Extended ignore list to exclude .git/, contracts/node_modules/, ops/reth-cross-test/, ops/publicnode/, and docs/.
Genesis Chain Configuration
ops/l2-genesis/morph-chain-ops/genesis/genesis.go
Set Morph.UseZktrie to false, initialized JadeForkTime (new(uint64)), and defaulted baseFee to big.NewInt(1_000_000) when unset.
Docker Compose Env
ops/docker/docker-compose-4nodes.yml
Added MORPH_NODE_ROLLUP_ADDRESS env var to morph-node and sentry-node-0 services with ${MORPH_ROLLUP} fallback.
Makefile Cleanup
Makefile
devnet-clean-build target no longer depends on devnet-down; performs compose down with volumes/remove-orphans and adjusts Docker volume removal pattern.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • tomatoishealthy
  • FletcherMan

Poem

🐰 I hopped through Dockerfiles, nibbling 20.04 away,
Ubuntu rose to twenty-two, brightening my day,
Genesis seeds rearranged — a tiny, clever trick,
Rollup flags unfurled, the compose now does the click,
I twitch my nose and bless this tidy, bouncy pick.

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 76.79%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title accurately summarizes the main changes: Docker build optimizations, MPT mode enablement, and go-ethereum version bump across multiple Dockerfiles and configuration files.


panos-xyz force-pushed the feat/anvil-l1-devnet branch from 8114ea3 to 3328cd1 on April 2, 2026 02:56
…v var

- Upgrade Rust base image to 1.88-slim (Dockerfile.oracle)
- Unify all Ubuntu base images from 20.04 to 22.04
- Expand .dockerignore to reduce build context by ~1.6GB
- Enable MPT mode in L2 genesis (UseZktrie=false, JadeForkTime=0)
- Add missing MORPH_NODE_ROLLUP_ADDRESS to node-0~3 and sentry-node-0
panos-xyz force-pushed the feat/anvil-l1-devnet branch from 3328cd1 to 63c950f on April 2, 2026 02:59
Copy link
Copy Markdown
Contributor

coderabbitai bot left a comment


Actionable comments posted: 18

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🟡 Minor comments (4)
ops/reth-cross-test/phases/validator_phase.py-40-46 (1)

40-46: ⚠️ Potential issue | 🟡 Minor

Guard against checking an empty block range when validator height is 0.

If validator_reth key is missing or height is 0, min(reached, 0) evaluates to 0, causing check_blocks(rpcs, 1, 0, ...) which iterates range(1, 1) — an empty range. This silently skips all verification. Consider adding an explicit check.

🛡️ Proposed fix
             # Full-range block comparison on validators only
+            val_reth_height = val_result.get("validator_sync", {}).get("validator_reth", {}).get("height", 0)
+            end_block = min(reached, val_reth_height) if val_reth_height > 0 else reached
             val_block_check = block_checker.check_blocks(
                 config.VALIDATOR_RPCS,
                 1,
-                min(reached, val_result.get("validator_sync", {}).get("validator_reth", {}).get("height", 0)),
+                end_block,
                 self.reth_nodes,
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/phases/validator_phase.py` around lines 40 - 46, The call
to block_checker.check_blocks using min(reached,
val_result.get("validator_sync", {}).get("validator_reth", {}).get("height", 0))
can produce an end height of 0 and lead to an empty range; update the
validator_phase logic to guard against that by extracting the validator height
into a variable (from val_result["validator_sync"]["validator_reth"]["height"]
with safe default), check if that height is >= 1 and that reached >= 1 before
calling block_checker.check_blocks, and only invoke
block_checker.check_blocks(config.VALIDATOR_RPCS, 1, end_height,
self.reth_nodes) when end_height >= 1 (otherwise skip or log a warning). Ensure
references: val_result, validator_sync, validator_reth, reached,
block_checker.check_blocks, config.VALIDATOR_RPCS, and self.reth_nodes are used
to locate and update the code.
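The hazard flagged above is easy to reproduce in isolation. A minimal sketch, with `check_blocks_range` as a hypothetical stand-in for the iteration inside `block_checker.check_blocks`:

```python
# When the validator height is absent, the .get(...) chain defaults to 0,
# so min(reached, 0) == 0 and the 1..0 block loop never executes.
def check_blocks_range(start, end):
    # hypothetical stand-in for the per-block verification loop
    return list(range(start, end + 1))

reached = 120
val_height = {}.get("validator_reth", {}).get("height", 0)  # key missing -> 0

assert check_blocks_range(1, min(reached, val_height)) == []  # silently skips all

# Guarded variant: only trust the reported height when it is positive.
end_block = min(reached, val_height) if val_height > 0 else reached
assert len(check_blocks_range(1, end_block)) == reached
```

The guard makes the degenerate case explicit instead of letting an empty `range` pass as a successful check.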
docs/superpowers/specs/2026-03-30-reth-cross-validation-test-design.md-369-383 (1)

369-383: ⚠️ Potential issue | 🟡 Minor

Refresh the sample reports for the new default MPT mode.

These examples still document chain_mode: "zktrie", jade_hardfork_active: false, and state_root_mode: "zktrie_divergence_expected". This PR flips the devnet genesis to MPT mode, so the documented Tier 2 expectations and sequencer-tag behavior are now backwards for the environment readers will actually run.

Also applies to: 395-449

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/specs/2026-03-30-reth-cross-validation-test-design.md`
around lines 369 - 383, The sample reports still use chain_mode: "zktrie",
jade_hardfork_active: false, and state_root_mode: "zktrie_divergence_expected"
which are now outdated; update the examples (including the blocks around lines
referenced) to reflect the new default MPT/devnet genesis settings by changing
chain_mode to the MPT value used by the codebase, set jade_hardfork_active to
true if the new devnet enables it, and replace state_root_mode and any Tier 2 /
sequencer-tag expectations to the new MPT-mode semantics (adjust the documented
Tier 2 expectations and sequencer-tag behavior accordingly), ensuring all sample
report JSON blocks and narrative text (the referenced section and the later
395-449 region) consistently match the new defaults.
ops/reth-cross-test/verification/stress_runner.py-30-48 (1)

30-48: ⚠️ Potential issue | 🟡 Minor

Wait for funding to land before starting the stage.

fund_accounts() broadcasts 50 transfers and then sleeps for 10 seconds without checking any receipt. On a slower run, the first workers can start with zero balance and pad the stress stats with insufficient funds noise instead of real execution failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/stress_runner.py` around lines 30 - 48, The
fund_accounts function currently broadcasts 50 transfers then sleeps 10s without
verifying inclusion; change fund_accounts to collect each sent tx hash (from
signed.raw_transaction) and call w3.eth.wait_for_transaction_receipt(tx_hash,
timeout=...) (or poll recipient balances) to ensure each funding tx is mined
before returning. Use the existing funded = Account.from_key(...) / nonce flow
and self._accounts[:50] to locate where to add the tracking; wait for receipts
(or confirmations) for either each tx or for the last tx in the batch before
allowing the stage to start so workers won't begin with zero balances.
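A receipt-polling helper along the lines the comment suggests could look like this sketch; `get_receipt` is a hypothetical callable standing in for `w3.eth.get_transaction_receipt`, returning None while a transaction is still pending:

```python
import time

def wait_for_receipts(get_receipt, tx_hashes, timeout=60.0, poll=0.5):
    """Block until every funding tx has a receipt, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    pending = list(tx_hashes)
    while pending:
        # drop hashes that now have a receipt
        pending = [h for h in pending if get_receipt(h) is None]
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"{len(pending)} funding txs still pending")
        time.sleep(poll)

# Usage sketch: receipts are available immediately in this stub.
mined = {"0xaa": {"status": 1}}
wait_for_receipts(mined.get, ["0xaa"], timeout=1.0, poll=0.01)
```

This replaces the fixed 10-second sleep with a bounded wait that fails loudly when funding never lands.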
docs/superpowers/specs/2026-03-30-reth-cross-validation-test-design.md-196-204 (1)

196-204: ⚠️ Potential issue | 🟡 Minor

Add languages to the unlabeled fenced blocks.

The blocks at Lines 196-204 and Lines 269-300 currently trip markdownlint MD040, so the docs lint job will stay red.

Also applies to: 269-300

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/specs/2026-03-30-reth-cross-validation-test-design.md`
around lines 196 - 204, The unlabeled fenced code blocks containing the scenario
steps (starting with "Test script → L1 Portal (depositTransaction)" and the
other block at the later section) are triggering markdownlint MD040; label both
fenced blocks with a language (e.g., change ``` to ```text) so the markdown
linter recognizes them as code blocks. Update the two blocks (the one beginning
"Test script → L1 Portal (depositTransaction)" and the similar block around
lines 269-300) to use ```text (or another appropriate language) at the opening
fence and keep the closing fence as ```.
🧹 Nitpick comments (14)
ops/docker/Dockerfile.oracle (1)

11-14: Consider adding a non-root user for the runtime stage.

Static analysis flagged that the container runs as root. While acceptable for devnet, consider adding a non-root user for better security posture if this image will be used beyond local development.

🛡️ Optional: Add non-root user
 FROM ubuntu:22.04 as app
+RUN useradd -r -u 1000 appuser
 COPY --from=builder /gas-oracle/app/target/release/app /
+USER appuser
 
 CMD ["./app"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/docker/Dockerfile.oracle` around lines 11 - 14, The runtime Docker stage
("app") currently runs the binary as root; create a non-root user and switch to
it in that stage: add a system user (e.g., "appuser") and group, chown the
copied binary to that user (referencing the COPY from=builder
/gas-oracle/app/target/release/app and the CMD ["./app"]), and set USER appuser
before CMD so the container executes ./app as the non-root account; keep the
binary executable and ensure any required runtime dirs/files are owned by that
user.
docs/superpowers/plans/2026-04-01-anvil-l1-devnet.md (1)

401-414: Add language specifiers to fenced code blocks.

Lines 401 and 407 have fenced code blocks without language identifiers.

📝 Add language specifiers
-```
+```dockerignore
 prover/

and the same change on the multiline block:

-```
+```dockerignore
 prover/
 .git/
 contracts/node_modules/
 ops/reth-cross-test/
 ops/publicnode/
 docs/

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-04-01-anvil-l1-devnet.md` around lines 401 - 414,
The markdown contains fenced code blocks without language specifiers for the
.dockerignore content (the single-line block with "prover/" and the multiline
block with "prover/ .git/ contracts/node_modules/ ops/reth-cross-test/
ops/publicnode/ docs/"); update both fenced code blocks to include a language
identifier such as "dockerignore" (e.g., replace ``` with ```dockerignore) so
syntax-aware renderers treat them correctly and keep the exact block contents
unchanged.
ops/reth-cross-test/phases/validator_phase.py (1)

24-24: Use next(iter(...)) instead of list(...)[0].

Same as in stress_phase.py, prefer the more efficient iterator pattern.

♻️ Proposed fix
-            first_rpc = list(config.SEQUENCER_RPCS.values())[0]
+            first_rpc = next(iter(config.SEQUENCER_RPCS.values()))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/phases/validator_phase.py` at line 24, Replace the
inefficient list conversion used to get the first RPC in validator_phase.py:
instead of constructing a list and indexing it (first_rpc =
list(config.SEQUENCER_RPCS.values())[0]), use the iterator pattern to grab the
first value via next(iter(...)). Update the reference where first_rpc is
assigned so it reads first_rpc = next(iter(config.SEQUENCER_RPCS.values())) to
avoid building an intermediate list while keeping the same semantics.
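The suggested refactor is behavior-preserving for any non-empty dict, since both forms yield the first value in insertion order; a quick check (the RPC URLs are illustrative):

```python
# next(iter(...)) reads only the first value; list(...)[0] builds the
# whole list first. Same result, less work.
SEQUENCER_RPCS = {
    "sequencer-0": "http://localhost:8545",
    "sequencer-1": "http://localhost:8546",
}

assert next(iter(SEQUENCER_RPCS.values())) == list(SEQUENCER_RPCS.values())[0]
assert next(iter(SEQUENCER_RPCS.values())) == "http://localhost:8545"
```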
ops/reth-cross-test/README.md (2)

22-24: Add language specifier to fenced code block.

The CLI usage code block should have a language specifier for proper syntax highlighting.

📝 Proposed fix
-```
+```bash
 python run.py [--phase PHASE] [--skip-build] [--fresh-l1]

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @ops/reth-cross-test/README.md around lines 22 - 24, The fenced code block
containing the CLI usage string "python run.py [--phase PHASE] [--skip-build]
[--fresh-l1]" needs a language specifier for proper highlighting; edit the
README's fenced block around that line (the triple-backtick block containing the
command) and change its opening fence from ``` to ```bash so the block starts with ```bash and ends with ``` to enable bash/shell syntax highlighting.


62-68: Add language specifier to fenced code block.

The directory structure should use a language identifier (e.g., `text` or `plaintext`) for consistency.


📝 Proposed fix
-```
+```text
 reports/
   20260330_120000_baseline (4G).json
   20260330_120500_1R+3G.json
   ...
   20260330_130000_summary.json
 ```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @ops/reth-cross-test/README.md around lines 62 - 68, The fenced code block in
ops/reth-cross-test/README.md that shows the reports/ directory lacks a language
specifier; update the triple-backtick fence around the directory listing (the
block containing "reports/" and the example filenames like
"20260330_120000_baseline (4G).json") to include a language identifier such as
text or plaintext (e.g., change ``` to ```text) so the block is consistently
rendered and highlighted.



ops/reth-cross-test/verification/proposer_checker.py (2)

11-17: Remove unused NODE_TO_GETH mapping.

This mapping is defined but never referenced in the module. If it's intended for future use, add a TODO comment; otherwise, remove it.


♻️ Proposed fix
-# Mapping: morphnode name -> its geth service name
-NODE_TO_GETH = {
-    "node-0": "morph-geth-0",
-    "node-1": "morph-geth-1",
-    "node-2": "morph-geth-2",
-    "node-3": "morph-geth-3",
-}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/proposer_checker.py` around lines 11 - 17,
The NODE_TO_GETH constant in proposer_checker.py is defined but never used;
remove the unused mapping to avoid dead code (delete the NODE_TO_GETH dict) or,
if you intend to keep it for future use, add a clear TODO comment above
NODE_TO_GETH indicating planned usage and why it’s retained (e.g., "# TODO: keep
mapping for future cross-node geth lookup"). Ensure references to NODE_TO_GETH
are not required elsewhere before removing.

39-40: Consider catching a more specific exception.

Catching bare `Exception` can mask unexpected errors. Since this uses `requests`, catching `requests.RequestException` would be more precise.


♻️ Proposed fix
-    except Exception as e:
+    except requests.RequestException as e:
         return {"pass": False, "error": str(e)}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/proposer_checker.py` around lines 39 - 40,
Replace the broad "except Exception as e" in proposer_checker.py with a more
specific requests.RequestException to avoid masking unrelated errors: change the
except to "except requests.RequestException as e" (and ensure requests is
imported) so only HTTP/network errors are caught and still return {"pass":
False, "error": str(e)}; if you need to preserve behavior for other exceptions,
add a separate generic except that re-raises unexpected errors instead of
swallowing them.

docs/superpowers/plans/2026-03-30-reth-cross-validation-test.md (2)

298-301: Document that these are standard Hardhat devnet-only test keys.

These private keys are from Hardhat's well-known test accounts and should never be used in production. Adding a comment prevents future confusion and silences static analysis warnings.


📝 Suggested comment
 # Funded accounts (from devnet genesis — these have ETH pre-allocated)
-# Using standard hardhat accounts
+# Using standard Hardhat test accounts — DEVNET ONLY, never use in production
 FUNDED_PRIVATE_KEY = "0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"
 L1_FUNDED_PRIVATE_KEY = "0xd99870855d97327d20c666abc78588f1449b1fac76ed0c86c1afb9ce2db85f32"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-03-30-reth-cross-validation-test.md` around lines
298 - 301, Add a clarifying comment above the two constants to state that
FUNDED_PRIVATE_KEY and L1_FUNDED_PRIVATE_KEY are well-known Hardhat devnet test
keys and must never be used in production; update the doc block where these
constants are declared to explicitly label them as standard Hardhat devnet-only
test keys and note they exist only for testing to silence static-analysis
warnings and prevent misuse.

147: Clarify that this is a well-known test private key.

The static analysis flagged this as a potential secret, but this is Hardhat's standard test account `#0` private key, which is publicly documented and used only for local devnets. Adding a comment clarifies this for future readers.


📝 Suggested comment
-      - MORPH_NODE_VALIDATOR_PRIVATE_KEY=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
+      # Hardhat test account `#0` - well-known devnet-only key
+      - MORPH_NODE_VALIDATOR_PRIVATE_KEY=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/superpowers/plans/2026-03-30-reth-cross-validation-test.md` at line 147,
Add a comment next to the MORPH_NODE_VALIDATOR_PRIVATE_KEY entry stating this
value is the well-known Hardhat test account `#0` private key used only for local
devnets (not a secret), so static analysis/secret scanners understand it's
intentional; update the line containing MORPH_NODE_VALIDATOR_PRIVATE_KEY to
include that clarification.

ops/reth-cross-test/phases/stress_phase.py (2)

22-24: Use next(iter(...)) instead of list(...)[0].

As flagged by Ruff, creating a list just to access the first element is inefficient.


♻️ Proposed fix
-            first_rpc = list(config.SEQUENCER_RPCS.values())[0]
-            from web3 import Web3
+            first_rpc = next(iter(config.SEQUENCER_RPCS.values()))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/phases/stress_phase.py` around lines 22 - 24, Replace the
inefficient list(...) access when grabbing the first RPC: instead of creating a
list from config.SEQUENCER_RPCS.values() and indexing [0] (the current first_rpc
assignment), use next(iter(config.SEQUENCER_RPCS.values())) to get the first
element lazily; keep the rest of the code (the Web3 import and w3 =
Web3(Web3.HTTPProvider(...))) unchanged and assign that result to first_rpc
before passing it into Web3.HTTPProvider.

1-5: Move imports to module level.

`Web3` and `time` are imported inline within the function body. Moving them to the module level improves readability and avoids repeated import overhead if `run()` is called multiple times.


♻️ Proposed fix
 """Stress test phase (Phase 6)."""
+import time
+
 from phases.base import Phase
 from verification import block_checker, stress_runner, tx_sender
+from web3 import Web3
 import config

Then remove the inline imports at lines 23 and 43.


Also applies to: 23-23, 43-44

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/phases/stress_phase.py` around lines 1 - 5, The inline
imports of Web3 and time inside the run() function should be moved to the module
level to avoid repeated import overhead and improve readability: add "from web3
import Web3" and "import time" alongside the existing top-level imports (near
Phase, block_checker, stress_runner, tx_sender, config) and remove the inline
"from web3 import Web3" and "import time" statements from inside the run()
function so the function uses the module-level symbols.

ops/reth-cross-test/docker/docker-compose.override-3r1g.yml (1)

9-18: Consider documenting why user: root is required.

Running containers as root increases the attack surface. If this is necessary for reth initialization or data directory permissions, add a comment explaining the requirement. If not strictly necessary, consider using a non-root user.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/docker/docker-compose.override-3r1g.yml` around lines 9 -
18, The docker-compose override sets user: root for the morph-geth-0 service
which raises security concerns; either add an inline comment above the user:
root line explaining why root is required (e.g., required by /entrypoint-reth.sh
for initialization, chown of data dirs, or needing elevated privileges for
RETH), or if root is not strictly necessary modify the service to run as a
non-root user and ensure the entrypoint (/entrypoint-reth.sh) and data directory
permissions are adjusted accordingly; reference the morph-geth-0 service, the
user: root field, and the entrypoint /entrypoint-reth.sh when making the change
or adding the explanatory comment.

ops/reth-cross-test/verification/sequencer_tag_checker.py (1)

33-44: Consider logging RPC failures for observability.

Silently catching exceptions with `pass` makes debugging difficult when nodes are unreachable. Adding debug-level logging would help diagnose issues without breaking the flow.


♻️ Proposed fix with logging
+import logging
+
+logger = logging.getLogger(__name__)
+
 # ... inside the loop:
         try:
             latest = w3.eth.block_number
-        except Exception:
-            pass
+        except Exception as e:
+            logger.debug("Failed to get block_number for %s: %s", name, e)
         try:
             safe = w3.eth.get_block("safe")["number"]
-        except Exception:
-            pass
+        except Exception as e:
+            logger.debug("Failed to get safe block for %s: %s", name, e)
         try:
             finalized = w3.eth.get_block("finalized")["number"]
-        except Exception:
-            pass
+        except Exception as e:
+            logger.debug("Failed to get finalized block for %s: %s", name, e)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/sequencer_tag_checker.py` around lines 33 -
44, The three try/except blocks that swallow RPC errors (reading
w3.eth.block_number into latest and calls to w3.eth.get_block("safe") and
w3.eth.get_block("finalized") into safe and finalized) should log the caught
exceptions instead of silently passing; update the except handlers to call a
module logger (e.g., logging.getLogger(__name__) or the existing logger) with a
debug-level message that includes context ("failed to get latest/safe/finalized
block") and the exception details (use exc_info=True or include the exception
string) so RPC failures are observable without changing control flow.

ops/reth-cross-test/reports/20260330_192134_baseline (4G).json (1)

1-102: Consider whether committing test reports is appropriate.

This baseline report shows multiple test failures and appears to be from a specific test run. Committing test reports can be useful for documentation, but they may become stale. Consider:
1. Adding this to `.gitignore` and only committing a sanitized example.
2. Or documenting that this is a reference snapshot for expected baseline behavior.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/reports/20260330_192134_baseline` (4G).json around lines
1 - 102, This file is a raw, timestamped test run with failing results and
should not be committed as-is; remove
ops/reth-cross-test/reports/20260330_192134_baseline (4G).json from the repo,
add a gitignore entry for ops/reth-cross-test/reports/*.json, and instead commit
a sanitized reference snapshot (e.g.,
ops/reth-cross-test/reports/baseline-example.json) that strips or redacts
volatile fields such as test_run_timestamp, morph_*_image_digest,
proposer_coverage.error, validator_sync heights/lag,
l1_message_verification.deposit_tx, and any other run-specific values, and add a
short README note documenting that the example is a non-live reference snapshot
for baseline expectations.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.dockerignore:

  • Around line 2-6: The .dockerignore entry excluding ops/publicnode/ prevents
    Docker from seeing ops/publicnode/Dockerfile.geth-nccc and
    ops/publicnode/Dockerfile.node (referenced by ops/publicnode/docker-compose.yml
    and invoked by the make run-holesky-node flow), causing docker-compose build
    failures; fix by either removing ops/publicnode/ from .dockerignore so Docker’s
    build context includes those Dockerfiles, or update
    ops/publicnode/docker-compose.yml to reference accessible paths (use absolute
    paths or adjust build.context and dockerfile fields so Docker can locate
    Dockerfile.geth-nccc and Dockerfile.node during the make
    run-holesky-node/docker-compose up sequence).

In @ops/l2-genesis/morph-chain-ops/genesis/genesis.go:

  • Around line 89-94: The default baseFee value is off by three orders of
    magnitude; in the genesis construction code where baseFee is set when
    config.L2GenesisBlockBaseFeePerGas == nil (the variable baseFee in genesis.go),
    replace the BigInt default of 1_000_000 with 1_000_000_000 (1 gwei / 0x3b9aca00)
    so environments that omit L2GenesisBlockBaseFeePerGas (e.g.,
    mainnet/devnet/testnet/holesky/hoodi/qanet) use the correct 1 gwei default.
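The magnitude gap called out above is easy to confirm numerically; 0x3b9aca00 is the hex form of 1 gwei quoted in the comment:

```python
GWEI = 10**9  # 1 gwei in wei

assert 1_000_000_000 == GWEI       # proposed default: exactly 1 gwei
assert 0x3B9ACA00 == GWEI          # hex form cited in the review
assert GWEI // 1_000_000 == 1000   # current big.NewInt(1_000_000) default is 1000x too small
```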

In @ops/reth-cross-test/docker/docker-compose.reth-validator.yml:

  • Around line 45-47: The docker-compose depends_on for validator_reth currently
    uses condition: service_started which only waits for the container process;
    change the dependency in the validator_reth_node service so it depends on
    validator_reth using condition: service_healthy (i.e., replace service_started
    with service_healthy for the validator_reth entry) and ensure the validator_reth
    service has a working healthcheck so the validator will only start after reth
    reports healthy.

In @ops/reth-cross-test/phases/base.py:

  • Around line 157-179: The current subprocess.run invocation uses a single
    "sleep 5 && curl" which can fail on slow machines; change the shell command
    passed to subprocess.run (the docker run invocation that references genesis_path
    and sets result) to poll the temporary geth RPC instead of a one-off curl: start
    geth in the container as before, then run a small retry loop (e.g., try curl for
    eth_getBlockByNumber in a loop with a short sleep between attempts, breaking on
    a successful JSON response) and fail loudly after a configurable timeout/number
    of attempts; keep referring to the same subprocess.run call and the variables
    result, genesis_path, and the subsequent JSON parsing/exception handling so that
    if polling exhausts attempts you still print the geth stderr and return/raise as
    currently done.
  • Around line 46-82: The current devnet readiness check reuses any existing
    genesis-l2.json and skips patching when genesisStateRoot exists, which lets
    stale pre-MPT files persist; update the logic around genesis_path and the
    _patch_genesis_with_zktrie_root() flow so the genesis file is only treated as
    valid if its contents match the expected trie mode: read
    genesis_path.read_text() and verify the presence or absence of the specific
    marker/field used by the new MPT flow (e.g., "genesisStateRoot" or the exact
    JSON structure produced by _patch_genesis_with_zktrie_root()), and if it does
    not match the expected mode then treat genesis as missing (append "L2 genesis
    missing" to missing or delete/overwrite the file) so
    subprocess.run(["make","devnet-up"]) and _patch_genesis_with_zktrie_root() will
    be executed to produce a fresh, mode-correct genesis.json.
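
A minimal sketch of the retry loop the first comment above describes, with the docker/curl invocation abstracted behind a `probe` callable (the function name and attempt budget are illustrative, not the repo's actual helpers):

```python
import time

def wait_for_rpc(probe, attempts=30, delay=2.0):
    """Poll `probe` until it returns a truthy JSON response, instead of
    a single 'sleep 5 && curl' that fails on slow machines."""
    last_err = None
    for _ in range(attempts):
        try:
            result = probe()
            if result:
                return result
        except Exception as exc:  # e.g. connection refused while geth boots
            last_err = exc
        time.sleep(delay)
    # fail loudly after the attempt budget, as the comment suggests
    raise RuntimeError(f"RPC never became ready: {last_err!r}")
```

In `base.py` the probe would wrap the `eth_getBlockByNumber` curl against the temporary geth container; the same stderr-printing and exception handling can run when `RuntimeError` is raised.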

In @ops/reth-cross-test/phases/fault_phase.py:

  • Around line 18-66: The run() method currently aborts the whole phase when any
    scenario raises, so wrap each scenario execution in its own try/except/finally:
    for each scenario key call (e.g. _restart_reth_sequencer, _stop_all_submitters,
    _pause_validator, _stop_finalizer_only, _wipe_reth_sequencer) perform setup(),
    wait_ready(), wait_blocks(...), then try to run the scenario and store either
    the returned result or the caught exception string into scenarios["..."], and
    always call teardown() in the scenario-level finally; keep the outer try to
    capture catastrophic failures but do not let a single scenario exception
    short-circuit the remaining scenarios.
  • Around line 183-185: The wipe is deleting the wrong Docker volume name in
    _wipe_reth_sequencer(): it removes "morph_data_0" while the shared cleanup uses
    "docker_morph_data_0", so resync-from-empty-datadir may be skipped; update
    _wipe_reth_sequencer() to remove the correctly prefixed volume (use
    "docker_morph_data_0") or, more robustly, attempt to remove both variants
    (morph_data_0 and docker_morph_data_0) or derive the volume name from the
    Compose project name so it matches the shared cleanup in
    ops/reth-cross-test/phases/base.py. Ensure the change targets the
    subprocess.run(["docker", "volume", "rm", ...]) call in _wipe_reth_sequencer().
  • Around line 143-171: The variable height_during may remain unassigned if the
    first RPC read of w3_seq.eth.block_number fails; initialize height_during (e.g.,
    set height_during = None or height_before) before the try/except that sets
    blocks_produced so it always has a value on the success path, and keep
    blocks_produced = False in the except branch; adjust the return to rely on those
    variables (height_before, height_during, blocks_produced, finalization_resumed,
    lag) as before.
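
The per-scenario isolation the first fault_phase comment asks for can be sketched like this (the `(setup, run, teardown)` triple is an illustrative shape, not the actual method signatures):

```python
def run_scenarios(scenarios):
    """Run fault scenarios independently: a raise in one stores an
    error string for that key but never short-circuits the rest.
    `scenarios` maps name -> (setup, run, teardown) callables."""
    results = {}
    for name, (setup, run, teardown) in scenarios.items():
        try:
            setup()
            results[name] = run()
        except Exception as exc:
            results[name] = f"error: {exc}"
        finally:
            teardown()  # always restore the devnet for the next scenario
    return results
```

An outer try can still catch catastrophic failures, but each scenario's result (or error) survives on its own.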

In @ops/reth-cross-test/phases/sequencer_phase.py:

  • Around line 41-53: The validator sync check is using the old target_block
    variable reached after mining extra blocks with self.wait_blocks(reached + 5), allowing the phase to pass before validators replay transaction-bearing
    blocks; update the call to validator_checker.wait_validator_sync to use the
    new post-mine height (e.g., reached + 5 or the actual latest height
    returned/observed after self.wait_blocks) as the target_block argument so
    validators are waited-to the transaction-bearing block height instead of the
    stale reached.

In @ops/reth-cross-test/run.py:

  • Around line 43-64: The reducer currently only checks specific section names
    and misses others like rollup_alignment; update the loop that computes
    phase_pass (inside the for r in results block) to treat any nested dict in r as
    a set of pass flags: for each key/value in such dicts, if the key is "pass" and
    value is falsy or the key endswith "_pass" and value is falsy then set
    phase_pass = False; keep the existing explicit checks for "block_consistency",
    "tx_verification", and "final_consistency", but add this generic scan over
    r.items() (or r["scenarios"] already handled) so keys like "rollup_alignment"
    with withdrawal_roots_pass/state_roots_pass will correctly flip phase_pass and
    thus summary["all_pass"].
  • Around line 27-32: The prepare_images function currently ignores docker pull
    failures; update it to fail fast by checking subprocess.run's result (or using
    check=True) when pulling each image and aborting on non-zero exit—e.g., in
    prepare_images, iterate images and for each call
    subprocess.run(["docker","pull", img], check=True) (or inspect
    CompletedProcess.returncode and call sys.exit(1) with a clear error message) so
    the script stops immediately if pulling "ghcr.io/morph-l2/morph-reth:latest" (or
    any image) fails.
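
The generic flag scan for run.py's reducer could look like the following sketch (section names such as `rollup_alignment` come from the comment; any nested dict is treated as a bag of pass flags):

```python
def phase_passed(result):
    """True only if every nested 'pass' / '*_pass' flag is truthy,
    so sections like 'rollup_alignment' are not silently skipped."""
    ok = True
    for value in result.values():
        if not isinstance(value, dict):
            continue
        for key, flag in value.items():
            if (key == "pass" or key.endswith("_pass")) and not flag:
                ok = False
    return ok
```

The explicit checks for `block_consistency`, `tx_verification`, and `final_consistency` can stay; this scan just closes the gap for sections they miss.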

In @ops/reth-cross-test/verification/block_checker.py:

  • Around line 104-110: The code is incorrectly defaulting unknown or errored
    chain_mode into the zktrie path; update the logic around tier2_pass and
    state_root_mode to only treat explicit "mpt" as MPT and explicit "zktrie" as
    zktrie, and treat any other/None chain_mode as "unknown" and fail-safe (do not
    skip Tier 2). Concretely, change the calculation of tier2_pass (currently using
    chain_mode == "mpt") to: if chain_mode == "mpt" -> tier2_pass =
    len(tier2_mismatches) == 0; elif chain_mode == "zktrie" -> tier2_pass = False
    (or keep defined behavior for zktrie); else -> tier2_pass = False (or explicitly
    indicate unknown/failure). Also set state_root_mode to "mpt" only when
    chain_mode == "mpt", to "zktrie" only when chain_mode == "zktrie", and to
    "unknown" otherwise. Update references to tier1_pass, tier2_pass,
    state_root_mode, chain_mode, tier1_mismatches, and tier2_mismatches accordingly
    (apply same fix for the other block at lines 117-126).
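
The fail-safe mode classification might be factored as below (a sketch; the zktrie branch keeps the existing "divergence expected" behavior rather than asserting what it should be):

```python
def classify_state_root(chain_mode, tier2_mismatches):
    """Only an explicit 'mpt' enables the Tier 2 root comparison; any
    other value (including None) maps to 'unknown' and fails safe
    instead of falling through to the zktrie skip-path."""
    if chain_mode == "mpt":
        return "mpt", len(tier2_mismatches) == 0
    if chain_mode == "zktrie":
        return "zktrie", False  # divergence expected; defined behavior
    return "unknown", False
```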

In @ops/reth-cross-test/verification/l1_message_checker.py:

  • Around line 56-93: The scan currently treats the first 0x7E L2 tx as the
    deposit message and ignores l1_tx_hash; update verify_l1_message_inclusion to
    actually correlate the L2 message to the provided l1_tx_hash by: when you detect
    a candidate tx (tx_type == 0x7E) call w3.eth.get_transaction(tx["hash"]) and/or
    w3.eth.get_transaction_receipt(tx["hash"]) and search the tx input and
    receipt.logs (topics and data) for the normalized l1_tx_hash hex string; only
    set l1_msg_block and break when that match is found (leave the rest of the loop
    logic intact), and ensure l1_tx_hash is normalized once at start for
    comparisons.
  • Around line 100-115: The final assignment currently can overwrite an earlier
    failure; change the logic so deposit_at_head isn't reset to True after you've
    already flagged a bad ordering. Update the assignment to respect prior failures
    and require deposits to be contiguous: set results["checks"]["deposit_at_head"]
    = results["checks"].get("deposit_at_head", True) and (not l1_msg_ended and
    l1_count > 0) (or equivalently compute deposit_ok = (not l1_msg_ended and
    l1_count > 0) and only assign it if the key isn't already False) so that
    l1_msg_ended and l1_count are evaluated together and previous False values
    aren't overwritten.
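
The sticky-failure accumulation described above can be sketched as a small helper (the dict shape mirrors the `results["checks"]` structure named in the comment):

```python
def record_deposit_at_head(checks, l1_msg_ended, l1_count):
    """Fold the per-block observation into the check without letting a
    later True overwrite an earlier False."""
    deposit_ok = (not l1_msg_ended) and l1_count > 0
    checks["deposit_at_head"] = checks.get("deposit_at_head", True) and deposit_ok
    return checks
```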

In @ops/reth-cross-test/verification/rollup_checker.py:

  • Around line 27-35: The L1 log query in l1_w3.eth.get_logs is currently
    unfiltered and counts every event from ROLLUP_ADDRESS; restrict the query to
    only the CommitBatch event by adding a topics filter for the CommitBatch event
    signature (keccak256 of its ABI signature) when calling l1_w3.eth.get_logs,
    keeping the same address (ROLLUP_ADDRESS) and block range, so batches_found
    correctly reflects only CommitBatch logs; update the existing exception handling
    and length-based logic around batches_found as needed.
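
Building the filtered `eth_getLogs` request might look like this sketch; `topic0` must be keccak256 of the event's ABI signature (computable once via `Web3.keccak(text=...)`), and the exact `CommitBatch(...)` signature string has to be taken from the Rollup contract ABI, not guessed:

```python
def commit_batch_log_params(rollup_address, from_block, to_block, topic0):
    """eth_getLogs params restricted to a single event type: topics[0]
    filters on the event signature hash, so batches_found only counts
    CommitBatch logs from ROLLUP_ADDRESS."""
    return {
        "address": rollup_address,
        "fromBlock": from_block,
        "toBlock": to_block,
        "topics": [topic0],
    }
```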

In @ops/reth-cross-test/verification/stress_runner.py:

  • Around line 50-66: The stage teardown can let old workers leak into the next
    stage because run_stage sets self._stop=True then only calls t.join(timeout=10)
    while RPC calls in _worker (get_block/send_raw_transaction) use a 30s timeout;
    increase the join waiting to ensure workers fully exit before clearing _stop for
    the next stage — e.g., replace the 10s timeout with a value >= the RPC timeout
    plus a small buffer (or call t.join() with no timeout) so all threads in threads
    are guaranteed to finish after self._stop is set before the next invocation of
    run_stage; update references in run_stage and ensure _worker still checks
    self._stop between RPC calls.
  • Around line 74-103: The worker currently calls
    w3.eth.get_transaction_count(account.address) each time it builds tx and call_tx
    which returns 'latest' by default and causes duplicate nonces when sending many
    rapid transactions; fix by fetching the pending nonce once per sender (using
    w3.eth.get_transaction_count(account.address, "pending")) at worker startup or
    before the send loop, store it in a local nonce variable (e.g., worker_nonce)
    and increment worker_nonce after each send instead of re-calling
    get_transaction_count, or if you prefer a simpler change, explicitly pass
    block_identifier='pending' (or "pending") to the existing
    w3.eth.get_transaction_count calls for tx and call_tx so the pending nonce is
    used.
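
The fetch-once-then-increment nonce pattern can be sketched with an injected RPC call (the allocator name is illustrative; in the worker, `get_transaction_count` would be `w3.eth.get_transaction_count`):

```python
def make_nonce_allocator(get_transaction_count, address):
    """Read the pending nonce once per sender, then hand out
    consecutive nonces locally so rapid sends never collide on a
    stale 'latest' nonce."""
    next_nonce = get_transaction_count(address, "pending")
    def allocate():
        nonlocal next_nonce
        nonce, next_nonce = next_nonce, next_nonce + 1
        return nonce
    return allocate
```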

In @ops/reth-cross-test/verification/validator_checker.py:

  • Around line 81-83: The current computation of min_height uses min(...) over a
    generator filtered by v["height"] > 0 and will raise ValueError if every
    validator timed out/errored (i.e., the filtered set is empty); update the logic
    around the min_height calculation (the code that reads validator_results and
    computes min_height) to first collect or test the filtered heights (e.g.,
    heights = [v["height"] for v in validator_results.values() if v["height"] > 0])
    and if that list is empty, short-circuit and return the intended "validators did
    not sync" result (or set a safe default) instead of calling min(), otherwise
    compute min_height = min(heights).
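
The guarded minimum described above is a two-liner (the `None` return stands in for whatever "validators did not sync" result the caller reports):

```python
def min_synced_height(validator_results):
    """min() over synced heights, short-circuiting to None when every
    validator timed out -- min() on an empty sequence raises ValueError."""
    heights = [v["height"] for v in validator_results.values() if v["height"] > 0]
    return min(heights) if heights else None
```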

Minor comments:
In @docs/superpowers/specs/2026-03-30-reth-cross-validation-test-design.md:

  • Around line 369-383: The sample reports still use chain_mode: "zktrie",
    jade_hardfork_active: false, and state_root_mode: "zktrie_divergence_expected"
    which are now outdated; update the examples (including the blocks around lines
    referenced) to reflect the new default MPT/devnet genesis settings by changing
    chain_mode to the MPT value used by the codebase, set jade_hardfork_active to
    true if the new devnet enables it, and replace state_root_mode and any Tier 2 /
    sequencer-tag expectations to the new MPT-mode semantics (adjust the documented
    Tier 2 expectations and sequencer-tag behavior accordingly), ensuring all sample
    report JSON blocks and narrative text (the referenced section and the later
    395-449 region) consistently match the new defaults.
  • Around line 196-204: The unlabeled fenced code blocks containing the scenario
    steps (starting with "Test script → L1 Portal (depositTransaction)" and the
    other block at the later section) are triggering markdownlint MD040; label both
    fenced blocks with a language (e.g., change ``` to ```text) so the markdown
    linter recognizes them as code blocks. Update the two blocks (the one beginning
    "Test script → L1 Portal (depositTransaction)" and the similar block around
    lines 269-300) to use ```text (or another appropriate language) at the opening
    fence and keep the closing fence as ```.

In @ops/reth-cross-test/phases/validator_phase.py:

  • Around line 40-46: The call to block_checker.check_blocks using min(reached,
    val_result.get("validator_sync", {}).get("validator_reth", {}).get("height", 0))
    can produce an end height of 0 and lead to an empty range; update the
    validator_phase logic to guard against that by extracting the validator height
    into a variable (from val_result["validator_sync"]["validator_reth"]["height"]
    with safe default), check if that height is >= 1 and that reached >= 1 before
    calling block_checker.check_blocks, and only invoke
    block_checker.check_blocks(config.VALIDATOR_RPCS, 1, end_height,
    self.reth_nodes) when end_height >= 1 (otherwise skip or log a warning). Ensure
    references: val_result, validator_sync, validator_reth, reached,
    block_checker.check_blocks, config.VALIDATOR_RPCS, and self.reth_nodes are used
    to locate and update the code.
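
One way to sketch the guard, assuming the nested `val_result` shape named in the comment (a `None` return means the caller skips `check_blocks` and logs a warning):

```python
def blocks_check_range(reached, val_result):
    """Derive the [1, end] range for block_checker.check_blocks,
    returning None when either side is below 1 so an empty range is
    never passed through."""
    height = (val_result.get("validator_sync", {})
                        .get("validator_reth", {})
                        .get("height", 0))
    end_height = min(reached, height)
    if reached < 1 or end_height < 1:
        return None
    return (1, end_height)
```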

In @ops/reth-cross-test/verification/stress_runner.py:

  • Around line 30-48: The fund_accounts function currently broadcasts 50
    transfers then sleeps 10s without verifying inclusion; change fund_accounts to
    collect each sent tx hash (from signed.raw_transaction) and call
    w3.eth.wait_for_transaction_receipt(tx_hash, timeout=...) (or poll recipient
    balances) to ensure each funding tx is mined before returning. Use the existing
    funded = Account.from_key(...) / nonce flow and self._accounts[:50] to locate
    where to add the tracking; wait for receipts (or confirmations) for either each
    tx or for the last tx in the batch before allowing the stage to start so workers
    won't begin with zero balances.
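
A sketch of the confirm-before-returning flow, with the web3 calls injected (in `fund_accounts`, `send_funding_tx` would wrap sign-and-broadcast and `wait_for_receipt` would be `w3.eth.wait_for_transaction_receipt`; the names here are illustrative):

```python
def fund_and_confirm(send_funding_tx, wait_for_receipt, recipients, timeout=120):
    """Broadcast one funding tx per recipient, then block on every
    receipt so stress workers never start with zero balances."""
    tx_hashes = [send_funding_tx(addr) for addr in recipients]
    return [wait_for_receipt(h, timeout=timeout) for h in tx_hashes]
```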

Nitpick comments:
In @docs/superpowers/plans/2026-03-30-reth-cross-validation-test.md:

  • Around line 298-301: Add a clarifying comment above the two constants to state
    that FUNDED_PRIVATE_KEY and L1_FUNDED_PRIVATE_KEY are well-known Hardhat devnet
    test keys and must never be used in production; update the doc block where these
    constants are declared to explicitly label them as standard Hardhat devnet-only
    test keys and note they exist only for testing to silence static-analysis
    warnings and prevent misuse.
  • Line 147: Add a comment next to the MORPH_NODE_VALIDATOR_PRIVATE_KEY entry
    stating this value is the well-known Hardhat test account #0 private key used
    only for local devnets (not a secret), so static analysis/secret scanners
    understand it's intentional; update the line containing
    MORPH_NODE_VALIDATOR_PRIVATE_KEY to include that clarification.

In @docs/superpowers/plans/2026-04-01-anvil-l1-devnet.md:

  • Around line 401-414: The markdown contains fenced code blocks without language
    specifiers for the .dockerignore content (the single-line block with "prover/"
    and the multiline block with "prover/ .git/ contracts/node_modules/
    ops/reth-cross-test/ ops/publicnode/ docs/"); update both fenced code blocks to
    include a language identifier such as "dockerignore" (e.g., replace ``` with
    ```dockerignore), keeping the exact block contents unchanged.

In `@ops/docker/Dockerfile.oracle`:
- Around line 11-14: The runtime Docker stage ("app") currently runs the binary
as root; create a non-root user and switch to it in that stage: add a system
user (e.g., "appuser") and group, chown the copied binary to that user
(referencing the COPY from=builder /gas-oracle/app/target/release/app and the
CMD ["./app"]), and set USER appuser before CMD so the container executes ./app
as the non-root account; keep the binary executable and ensure any required
runtime dirs/files are owned by that user.

In `@ops/reth-cross-test/docker/docker-compose.override-3r1g.yml`:
- Around line 9-18: The docker-compose override sets user: root for the
morph-geth-0 service which raises security concerns; either add an inline
comment above the user: root line explaining why root is required (e.g.,
required by /entrypoint-reth.sh for initialization, chown of data dirs, or
needing elevated privileges for RETH), or if root is not strictly necessary
modify the service to run as a non-root user and ensure the entrypoint
(/entrypoint-reth.sh) and data directory permissions are adjusted accordingly;
reference the morph-geth-0 service, the user: root field, and the entrypoint
/entrypoint-reth.sh when making the change or adding the explanatory comment.

In `@ops/reth-cross-test/phases/stress_phase.py`:
- Around line 22-24: Replace the inefficient list(...) access when grabbing the
first RPC: instead of creating a list from config.SEQUENCER_RPCS.values() and
indexing [0] (the current first_rpc assignment), use
next(iter(config.SEQUENCER_RPCS.values())) to get the first element lazily; keep
the rest of the code (the Web3 import and w3 = Web3(Web3.HTTPProvider(...)))
unchanged and assign that result to first_rpc before passing it into
Web3.HTTPProvider.
- Around line 1-5: The inline imports of Web3 and time inside the run() function
should be moved to the module level to avoid repeated import overhead and
improve readability: add "from web3 import Web3" and "import time" alongside the
existing top-level imports (near Phase, block_checker, stress_runner, tx_sender,
config) and remove the inline "from web3 import Web3" and "import time"
statements from inside the run() function so the function uses the module-level
symbols.

In `@ops/reth-cross-test/phases/validator_phase.py`:
- Line 24: Replace the inefficient list conversion used to get the first RPC in
validator_phase.py: instead of constructing a list and indexing it (first_rpc =
list(config.SEQUENCER_RPCS.values())[0]), use the iterator pattern to grab the
first value via next(iter(...)). Update the reference where first_rpc is
assigned so it reads first_rpc = next(iter(config.SEQUENCER_RPCS.values())) to
avoid building an intermediate list while keeping the same semantics.

In `@ops/reth-cross-test/README.md`:
- Around line 22-24: The fenced code block containing the CLI usage string
"python run.py [--phase PHASE] [--skip-build] [--fresh-l1]" needs a language
specifier for proper highlighting—edit the README's fenced block around that
line (the triple-backtick block containing the command) and change it from ```
to ```bash so the block starts with ```bash and ends with ``` to enable
bash/shell syntax highlighting.
- Around line 62-68: The fenced code block in ops/reth-cross-test/README.md that
shows the reports/ directory lacks a language specifier; update the
triple-backtick fence around the directory listing (the block containing
"reports/" and the example filenames like "20260330_120000_baseline (4G).json")
to include a language identifier such as text or plaintext (e.g., change ``` to
```text) so the block is consistently rendered and highlighted.

In `@ops/reth-cross-test/reports/20260330_192134_baseline (4G).json`:
- Around line 1-102: This file is a raw, timestamped test run with failing
results and should not be committed as-is; remove
ops/reth-cross-test/reports/20260330_192134_baseline (4G).json from the repo,
add a gitignore entry for ops/reth-cross-test/reports/*.json, and instead commit
a sanitized reference snapshot (e.g.,
ops/reth-cross-test/reports/baseline-example.json) that strips or redacts
volatile fields such as test_run_timestamp, morph_*_image_digest,
proposer_coverage.error, validator_sync heights/lag,
l1_message_verification.deposit_tx, and any other run-specific values, and add a
short README note documenting that the example is a non-live reference snapshot
for baseline expectations.

In `@ops/reth-cross-test/verification/proposer_checker.py`:
- Around line 11-17: The NODE_TO_GETH constant in proposer_checker.py is defined
but never used; remove the unused mapping to avoid dead code (delete the
NODE_TO_GETH dict) or, if you intend to keep it for future use, add a clear TODO
comment above NODE_TO_GETH indicating planned usage and why it’s retained (e.g.,
"# TODO: keep mapping for future cross-node geth lookup"). Ensure references to
NODE_TO_GETH are not required elsewhere before removing.
- Around line 39-40: Replace the broad "except Exception as e" in
proposer_checker.py with a more specific requests.RequestException to avoid
masking unrelated errors: change the except to "except requests.RequestException
as e" (and ensure requests is imported) so only HTTP/network errors are caught
and still return {"pass": False, "error": str(e)}; if you need to preserve
behavior for other exceptions, add a separate generic except that re-raises
unexpected errors instead of swallowing them.

In `@ops/reth-cross-test/verification/sequencer_tag_checker.py`:
- Around line 33-44: The three try/except blocks that swallow RPC errors
(reading w3.eth.block_number into latest and calls to w3.eth.get_block("safe")
and w3.eth.get_block("finalized") into safe and finalized) should log the caught
exceptions instead of silently passing; update the except handlers to call a
module logger (e.g., logging.getLogger(__name__) or the existing logger) with a
debug-level message that includes context ("failed to get latest/safe/finalized
block") and the exception details (use exc_info=True or include the exception
string) so RPC failures are observable without changing control flow.
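
The log-but-continue handler can be factored once and reused for the latest/safe/finalized reads (a sketch; `safe_rpc` is a hypothetical helper, not an existing function in the checker):

```python
import logging

logger = logging.getLogger(__name__)

def safe_rpc(label, call, default=None):
    """Keep the swallow-and-continue control flow, but leave a debug
    trail so failed RPC reads are observable."""
    try:
        return call()
    except Exception:
        logger.debug("failed to get %s block", label, exc_info=True)
        return default
```

Usage would be e.g. `latest = safe_rpc("latest", lambda: w3.eth.block_number)`.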
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 65c945d9-1601-4cfc-be10-44d878d7baed

📥 Commits

Reviewing files that changed from the base of the PR and between df02f26 and 8114ea3.

⛔ Files ignored due to path filters (17)
  • ops/reth-cross-test/__pycache__/config.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/phases/__pycache__/__init__.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/phases/__pycache__/base.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/phases/__pycache__/fault_phase.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/phases/__pycache__/sequencer_phase.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/phases/__pycache__/stress_phase.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/phases/__pycache__/validator_phase.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/__init__.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/block_checker.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/l1_message_checker.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/proposer_checker.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/rollup_checker.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/sequencer_tag_checker.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/stress_runner.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/tx_sender.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/tx_verifier.cpython-314.pyc is excluded by !**/*.pyc
  • ops/reth-cross-test/verification/__pycache__/validator_checker.cpython-314.pyc is excluded by !**/*.pyc
📒 Files selected for processing (46)
  • .dockerignore
  • docs/superpowers/plans/2026-03-30-reth-cross-validation-test.md
  • docs/superpowers/plans/2026-04-01-anvil-l1-devnet.md
  • docs/superpowers/specs/2026-03-30-reth-cross-validation-test-design.md
  • go-ethereum
  • ops/docker/Dockerfile.l1-beacon
  • ops/docker/Dockerfile.oracle
  • ops/docker/docker-compose-4nodes.yml
  • ops/docker/go-rust-builder.Dockerfile
  • ops/docker/intermediate/go-rust-builder.Dockerfile
  • ops/docker/intermediate/go-ubuntu-builder.Dockerfile
  • ops/l2-genesis/morph-chain-ops/genesis/genesis.go
  • ops/reth-cross-test/.gitignore
  • ops/reth-cross-test/README.md
  • ops/reth-cross-test/config.py
  • ops/reth-cross-test/contracts/SimpleStorage.sol
  • ops/reth-cross-test/contracts/compiled/SimpleStorage.json
  • ops/reth-cross-test/docker/docker-compose.override-1r3g.yml
  • ops/reth-cross-test/docker/docker-compose.override-2r2g.yml
  • ops/reth-cross-test/docker/docker-compose.override-3r1g.yml
  • ops/reth-cross-test/docker/docker-compose.override-4r.yml
  • ops/reth-cross-test/docker/docker-compose.reth-validator.yml
  • ops/reth-cross-test/docker/entrypoint-l2-reth.sh
  • ops/reth-cross-test/phases/__init__.py
  • ops/reth-cross-test/phases/base.py
  • ops/reth-cross-test/phases/fault_phase.py
  • ops/reth-cross-test/phases/sequencer_phase.py
  • ops/reth-cross-test/phases/stress_phase.py
  • ops/reth-cross-test/phases/validator_phase.py
  • ops/reth-cross-test/reports/.gitkeep
  • ops/reth-cross-test/reports/20260330_192134_baseline (4G).json
  • ops/reth-cross-test/reports/20260330_192134_summary.json
  • ops/reth-cross-test/reports/20260330_193519_baseline (4G).json
  • ops/reth-cross-test/reports/20260330_193519_summary.json
  • ops/reth-cross-test/requirements.txt
  • ops/reth-cross-test/run.py
  • ops/reth-cross-test/verification/__init__.py
  • ops/reth-cross-test/verification/block_checker.py
  • ops/reth-cross-test/verification/l1_message_checker.py
  • ops/reth-cross-test/verification/proposer_checker.py
  • ops/reth-cross-test/verification/rollup_checker.py
  • ops/reth-cross-test/verification/sequencer_tag_checker.py
  • ops/reth-cross-test/verification/stress_runner.py
  • ops/reth-cross-test/verification/tx_sender.py
  • ops/reth-cross-test/verification/tx_verifier.py
  • ops/reth-cross-test/verification/validator_checker.py

Comment on lines +2 to +6
.git/
contracts/node_modules/
ops/reth-cross-test/
ops/publicnode/
docs/

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read-only verification: find COPY/ADD references to newly ignored paths.
# Expected result: no COPY/ADD usage that depends on ignored directories.

set -euo pipefail

echo "== Dockerfiles mentioning potentially ignored paths =="
rg -n -C2 --iglob '*Dockerfile*' --iglob '*.dockerfile' \
  '(\.git/|contracts/node_modules/|ops/reth-cross-test/|ops/publicnode/|docs/)'

echo
echo "== docker-compose files with build contexts likely requiring ignored paths =="
rg -n -C2 --iglob '*compose*.yml' --iglob '*compose*.yaml' \
  '(build:|context:|dockerfile:)'

Repository: morph-l2/morph

Length of output: 112


🏁 Script executed:

#!/bin/bash
# Find all Docker-related files
echo "=== Finding all Dockerfile and docker-compose files ==="
fd -t f -e Dockerfile -e dockerfile
fd -t f -name '*compose*.yml' -o -name '*compose*.yaml'

# List all files found
echo ""
echo "=== All docker-compose files ==="
fd -t f \( -name '*compose*.yml' -o -name '*compose*.yaml' \)

# Count Dockerfiles
echo ""
echo "=== Dockerfile count ==="
fd -t f -e Dockerfile -e dockerfile | wc -l

Repository: morph-l2/morph

Length of output: 854


🏁 Script executed:

#!/bin/bash
# Search for COPY, ADD, and build context references in Docker files
echo "=== Searching for COPY/ADD in Dockerfiles ==="
rg '^\s*(COPY|ADD)' -t Dockerfile

echo ""
echo "=== Searching for build/context in docker-compose files ==="
rg '(build:|context:)' -t yaml

Repository: morph-l2/morph

Length of output: 2271


🏁 Script executed:

#!/bin/bash
# Read each Dockerfile for references to ignored paths
echo "=== Content of Dockerfiles ==="
for dockerfile in \
  ops/docker/go-rust-builder.Dockerfile \
  ops/docker/intermediate/go-rust-alpine-builder.Dockerfile \
  ops/docker/intermediate/go-rust-builder.Dockerfile \
  ops/docker/intermediate/go-ubuntu-builder.Dockerfile; do
  echo ""
  echo "### $dockerfile ###"
  cat -n "$dockerfile"
done

Repository: morph-l2/morph

Length of output: 5562


🏁 Script executed:

#!/bin/bash
# Check for indirect references to ignored paths in Dockerfiles (ENV, RUN, WORKDIR, etc.)
echo "=== Searching for any reference to ignored paths in Dockerfiles ==="
rg '(\.git|contracts/node_modules|ops/reth-cross-test|ops/publicnode|/docs)' ops/docker/ --no-heading

Repository: morph-l2/morph

Length of output: 125


🏁 Script executed:

#!/bin/bash
# Check the ops/publicnode/docker-compose.yml file since it's in an ignored directory
echo "=== ops/publicnode/docker-compose.yml content ==="
cat -n ops/publicnode/docker-compose.yml

Repository: morph-l2/morph

Length of output: 1708


🏁 Script executed:

#!/bin/bash
# Check if the dockerfiles referenced in ops/publicnode/docker-compose.yml exist
echo "=== Checking referenced Dockerfiles ==="
ls -la ops/publicnode/Dockerfile.geth-nccc ops/publicnode/Dockerfile.node 2>&1

# Also check what files are in ops/publicnode/
echo ""
echo "=== Files in ops/publicnode/ ==="
find ops/publicnode -type f | head -20

Repository: morph-l2/morph

Length of output: 686


🏁 Script executed:

#!/bin/bash
# Search for any other references to ops/publicnode dockerfiles or usage
echo "=== Searching for references to ops/publicnode dockerfiles ==="
rg 'ops/publicnode' --no-heading -i | head -30

Repository: morph-l2/morph

Length of output: 291


🏁 Script executed:

#!/bin/bash
# Read the Dockerfiles in ops/publicnode to understand the build dependency
echo "=== ops/publicnode/Dockerfile.geth-nccc ==="
cat -n ops/publicnode/Dockerfile.geth-nccc

echo ""
echo "=== ops/publicnode/Dockerfile.node ==="
cat -n ops/publicnode/Dockerfile.node

echo ""
echo "=== Check what's in ops/publicnode/Makefile ==="
cat -n ops/publicnode/Makefile

Repository: morph-l2/morph

Length of output: 4323


The .dockerignore change breaks ops/publicnode/ builds.

Excluding ops/publicnode/ from .dockerignore will prevent Docker from locating ops/publicnode/Dockerfile.geth-nccc and ops/publicnode/Dockerfile.node, which are actively referenced by ops/publicnode/docker-compose.yml with context: ../... When make run-holesky-node executes docker-compose up, the build will fail because the dockerfile paths are no longer available in the build context. Either remove ops/publicnode/ from .dockerignore or update the docker-compose references to use absolute paths or alternative contexts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.dockerignore around lines 2 - 6, The .dockerignore entry excluding
ops/publicnode/ prevents Docker from seeing ops/publicnode/Dockerfile.geth-nccc
and ops/publicnode/Dockerfile.node (referenced by
ops/publicnode/docker-compose.yml and invoked by the make run-holesky-node
flow), causing docker-compose build failures; fix by either removing
ops/publicnode/ from .dockerignore so Docker’s build context includes those
Dockerfiles, or update ops/publicnode/docker-compose.yml to reference accessible
paths (use absolute paths or adjust build.context and dockerfile fields so
Docker can locate Dockerfile.geth-nccc and Dockerfile.node during the make
run-holesky-node/docker-compose up sequence).

Comment on lines 89 to 94
var baseFee *big.Int
if config.L2GenesisBlockBaseFeePerGas != nil {
baseFee = config.L2GenesisBlockBaseFeePerGas.ToInt()
} else {
baseFee = big.NewInt(1_000_000)
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if there's documentation or other references for expected L2 base fee defaults
rg -n -C3 'BaseFee|baseFee|base_fee' --type go ops/l2-genesis/

# Check if any deploy configs explicitly set L2GenesisBlockBaseFeePerGas
fd -e json -e yaml | xargs rg -l 'L2GenesisBlockBaseFeePerGas' 2>/dev/null | head -5 | xargs -I{} sh -c 'echo "=== {} ===" && cat {}'

Repository: morph-l2/morph

Length of output: 8973


🏁 Script executed:

# Search for deploy config files and see what base fee values they use
fd -e json -e toml -e yaml | xargs rg -l 'l2GenesisBlockBaseFeePerGas|L2GenesisBlockBaseFeePerGas' 2>/dev/null

# Check for any comments or documentation about this field
rg -B5 -A5 'L2GenesisBlockBaseFeePerGas' ops/l2-genesis/morph-chain-ops/genesis/config.go

# Look at the actual usage in NewL2StorageConfig to understand the purpose
rg -B10 -A10 'l1BaseFee' ops/l2-genesis/morph-chain-ops/genesis/config.go

Repository: morph-l2/morph

Length of output: 1555


🏁 Script executed:

# Read the test deploy config to see what base fee value is used
cat ops/l2-genesis/morph-chain-ops/genesis/testdata/test-deploy-config-full.json | jq '.l2GenesisBlockBaseFeePerGas // "NOT_SET"'

# Check if there are other deploy config examples
fd -type f \( -name '*deploy*' -o -name '*config*' \) -path '*/testdata/*' | head -10

# Look for any comments in genesis.go about why this default was chosen
head -100 ops/l2-genesis/morph-chain-ops/genesis/genesis.go | tail -50

Repository: morph-l2/morph

Length of output: 1846


🏁 Script executed:

# Verify the hex to decimal conversion
python3 << 'EOF'
test_value = 0x3b9aca00
print(f"Test config value: 0x3b9aca00 = {test_value} wei = {test_value / 1e9} gwei")
default_value = 1_000_000
print(f"Code default: {default_value} wei = {default_value / 1e9} gwei")
print(f"Ratio: {test_value / default_value}x difference")
EOF

# Search for other deploy config examples
find ops/l2-genesis -name '*.json' | head -20

# Check if there are comments or related issues about this
rg -B3 -A3 '1_000_000|1000000' ops/l2-genesis/morph-chain-ops/genesis/

Repository: morph-l2/morph

Length of output: 5035


🏁 Script executed:

# Extract base fee values from all deploy configs
for config in ops/l2-genesis/deploy-config/*.json; do
  echo "=== $(basename $config) ==="
  jq '.l2GenesisBlockBaseFeePerGas // "NOT_SET"' "$config"
done

# Also check git blame for the 1_000_000 line to understand when it was added
cd ops/l2-genesis/morph-chain-ops/genesis && git log -p --all -S '1_000_000' -- genesis.go | head -80

Repository: morph-l2/morph

Length of output: 322


🏁 Script executed:

# Look for any comments or context around this change in genesis.go
rg -B10 'baseFee = big.NewInt' ops/l2-genesis/morph-chain-ops/genesis/genesis.go

# Search for any comments or documentation about base fee defaults
rg -i 'base.*fee.*default|default.*base.*fee' ops/l2-genesis/

# Check git log more carefully (with different approach)
git -C ops/l2-genesis log --oneline --all -- morph-chain-ops/genesis/genesis.go | head -20

Repository: morph-l2/morph

Length of output: 777


🏁 Script executed:

# Final check: look at the actual definition of this value and nearby code patterns
rg -B5 -A5 'defaultL2GasLimit' ops/l2-genesis/morph-chain-ops/genesis/genesis.go

# Check if there are any test failures or expectations about base fee
rg 'baseFee.*1_000_000|1_000_000.*baseFee' ops/l2-genesis/

Repository: morph-l2/morph

Length of output: 812


Fix the base fee default: should be 1,000,000,000 wei (1 gwei), not 1,000,000 wei (0.001 gwei).

The default baseFee value of 1,000,000 wei is incorrect by three orders of magnitude. The test deploy config expects 1,000,000,000 wei (0x3b9aca00 = 1 gwei), which aligns with Ethereum L2 base fee patterns. The current default of 0.001 gwei is economically unrealistic and contradicts the test expectations. Additionally, all six production and staging deploy configs (mainnet, devnet, testnet, holesky, hoodi, qanet) omit this field and rely on this buggy default, affecting all environments.

Change line 93 from baseFee = big.NewInt(1_000_000) to baseFee = big.NewInt(1_000_000_000).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/l2-genesis/morph-chain-ops/genesis/genesis.go` around lines 89 - 94, The
default baseFee value is off by three orders of magnitude; in the genesis
construction code where baseFee is set when config.L2GenesisBlockBaseFeePerGas
== nil (the variable baseFee in genesis.go), replace the BigInt default of
1_000_000 with 1_000_000_000 (1 gwei / 0x3b9aca00) so environments that omit
L2GenesisBlockBaseFeePerGas (e.g., mainnet/devnet/testnet/holesky/hoodi/qanet)
use the correct 1 gwei default.

Comment on lines +45 to +47
depends_on:
  validator_reth:
    condition: service_started

⚠️ Potential issue | 🟠 Major

Use the reth healthcheck to gate the validator node.

service_started only waits for the container process to exist. Since validator_reth_node connects to validator_reth immediately on both 8545 and 8551, this leaves a startup race even though the reth service already has a healthcheck.

Suggested fix
   validator_reth_node:
     container_name: validator_reth_node
     depends_on:
       validator_reth:
-        condition: service_started
+        condition: service_healthy
       node-0:
         condition: service_started
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/docker/docker-compose.reth-validator.yml` around lines 45
- 47, The docker-compose depends_on for validator_reth currently uses condition:
service_started which only waits for the container process; change the
dependency in the validator_reth_node service so it depends on validator_reth
using condition: service_healthy (i.e., replace service_started with
service_healthy for the validator_reth entry) and ensure the validator_reth
service has a working healthcheck so the validator will only start after reth
reports healthy.

Comment on lines +46 to +82
genesis_path = config.DOCKER_DIR.parent / "l2-genesis" / ".devnet" / "genesis-l2.json"
env_path = config.DOCKER_DIR / ".env"

# Check if devnet is already initialized
l1_ready = False
try:
    w3 = Web3(Web3.HTTPProvider(config.L1_RPC, request_kwargs={"timeout": 5}))
    if w3.eth.block_number > 0:
        l1_ready = True
except Exception:
    pass

env_has_contracts = False
if env_path.exists():
    env_content = env_path.read_text()
    env_has_contracts = "MORPH_ROLLUP=0x" in env_content

if l1_ready and genesis_path.exists() and env_has_contracts:
    print(f"Devnet already initialized (L1 running, genesis exists, contracts deployed).")
else:
    missing = []
    if not l1_ready:
        missing.append("L1 not running")
    if not genesis_path.exists():
        missing.append("L2 genesis missing")
    if not env_has_contracts:
        missing.append(".env missing contract addresses")
    print(f"Devnet not ready ({', '.join(missing)}). Running 'make devnet-up'...")
    subprocess.run(
        ["make", "devnet-up"],
        cwd=str(config.MORPH_ROOT),
        check=True,
        timeout=600,
    )

# Patch genesis.json with ZkTrie state root so morph-reth uses the correct genesis hash
_patch_genesis_with_zktrie_root()

⚠️ Potential issue | 🟠 Major

The cached-devnet shortcut can silently keep you on the old chain mode.

Lines 63-79 treat any existing genesis file as valid, and Lines 149-152 skip patching as soon as genesisStateRoot exists. After this PR's MPT switch, a previously initialized ZkTrie .devnet/genesis-l2.json still satisfies that check and gets reused, so the suite can “verify MPT mode” while actually running stale pre-change state.

Also applies to: 149-152

🧰 Tools
🪛 Ruff (0.15.7)

[error] 55-56: try-except-pass detected, consider logging the exception

(S110)


[warning] 55-55: Do not catch blind exception: Exception

(BLE001)


[error] 64-64: f-string without any placeholders

Remove extraneous f prefix

(F541)


[error] 75-75: Starting a process with a partial executable path

(S607)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/phases/base.py` around lines 46 - 82, The current devnet
readiness check reuses any existing genesis-l2.json and skips patching when
genesisStateRoot exists, which lets stale pre-MPT files persist; update the
logic around genesis_path and the _patch_genesis_with_zktrie_root() flow so the
genesis file is only treated as valid if its contents match the expected trie
mode: read genesis_path.read_text() and verify the presence or absence of the
specific marker/field used by the new MPT flow (e.g., "genesisStateRoot" or the
exact JSON structure produced by _patch_genesis_with_zktrie_root()), and if it
does not match the expected mode then treat genesis as missing (append "L2
genesis missing" to missing or delete/overwrite the file) so
subprocess.run(["make","devnet-up"]) and _patch_genesis_with_zktrie_root() will
be executed to produce a fresh, mode-correct genesis.json.

Comment on lines +157 to +179
result = subprocess.run(
    ["docker", "run", "--rm",
     "-v", f"{genesis_path}:/genesis.json",
     "-p", "18545:8545",
     "morph-geth:latest",
     "sh", "-c",
     "geth init --datadir=/tmp/db /genesis.json 2>/dev/null && "
     "geth --datadir=/tmp/db --http --http.addr=0.0.0.0 --http.port=8545 "
     "--http.api=eth --networkid=53077 --nodiscover &"
     "sleep 5 && "
     "curl -s -X POST -H 'Content-Type: application/json' "
     "--data '{\"jsonrpc\":\"2.0\",\"method\":\"eth_getBlockByNumber\",\"params\":[\"0x0\",false],\"id\":1}' "
     "http://localhost:8545"],
    capture_output=True, text=True, timeout=30,
)

try:
    block = json.loads(result.stdout)["result"]
    state_root = block["stateRoot"]
    genesis_hash = block["hash"]
except (json.JSONDecodeError, KeyError, TypeError):
    print(f" WARNING: Could not get ZkTrie genesis from geth: {result.stderr[:200]}")
    return

⚠️ Potential issue | 🟠 Major

Poll the temporary geth RPC instead of sleeping once.

This patch path depends on a single sleep 5 && curl. On slower machines, geth init/startup can easily take longer than five seconds, which makes this branch log a warning and continue without patching the genesis. Since the mixed geth/reth phases depend on that patched file, this should retry until block 0 is readable or fail loudly.

🧰 Tools
🪛 Ruff (0.15.7)

[error] 157-157: subprocess call: check for execution of untrusted input

(S603)


[error] 158-169: Starting a process with a partial executable path

(S607)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/phases/base.py` around lines 157 - 179, The current
subprocess.run invocation uses a single "sleep 5 && curl" which can fail on slow
machines; change the shell command passed to subprocess.run (the docker run
invocation that references genesis_path and sets result) to poll the temporary
geth RPC instead of a one-off curl: start geth in the container as before, then
run a small retry loop (e.g., try curl for eth_getBlockByNumber in a loop with a
short sleep between attempts, breaking on a successful JSON response) and fail
loudly after a configurable timeout/number of attempts; keep referring to the
same subprocess.run call and the variables result, genesis_path, and the
subsequent JSON parsing/exception handling so that if polling exhausts attempts
you still print the geth stderr and return/raise as currently done.
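The retry loop described above can be sketched with only the standard library. The URL, attempt count, and delay are illustrative defaults, not values from the repo:

```python
import json
import time
import urllib.request


def wait_for_genesis_block(rpc_url: str, attempts: int = 30, delay: float = 1.0) -> dict:
    """Poll eth_getBlockByNumber(0x0) until the RPC answers, instead of one sleep."""
    payload = json.dumps({
        "jsonrpc": "2.0", "method": "eth_getBlockByNumber",
        "params": ["0x0", False], "id": 1,
    }).encode()
    last_err = None
    for _ in range(attempts):
        try:
            req = urllib.request.Request(
                rpc_url, data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, timeout=5) as resp:
                block = json.load(resp).get("result")
            if block is not None:
                return block  # genesis block is readable; safe to proceed
        except OSError as exc:  # connection refused, timeout, HTTP error
            last_err = exc
        time.sleep(delay)
    # Fail loudly instead of silently skipping the genesis patch.
    raise RuntimeError(f"geth RPC never became ready: {last_err}")
```

Calling this in place of the single `sleep 5 && curl` keeps slow machines from silently skipping the patch step.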

Comment on lines +100 to +115
l1_msg_ended = False
l1_count = 0
for tx in txs:
    tx_type = tx.get("type", 0)
    if isinstance(tx_type, str):
        tx_type = int(tx_type, 16)
    if tx_type == 0x7E:
        if l1_msg_ended:
            results["checks"]["deposit_at_head"] = False
            break
        l1_count += 1
    else:
        l1_msg_ended = True

results["checks"]["deposit_at_head"] = not l1_msg_ended or l1_count > 0
results["checks"]["l1_message_count"] = l1_count

⚠️ Potential issue | 🟠 Major

Line 114 undoes the head-ordering failure.

If a block looks like [0x7E, normal, 0x7E], Lines 107-109 correctly flag the later deposit as invalid, but Line 114 then overwrites that failure back to True because l1_count > 0.

💡 Suggested fix
-    l1_msg_ended = False
+    l1_msg_ended = False
+    deposit_at_head = True
@@
         if tx_type == 0x7E:
             if l1_msg_ended:
-                results["checks"]["deposit_at_head"] = False
+                deposit_at_head = False
                 break
             l1_count += 1
@@
-    results["checks"]["deposit_at_head"] = not l1_msg_ended or l1_count > 0
+    results["checks"]["deposit_at_head"] = deposit_at_head and l1_count > 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/l1_message_checker.py` around lines 100 -
115, The final assignment currently can overwrite an earlier failure; change the
logic so deposit_at_head isn't reset to True after you've already flagged a bad
ordering. Update the assignment to respect prior failures and require deposits
to be contiguous: set results["checks"]["deposit_at_head"] =
results["checks"].get("deposit_at_head", True) and (not l1_msg_ended and
l1_count > 0) (or equivalently compute deposit_ok = (not l1_msg_ended and
l1_count > 0) and only assign it if the key isn't already False) so that
l1_msg_ended and l1_count are evaluated together and previous False values
aren't overwritten.
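The corrected ordering check from the suggested diff can be sketched as a standalone function. The tuple return is illustrative; the real code writes into `results["checks"]`:

```python
def check_deposits_at_head(txs):
    """L1 messages (type 0x7E) must form a contiguous prefix of the block.

    A deposit seen after a normal tx sets a failure that is never overwritten
    by the final assignment.
    """
    l1_msg_ended = False
    deposit_at_head = True
    l1_count = 0
    for tx in txs:
        tx_type = tx.get("type", 0)
        if isinstance(tx_type, str):
            tx_type = int(tx_type, 16)
        if tx_type == 0x7E:
            if l1_msg_ended:
                deposit_at_head = False  # deposit after a normal tx: bad ordering
                break
            l1_count += 1
        else:
            l1_msg_ended = True
    return deposit_at_head and l1_count > 0, l1_count
```

With this shape, a block like `[0x7E, normal, 0x7E]` stays failed even though `l1_count > 0`.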

Comment on lines +27 to +35
try:
    logs = l1_w3.eth.get_logs({
        "fromBlock": 0,
        "toBlock": "latest",
        "address": Web3.to_checksum_address(ROLLUP_ADDRESS),
    })
    batches_found = len(logs)
except Exception as e:
    return {"pass": False, "error": f"Failed to query L1 Rollup logs: {e}", "batches_checked": 0}

⚠️ Potential issue | 🟠 Major

Filter the L1 log query to CommitBatch.

This currently counts every log emitted by the rollup contract. Any unrelated event makes batches_found > 0, so the "No CommitBatch events found" path stops being accurate.

Suggested fix
     try:
+        commit_batch_topic = Web3.keccak(text="CommitBatch(uint256,bytes32)").hex()
         logs = l1_w3.eth.get_logs({
             "fromBlock": 0,
             "toBlock": "latest",
             "address": Web3.to_checksum_address(ROLLUP_ADDRESS),
+            "topics": [commit_batch_topic],
         })
         batches_found = len(logs)
🧰 Tools
🪛 Ruff (0.15.7)

[warning] 34-34: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/rollup_checker.py` around lines 27 - 35, The
L1 log query in l1_w3.eth.get_logs is currently unfiltered and counts every
event from ROLLUP_ADDRESS; restrict the query to only the CommitBatch event by
adding a topics filter for the CommitBatch event signature (keccak256 of its ABI
signature) when calling l1_w3.eth.get_logs, keeping the same address
(ROLLUP_ADDRESS) and block range, so batches_found correctly reflects only
CommitBatch logs; update the existing exception handling and length-based logic
around batches_found as needed.
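Client-side, the effect of that topics filter can be sketched as follows. `COMMIT_BATCH_TOPIC` here is a placeholder value; the real topic comes from `Web3.keccak` of the actual event signature:

```python
# Placeholder topic hash for illustration, NOT the real CommitBatch topic.
COMMIT_BATCH_TOPIC = "0x" + "ab" * 32


def count_commit_batches(logs, topic=COMMIT_BATCH_TOPIC):
    """Count only logs whose first topic matches the CommitBatch signature.

    This mirrors what a `topics: [commit_batch_topic]` filter does server-side,
    so unrelated contract events no longer inflate batches_found.
    """
    return sum(
        1 for log in logs
        if log.get("topics") and log["topics"][0] == topic
    )
```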

Comment on lines +50 to +66
def run_stage(self, concurrency: int, duration: int, contract_addr: str = None):
    """Run a load stage with given concurrency for given duration."""
    self._stop = False
    threads = []
    for i in range(concurrency):
        t = threading.Thread(
            target=self._worker,
            args=(i, contract_addr),
            daemon=True,
        )
        threads.append(t)
        t.start()

    time.sleep(duration)
    self._stop = True
    for t in threads:
        t.join(timeout=10)

⚠️ Potential issue | 🟠 Major

Old workers can leak into the next stage.

Line 66 only waits 10 seconds, but each worker's RPC timeout is 30 seconds. A thread blocked in get_block() or send_raw_transaction() can miss the join, and Line 52 then flips _stop back to False for the next stage, letting that leaked worker resume and overlap with the new stage.

🧰 Tools
🪛 Ruff (0.15.7)

[warning] 50-50: PEP 484 prohibits implicit Optional

Convert to T | None

(RUF013)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/stress_runner.py` around lines 50 - 66, The
stage teardown can let old workers leak into the next stage because run_stage
sets self._stop=True then only calls t.join(timeout=10) while RPC calls in
_worker (get_block/send_raw_transaction) use a 30s timeout; increase the join
waiting to ensure workers fully exit before clearing _stop for the next stage —
e.g., replace the 10s timeout with a value >= the RPC timeout plus a small
buffer (or call t.join() with no timeout) so all threads in threads are
guaranteed to finish after self._stop is set before the next invocation of
run_stage; update references in run_stage and ensure _worker still checks
self._stop between RPC calls.
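One way to close this race is a fresh per-stage `threading.Event` plus a join budget derived from the RPC timeout. This is a minimal sketch, with `RPC_TIMEOUT` assumed to mirror the client's 30-second timeout and `work` standing in for the real RPC send loop:

```python
import threading
import time


class StageRunner:
    """Per-stage stop event; joins wait longer than the worst-case RPC call."""

    RPC_TIMEOUT = 30.0  # assumed to match the workers' RPC client timeout

    def run_stage(self, concurrency: int, duration: float, work) -> None:
        stop = threading.Event()  # fresh per stage: no shared flag to flip back
        threads = [
            threading.Thread(target=self._worker, args=(stop, work), daemon=True)
            for _ in range(concurrency)
        ]
        for t in threads:
            t.start()
        time.sleep(duration)
        stop.set()
        # Budget covers one full blocked RPC call plus a small buffer.
        deadline = time.monotonic() + self.RPC_TIMEOUT + 5.0
        for t in threads:
            t.join(timeout=max(0.0, deadline - time.monotonic()))
            if t.is_alive():
                raise RuntimeError("worker failed to stop within the RPC budget")

    @staticmethod
    def _worker(stop: threading.Event, work) -> None:
        while not stop.is_set():
            work()  # in the real runner this is the transaction send loop
```

Because each stage gets its own event, a thread that outlives one stage can never be reactivated by the next stage clearing a shared flag.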

Comment on lines +74 to +103
try:
    roll = random.random()
    if roll < config.STRESS_TX_MIX["transfer"]:
        target = "0x" + random.randbytes(20).hex()
        tx = {
            "from": account.address,
            "to": Web3.to_checksum_address(target),
            "value": 1000,
            "gas": 21000,
            "maxFeePerGas": (w3.eth.get_block("latest").get("baseFeePerGas", 1000000)) * 2,
            "maxPriorityFeePerGas": w3.eth.get_block("latest").get("baseFeePerGas", 1000000),
            "type": 2,
            "nonce": w3.eth.get_transaction_count(account.address),
            "chainId": w3.eth.chain_id,
        }
        signed = account.sign_transaction(tx)
        tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)
    elif roll < config.STRESS_TX_MIX["transfer"] + config.STRESS_TX_MIX["call"] and contract_addr:
        compiled = json.loads((COMPILED_DIR / "SimpleStorage.json").read_text())
        contract = w3.eth.contract(
            address=Web3.to_checksum_address(contract_addr),
            abi=compiled["abi"],
        )
        call_tx = contract.functions.set(random.randint(1, 10000)).build_transaction({
            "from": account.address,
            "nonce": w3.eth.get_transaction_count(account.address),
            "gas": 100000,
            "maxFeePerGas": (w3.eth.get_block("latest").get("baseFeePerGas", 1000000)) * 2,
            "maxPriorityFeePerGas": w3.eth.get_block("latest").get("baseFeePerGas", 1000000),
            "type": 2,
            "chainId": w3.eth.chain_id,
        })
        signed = account.sign_transaction(call_tx)
        tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In web3.py 6.x / Ethereum JSON-RPC, what block identifier does Web3.eth.get_transaction_count(address) use by default, and does it include pending transactions?

💡 Result:

In web3.py 6.x, Web3.eth.get_transaction_count(address) uses 'latest' as the default block identifier, which is the value of web3.eth.default_block. By default, it does not include pending transactions; use block_identifier='pending' explicitly to include them.


Track nonces locally per worker/account or fetch pending nonce once.

Lines 84 and 97 call get_transaction_count(account.address) on every send without specifying a block identifier. By default, this queries the 'latest' block, which does not include pending transactions. When the worker sends multiple transactions in rapid succession before prior ones confirm, each call returns the same nonce, resulting in duplicate-nonce errors or replacement traffic instead of real throughput. Seed each sender from the pending nonce once at startup and increment it locally after each broadcast, or explicitly pass block_identifier='pending' to the nonce lookups.

🧰 Tools
🪛 Ruff (0.15.7)

[error] 75-75: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


[error] 77-77: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)


[error] 95-95: Standard pseudo-random generators are not suitable for cryptographic purposes

(S311)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/stress_runner.py` around lines 74 - 103, The
worker currently calls w3.eth.get_transaction_count(account.address) each time
it builds tx and call_tx which returns 'latest' by default and causes duplicate
nonces when sending many rapid transactions; fix by fetching the pending nonce
once per sender (using w3.eth.get_transaction_count(account.address, "pending"))
at worker startup or before the send loop, store it in a local nonce variable
(e.g., worker_nonce) and increment worker_nonce after each send instead of
re-calling get_transaction_count, or if you prefer a simpler change, explicitly
pass block_identifier='pending' (or "pending") to the existing
w3.eth.get_transaction_count calls for tx and call_tx so the pending nonce is
used.
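The local nonce tracking suggested above can be sketched as a small thread-safe counter seeded once from the pending nonce. Here `fetch_pending_nonce` stands in for `w3.eth.get_transaction_count(addr, "pending")`:

```python
import threading


class NonceTracker:
    """Seed once from the pending nonce, then hand out nonces locally.

    Avoids re-querying 'latest' for every send, which returns the same
    nonce while earlier transactions are still pending.
    """

    def __init__(self, fetch_pending_nonce, address):
        self._lock = threading.Lock()
        self._next = fetch_pending_nonce(address)  # one RPC at startup

    def next_nonce(self) -> int:
        with self._lock:
            nonce = self._next
            self._next += 1
            return nonce
```

Each worker (or each shared sender account) would own one tracker and call `next_nonce()` per broadcast instead of `get_transaction_count`.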

Comment on lines +81 to +83
min_height = min(
    v["height"] for v in validator_results.values() if v["height"] > 0
) if validator_results else 0

⚠️ Potential issue | 🔴 Critical

Handle the all-unreachable validator case before calling min().

If every validator times out or errors, the filtered iterator is empty and min() raises ValueError. That crashes the phase instead of returning the intended "validators did not sync" result.

Suggested fix
-    min_height = min(
-        v["height"] for v in validator_results.values() if v["height"] > 0
-    ) if validator_results else 0
+    synced_heights = [v["height"] for v in validator_results.values() if v["height"] > 0]
+    min_height = min(synced_heights) if synced_heights else 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/reth-cross-test/verification/validator_checker.py` around lines 81 - 83,
The current computation of min_height uses min(...) over a generator filtered by
v["height"] > 0 and will raise ValueError if every validator timed out/errored
(i.e., the filtered set is empty); update the logic around the min_height
calculation (the code that reads validator_results and computes min_height) to
first collect or test the filtered heights (e.g., heights = [v["height"] for v
in validator_results.values() if v["height"] > 0]) and if that list is empty,
short-circuit and return the intended "validators did not sync" result (or set a
safe default) instead of calling min(), otherwise compute min_height =
min(heights).
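The guarded minimum from the suggested fix, as a standalone sketch:

```python
def min_synced_height(validator_results: dict) -> int:
    """Minimum height over validators that actually reported progress.

    Returns 0 (the "did not sync" sentinel) when every validator timed out
    or errored, instead of letting min() raise ValueError on an empty set.
    """
    synced = [v["height"] for v in validator_results.values() if v["height"] > 0]
    return min(synced) if synced else 0
```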


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ops/docker/Dockerfile.l1-beacon`:
- Line 1: The Dockerfile stages currently inherit root from base images; for
each stage that uses gcr.io/prysmaticlabs/prysm/beacon-chain:v4.2.1 and
ethereum/client-go:v1.14.11 (e.g., the chain-genesis stage), add an explicit
non-root USER directive: create a non-root group/user (RUN groupadd -r <grp> &&
useradd -r -g <grp> -u 1000 <user>) and chown any runtime directories, then
switch to USER <user> after those RUN steps; ensure the USER directive appears
in each stage (referencing the chain-genesis stage and any other FROM stages
that use those images) so no stage runs as root.

In `@ops/docker/Dockerfile.oracle`:
- Around line 1-4: Add a non-root user in the final runtime stage (the `app`
stage) and switch to it before CMD: create a group/user (e.g., group and user
named app), set ownership of the application directory (chown -R app:app /app or
similar), set WORKDIR to that directory, and add USER app before the CMD
instruction so the container does not run as root; ensure any files copied from
the `builder` stage are owned/accessible by that user.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8425916f-81e1-4327-ab30-54ae73056f65

📥 Commits

Reviewing files that changed from the base of the PR and between 8114ea3 and 63c950f.

📒 Files selected for processing (8)
  • .dockerignore
  • ops/docker/Dockerfile.l1-beacon
  • ops/docker/Dockerfile.oracle
  • ops/docker/docker-compose-4nodes.yml
  • ops/docker/go-rust-builder.Dockerfile
  • ops/docker/intermediate/go-rust-builder.Dockerfile
  • ops/docker/intermediate/go-ubuntu-builder.Dockerfile
  • ops/l2-genesis/morph-chain-ops/genesis/genesis.go
✅ Files skipped from review due to trivial changes (4)
  • ops/docker/intermediate/go-rust-builder.Dockerfile
  • ops/docker/intermediate/go-ubuntu-builder.Dockerfile
  • .dockerignore
  • ops/docker/go-rust-builder.Dockerfile
🚧 Files skipped from review as they are similar to previous changes (2)
  • ops/l2-genesis/morph-chain-ops/genesis/genesis.go
  • ops/docker/docker-compose-4nodes.yml

@@ -1,4 +1,4 @@
-FROM --platform=linux/amd64 ubuntu:20.04 as chain-genesis
+FROM --platform=linux/amd64 ubuntu:22.04 as chain-genesis

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🌐 Web query:

For these images, what is the default USER in image metadata?

  • gcr.io/prysmaticlabs/prysm/beacon-chain:v4.2.1
  • ethereum/client-go:v1.14.11

💡 Result:

The default USER in the Docker image metadata for both images is "root". For gcr.io/prysmaticlabs/prysm/beacon-chain:v4.2.1, the image configuration explicitly states "User": "root" (from metadata of similar v4.1.1 image, as v4.2.1 uses the same Bazel-built distroless-like structure without a non-root USER instruction). For ethereum/client-go:v1.14.11, the Dockerfile is based on alpine:latest with no USER instruction, so it defaults to root (standard for Alpine-based Go static binaries).


Add explicit non-root USER directive to comply with DS-0002

Both inherited base images (gcr.io/prysmaticlabs/prysm/beacon-chain:v4.2.1 and ethereum/client-go:v1.14.11) default to running as root. Add an explicit non-root USER directive in each stage to satisfy DS-0002 security requirements.

🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002


(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/docker/Dockerfile.l1-beacon` at line 1, The Dockerfile stages currently
inherit root from base images; for each stage that uses
gcr.io/prysmaticlabs/prysm/beacon-chain:v4.2.1 and ethereum/client-go:v1.14.11
(e.g., the chain-genesis stage), add an explicit non-root USER directive: create
a non-root group/user (RUN groupadd -r <grp> && useradd -r -g <grp> -u 1000
<user>) and chown any runtime directories, then switch to USER <user> after
those RUN steps; ensure the USER directive appears in each stage (referencing
the chain-genesis stage and any other FROM stages that use those images) so no
stage runs as root.

Comment on lines +1 to +4
FROM rust:1.88-slim as builder

RUN apt-get -qq update && apt-get -qq install -y --no-install-recommends \
    pkg-config libssl-dev && rm -rf /var/lib/apt/lists/*

⚠️ Potential issue | 🟠 Major

Run the runtime image as non-root

Line 11-14 keeps the container default user as root, which weakens container isolation. Please add a non-root user in the app stage and switch to it before CMD.

🔧 Proposed fix
 FROM ubuntu:22.04 as app
+RUN useradd --system --create-home --uid 10001 morph && \
+    mkdir -p /app && chown -R morph:morph /app
 COPY --from=builder /gas-oracle/app/target/release/app /
+USER morph

 CMD ["./app"]
🧰 Tools
🪛 Trivy (0.69.3)

[error] 1-1: Image user should not be 'root'

Specify at least 1 USER command in Dockerfile with non-root user as argument

Rule: DS-0002


(IaC/Dockerfile)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ops/docker/Dockerfile.oracle` around lines 1 - 4, Add a non-root user in the
final runtime stage (the `app` stage) and switch to it before CMD: create a
group/user (e.g., group and user named app), set ownership of the application
directory (chown -R app:app /app or similar), set WORKDIR to that directory, and
add USER app before the CMD instruction so the container does not run as root;
ensure any files copied from the `builder` stage are owned/accessible by that
user.

devnet-clean-build must use `docker compose down --volumes` to remove
named Docker volumes. Without this, geth containers reuse stale
chaindata from a previous genesis (e.g., ZkTrie data after switching
to MPT mode), causing state root mismatches between genesis-batch-header
and the actual L2 genesis.

Changes:
- devnet-clean-build: use `compose down --volumes --remove-orphans`
  instead of `devnet-down` (which doesn't delete volumes)
- Fix volume filter from `docker-*` to `docker_` (matching actual
  volume names like `docker_morph_data_0`)
- Add error suppression for volume cleanup edge cases

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
Makefile (1)

154-154: Scope volume deletion more narrowly to avoid accidental deletion of unrelated volumes.

The current filter --filter name=docker_ matches any volume with that prefix, not just those from the devnet compose setup. Since the compose file (run from ops/docker/) has no explicit project name, Docker defaults to the directory name "docker", and all managed volumes receive the label com.docker.compose.project=docker. Use this label instead of the prefix pattern, and add the -r flag to xargs to prevent spurious errors on empty results:

Suggested patch
-docker volume ls --filter name=docker_ --format='{{.Name}}' | xargs docker volume rm 2>/dev/null || true
+docker volume ls --filter "label=com.docker.compose.project=docker" --format='{{.Name}}' | xargs -r docker volume rm 2>/dev/null || true
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` at line 154, The volume cleanup line that currently uses "docker
volume ls --filter name=docker_ ... | xargs docker volume rm" is too broad;
change the filter to target the compose project label (use --filter
label=com.docker.compose.project=docker) and invoke xargs with the -r flag to
avoid errors when there are no volumes to remove (i.e., replace the existing
pipeline referencing "docker volume ls ... | xargs docker volume rm" with the
label-based filter and xargs -r).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@Makefile`:
- Line 154: The volume cleanup line that currently uses "docker volume ls
--filter name=docker_ ... | xargs docker volume rm" is too broad; change the
filter to target the compose project label (use --filter
label=com.docker.compose.project=docker) and invoke xargs with the -r flag to
avoid errors when there are no volumes to remove (i.e., replace the existing
pipeline referencing "docker volume ls ... | xargs docker volume rm" with the
label-based filter and xargs -r).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6fd61989-737f-4209-955f-92d347e6dc0e

📥 Commits

Reviewing files that changed from the base of the PR and between 63c950f and efdd27b.

📒 Files selected for processing (1)
  • Makefile
