fix: relayer fork handling #531
base: master
Conversation
Walkthrough

This pull request introduces a new dependency in the project's Cargo configuration and adds a console log message to signal startup. It significantly refactors the relayer's functionality by updating the trait bound, removing an obsolete method, and adding new methods for block height management and state verification. The update also includes adjustments to the digest update mechanism through an integration with a GraphQL query and incorporates a set of tests covering the new behaviors.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Main
    participant Relayer
    participant Contract
    User->>Main: Start Application
    Main->>Relayer: run()
    Relayer->>Relayer: latest_common_height()
    Relayer->>Relayer: has_relayed(blockhash)
    Relayer->>Contract: get_heaviest_relayed_block_header()
    Contract-->>Relayer: RelayHeader
    Relayer->>Relayer: update_best_digest(new_header)
    Relayer-->>Main: Process complete
```
Actionable comments posted: 1
🧹 Nitpick comments (4)

relayer/src/main.rs (1)

`40-41`: **Use a structured logger instead of raw `println!`.**

While `println!` provides a quick solution for indicating that the relayer is starting, consider using a logging library (e.g., `env_logger`, `tracing`) to unify and manage log levels and formats across your application. This allows you to control verbosity more effectively and integrate with distributed tracing if needed.

relayer/src/relayer.rs (3)
`67-78`: **Transport error handling logic is good, but check for corner cases.**

Your code correctly interprets a `TransportError(RpcError::ErrorResp(_))` as a "not relayed" scenario. However, consider logging additional error details or differentiating between other `TransportError` variants that may need explicit handling.
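One way to differentiate the variants, sketched with local stand-in types (the real code matches alloy's `TransportError`/`RpcError`; the enum below only mimics that shape so the sketch stays self-contained):

```rust
// Standalone sketch: these local types only mimic the shape of alloy's
// error enums for illustration; they are not the real library types.
#[derive(Debug)]
enum RpcError {
    ErrorResp(String), // JSON-RPC error response from the node (e.g. a revert)
    Transport(String), // connection-level failure (timeout, DNS, ...)
}

/// Interpret a contract-call result as "has this block been relayed?".
/// An error *response* means the lookup failed on-chain (not relayed),
/// while a transport-level failure is surfaced so callers can retry.
fn interpret_has_relayed(result: Result<bool, RpcError>) -> Result<bool, String> {
    match result {
        Ok(relayed) => Ok(relayed),
        // Contract returned an error response: treat as "not relayed".
        Err(RpcError::ErrorResp(_)) => Ok(false),
        // Anything else (network down, timeout) should propagate.
        Err(RpcError::Transport(msg)) => Err(format!("transport failure: {}", msg)),
    }
}
```

This keeps the "revert means not relayed" behavior from the PR while making other transport failures loud instead of silently swallowed.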
`107-107`: **Slight duplication in logic with `run_once`.**

`run` also calls `latest_common_height` before iteratively pulling headers. Consider factoring out shared logic if duplication grows.
`336-410`: **Unit tests appear thorough, but consider environment dependencies.**

You have tests verifying your relayer logic that rely on external endpoints and networks (e.g., `anvil`, `sepolia`, `btc-signet`). Mocks or integration test frameworks might be helpful for local reproducibility.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (3)

- relayer/Cargo.toml (1 hunks)
- relayer/src/main.rs (1 hunks)
- relayer/src/relayer.rs (4 hunks)
🔇 Additional comments (9)

relayer/Cargo.toml (1)

`21-21`: **Good addition of the serde_json dependency.**

The `serde_json` crate is widely used and stable for handling JSON serialization and deserialization. Adding it here aligns well with your new code in `relayer.rs` that processes JSON responses from the GraphQL API.

relayer/src/relayer.rs (8)
`4-7`: **Trait bound and import changes look good.**

Specifying `TransactionResponse = Transaction` clarifies the type returned by the network, and importing `Transaction`, `SolCall`, and `RpcError` is consistent with your new usage.
`11-12`: **Imports for `reqwest::Client` and `serde_json::Value` are appropriate.**

These are needed for handling your GraphQL request/response. No concerns here.
`52-52`: **Trait bound update is clear and concise.**

Defining the network trait to expect `TransactionResponse = Transaction` ensures compatibility with the contract calls and transaction lookups.
`59-65`: **Potential `unwrap()` risk for `try_into().unwrap()`.**

In `relayed_height`, you do `self.contract.findHeight(...).call().await?._0.try_into().unwrap()`. If the returned value is not convertible to `u32`, this will panic. Ensure that contract constraints guarantee valid conversions, or consider graceful error handling with `?`.

Would you like a script to confirm that all runtime values fit within `u32`?
`100-100`: **`run_once` method clarity.**

`run_once` calling `latest_common_height()` is a logical approach for simulating a single cycle. This helps test code or short-run scenarios.
`253-253`: **Retaining `find_lca` logic is a suitable approach.**

It's a valid approach for detecting the nearest common ancestor. Just keep in mind large reorg edge cases.

`255-255`: **Fetching the current best from the contract is consistent.**

By retrieving the actual "heaviest" header from the smart contract, you ensure an authoritative starting point for your LCA.
`277-334`: **GraphQL approach is functionally correct, but watch for potential errors and availability.**

The approach of querying Goldsky's endpoint and parsing its JSON works, but be mindful of:

- Network errors or timeouts
- Subgraph indexing delays
- Unexpected schema changes
- Potential data unavailability

Also, consider fallback or retries if the GraphQL service fails.

Let us know if you'd like a script to simulate subgraph downtime or parse edge-case responses for robust testing.
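One possible shape for the suggested retry fallback, as a dependency-free helper that could wrap the GraphQL POST (the closure, attempt count, and delays here are illustrative, not part of the PR):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible operation with exponential backoff. The operation is
/// passed as a closure, so the helper stays independent of any HTTP client;
/// in the relayer it could wrap the Goldsky GraphQL request.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) => {
                attempt += 1;
                if attempt >= max_attempts {
                    // Out of attempts: surface the last error.
                    return Err(err);
                }
                // Wait 1x, 2x, 4x, ... the base delay between attempts.
                sleep(base_delay * 2u32.pow(attempt - 1));
            }
        }
    }
}
```

Since the relayer is async, the production version would use an async sleep (e.g. `tokio::time::sleep`) rather than blocking the thread; the control flow is the same.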
```rust
async fn latest_common_height(&self) -> Result<u32> {
    // Start at the tip of the relayed chain, then move back until we find a block that matches bitcoin chain.
    // We do it like this because calling esplora.get_block_hash for a block in a fork will fail.
    let mut height = self.relayed_height().await?;

    loop {
        let actual_hash = self.esplora_client.get_block_hash(height).await?;
        let is_relayed = self.has_relayed(actual_hash).await?;

        if is_relayed {
            return Ok(height);
        }

        println!("Found fork: {actual_hash} at height {height}");
        height -= 1;
    }
}
```
**Infinite loop risk when decrementing `height`.**

If `height` reaches 0 while iterating backward searching for a common block, the loop may underflow, resulting in a panic or unintended behavior:

```rust
loop {
    // ...
    height -= 1;
}
```

Consider adding a check to stop if `height` is already 0.

Apply this diff to handle zero safely:

```rust
if height == 0 {
    return Err(eyre!("No common height found before reaching 0"));
}
height -= 1;
```
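The same guard can also be expressed with `checked_sub`. The sketch below is standalone: the contract and esplora calls are replaced by a plain predicate closure purely for illustration.

```rust
/// Walk back from `tip` until `is_relayed` reports a match, erroring out
/// instead of underflowing when height 0 is passed without success.
/// The predicate stands in for the async esplora/contract lookups.
fn walk_back_to_common(
    tip: u32,
    is_relayed: impl Fn(u32) -> bool,
) -> Result<u32, String> {
    let mut height = tip;
    loop {
        if is_relayed(height) {
            return Ok(height);
        }
        // checked_sub returns None at 0, turning underflow into an error.
        height = height
            .checked_sub(1)
            .ok_or_else(|| "no common height found before reaching 0".to_string())?;
    }
}
```

`checked_sub` avoids the separate `if height == 0` branch while producing the same behavior: a descriptive error instead of a debug-build panic (or silent wraparound in release builds).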
Added fork handling.

If the latest relayed block is no longer on the main bitcoin chain, there was a problem with relaying new blocks, as fetching the block from esplora would fail. Unfortunately, we need to supply the whole block header (not just the hash) when changing the tip, so I had to fetch the block header from elsewhere. The easiest option would be to just store the latest block header in the contract, but that would require a redeployment, which I didn't really want to do because of Fiamma. Plus, it would increase gas costs, although that's maybe not that big of a deal.

So I ended up starting a Goldsky pipeline and getting the block header from the calldata of the tx that submitted it. This works quite well, but it does feel a bit over-engineered.
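For illustration, extracting an 80-byte Bitcoin header from submitted calldata might look like the std-only sketch below. The assumption that the header starts right after the 4-byte function selector is hypothetical; the real offset depends on the contract function's ABI encoding.

```rust
use std::convert::TryInto;

/// Decode a "0x..."-prefixed hex string into bytes (std-only helper).
fn decode_hex(s: &str) -> Result<Vec<u8>, String> {
    let s = if s.starts_with("0x") { &s[2..] } else { s };
    if s.len() % 2 != 0 {
        return Err("odd-length hex string".to_string());
    }
    (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect()
}

/// Pull an 80-byte Bitcoin block header out of transaction calldata,
/// assuming (for illustration only) it sits right after the 4-byte
/// function selector.
fn header_from_calldata(calldata_hex: &str) -> Result<[u8; 80], String> {
    let bytes = decode_hex(calldata_hex)?;
    let start = 4; // skip the function selector
    bytes
        .get(start..start + 80)
        .ok_or_else(|| "calldata too short for an 80-byte header".to_string())?
        .try_into()
        .map_err(|_| "slice length mismatch".to_string())
}
```

In practice the calldata string would come from the Goldsky GraphQL response, and the offset would be derived from the actual function signature rather than hard-coded.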