feat: cache latest block during validate_tx
#75
base: master
Conversation
// RPC calls for the same block.
let block: Block<OpTransaction> = self
    .block_cache
    .get_with("latest".to_string(), async {
Although this is effectively a 1-entry cache (since "latest" is just a Tag), using moka means we don't need to maintain the TTL logic ourselves. This is beneficial for now as we prototype.
TODO: need to verify whether get_with also triggers the TTL timer.
Looking at the source code, insert() calls insert_with_hash underneath; however, get_with, in the insertion case, calls insert_with_hash_and_fn. It's not documented whether the latter also starts the timer.
Edit: I think we can verify this easily with a unit test.
Edit 2: done.
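A minimal sketch of what such a test could look like (the actual test added in this PR may differ), assuming moka's future cache and a tokio test runtime:

use std::time::Duration;
use moka::future::Cache;

// Hypothetical unit test: confirm that an entry inserted through `get_with`
// is evicted by the time-to-live policy, i.e. that the TTL timer also starts
// on the `get_with` insertion path. Assumes moka 0.12 (async `get`).
#[tokio::test]
async fn get_with_starts_ttl_timer() {
    let cache: Cache<String, u64> = Cache::builder()
        .time_to_live(Duration::from_millis(100))
        .build();

    // Insert via `get_with`, the same path used in `validate_tx`.
    let value = cache.get_with("latest".to_string(), async { 42 }).await;
    assert_eq!(value, 42);

    // The entry is present before the TTL elapses.
    assert_eq!(cache.get("latest").await, Some(42));

    // After the TTL elapses, the entry should be gone.
    tokio::time::sleep(Duration::from_millis(150)).await;
    assert_eq!(cache.get("latest").await, None);
}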
builder_tx,
block_cache: Cache::builder()
    // a block is every 2s, so we set 1.5s to be safe
    .time_to_live(Duration::from_millis(1500))
The TTL policy evicts an entry after the specified duration from insert():
https://docs.rs/moka/latest/moka/sync/struct.Cache.html#example-time-based-expirations
The TTL could perhaps be closer to 1.8-1.9s. Accounting for some variable latency, there can be a situation where we fetch block n and then hold on to it for another 1.8-1.9s when, in reality, we're already on n+1. This will always be possible, but we can reduce how often it happens.
A call to eth_getBlockByNumber also takes 40-350ms.
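A minimal sketch of the adjusted builder under that suggestion (the 1.9s value is the reviewer's proposal, not final; hypothetical key/value types, whereas the real cache stores the full Block<OpTransaction>):

use std::time::Duration;
use moka::future::Cache;

// Widen the TTL toward the 2s block time while leaving some headroom
// for the observed 40-350ms eth_getBlockByNumber latency.
fn build_block_cache() -> Cache<String, u64> {
    Cache::builder()
        // a block lands every ~2s; ~1.9s keeps the entry for most of the slot
        .time_to_live(Duration::from_millis(1900))
        .build()
}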
I'm not sure how big of an issue this is, because we only need an L2 block to get the L1BlockInfo. However, since Ethereum L1 blocks are 12s, this would only arise towards the end of an L1 block.
// RPC calls for the same block.
let block: Block<OpTransaction> = self
    .block_cache
    .get_with("latest".to_string(), async {
        warn!(message = "Block cache MISS! Fetching fresh 'Latest' from RPC...");
        let start = Instant::now();
        let res = self
            .provider
            .get_block(BlockId::Number(BlockNumberOrTag::Latest))
            .full()
            .await
            .unwrap_or_else(|_| {
                warn!(message = "failed to fetch latest block");
                Some(Block::empty(Header::default()))
            })
            .unwrap();
        record_histogram(start.elapsed(), "eth_getBlockByNumber".to_string());
        res
    })
    .await;
What's the implication of us getting stale data here? I'm assuming it's OK, as the node itself may be lagging?
The only implication I can think of is if the next L1Block makes the l1_cost for the L2 transaction more expensive, such that it should have invalidated the transaction, but validate_tx will still say it's acceptable based on the stale data.
Overview
Since the block number is incremented every 2 seconds, there are scenarios where there could be numerous calls to validate_tx, and subsequently to eth_getBlockByNumber, within that period of time. To reduce the number of RPC calls we make, this PR caches the latest block using moka.
We set the TTL of the cache entries to 1.5s (which is an out-of-the-box feature from moka).
Tests