
Productionize the EVM prover #160

Open
2 tasks
cmwaters opened this issue Feb 14, 2025 · 1 comment
Labels
post demo Work on after the demo is complete

Comments

@cmwaters
Collaborator

cmwaters commented Feb 14, 2025

For the testnet, there are a few things we need to set up in the EVM prover:

  • Actively produce and store single-block proofs: the EVM prover should request a proof for each block as soon as it is produced. It should persist the proofs (to disk) so that it can fetch them later when generating the aggregated proofs. We may also want to expose an API for serving individual proofs.
  • Actively prove and store batches: based on a batch rate, say every 16 blocks, it should produce both the aggregated STARK proof and the Groth16 proof and persist them so they can be served. (A sketch of this loop follows the list.)
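
As a rough sketch of how that loop could fit together (the `BlockProver` trait, `ProofStore`, and all method names here are assumptions for illustration, not the actual evm-prover API):

```rust
use std::fs;
use std::path::PathBuf;

/// Hypothetical backend that produces the actual proofs (e.g. a zkVM client in practice).
trait BlockProver {
    fn prove_block(&self, height: u64) -> Vec<u8>;        // single-block STARK proof
    fn aggregate(&self, proofs: &[Vec<u8>]) -> Vec<u8>;    // aggregated STARK proof
    fn wrap_groth16(&self, aggregated: &[u8]) -> Vec<u8>;  // on-chain verifiable wrapper
}

/// Simple file-backed store so proofs survive restarts and can be served later.
struct ProofStore {
    dir: PathBuf,
}

impl ProofStore {
    fn put(&self, name: &str, bytes: &[u8]) -> std::io::Result<()> {
        fs::create_dir_all(&self.dir)?;
        fs::write(self.dir.join(name), bytes)
    }

    fn get(&self, name: &str) -> std::io::Result<Vec<u8>> {
        fs::read(self.dir.join(name))
    }
}

fn run<P: BlockProver>(
    prover: &P,
    store: &ProofStore,
    batch_size: u64,
    heights: impl Iterator<Item = u64>,
) {
    for height in heights {
        // Prove each block as soon as it is produced and persist the proof to disk.
        let proof = prover.prove_block(height);
        store
            .put(&format!("block_{height}.proof"), &proof)
            .expect("persist single-block proof");

        // Every `batch_size` blocks, load the stored proofs, aggregate them into one
        // STARK proof, wrap it in Groth16, and persist both so they can be served.
        if height >= batch_size && height % batch_size == 0 {
            let proofs: Vec<Vec<u8>> = (height - batch_size + 1..=height)
                .map(|h| store.get(&format!("block_{h}.proof")).expect("stored proof"))
                .collect();
            let aggregated = prover.aggregate(&proofs);
            let groth16 = prover.wrap_groth16(&aggregated);
            store.put(&format!("batch_{height}.stark"), &aggregated).expect("persist aggregate");
            store.put(&format!("batch_{height}.groth16"), &groth16).expect("persist groth16");
        }
    }
}
```

An API for serving individual and batch proofs would then just read from the same store.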

We still need to discuss whether we want to batch or recurse these proofs (currently we are batching):

  1. Batching means a fixed number of blocks gets compressed into the Groth16 proof. The IBC client is then updated in those increments, i.e. every 16 heights (the smaller the batch, the more frequent and quicker the updates; the larger the batch, the greater the efficiency gains). As a side note, we should work out the optimum ratio.
  2. Recursing just takes the previous recursive proof, appends the next block, and generates a new recursive proof. That means we have a proof from genesis to each height, for every height.

We need to work out what the tradeoffs between the two are; the sketch below illustrates the difference. cc @S1nus
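
Purely to illustrate the difference (the `Strategy` enum and helper below are hypothetical, not prover code), this shows which previously generated proofs feed into the proof produced at a given height under each approach:

```rust
enum Strategy {
    /// Fixed batches: the Groth16 proof at height H covers the last `batch_size` blocks.
    Batched { batch_size: u64 },
    /// Recursive: the proof at height H verifies the proof at H - 1 plus block H,
    /// so every height carries a proof of the chain since genesis.
    Recursive,
}

/// Heights whose proofs are inputs to the aggregate produced at `height`.
fn inputs_for(strategy: &Strategy, height: u64) -> Vec<u64> {
    match strategy {
        Strategy::Batched { batch_size } => {
            (height.saturating_sub(batch_size - 1)..=height).collect()
        }
        // Only the previous recursive proof and the new block are needed.
        Strategy::Recursive => vec![height.saturating_sub(1), height],
    }
}
```

Batching only produces an aggregate every `batch_size` heights, while recursing produces one at every height but never reuses more than the previous proof.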

cmwaters added the post demo label on Feb 14, 2025
@S1nus
Collaborator

S1nus commented Feb 14, 2025

You don't really need to make that tradeoff, because chain progression doesn't need to be blocked by proving.

Since it's possible to aggregate proofs in a tree structure, bridge latency is logarithmic in the number of blocks per client update. The overhead to verify the previous aggregate proof in a new aggregate proof is nearly zero.
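
A minimal sketch of that tree-shaped aggregation, assuming a hypothetical `aggregate_pair` function that produces a proof that both child proofs verify:

```rust
/// Aggregate a layer of proofs pairwise until one root proof remains.
/// The number of layers is ceil(log2(n)), which is where the logarithmic
/// latency comes from.
fn aggregate_tree(
    mut layer: Vec<Vec<u8>>,
    aggregate_pair: impl Fn(&[u8], &[u8]) -> Vec<u8>,
) -> Vec<u8> {
    while layer.len() > 1 {
        let next: Vec<Vec<u8>> = layer
            .chunks(2)
            .map(|pair| match pair {
                [a, b] => aggregate_pair(a, b), // prove "both children verify"
                [a] => a.clone(),               // odd leftover is carried up unchanged
                _ => unreachable!(),
            })
            .collect();
        layer = next;
    }
    layer.into_iter().next().expect("at least one proof")
}
```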

The batch size (number of blocks per ZK bridge update) can be calculated as a function of STARK proving time, and it's quite easy to balance block time against bridge latency, resulting in the best possible UX.
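
As a back-of-the-envelope model of that calculation (parameter names are placeholders to plug measured numbers into, not values from the codebase): with single-block proofs produced in parallel as blocks arrive, the remaining work once the last block of a batch lands is one block proof, log2(batch size) pairwise aggregation layers, and the Groth16 wrap.

```rust
/// Rough bridge-update latency under tree aggregation, in seconds.
fn update_latency_secs(
    batch_size: u64,
    block_proof_secs: f64, // time to prove a single block
    pair_agg_secs: f64,    // time to aggregate two proofs into one
    groth16_secs: f64,     // time to wrap the final STARK proof in Groth16
) -> f64 {
    let tree_depth = (batch_size as f64).log2().ceil();
    block_proof_secs + tree_depth * pair_agg_secs + groth16_secs
}
```

Choosing the batch size then comes down to keeping single-block proving within one block time so the prover keeps up, while keeping this latency within whatever bridge-latency target we set.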

We can discuss more IRL this week and work out the math ^_^
