Initial version of sp1-prover component for testing #12
Conversation
Thanks for this analysis @akonring. It seems to me that we should go for the easiest route, since we are working on a demo for now, so having both provers work side by side sounds like the way to go. What is the input to our SP1 zkVM prover? The list of EVM transactions, or something more?
Thanks for taking a look. The original plan of the issue was to swap the prover to SP1. This turns out to be complicated because fully replacing the prover (incl. state transition proofs) is just a lot of work. On the other hand, merely setting up a demo with a single SP1 prover (producing only namespace proofs and ignoring state transition proofs) is also a heavy lift, because the existing CDK-validium-node stack relies on the zkEVM-prover running its other components (executor, hashdb). Naturally, this led to the idea of having the two provers work in parallel, which is what this PR explores. Currently the aggregator naively distributes the proof requests between available provers (provers that have initially connected to the aggregator). Going forward, we might be able to modify the interface between aggregator and prover so that the aggregator is aware of the type of prover connected to it; then only requests for namespace proofs would be distributed to our new SP1-prover, while the original state-transition proofs would be handled by another zkEVM prover.
As of now, the new SP1 prover is just an additional prover mimicking the functionality of the existing mock zkEVM prover, so the API is the same, but we might want to change this (see above comment). The L2 batch and other input that is necessary to compute the proofs can be seen in the interface description here:
completely agree
We should start with the Fibonacci program input for now, and once Chengyu finishes his part, we update the input based on his script here:
Closes: EspressoSystems/zkrollup-integration/issues/4
tl;dr: the new sp1-prover is added alongside the existing zkevm-prover stack, keeping the existing aggregator, executor_service and hashdb_service.

`make run-sp1`
This runs the full node test but with the new sp1-prover in addition to the existing zkevm-prover. The new sp1-prover will announce itself to the aggregator and will eventually receive proof requests. The sp1-prover can be configured to emulate the zkevm-prover's aggregator-client-mock.cpp, either using `data/mocked_data.json` as mock data (with which verification will succeed), or using other data (e.g. `data/mocked_sp1_data.json`), which will cause the demo (aggregator) to eventually fail in verification. See more details below in "Running e2e test with additional sp1-prover".

`make run-sp1-only`
This configuration is for exploring the work needed to completely replace the zkevm-prover, including `executor_service` and `hashdb_service`. Upon startup the synchronizer needs to connect to both executor and hashdb to be able to make progress. For now, only a few RPC calls are stubbed for these services and the rest remains unimplemented. See more details below in "Running e2e test with sp1-prover only".
The main purpose of `make run-sp1` (and this PR) is to unblock any potential contracts work further up the stack (Aggregator/EthTxManager/L1). Fully replacing the zkevm-prover (blackbox) seems like a very heavy lift, mainly due to the prover's responsibilities as executor and hashdb. It might be worth investigating whether we can extract the "prover" part of the zkevm-prover (the part communicating with the aggregator) while keeping the rest of the logic (executor/hashdb) intact; however, the intricacies of the prover logic make this seem non-trivial. Finally, since the purpose of this PR is mainly to unblock other work and explore different design strategies for replacing the prover, the code is unpolished and lacks basic error handling and logging infra.

Common Setup
Make sure that cargo is able to cross-compile targeting musl libc (for macOS/M1 see e.g. https://github.com/FiloSottile/homebrew-musl-cross) and add the correct linker to your cargo config, e.g.:
```toml
[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"
```
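If the musl target itself is not installed for the Rust toolchain yet, it can typically be added via rustup (a minimal sketch, assuming rustup is used to manage toolchains):

```sh
# Assumes rustup manages the Rust toolchain
rustup target add x86_64-unknown-linux-musl
```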
Build the binary:
```sh
cdk-validium-node % cd sp1-prover
sp1-prover % cargo build --release --target=x86_64-unknown-linux-musl
```
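Optionally, sanity-check the produced artifact (a sketch only; the binary name and output path are assumptions based on the crate directory and may differ, e.g. in a workspace setup):

```sh
# Binary name and target path are assumptions; adjust to the actual crate layout
file sp1-prover/target/x86_64-unknown-linux-musl/release/sp1-prover
```

For a musl cross-build this should report a Linux ELF executable rather than a native macOS binary.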
Build the docker images (sp1-prover and zkevm-node):
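The exact commands live in the repository's Makefile and Dockerfiles; as a rough sketch only (the image tags and Dockerfile locations below are assumptions, not the repository's actual targets):

```sh
# Sketch only: image names and Dockerfile paths are assumptions
docker build -t sp1-prover -f sp1-prover/Dockerfile sp1-prover
docker build -t zkevm-node .
```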
Running e2e test with additional sp1-prover
(Make sure that the Common Setup has been completed.)

In order to test that the verification indeed fails when we replace the proof, we do the following.

Run the e2e test via `make run-sp1` (see tl;dr above). This will run the e2e test with both the sp1-prover and the zkevm-prover. The sp1-prover will connect to the aggregator and await proof requests. We can check whether the sp1-prover has been asked for a final proof:
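One way to do this is to grep the prover container's logs (a sketch only; the container name and the exact log message are assumptions and depend on the compose setup):

```sh
# Container name and log message are assumptions; adjust to the actual compose setup
docker logs zkevm-sp1-prover 2>&1 | grep -i "final proof"
```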
To check that the zkevm-aggregator has received the proof and that it has been correctly verified, we check:
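Again as a sketch only (the aggregator container name and the log text to look for are assumptions):

```sh
# Aggregator container name and log text are assumptions
docker logs zkevm-aggregator 2>&1 | grep -i "verified"
```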
Making the verification fail:
We can make the verification fail by submitting a different proof to the aggregator.
Change line 11 in `sp1-prover/src/aggregator` so that the mock data is fetched from `data/mocked_sp1_data.json` instead of `data/mocked_data.json`.

Rebuild the binary and docker image (see the Common Setup section).
Run the demo again and check that the verification fails:
Running e2e test with sp1-prover only
Todo: postponing further description of this configuration because (as of now) replacing the full prover (incl. executor and hashdb) seems infeasible within a reasonable timeline.
This PR does not:
- Let nix support the cross compilation through the use of `cross-shell.nix`. For now, the cross compilation can be done outside of the nix environment. This should be fixed in the near future to avoid inconsistencies between environments.