Update docs #46

Closed · wants to merge 7 commits
4 changes: 2 additions & 2 deletions docs/state-committees/architecture/architecture-overview.mdx
@@ -52,7 +52,7 @@ State Committee Nodes are the attesters within the Lagrange State Committees inf

Nodes wishing to join the cross-chain State Committees must restake via EigenLayer. A node must provide at least 10 ETH worth of rehypothecated collateral to join the network and must indicate which chain or roll-up they wish to provide attestations for.

-Every node in the network must run a containerized validator or watcher of a relevant chain or roll-up. If If an operator manages multiple restaked Ethereum nodes, they can establish a secure RPC connection to a single instance of the chain’s validator, ideally running locally on their system, for all nodes. Once in the network, the attestation node will sign the batch of rollup blocks using their BLS key (BN254 curve).
+Every node in the network must run a containerized validator or watcher of a relevant chain or roll-up. If an operator manages multiple restaked Ethereum nodes, they can establish a secure RPC connection to a single instance of the chain’s validator, ideally running locally on their system, for all nodes. Once in the network, the attestation node will sign the batch of rollup blocks using their BLS key (BN254 curve).
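
For intuition about the signing step, here is a minimal, deliberately insecure sketch of BLS sign/verify over BN254 (py_ecc's `bn128` module, which implements the alt_bn128 curve). The key derivation, the message encoding, and especially the toy hash-to-curve are illustrative assumptions, not the scheme the attestation nodes actually run:

```python
import hashlib
from py_ecc.bn128 import G1, G2, multiply, pairing, curve_order

def toy_hash_to_g1(message: bytes):
    # NOT a real hash-to-curve: multiplying G1 by a known scalar leaks the
    # discrete log. It exists only to make the pairing check below runnable.
    scalar = int.from_bytes(hashlib.sha256(message).digest(), "big") % curve_order
    return multiply(G1, scalar)

def keygen(seed: bytes):
    sk = int.from_bytes(hashlib.sha256(seed).digest(), "big") % curve_order
    return sk, multiply(G2, sk)  # public key lives in G2

def sign(sk: int, message: bytes):
    return multiply(toy_hash_to_g1(message), sk)  # signature lives in G1

def verify(pk, message: bytes, sig) -> bool:
    # e(H(m), pk) == e(sk*H(m), G2) holds iff the signer knew sk.
    return pairing(pk, toy_hash_to_g1(message)) == pairing(G2, sig)

sk, pk = keygen(b"operator-seed")
batch = b"rollup-blocks-batch"  # stand-in for the attested batch of blocks
assert verify(pk, batch, sign(sk, batch))
```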

## Responsibilities of a Cross-Chain State Attester

@@ -99,4 +99,4 @@ The signature executed on the tuple creates slashable conditions for fraud proof

:::info
The next subsections go deeper into the technicals of the Sequencer, Database, gRPC Server, Consensus and Attestation nodes.
-:::
+:::
2 changes: 1 addition & 1 deletion docs/zk-coprocessor/avs-operators/stake-and-proofs.md
@@ -8,6 +8,6 @@ This page describes the relation between the stake of an operator and the proof

In Lagrange Network, the stake of an operator is used to guarantee liveness: that when an operator receives a task, it delivers a valid proof within the allotted time. Given the user query relies on these proofs, it is paramount that the Lagrange Network is able to generate proofs as fast as possible.

-A worker that accepts a tasks is "binding" some of its stake to the guarantee of answering back the proof in the allotted time. We can say some of its stake becomes now "active". An operator can take multiple tasks at once, as once as the amount of "active stake" does not grow more than its total stake delegated on the Lagrange Network !
+A worker that accepts a tasks is "binding" some of its stake to the guarantee of answering back the proof in the allotted time. We can say some of its stake becomes now "active". An operator can take multiple tasks at once, as once as the amount of "active stake" does not grow more than its total stake delegated on the Lagrange Network!

It is up to the operator to manage its fleet of workers and decide what type of workers one wants to run. We will provide more guided documentation on that matter in the following weeks. However, given the early phases of the project, we recommend to be running a simple worker to start with and scale up with time.
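
To make the stake bookkeeping concrete, here is a minimal sketch of the rule described above; the `Operator` class and its method names are hypothetical, inferred from the paragraph rather than taken from the Lagrange codebase:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    total_stake: int       # stake delegated to this operator on the network
    active_stake: int = 0  # stake currently bound to accepted, in-flight tasks

    def try_accept(self, task_stake: int) -> bool:
        # Multiple tasks may run at once, as long as the bound ("active")
        # stake never exceeds the operator's total delegated stake.
        if self.active_stake + task_stake > self.total_stake:
            return False
        self.active_stake += task_stake
        return True

    def on_proof_delivered(self, task_stake: int) -> None:
        # Stake bound to a task is released once its proof is delivered
        # within the allotted time.
        self.active_stake -= task_stake

op = Operator(total_stake=100)
assert op.try_accept(60) and not op.try_accept(60)  # 120 > 100: rejected
op.on_proof_delivered(60)
assert op.try_accept(60)
```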
@@ -6,7 +6,7 @@ description: A detailed overview of Block database

import ThemedImage from "@site/src/components/ThemedImage";

-Remember the ZK Coprocessor is dealing with historical queries, so it needs to keep around each individual state and storage trees for each blocks ! In essence, the ZK Coprocessor is doing a snapshot of the database at each block. But to prove the correct transformation, it needs to prove that the latest state inserted really belongs to the corresponding block in the target blockchain, and the new block is consecutive to the previous one.
+Remember the ZK Coprocessor is dealing with historical queries, so it needs to keep around each individual state and storage trees for each blocks! In essence, the ZK Coprocessor is doing a snapshot of the database at each block. But to prove the correct transformation, it needs to prove that the latest state inserted really belongs to the corresponding block in the target blockchain, and the new block is consecutive to the previous one.

The solution is to re-create a structure of consecutive blocks in another proof-friendly data structure, that is updated constantly for each new block produced!
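
A toy sketch of that consecutiveness check, assuming a simplified header layout and hash; the real structure is a different, proof-friendly design:

```python
import hashlib

def toy_header_hash(number: int, parent: bytes, state_root: bytes) -> bytes:
    # Placeholder for the chain's real block-header hash.
    return hashlib.sha256(number.to_bytes(8, "big") + parent + state_root).digest()

class BlockDatabase:
    """Append-only snapshots: one state root per block, and blocks must chain."""

    def __init__(self) -> None:
        self.blocks: list[tuple[int, bytes, bytes]] = []  # (number, hash, state_root)

    def append(self, number: int, parent_hash: bytes, state_root: bytes) -> None:
        if self.blocks:
            last_number, last_hash, _ = self.blocks[-1]
            # The inserted block must be consecutive to the previous one and
            # commit to it via its parent hash -- what the proof enforces.
            if number != last_number + 1 or parent_hash != last_hash:
                raise ValueError("block is not consecutive to the database head")
        self.blocks.append(
            (number, toy_header_hash(number, parent_hash, state_root), state_root)
        )

db = BlockDatabase()
db.append(1, b"\x00" * 32, b"root-1".ljust(32, b"\x00"))
db.append(2, db.blocks[-1][1], b"root-2".ljust(32, b"\x00"))
```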

@@ -26,8 +26,8 @@ block_. The root of the storage is stored in the contract information for that b
## Historical Values as Table

The zkCoprocessor exposes the data of the contract in a very well known form: an SQL table.
-More specifically, the zkCoprocessor keeps historical valuess for each block. **In a nutshell, it is a
-time serie database that keeps appending the values of all variables tracked for each block.**
+More specifically, the zkCoprocessor keeps historical values for each block. **In a nutshell, it is a
+time series database that keeps appending the values of all variables tracked for each block.**

Let's take the previous example with the two variables `price` and `balances`. The zkCoprocessor
exposes these two variables as two separated tables:
@@ -71,7 +71,7 @@ variables inside a single table, it would look like the following:
| b2 | k2 | 20 | p2 (same) |

There would be useless and sometimes large repetition of elements. This is because contracts have
-variables have have only one dimension (uint256) or two dimensions (arrays) or more !
+variables have only one dimension (uint256) or two dimensions (arrays) or more!
:::

The next subsections are going deeper in the technicals of the storage, state and blocks transformation.
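
To make the one-table-per-variable layout concrete, here is a small sqlite3 sketch using the `price` and `balances` variables from the example above; the schema is an illustrative assumption, not the zkCoprocessor's actual one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One table per tracked variable: each is a block-indexed time series.
conn.execute("CREATE TABLE price (block INTEGER, value INTEGER)")
conn.execute("CREATE TABLE balances (block INTEGER, key TEXT, value INTEGER)")
# Every new block appends the current values of all tracked variables.
conn.executemany("INSERT INTO price VALUES (?, ?)", [(1, 100), (2, 105)])
conn.executemany(
    "INSERT INTO balances VALUES (?, ?, ?)",
    [(1, "k1", 10), (1, "k2", 20), (2, "k1", 10), (2, "k2", 20)],
)
# A historical query over a block range, as a user of the table would run it.
avg_price = conn.execute(
    "SELECT AVG(value) FROM price WHERE block BETWEEN 1 AND 2"
).fetchone()[0]
print(avg_price)  # 102.5
```

Splitting per variable avoids the repetition the note above warns about: a one-dimensional variable like `price` never has to carry the key column that a mapping like `balances` needs.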
@@ -6,7 +6,7 @@ description: A detailed overview of Storage database

import ThemedImage from "@site/src/components/ThemedImage";

-The Storage Database is a proveably equivalent data structure that contains the subset of data that the original contract’s storage trie.
+The Storage Database is a provably equivalent data structure that contains the subset of data that the original contract’s storage trie.

**The key difference between the original storage trie and the storage database is the usage of a different encoding function, a different hash function and a different design of the tree. These changes make the new database much more friendly to "ZK queries".**
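
A minimal sketch of that re-encoding idea: rebuild the tracked storage slots into a new binary Merkle tree under a different hash. The hash and tree layout here are placeholders; the real database uses a ZK-friendly hash and its own tree design:

```python
import hashlib

def zk_hash(*parts: bytes) -> bytes:
    # Placeholder for a ZK-friendly hash (e.g. Poseidon); sha256 is used here
    # only so the sketch runs.
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Toy binary Merkle tree over the re-encoded leaves.
    level = [zk_hash(leaf) for leaf in leaves] or [zk_hash(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd node out
        level = [zk_hash(a, b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Re-encode only the tracked subset of (slot, value) pairs into the new tree.
tracked = {b"\x01" * 32: (10).to_bytes(32, "big")}
root = merkle_root([slot + value for slot, value in sorted(tracked.items())])
```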

2 changes: 1 addition & 1 deletion docs/zk-coprocessor/zkMapReduce/primer.md
@@ -4,7 +4,7 @@ title: "Primer on MapReduce"
description: Primer on MapReduce
---

-Lagrange's proving network is built like a distributed MapReduce stack, bringing verifiability of the computation on top.To run computation over a large scale database, there are different architectural choices that one can do. There can be a single server downloading all the data and running the entire computation locally, but that requires to have a very powerful machine and is expensive in terms of bandwidth.Lagrange's network is architectured in the spirit of the famous MapReduce framework, which takes the approach of bringing the computation where the data resides. MapReduce works by running computation on small chunks of the large database, each on a different machine and then have multiple aggregation steps to collide results of different chunks together. Broadly, this computation follows two distinct steps:
+Lagrange's proving network is built like a distributed MapReduce stack, bringing verifiability of the computation on top. To run computation over a large scale database, there are different architectural choices that one can do. There can be a single server downloading all the data and running the entire computation locally, but that requires to have a very powerful machine and is expensive in terms of bandwidth. Lagrange's network is architectured in the spirit of the famous MapReduce framework, which takes the approach of bringing the computation where the data resides. MapReduce works by running computation on small chunks of the large database, each on a different machine and then have multiple aggregation steps to collide results of different chunks together. Broadly, this computation follows two distinct steps:

1. Each machine performs a map operation on its chunk of data, transforming it into a set of key-value pairs.

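For reference, a self-contained Python sketch of the map step and the aggregation that follows it; the row layout and key-value choice are illustrative only:

```python
from collections import defaultdict

def map_chunk(chunk):
    # Map: each worker turns its chunk of rows into (key, value) pairs.
    return [(row["key"], row["value"]) for row in chunk]

def reduce_pairs(pairs):
    # Aggregation: results from different chunks are combined per key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

chunks = [
    [{"key": "k1", "value": 10}],
    [{"key": "k1", "value": 5}, {"key": "k2", "value": 7}],
]
mapped = [map_chunk(c) for c in chunks]  # each map runs on a different machine
result = reduce_pairs(pair for m in mapped for pair in m)
print(result)  # {'k1': 15, 'k2': 7}
```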