diff --git a/docs/state-committees/architecture/architecture-overview.mdx b/docs/state-committees/architecture/architecture-overview.mdx index 3656018..5b07f09 100644 --- a/docs/state-committees/architecture/architecture-overview.mdx +++ b/docs/state-committees/architecture/architecture-overview.mdx @@ -52,7 +52,7 @@ State Committee Nodes are the attesters within the Lagrange State Committees inf Nodes wishing to join the cross-chain State Committees must restake via EigenLayer. A node must provide at least 10 ETH worth of rehypothecated collateral to join the network and must indicate which chain or roll-up they wish to provide attestations for. -Every node in the network must run a containerized validator or watcher of a relevant chain or roll-up. If If an operator manages multiple restaked Ethereum nodes, they can establish a secure RPC connection to a single instance of the chain’s validator, ideally running locally on their system, for all nodes. Once in the network, the attestation node will sign the batch of rollup blocks using their BLS key (BN254 curve). +Every node in the network must run a containerized validator or watcher of a relevant chain or roll-up. If an operator manages multiple restaked Ethereum nodes, they can establish a secure RPC connection to a single instance of the chain’s validator, ideally running locally on their system, for all nodes. Once in the network, the attestation node will sign the batch of rollup blocks using their BLS key (BN254 curve). ## Responsibilities of a Cross-Chain State Attester @@ -99,4 +99,4 @@ The signature executed on the tuple creates slashable conditions for fraud proof :::info The next subsections go deeper into the technicals of the Sequencer, Database, gRPC Server, Consensus and Attestation nodes. 
-::: \ No newline at end of file +::: diff --git a/docs/zk-coprocessor/avs-operators/stake-and-proofs.md b/docs/zk-coprocessor/avs-operators/stake-and-proofs.md index 5664d7f..74d437b 100644 --- a/docs/zk-coprocessor/avs-operators/stake-and-proofs.md +++ b/docs/zk-coprocessor/avs-operators/stake-and-proofs.md @@ -8,6 +8,6 @@ This page describes the relation between the stake of an operator and the proof In Lagrange Network, the stake of an operator is used to guarantee liveness: that when an operator receives a task, it delivers a valid proof within the allotted time. Given the user query relies on these proofs, it is paramount that the Lagrange Network is able to generate proofs as fast as possible. -A worker that accepts a tasks is "binding" some of its stake to the guarantee of answering back the proof in the allotted time. We can say some of its stake becomes now "active". An operator can take multiple tasks at once, as once as the amount of "active stake" does not grow more than its total stake delegated on the Lagrange Network ! +A worker that accepts a task is "binding" some of its stake to the guarantee of returning the proof within the allotted time. We can say that some of its stake now becomes "active". An operator can take multiple tasks at once, as long as the amount of "active stake" does not grow beyond its total stake delegated on the Lagrange Network! It is up to the operator to manage its fleet of workers and decide what type of workers one wants to run. We will provide more guided documentation on that matter in the following weeks. However, given the early phases of the project, we recommend to be running a simple worker to start with and scale up with time. 
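The active-stake rule described above can be sketched as a small bookkeeping model. This is an illustration only, not Lagrange's actual API: the class, method names, and stake units are hypothetical, and real task bonds are set by the network.

```python
class Operator:
    """Illustrative model of the active-stake rule: a worker may accept
    a task only while its active stake stays within its delegated total."""

    def __init__(self, total_stake: int):
        self.total_stake = total_stake   # stake delegated on the network
        self.active_stake = 0            # stake bound to in-flight tasks

    def try_accept(self, task_bond: int) -> bool:
        # Accept only if binding this task keeps active <= total.
        if self.active_stake + task_bond > self.total_stake:
            return False
        self.active_stake += task_bond
        return True

    def complete(self, task_bond: int) -> None:
        # Delivering the proof within the allotted time frees the bound stake.
        self.active_stake -= task_bond


op = Operator(total_stake=100)
assert op.try_accept(60)       # 60 of 100 now active
assert not op.try_accept(50)   # 60 + 50 exceeds the total, rejected
op.complete(60)                # proof delivered, stake released
assert op.try_accept(50)       # accepted again
```

The point of the sketch is that "active stake" is a running counter against the delegated total, so an operator sizing its fleet is really sizing how many concurrent bonds it can hold.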
diff --git a/docs/zk-coprocessor/verifiable-database-architecture/block-database.mdx b/docs/zk-coprocessor/verifiable-database-architecture/block-database.mdx index 0bee803..9e5bfe9 100644 --- a/docs/zk-coprocessor/verifiable-database-architecture/block-database.mdx +++ b/docs/zk-coprocessor/verifiable-database-architecture/block-database.mdx @@ -6,7 +6,7 @@ description: A detailed overview of Block database import ThemedImage from "@site/src/components/ThemedImage"; -Remember the ZK Coprocessor is dealing with historical queries, so it needs to keep around each individual state and storage trees for each blocks ! In essence, the ZK Coprocessor is doing a snapshot of the database at each block. But to prove the correct transformation, it needs to prove that the latest state inserted really belongs to the corresponding block in the target blockchain, and the new block is consecutive to the previous one. +Remember the ZK Coprocessor is dealing with historical queries, so it needs to keep around the individual state and storage trees for each block! In essence, the ZK Coprocessor takes a snapshot of the database at each block. But to prove the correct transformation, it needs to prove that the latest state inserted really belongs to the corresponding block in the target blockchain, and that the new block is consecutive to the previous one. The solution is to re-create a structure of consecutive blocks in another proof-friendly data structure, that is updated constantly for each new block produced! diff --git a/docs/zk-coprocessor/verifiable-database-architecture/onchain-storage.mdx b/docs/zk-coprocessor/verifiable-database-architecture/onchain-storage.mdx index 29503d0..3e09452 100644 --- a/docs/zk-coprocessor/verifiable-database-architecture/onchain-storage.mdx +++ b/docs/zk-coprocessor/verifiable-database-architecture/onchain-storage.mdx @@ -26,8 +26,8 @@ block_. 
The root of the storage is stored in the contract information for that b ## Historical Values as Table The zkCoprocessor exposes the data of the contract in a very well known form: an SQL table. -More specifically, the zkCoprocessor keeps historical valuess for each block. **In a nutshell, it is a -time serie database that keeps appending the values of all variables tracked for each block.** +More specifically, the zkCoprocessor keeps historical values for each block. **In a nutshell, it is a +time series database that keeps appending the values of all variables tracked for each block.** Let's take the previous example with the two variables `price` and `balances`. The zkCoprocessor exposes these two variables as two separated tables: @@ -71,7 +71,7 @@ variables inside a single table, it would look like the following: | b2 | k2 | 20 | p2 (same) | There would be useless and sometimes large repetition of elements. This is because contracts have -variables have have only one dimension (uint256) or two dimensions (arrays) or more ! +variables that have only one dimension (uint256) or two dimensions (arrays) or more! ::: The next subsections are going deeper in the technicals of the storage, state and blocks transformation. diff --git a/docs/zk-coprocessor/verifiable-database-architecture/storage-database.mdx b/docs/zk-coprocessor/verifiable-database-architecture/storage-database.mdx index 341e912..ad37334 100644 --- a/docs/zk-coprocessor/verifiable-database-architecture/storage-database.mdx +++ b/docs/zk-coprocessor/verifiable-database-architecture/storage-database.mdx @@ -6,7 +6,7 @@ description: A detailed overview of Storage database import ThemedImage from "@site/src/components/ThemedImage"; -The Storage Database is a proveably equivalent data structure that contains the subset of data that the original contract’s storage trie. +The Storage Database is a provably equivalent data structure that contains a subset of the data in the original contract’s storage trie. 
**The key difference between the original storage trie and the storage database is the usage of a different encoding function, a different hash function and a different design of the tree. These changes make the new database much more friendly to "ZK queries".** diff --git a/docs/zk-coprocessor/zkMapReduce/primer.md b/docs/zk-coprocessor/zkMapReduce/primer.md index 1449e03..94d5284 100644 --- a/docs/zk-coprocessor/zkMapReduce/primer.md +++ b/docs/zk-coprocessor/zkMapReduce/primer.md @@ -4,7 +4,7 @@ title: "Primer on MapReduce" description: Primer on MapReduce --- -Lagrange's proving network is built like a distributed MapReduce stack, bringing verifiability of the computation on top.To run computation over a large scale database, there are different architectural choices that one can do. There can be a single server downloading all the data and running the entire computation locally, but that requires to have a very powerful machine and is expensive in terms of bandwidth.Lagrange's network is architectured in the spirit of the famous MapReduce framework, which takes the approach of bringing the computation where the data resides. MapReduce works by running computation on small chunks of the large database, each on a different machine and then have multiple aggregation steps to collide results of different chunks together. Broadly, this computation follows two distinct steps: +Lagrange's proving network is built like a distributed MapReduce stack, bringing verifiability of the computation on top. To run computation over a large-scale database, there are different architectural choices that one can make. A single server could download all the data and run the entire computation locally, but that requires a very powerful machine and is expensive in terms of bandwidth. Lagrange's network is architected in the spirit of the famous MapReduce framework, which takes the approach of bringing the computation to where the data resides. 
MapReduce works by running computation on small chunks of the large database, each on a different machine, and then uses multiple aggregation steps to combine the results of the different chunks. Broadly, this computation follows two distinct steps: 1. Each machine performs a map operation on its chunk of data, transforming it into a set of key-value pairs.
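The two steps described in the primer can be sketched in a single process; in the real network each chunk would be mapped on a different machine, and the aggregation would itself be proven. The function names and the token-balance example are illustrative, not part of Lagrange's API.

```python
from collections import defaultdict

def map_reduce(chunks, map_fn, reduce_fn):
    """Minimal single-process sketch of the two MapReduce steps:
    map each chunk to key-value pairs, then aggregate values per key."""
    # Step 1: each "machine" maps its chunk into key-value pairs.
    pairs = [kv for chunk in chunks for kv in map_fn(chunk)]
    # Step 2: group by key and combine the per-chunk results together.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce_fn(values) for key, values in groups.items()}

# Example: total balance per token across chunks of (token, balance) rows.
chunks = [[("ETH", 3), ("DAI", 5)], [("ETH", 4)]]
result = map_reduce(chunks, map_fn=lambda c: c, reduce_fn=sum)
assert result == {"ETH": 7, "DAI": 5}
```

Because each map runs independently and each reduce only sees a small list of partial results, the work parallelizes naturally, which is the property the proving network exploits.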