Awesome draft! I have a few questions:
I also think it makes sense to plan for concurrent writes to the DB, since in the future with federation there will be writes coming from the RPC layer, not just the block producer. It would also make sense to think about how parallelism will be handled for individual accounts, i.e., how we would use the mailbox concept for each account's actor. Here are a few ideas:
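To make the mailbox idea concrete, here is a minimal sketch of one possible shape: one actor per account, each owning a mailbox that serializes all writes to that account, so concurrent senders (block producer, RPC layer) never race. All names (`AccountMsg`, `AccountRouter`) and the toy `i64` state are assumptions for illustration, not an actual design.

```rust
use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Hypothetical message type for a per-account actor's mailbox.
enum AccountMsg {
    ApplyTx { delta: i64, reply: Sender<i64> },
}

// Routes messages to per-account actors, spawning one lazily per account.
struct AccountRouter {
    mailboxes: HashMap<u64, Sender<AccountMsg>>,
}

impl AccountRouter {
    fn new() -> Self {
        Self { mailboxes: HashMap::new() }
    }

    // Returns the mailbox for `account_id`, spawning its actor on first use.
    fn mailbox(&mut self, account_id: u64) -> Sender<AccountMsg> {
        self.mailboxes
            .entry(account_id)
            .or_insert_with(|| {
                let (tx, rx) = channel::<AccountMsg>();
                thread::spawn(move || {
                    let mut balance: i64 = 0; // stand-in for real account state
                    // The mailbox serializes writes: one message at a time.
                    for msg in rx {
                        match msg {
                            AccountMsg::ApplyTx { delta, reply } => {
                                balance += delta;
                                let _ = reply.send(balance);
                            }
                        }
                    }
                });
                tx
            })
            .clone()
    }
}
```

Because each account's state lives inside exactly one thread, per-account writes stay sequential while different accounts proceed in parallel.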
To better frame discussions in #121 and #126, I think it would make sense to talk through higher-level architecture of Miden node. Below is my first attempt at describing it.
First, here is a high-level diagram for Miden node. It includes most components that we need for the testnet, but it doesn't include components that go beyond that (i.e., consensus, P2P, etc.).
Let's go through these components one by one.
RPC provider
This component is responsible for handling external requests into Miden node. The requests could be broadly categorized into two groups:

- Requests which modify the chain state (e.g., `submit_transaction` or `execute_transaction`).
- Requests which read the chain state (e.g., `get_block`, `check_nullifiers`, `get_notes_by_tag`, etc.).

The RPC provider is also responsible for handling things like initial request validation, rate limiting (maybe something PoW-based), etc.
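The two request groups could be captured in a single interface sketch. The trait below is an assumption about how the split might look; the method names mirror the ones mentioned above, but the signatures (raw byte payloads, `u64` block numbers) are placeholders, and `InMemoryNode` is just a toy backend for illustration.

```rust
use std::collections::HashSet;

// Hypothetical RPC surface: one mutating method, two read methods.
trait NodeRpc {
    // Requests that modify the chain state.
    fn submit_transaction(&mut self, proven_tx: Vec<u8>) -> Result<(), String>;
    // Requests that read the chain state.
    fn get_block(&self, block_num: u64) -> Option<Vec<u8>>;
    fn check_nullifiers(&self, nullifiers: &[[u8; 32]]) -> Vec<bool>;
}

// Toy in-memory backend standing in for the real node internals.
struct InMemoryNode {
    blocks: Vec<Vec<u8>>,
    spent_nullifiers: HashSet<[u8; 32]>,
    pending_txs: Vec<Vec<u8>>,
}

impl NodeRpc for InMemoryNode {
    fn submit_transaction(&mut self, proven_tx: Vec<u8>) -> Result<(), String> {
        // Initial request validation would happen here (plus rate limiting).
        if proven_tx.is_empty() {
            return Err("empty transaction payload".to_string());
        }
        self.pending_txs.push(proven_tx);
        Ok(())
    }

    fn get_block(&self, block_num: u64) -> Option<Vec<u8>> {
        self.blocks.get(block_num as usize).cloned()
    }

    fn check_nullifiers(&self, nullifiers: &[[u8; 32]]) -> Vec<bool> {
        nullifiers.iter().map(|n| self.spent_nullifiers.contains(n)).collect()
    }
}
```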
Transaction compiler/executor
These components are responsible for receiving raw (un-proven) transactions from clients, compiling them, and proving their execution (see here for more detail). We don't need these components for the private testnet - so, I won't describe them in detail now (we will need them for the public testnet though).
Tx compiler and executor will need to get data about the current state of the chain to read account states, note data etc. Once compiled and executed, transactions end up in the unproven tx pool and await further processing.
Proof validator
This component is responsible for verifying transaction proofs received from clients. It needs to read the chain state to perform such checks as:
After transaction proofs are verified, transactions end up in the proven tx pool.
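The flow can be sketched as a single gatekeeping step: verify the proof against the chain state, and only on success move the transaction into the proven pool. Everything here is a placeholder; in particular `verify_proof` stands in for the real STARK verification plus the chain-state checks, which are not spelled out in this sketch.

```rust
// Hypothetical transaction and pool types for illustration only.
struct Tx {
    id: u64,
    proof: Vec<u8>,
}

struct ProvenTxPool {
    txs: Vec<Tx>,
}

// Stand-in for real proof verification against the current chain state.
fn verify_proof(tx: &Tx) -> bool {
    !tx.proof.is_empty()
}

// Gatekeeper: verified transactions enter the proven pool, others are rejected.
fn validate_and_enqueue(tx: Tx, pool: &mut ProvenTxPool) -> Result<(), String> {
    if verify_proof(&tx) {
        pool.txs.push(tx);
        Ok(())
    } else {
        Err(format!("invalid proof for tx {}", tx.id))
    }
}
```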
Mempool
This component contains the unproven and proven transactions mentioned previously, as well as transaction batches. For the private testnet we won't have the unproven tx pool.
Mempool is responsible for coordinating progression of transactions from un-proven to proven to transaction batches, which eventually get included in blocks. My current idea of how it works is as follows:
The mempool will need to handle timeout, retry, and similar logic in case a tx prover/aggregator takes transactions but then fails to prove or aggregate them in a reasonable amount of time. Most of the interfaces the mempool exposes would probably be asynchronous.
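One way to sketch the timeout/retry behavior is a lease: when an aggregator takes transactions, the mempool records a deadline, and a periodic sweep returns the transactions of expired leases to the pool for retry. The types below (`Lease`, `Mempool`, `u64` tx ids) are assumptions, and the sketch is synchronous for brevity even though the real interfaces would likely be async.

```rust
use std::time::{Duration, Instant};

// A hypothetical lease: txs handed to an aggregator, due back by `deadline`.
struct Lease {
    tx_ids: Vec<u64>,
    deadline: Instant,
}

struct Mempool {
    proven: Vec<u64>,   // proven txs awaiting batching
    leased: Vec<Lease>, // txs currently held by batch aggregators
}

impl Mempool {
    // An aggregator takes up to `n` txs; they must be aggregated within `timeout`.
    fn take_for_batching(&mut self, n: usize, timeout: Duration) -> Vec<u64> {
        let n = n.min(self.proven.len());
        let tx_ids: Vec<u64> = self.proven.drain(..n).collect();
        self.leased.push(Lease {
            tx_ids: tx_ids.clone(),
            deadline: Instant::now() + timeout,
        });
        tx_ids
    }

    // Called periodically: expired leases put their txs back for retry.
    fn reclaim_expired(&mut self, now: Instant) {
        let mut kept = Vec::new();
        for lease in self.leased.drain(..) {
            if lease.deadline <= now {
                self.proven.extend(lease.tx_ids); // retry later
            } else {
                kept.push(lease);
            }
        }
        self.leased = kept;
    }
}
```

A completed batch would instead remove its lease outright; that path is omitted here.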
Block producer
This component would be responsible for taking a set of batches from the mempool and building the next block out of them.
To build a block, the block producer would need to read the current chain state. For example, for each batch, the block producer will get a list of updated accounts. So, it will request Merkle paths for each updated account from the store. It would then put these paths into a Merkle store and run the Miden VM block production program. A similar thing will be done for nullifiers as well.
Once the block is built, the block producer will send it to the store to update the chain state, and after that will start building the next block. To make this process more efficient, we could implement sophisticated caching strategies in the block producer, but this is not something we need for the testnet.
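The loop above can be sketched as follows. `Batch`, `Store`, and `merkle_path_for` are assumptions, not the real Miden node API, and step 2 is a placeholder for actually running the Miden VM block production program.

```rust
// Hypothetical types for the block-production loop described above.
struct Batch {
    updated_accounts: Vec<u64>,
}

struct Block {
    height: u64,
}

struct Store {
    block_height: u64,
}

impl Store {
    // Returns the Merkle path for an account's leaf (placeholder 4-node path).
    fn merkle_path_for(&self, _account_id: u64) -> Vec<[u8; 32]> {
        vec![[0u8; 32]; 4]
    }

    // Atomically applies a built block to the chain state.
    fn apply_block(&mut self, block: Block) {
        self.block_height = block.height;
    }
}

fn produce_block(store: &mut Store, batches: &[Batch]) -> u64 {
    // 1. For each batch, request Merkle paths for every updated account.
    for batch in batches {
        for &account in &batch.updated_accounts {
            let _path = store.merkle_path_for(account); // would go into a Merkle store
        }
    }
    // 2. Build the block (in reality: run the Miden VM block production program
    //    over the collected paths; the same would be done for nullifiers).
    let block = Block { height: store.block_height + 1 };
    let height = block.height;
    // 3. Send it to the store, then start building the next block.
    store.apply_block(block);
    height
}
```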
Store
This component will maintain the latest chain state. A more detailed description of every database inside the store is provided here. The store will expose a set of external interfaces which could be grouped into two categories:
- Read requests used by other components (e.g., the RPC provider, proof validator, and block producer) to query the chain state.
- `apply_block`. This would be the only way to update the data inside the store, and it would be performed atomically. That is, applying changes for a specific block will be done all at once, and no read requests would receive responses with partially updated data.

Internally, the store will maintain a persistent database as well as in-memory structures required to facilitate the requests.
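The atomicity requirement can be illustrated with a write lock over the whole chain state: `apply_block` changes every field under one lock, so a concurrent reader observes either the old state or the new state in full, never a mix. The `ChainState` fields here are assumptions; a real store would guard its persistent database and in-memory structures the same way (e.g., via a single transaction).

```rust
use std::sync::RwLock;

// Toy chain state; the real store holds several databases and trees.
struct ChainState {
    block_num: u64,
    account_root: [u8; 32],
}

struct Store {
    state: RwLock<ChainState>,
}

impl Store {
    // The only way to update the store: all fields change under one write
    // lock, so no read request sees partially updated data.
    fn apply_block(&self, block_num: u64, account_root: [u8; 32]) {
        let mut state = self.state.write().unwrap();
        state.block_num = block_num;
        state.account_root = account_root;
    }

    // Reads take a consistent snapshot under the read lock.
    fn latest(&self) -> (u64, [u8; 32]) {
        let state = self.state.read().unwrap();
        (state.block_num, state.account_root)
    }
}
```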