Implement the trigger for Stage 2 #3
base: stage-2
Conversation
trigger/src/lib.rs
```rust
/// on the hub chain (hub-*domestic). Therefore, on the hub chain, we use
/// a single admin storage to store multiple chain snapshots. We deploy a separate
/// trigger for each chain working with its own entry in the admin store.
admin_store: NftId,
```
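For illustration, a minimal standalone sketch of the single-admin-store idea, with the NFT-backed storage modeled as a plain map; `ChainSnapshot` and `last_verified_height` are assumed stand-ins, not the PR's actual types:

```rust
use std::collections::BTreeMap;

// `ChainId` stands in for Iroha's chain identifier; the `NftId`-backed
// admin store is modeled here as a plain map for illustration.
type ChainId = String;

#[derive(Debug, Clone)]
struct ChainSnapshot {
    // Height of the last source-chain block this trigger has verified.
    last_verified_height: u64,
}

// One admin store, one entry per counterpart chain; each deployed trigger
// reads and writes only the entry for the chain it serves.
fn snapshot_for<'a>(
    store: &'a BTreeMap<ChainId, ChainSnapshot>,
    chain: &ChainId,
) -> Option<&'a ChainSnapshot> {
    store.get(chain)
}

fn main() {
    let mut store = BTreeMap::new();
    store.insert("domestic-a".to_string(), ChainSnapshot { last_verified_height: 42 });
    store.insert("domestic-b".to_string(), ChainSnapshot { last_verified_height: 7 });
    // The trigger serving "domestic-a" touches only its own entry.
    println!("{:?}", snapshot_for(&store, &"domestic-a".to_string()));
}
```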
I think the key-value store doesn’t need to be implemented as an NFT; in the future, account metadata or similar could be used instead.
Which one is preferable?
From a development perspective, I feel it could be more straightforward to have "generic key-value stores" (with ownership and access restrictions) instead of "you can use metadata of domain, or account, or asset definition, or an NFT, or trigger metadata - there is really no difference, just pick one you like" (how do I choose?).
When viewed from a domain-oriented perspective, it seems reasonable to place data close to the entity that requires it. Doing so would likely make access rights more natural as well.
Also: I've defined an agnostic `KeyValueAddress` type in the trigger, so that the trigger itself does not need to care which entities and keys it reads inputs from and writes outputs to.
As for where specifically to store the various data: still thinking.
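To make the idea concrete, here is a hedged sketch of what such an entity-agnostic address type could look like; the actual `KeyValueAddress` in this PR may be shaped differently, and the `String` identifiers stand in for Iroha's typed IDs:

```rust
/// A sketch of an entity-agnostic key-value address. The trigger only needs
/// "read this address" / "write this address"; which entity backs the
/// storage is resolved in one place.
enum KeyValueAddress {
    Domain { id: String, key: String },
    Account { id: String, key: String },
    AssetDefinition { id: String, key: String },
    Nft { id: String, key: String },
    Trigger { id: String, key: String },
}

impl KeyValueAddress {
    fn describe(&self) -> String {
        match self {
            KeyValueAddress::Domain { id, key } => format!("domain {id} / {key}"),
            KeyValueAddress::Account { id, key } => format!("account {id} / {key}"),
            KeyValueAddress::AssetDefinition { id, key } => format!("asset definition {id} / {key}"),
            KeyValueAddress::Nft { id, key } => format!("nft {id} / {key}"),
            KeyValueAddress::Trigger { id, key } => format!("trigger {id} / {key}"),
        }
    }
}

fn main() {
    // Hypothetical address: the trigger's own `config` entry.
    let config = KeyValueAddress::Trigger { id: "bridge$hub".into(), key: "config".into() };
    println!("{}", config.describe());
}
```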
Current model:
- Store `config` and `checkpoint` in the trigger's metadata. The relay cannot write to them.
- Store `block_message` in the relay's account metadata. The relay can write to it.
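For concreteness, a tiny sketch of that layout as data; `Slot` and `Location` are names invented for this illustration, not part of the PR:

```rust
/// Illustrative only: where each piece of state lives and who may write it,
/// per the model described above.
#[derive(Debug)]
enum Location {
    TriggerMetadata,      // writable by the trigger side only
    RelayAccountMetadata, // writable by the relay
}

#[derive(Debug)]
struct Slot {
    key: &'static str,
    location: Location,
    relay_can_write: bool,
}

fn main() {
    let layout = [
        Slot { key: "config",        location: Location::TriggerMetadata,      relay_can_write: false },
        Slot { key: "checkpoint",    location: Location::TriggerMetadata,      relay_can_write: false },
        Slot { key: "block_message", location: Location::RelayAccountMetadata, relay_can_write: true },
    ];
    for slot in &layout {
        println!("{:?}", slot);
    }
}
```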
```rust
enum OperationMode {
    /// Trigger is deployed on the hub chain
    Hub,
    /// Trigger is deployed on a domestic chain
    Domestic(ChainId),
}
```
Although the purpose of OperationMode is still unclear, wouldn’t triggers change their behavior more based on whether they’re on the source (prover) or the destination (verifier) than on whether they’re on the hub or a domestic chain?
This trigger always acts as a verifier, and an external relay acts as a prover.
How `OperationMode` alters behaviour:
- On the hub:
  - In the `BlockMessage` from the prover, it detects transactions containing a single `Transfer` to some omnibus account. The transaction must have `destination` in its metadata. (Transaction A, made by the user.)
  - Then it propagates the transfer locally (on the hub chain) from the source chain's omnibus account to the target chain's omnibus account, with metadata carrying the source `AccountId` on the original chain and the destination `AccountId` on the target chain. (Transaction B.)
- On a domestic chain:
  - In the `BlockMessage` from the prover, it looks for Transaction B.
  - Then it propagates the transfer locally from an omnibus account to the final destination account. (Transaction C.)

So, `OperationMode` acts as a hint on how to detect/propagate transactions.
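A condensed sketch of how such a mode switch could steer detection; `Transaction` and `Transfer` are simplified stand-ins, and the domestic-side check is deliberately crude compared to the real payload inspection:

```rust
// `ChainId`, `Transfer`, and `Transaction` are simplified stand-ins.
type ChainId = String;

enum OperationMode {
    /// Trigger is deployed on the hub chain
    Hub,
    /// Trigger is deployed on a domestic chain
    Domestic(ChainId),
}

struct Transfer; // details (source, destination, asset, amount) elided

struct Transaction {
    transfers: Vec<Transfer>,
    metadata_destination: Option<String>,
}

/// Does this transaction interest the trigger in the given mode?
fn wants(mode: &OperationMode, tx: &Transaction) -> bool {
    match mode {
        // On the hub: the user's Transaction A is a single Transfer to an
        // omnibus account with `destination` present in its metadata.
        OperationMode::Hub => tx.transfers.len() == 1 && tx.metadata_destination.is_some(),
        // On a domestic chain: look for the hub's Transaction B instead
        // (crudely approximated here; the real check inspects its payload).
        OperationMode::Domestic(_chain) => tx.metadata_destination.is_some(),
    }
}

fn main() {
    let tx_a = Transaction {
        transfers: vec![Transfer],
        metadata_destination: Some("bob@domestic".to_string()),
    };
    assert!(wants(&OperationMode::Hub, &tx_a));
    assert!(wants(&OperationMode::Domestic("domestic-a".to_string()), &tx_a));
}
```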
It turns out the divergence between hub and domestic is bigger.
Triggers can only submit individual instructions and cannot attach metadata to them. Therefore, the hub trigger cannot fully "replay" a transaction containing a single `Transfer` with `destination` in its metadata.
As a workaround, I am now using `SetKeyValue` instructions with "transfer payloads", which can then be decoded and replayed for real on the domestic chains.
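A sketch of the encode/decode round trip behind that workaround; the pipe-separated wire format and field names here are assumptions, not the PR's actual payload encoding:

```rust
/// Since a trigger cannot attach metadata to the instructions it submits,
/// the hub trigger emits SetKeyValue-style writes whose value is an encoded
/// "transfer payload" that domestic chains decode and replay.
#[derive(Debug, PartialEq)]
struct TransferPayload {
    source: String,
    destination: String,
    asset: String,
    amount: u64,
}

fn encode(p: &TransferPayload) -> String {
    format!("{}|{}|{}|{}", p.source, p.destination, p.asset, p.amount)
}

fn decode(s: &str) -> Option<TransferPayload> {
    let mut it = s.split('|');
    Some(TransferPayload {
        source: it.next()?.to_string(),
        destination: it.next()?.to_string(),
        asset: it.next()?.to_string(),
        amount: it.next()?.parse().ok()?,
    })
}

fn main() {
    let p = TransferPayload {
        source: "alice@hub".into(),
        destination: "bob@domestic".into(),
        asset: "rose#hub".into(),
        amount: 10,
    };
    // Hub trigger: SetKeyValue("transfer/0", encode(&p))
    let wire = encode(&p);
    // Domestic trigger: decode and replay as a real Transfer.
    assert_eq!(decode(&wire), Some(p));
}
```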
This is an important finding.
The best approach is to allow time-triggered transactions to carry metadata, just like external transactions.
Ideally, it would look like this:
- On …
- Fetching …
- On …
- Fetching …
- On …
As a workaround, we could replace the
I am having trouble wrapping my head around this notation...

> The best approach is to allow time-triggered transactions to carry metadata, just like external transactions.

That would also not be sufficient to fully make a "transfer on a hub chain". On a domestic chain, each transfer is a separate transaction with its own metadata. On the hub chain, the trigger executes all transfers in a single run, as a single transaction. So even if we allowed triggers to attach metadata to their overall execution, there would be no way for a trigger to submit multiple transactions with their own distinct metadata.
Indeed, transfers initiated by relays are processed in batches, so we need to be aware that the unit of proof becomes larger.
The required information can be carried in metadata indexed in the same order as the instructions.
When there are multiple
However, to guard against DoS attacks by malicious relays, an appropriate batch size limit needs to be set.
In some cases, it may not be possible to process all interesting transactions contained in a single source-chain block in one go.
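One way such indexed batch metadata with a size guard could look, sketched with an assumed key scheme (`transfer/<i>`) and an arbitrary limit:

```rust
/// Hypothetical DoS guard; the real limit would be a configured value.
const MAX_BATCH: usize = 64;

/// Metadata keys indexed in the same order as the submitted instructions:
/// the i-th key annotates the i-th instruction of the batch.
fn batch_keys(n: usize) -> Result<Vec<String>, String> {
    if n > MAX_BATCH {
        // Too many transfers for one run: the remainder would be carried
        // over to the next invocation (e.g. via the checkpoint).
        return Err(format!("batch of {n} exceeds limit {MAX_BATCH}"));
    }
    Ok((0..n).map(|i| format!("transfer/{i}")).collect())
}

fn main() {
    println!("{:?}", batch_keys(3));
    println!("{:?}", batch_keys(1000));
}
```

The carry-over branch matches the note above that all interesting transactions in one source-chain block may not be processable in one go.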
And update Cargo stuff
- Complete `KeyValueAddress` implementation
- Compose: generate triggers with their configs
- Bump Iroha (hyperledger-iroha/iroha#5497)

It works! Triggers are running and successfully reading their configuration.

Next steps:
- Update the UI. Make it display all entities with all metadata. This must make the architecture more understandable visually.
- Update the relay. Make it scan metadata, scan blocks, and submit block messages to the trigger.
Force-pushed from 8c022ca to 1a48dde.
This PR introduces the core part of Stage 2: the trigger that performs validation and application of results.
WIP.