
Conversation

@0x009922 (Contributor):

This PR introduces the core part of Stage 2: the trigger that performs validation and application of results.

WIP.

@0x009922 0x009922 requested review from aoyako and s8sato August 29, 2025 02:30
@0x009922 0x009922 self-assigned this Aug 29, 2025
/// on the hub chain (hub-*domestic). Therefore, on the hub chain, we use
/// a single admin storage to store multiple chain snapshots, and deploy a separate
/// trigger for each chain, each working with its own entry in the admin store.
admin_store: NftId,
Review comment:

I think the key-value store doesn’t need to be implemented as an NFT; in the future, account metadata or similar could be used instead.

@0x009922 (Contributor, author):

Which one is preferable?

From a development perspective, it seems more straightforward to have "generic key-value stores" (with ownership and access restrictions) than "you can use the metadata of a domain, an account, an asset definition, an NFT, or a trigger; there is really no difference, just pick one you like" (how do I choose?).

Review comment:

When viewed from a domain-oriented perspective, it seems reasonable to place data close to the entity that requires it. Doing so would likely make access rights more natural as well.

@0x009922 (Contributor, author):

Also: I've defined an agnostic KeyValueAddress type in the trigger, so that the trigger itself does not need to care in which entities and under which keys it reads and writes its inputs and outputs.

As for where specifically to store the various data: still thinking.
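The thread doesn't show the type itself, so here is a minimal sketch of what such an entity-agnostic address could look like. All names (`Container`, `KeyValueAddress`) and the string-based ids are hypothetical illustrations, not the PR's real definitions:

```rust
// Hypothetical sketch of an entity-agnostic key-value address.
// Ids are simplified to plain strings for illustration.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Container {
    AccountMetadata(String), // e.g. the relay's account
    TriggerMetadata(String), // e.g. the trigger's own metadata
    NftStore(String),        // e.g. the current admin_store NFT
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct KeyValueAddress {
    pub container: Container,
    pub key: String,
}

impl KeyValueAddress {
    pub fn new(container: Container, key: &str) -> Self {
        Self { container, key: key.to_string() }
    }
}

fn main() {
    // The trigger only sees addresses; which entity kind backs each
    // one is a deployment-time decision.
    let checkpoint = KeyValueAddress::new(
        Container::TriggerMetadata("bridge_trigger".into()),
        "checkpoint",
    );
    println!("{checkpoint:?}");
}
```

With this shape, changing where a value lives (NFT store today, account metadata later) only touches the address construction, not the trigger logic.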

@0x009922 (Contributor, author):

Current model:

  • Store config and checkpoint in the trigger's metadata. The relay cannot write to it.
  • Store block_message in the relay's account metadata. The relay can write to it.
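The access split above can be sketched as a tiny predicate. The string-keyed `may_write` helper is purely illustrative, not the PR's code:

```rust
// Hypothetical sketch: which actor may write each slot in the model above.
// "trigger"/"relay" and the key names mirror the bullet list.
fn may_write(actor: &str, key: &str) -> bool {
    match key {
        // config and checkpoint live in the trigger's metadata;
        // the relay cannot write there.
        "config" | "checkpoint" => actor == "trigger",
        // block_message lives in the relay's account metadata;
        // the relay can write there.
        "block_message" => actor == "relay",
        _ => false,
    }
}

fn main() {
    assert!(!may_write("relay", "checkpoint"));
    assert!(may_write("relay", "block_message"));
    println!("access model ok");
}
```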

Comment on lines 55 to 60
enum OperationMode {
/// Trigger is deployed on the hub chain
Hub,
/// Trigger is deployed on a domestic chain
Domestic(ChainId),
}
Review comment:

Although the purpose of OperationMode is still unclear, wouldn't triggers change their behavior more based on whether they're on the source (prover) or the destination (verifier) side than on whether they're on the hub or a domestic chain?

@0x009922 (Contributor, author):

This trigger always acts as a verifier, and an external relay acts as a prover.

How OperationMode alters behaviour:

  • On the hub:
    • In the BlockMessage from the prover, it detects transactions containing a single Transfer to some omnibus account. The transaction must carry a destination in its metadata. (Transaction A, made by the user.)
    • Then it propagates the transfer locally (on the hub chain) from the source chain's omnibus account to the target chain's omnibus account, with metadata recording the source AccountId on the original chain and the destination AccountId on the target chain. (Transaction B.)
  • On a domestic chain:
    • In the BlockMessage from the prover, it looks for Transaction B.
    • Then it propagates the transfer locally from an omnibus account to the final destination account. (Transaction C.)

So OperationMode acts as a hint on how to detect and propagate transactions.
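That dispatch can be sketched roughly as follows. Ids are simplified to strings, and `propagate` plus the `omnibus@…` naming are invented for illustration, not taken from the PR:

```rust
// Hypothetical sketch: OperationMode as a hint for how to propagate a
// transfer detected in the prover's BlockMessage.
#[derive(Debug, Clone, PartialEq)]
enum OperationMode {
    /// Trigger is deployed on the hub chain
    Hub,
    /// Trigger is deployed on a domestic chain (ChainId simplified to String)
    Domestic(String),
}

#[derive(Debug, Clone, PartialEq)]
struct Transfer {
    source: String,
    dest: String,
    amount: u64,
}

fn propagate(mode: &OperationMode, src_chain: &str, dst_chain: &str, t: &Transfer) -> Transfer {
    match mode {
        // Transaction B: hop between the two chains' omnibus accounts.
        OperationMode::Hub => Transfer {
            source: format!("omnibus@{src_chain}"),
            dest: format!("omnibus@{dst_chain}"),
            amount: t.amount,
        },
        // Transaction C: from the local omnibus account to the final destination.
        OperationMode::Domestic(_) => Transfer {
            source: format!("omnibus@{src_chain}"),
            dest: t.dest.clone(),
            amount: t.amount,
        },
    }
}

fn main() {
    let t = Transfer { source: "alice@a".into(), dest: "bob@b".into(), amount: 7 };
    let b = propagate(&OperationMode::Hub, "a", "b", &t);
    println!("{b:?}");
}
```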

@0x009922 (Contributor, author):

It turns out the divergence between hub and domestic is bigger than expected.

Triggers can only submit individual instructions and cannot attach metadata to them. Therefore, the hub trigger cannot fully "replay" a transaction that contains a single Transfer with the destination in its metadata.

As a workaround, I am now using SetKeyValue instructions with "transfer payloads", which can then be decoded and replayed for real on the domestic chains.
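As a toy illustration of that workaround (the `|`-delimited encoding is invented here; the PR presumably uses a proper serialization format):

```rust
// Hypothetical sketch: the hub trigger cannot attach metadata to the
// instructions it submits, so it writes a self-describing "transfer
// payload" via SetKeyValue; the domestic trigger later decodes it and
// replays the real transfer.
#[derive(Debug, PartialEq)]
struct TransferPayload {
    src: String,
    dest: String,
    amount: u64,
}

fn encode(p: &TransferPayload) -> String {
    format!("{}|{}|{}", p.src, p.dest, p.amount)
}

fn decode(s: &str) -> Option<TransferPayload> {
    let mut parts = s.splitn(3, '|');
    Some(TransferPayload {
        src: parts.next()?.to_string(),
        dest: parts.next()?.to_string(),
        amount: parts.next()?.parse().ok()?,
    })
}

fn main() {
    let p = TransferPayload { src: "alice@a".into(), dest: "bob@b".into(), amount: 7 };
    let wire = encode(&p);              // value stored by SetKeyValue on the hub
    assert_eq!(decode(&wire), Some(p)); // decoded and replayed on the domestic chain
    println!("roundtrip ok");
}
```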

Review comment:

This is an important finding.

The best approach is to allow time-triggered transactions to carry metadata, just like external transactions.
Ideally, it would look like this:

On $A$, a user $a$ performs a transaction $\verb|Tx|A$:

$$\verb|Tx|A := (\verb|transfer|(a \rightarrow \Omega_B,\ q),\ \verb|meta|(\verb|dest|:b))$$

Fetching $\verb|Tx|A$, relay $R_{AH}$ posts an intent $\verb|Tx|H_0$ on $H$:

$$\verb|Tx|H_0 := (\verb|setkv|(k = h(\verb|Tx|A),\ v = (\verb|payload|:\verb|Tx|A,\ \verb|proof|:p(\verb|Tx|A))),\ \verb|meta|())$$

On $H$, a time trigger verifies the intent and materializes $\verb|Tx|H$:

$$\verb|Tx|H := (\verb|transfer|(\Omega_A \rightarrow \Omega_B,\ q),\ \verb|meta|(\verb|src|:a,\ \verb|dest|:b))$$

Fetching $\verb|Tx|H$, relay $R_{HB}$ posts an intent $\verb|Tx|B_0$ on $B$:

$$\verb|Tx|B_0 := (\verb|setkv|(k = h(\verb|Tx|H),\ v = (\verb|payload|:\verb|Tx|H,\ \verb|proof|:p(\verb|Tx|H))),\ \verb|meta|())$$

On $B$, a time trigger verifies the intent and materializes $\verb|Tx|B$:

$$\verb|Tx|B := (\verb|transfer|(\Omega_A \rightarrow b,\ q),\ \verb|meta|(\verb|src|:a))$$

Review comment:

As a workaround, we could replace the $\verb|meta|$ of $\verb|Tx|H$ and $\verb|Tx|B$ with another instruction $\verb|setkv|$, but that would unnecessarily bloat the world state.

@0x009922 (Contributor, author):

I am having trouble wrapping my head around this notation...

The best approach is to allow time-triggered transactions to carry metadata, just like external transactions.

That alone would not be sufficient to fully make a "transfer on the hub chain". On a domestic chain, each transfer is a separate transaction with its own metadata. On the hub chain, the trigger executes all transfers in a single run, as a single transaction. So even if we allowed triggers to attach metadata to their overall execution, there would still be no way for a trigger to submit multiple transactions with their own distinct metadata.

@s8sato (Sep 29, 2025):

Indeed, transfers initiated by relays are processed in batches, so we need to be aware that the unit of proof becomes larger.
The required information can be carried in metadata indexed in the same order as the instructions.
When there are multiple $\verb|Tx|A$ instances, the generalized $\verb|Tx|H$ and $\verb|Tx|B$ would look like this:

$$\verb|Tx|A_0 := (\verb|transfer|(a_0 \rightarrow \Omega_B,\ q_0),\ \verb|meta|(\verb|dest|:b_0))$$

$$\verb|Tx|A_1 := (\verb|transfer|(a_1 \rightarrow \Omega_B,\ q_1),\ \verb|meta|(\verb|dest|:b_1))$$

$$\verb|Tx|A_2 := (\verb|transfer|(a_2 \rightarrow \Omega_B,\ q_2),\ \verb|meta|(\verb|dest|:b_2))$$

$$\vdots$$

$$\verb|Tx|H := (\verb|transfer|(\Omega_A \rightarrow \Omega_B,\ \sum_i q_i),\ \verb|meta|\sum_i i:(\verb|src|:a_i,\ \verb|dest|:b_i,\ \verb|amount|:q_i))$$

$$\verb|Tx|B := (\sum_i \verb|transfer|(\Omega_A \rightarrow b_i,\ q_i),\ \verb|meta|\sum_i i:(\verb|src|:a_i))$$

However, to guard against DoS attacks by malicious relays, an appropriate batch size limit needs to be set.
In some cases, it may not be possible to process all interesting transactions contained in a single source-chain block in one go.
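Such a cap could be sketched as follows, with leftovers rolling over to the next trigger run. `MAX_BATCH` and `take_batch` are illustrative names and values, not from the PR:

```rust
// Hypothetical sketch: bound the number of bridged transfers handled per
// trigger run so a malicious relay cannot force unbounded work; anything
// over the cap stays pending for the next run.
const MAX_BATCH: usize = 64; // illustrative limit

fn take_batch<T>(pending: &mut Vec<T>) -> Vec<T> {
    let n = pending.len().min(MAX_BATCH);
    pending.drain(..n).collect()
}

fn main() {
    let mut pending: Vec<u32> = (0..100).collect();
    let batch = take_batch(&mut pending);
    assert_eq!(batch.len(), 64);  // processed this run
    assert_eq!(pending.len(), 36); // rolls over to the next run
    println!("batched {} of {}", batch.len(), batch.len() + pending.len());
}
```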

@s8sato s8sato self-assigned this Sep 26, 2025
Signed-off-by: quacumque <[email protected]>
- Complete `KeyValueAddress` implementation
- Compose: generate triggers with their configs
- Bump Iroha (hyperledger-iroha/iroha#5497)

It works! Triggers are running and successfully reading their
configuration.

Next steps:

- Update the UI. Make it display all entities with all metadata. This
  must make the architecture more understandable visually.
- Update the relay. Make it scan metadata, scan blocks, and submit block
  messages to the trigger.

Signed-off-by: quacumque <[email protected]>
