IB size at scale? #222
SupernaviX started this conversation in General
Replies: 1 comment · 2 replies
-
Our eventual sharding scheme also probably impacts this; higher IB/s helps us cover each shard/partition concurrently.
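A quick, heavily hypothetical illustration of that point: if each IB were tied to a single shard, the per-shard IB rate is what would drive that shard's inclusion latency. The shard count and the one-shard-per-IB assignment in this sketch are assumptions of mine, not the actual design:

```rust
// Why a sharded mempool pushes toward higher IB rates: if each IB serves only
// one shard, the rate that matters for a shard's latency is ib_rate / shards.
// Both numbers below are illustrative, not proposed parameters.
fn main() {
    let ib_rate = 5.0; // IBs per second, network-wide
    let shards = 50.0; // hypothetical number of shards/partitions

    let per_shard_rate = ib_rate / shards;
    println!(
        "each shard sees an IB every ~{:.0} s on average",
        1.0 / per_shard_rate
    );
}
```

Under those made-up numbers, any given shard only sees an IB every ~10 s, which is the kind of gap a higher network-wide IB rate would close but a bigger maximum IB size would not.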
-
While working on visualizations for the Rust sim, I can't help but notice that generating high volumes of IBs is inefficient unless transaction volume is correspondingly high. When I run the sim with 5 IBs/sec and 10 tx/sec, I consistently see something like 15-20% of IBs come out empty. Every empty IB consumes extra bandwidth and disk space (and makes EBs slightly larger), and earns fees without actually contributing to transaction throughput.
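As a rough sanity check on that 15-20% figure, here is a back-of-the-envelope sketch under assumptions of my own (not necessarily the sim's): transaction arrivals are Poisson, every pending transaction fits into the next IB, and IB production is either perfectly regular or itself Poisson (sortition-like):

```rust
// Back-of-the-envelope estimate of the empty-IB fraction.
// Assumptions (mine, not the sim's): Poisson tx arrivals, every pending tx
// fits in the next IB, IB intervals either fixed or exponential.
fn main() {
    let ib_rate: f64 = 5.0;  // IBs per second (sim config above)
    let tx_rate: f64 = 10.0; // transactions per second (sim config above)

    // Mean transactions available per IB.
    let mean_tx_per_ib = tx_rate / ib_rate;

    // Fixed IB interval: tx count per IB ~ Poisson(mean), P(empty) = e^(-mean).
    let p_empty_fixed = (-mean_tx_per_ib).exp();

    // Poisson IB production (exponential inter-IB times):
    // P(no tx arrives between consecutive IBs) = ib_rate / (ib_rate + tx_rate).
    let p_empty_poisson = ib_rate / (ib_rate + tx_rate);

    println!("mean tx per IB:        {mean_tx_per_ib:.1}");
    println!("P(empty), regular IBs: {:.1}%", 100.0 * p_empty_fixed);
    println!("P(empty), Poisson IBs: {:.1}%", 100.0 * p_empty_poisson);
}
```

Those two simple models give roughly 13.5% and 33% empty IBs, and the observed 15-20% sits between them, so the empty blocks look like an inherent consequence of running the IB rate well above what the transaction load needs rather than a quirk of the sim.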
Right now, the simulations are configured to use 320 kB as the maximum size of an input block, and the performance comparisons are based on input blocks of a fixed 100 kB size produced at different rates. Even when I crank transaction volume up to 100 tx/sec, we're not in danger of hitting that "maximum" size, with an average size of ~30 kB and very few IBs reaching 100 kB.
From conversations in Slack, it sounds like we expect to increase the IB production rate over time in response to higher transaction volume. Is there a compelling reason to scale with "more IBs" rather than "bigger IBs"? It seems to me the system would scale more dynamically if we raised the maximum size instead of the production rate: if the system produces 5 IBs/sec, it produces 5 IBs/sec regardless of TX volume, but an IB that could contain up to 1 MB of transactions doesn't impose any inefficiency if it only contains 50 kB.
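To make that intuition concrete, here is a toy comparison of the two scaling strategies at the same transaction load. The 1 kB of per-IB overhead (header, VRF proof, signature, the reference an EB has to carry) and the 300 kB/s payload are numbers I picked for illustration, not measurements:

```rust
// Toy comparison of "more IBs" vs "bigger IBs" at the same tx payload rate.
// The 1 kB per-IB overhead is a placeholder, not a measured figure.
const PER_IB_OVERHEAD_BYTES: f64 = 1_000.0;

/// Fraction of IB bandwidth spent on per-IB overhead rather than tx payload.
fn overhead_fraction(ib_rate: f64, payload_bytes_per_sec: f64) -> f64 {
    let overhead_bytes_per_sec = ib_rate * PER_IB_OVERHEAD_BYTES;
    overhead_bytes_per_sec / (overhead_bytes_per_sec + payload_bytes_per_sec)
}

fn main() {
    // Same 300 kB/s of transaction payload, carried two different ways.
    let payload = 300_000.0;

    // "More IBs": 20 IBs/sec, averaging ~15 kB of payload each.
    println!(
        "more IBs (20/s):  {:.2}% of bandwidth is per-IB overhead",
        100.0 * overhead_fraction(20.0, payload)
    );

    // "Bigger IBs": 1 IB/sec with a large (say 1 MB) maximum; the block only
    // fills to ~300 kB, but the per-IB overhead is paid once per second.
    println!(
        "bigger IBs (1/s): {:.2}% of bandwidth is per-IB overhead",
        100.0 * overhead_fraction(1.0, payload)
    );
}
```

Under these made-up numbers the fixed per-IB cost scales with the production rate rather than with the payload, which is the inefficiency I'm seeing in the sim. The toy model does ignore the diffusion-latency and per-shard-coverage benefits of more frequent, smaller IBs raised in the reply above.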
Also, will IBs have CPU/memory budgets, like individual transactions do?