What is this task and why do we need to work on it?
Motivation
In the water testnet, we noticed a lot of "VID common not found" errors, caused by the size of the VID common data. (zulip thread)
From Jeb:
One of the issues comes from the small committee size and the fixed multiplicity of 1.
VID Cost Background
Think of the encoded payload as a matrix.
With multiplicity 1, each row is a polynomial: the width determines the degree of the polynomial, and the height is the number of polynomials.
VID common contains a polynomial commitment for each polynomial, so more rows => more commitments => a larger VID common.
During parameter setting, the width is approximately the number of nodes in the network.
Thus, in a testnet with fewer than 10 nodes, we get an oddly tall matrix. This will be less of a problem when we run with 100 nodes.
With multiplicity 2, each row is a polynomial of double the degree, i.e. the width doubles and the height halves.
The total size each node receives stays the same: it gets 2 points from each polynomial (previously only 1), but there are half as many polynomials.
The implications of a wider/fatter matrix are a smaller VID common and possibly* slower dispersal, since more opening proofs must be computed for each node.
(Heavy asterisk on "possibly slower": we are not sure of the crossover point yet and have no deterministic formula to estimate it at the moment.)
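To make the size trade-off concrete, here is a minimal sketch in Rust of the matrix dimensions implied by the description above, assuming width ≈ number of nodes × multiplicity and one commitment per row. The function and the numbers are illustrative only, not the actual jellyfish layout.

```rust
/// Rough back-of-the-envelope model of the encoded-payload matrix described
/// above. This mirrors the prose, not the real jellyfish internals.
fn matrix_shape(payload_elems: usize, num_nodes: usize, multiplicity: usize) -> (usize, usize) {
    // Width (evaluation points per polynomial) scales with multiplicity.
    let width = num_nodes * multiplicity;
    // Height = number of polynomials = number of commitments in VID common.
    let height = payload_elems.div_ceil(width);
    (width, height)
}

fn main() {
    let payload_elems = 10_000; // illustrative payload size in field elements
    for multiplicity in [1, 2, 4] {
        // With a small (8-node) committee the matrix is very tall at multiplicity 1.
        let (w, h) = matrix_shape(payload_elems, 8, multiplicity);
        println!("multiplicity {multiplicity}: {w} wide x {h} tall => {h} commitments");
    }
}
```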
Proposal
The idea: instead of a general, intelligent algorithm to select the multiplicity (EspressoSystems/jellyfish#534), just hardcode a heuristic config derived from empirical experiments.
Given the current block size and number of nodes, output a suggested multiplicity.
This is a stopgap until the new VID scheme is merged, and it should mitigate the motivating problem in the short term.
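A minimal sketch of what such a hardcoded heuristic could look like, in Rust. The function name suggested_multiplicity and all threshold values below are hypothetical placeholders, not measured results; the real table would be filled in from the empirical experiments mentioned above.

```rust
/// Suggest a multiplicity from the current block size and committee size.
/// The cut-off values below are illustrative placeholders only; the real
/// table should come from benchmarking dispersal time vs. VID common size.
fn suggested_multiplicity(block_size_bytes: usize, num_nodes: usize) -> u32 {
    // Large committees already give a wide matrix, so leave multiplicity at 1.
    if num_nodes >= 100 {
        return 1;
    }
    // On small committees, widen the matrix more aggressively as blocks grow,
    // trading extra opening-proof work for a smaller VID common.
    match block_size_bytes {
        0..=65_536 => 2,         // <= 64 KiB
        65_537..=1_048_576 => 4, // <= 1 MiB
        _ => 8,
    }
}
```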
What work will need to be done to complete this task?
Change jellyfish to accept flexible multiplicity (currently it's fixed and not configurable). Note: Jellyfish already supports flexible multiplicity in disperse().
Are there any other details to include?
No response
What are the acceptance criteria to close this issue?
Branch work will be merged to (if not the default branch)
main
cc @bfish713 @jbearer @akonring @ggutoski