Upstream entry reconstruction fixes #61
Conversation
```rust
);

if should_reconstruct_shreds {
    let _ = reconstruct_tx.try_send(packet_batch.clone());
```
Can we switch this back to `try_send`? We don't want to OOM due to an unbounded linked list; let's increase the buffer size instead.
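For reference, a minimal sketch of what the reviewer is suggesting: keep `try_send`, but back it with a larger bounded buffer so backpressure drops batches instead of growing memory. This assumes a crossbeam-style channel; the capacity constant and helper names are hypothetical, not the repo's actual code.

```rust
use crossbeam_channel::{bounded, Receiver, Sender, TrySendError};

// Hypothetical capacity; the point is "bounded but generous", not this value.
const RECONSTRUCT_CHANNEL_CAPACITY: usize = 4_096;

fn make_reconstruct_channel<T>() -> (Sender<T>, Receiver<T>) {
    bounded(RECONSTRUCT_CHANNEL_CAPACITY)
}

fn forward_batch<T>(reconstruct_tx: &Sender<T>, packet_batch: T) {
    // `try_send` fails fast instead of queuing without limit: dropping a
    // batch under backpressure is preferable to OOM from an unbounded queue.
    if let Err(TrySendError::Full(_dropped)) = reconstruct_tx.try_send(packet_batch) {
        // A drop counter/metric would go here.
    }
}
```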
```rust
) -> usize {
    deshredded_entries.clear();
    slot_fec_indexes_to_iterate.clear();
    let mut data_fec_indexes_to_reconstruct = Vec::new();
```
Let's reuse the vec and clear it instead of allocating a new one each time.
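A minimal sketch of the reuse pattern being suggested, following the same style as the other scratch buffers already passed into the function. The signature and the `Entry` placeholder are illustrative, not the repo's exact API.

```rust
struct Entry; // placeholder for the real deshredded entry type

fn reconstruct_entries(
    deshredded_entries: &mut Vec<Entry>,
    slot_fec_indexes_to_iterate: &mut Vec<(u64, u32)>,
    data_fec_indexes_to_reconstruct: &mut Vec<(u64, u32)>, // reused across calls
) -> usize {
    deshredded_entries.clear();
    slot_fec_indexes_to_iterate.clear();
    // Clearing keeps the Vec's heap allocation alive for the next call,
    // avoiding a fresh allocation per invocation.
    data_fec_indexes_to_reconstruct.clear();
    // ... reconstruction logic fills the reused buffers ...
    deshredded_entries.len()
}
```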
```rust
// failed to reconstruct the entries from the index due to never finding the start data boundary.
// To deal with this possible out-of-order scenario, we retry rebuilding the entry(ies) for the next
// FEC index whenever we encounter a data complete boundary.
if let Some(next_slot) = &state_tracker.data_shreds[index + 1] {
```
I'm a bit confused by this logic: `state_tracker.data_shreds` only tracks shreds for a single slot, so how does this give you next-slot information?
Also, we only ever push to this vec when shreds come out of order. If they always come in order, we never push to it, and line 186 (`for (slot, fec_set_index) in data_fec_indexes_to_reconstruct.iter() {`) wouldn't iterate over anything.
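For illustration, a minimal sketch of the deferred-retry pattern the code comment seems to describe: record FEC sets whose entries couldn't be rebuilt yet, and retry all of them whenever a data-complete boundary shows up. All types and names here are hypothetical and this is not the PR's actual implementation.

```rust
use std::collections::HashSet;

type Slot = u64;
type FecSetIndex = u32;

#[derive(Default)]
struct RetryTracker {
    // FEC sets whose entries could not be rebuilt yet (e.g. the start
    // data boundary arrived after later shreds).
    pending: HashSet<(Slot, FecSetIndex)>,
}

impl RetryTracker {
    fn mark_failed(&mut self, slot: Slot, fec_set_index: FecSetIndex) {
        self.pending.insert((slot, fec_set_index));
    }

    // Called whenever a DATA_COMPLETE boundary is seen: retry every
    // deferred FEC set, keeping only the ones that still fail.
    fn retry_on_boundary(&mut self, mut try_reconstruct: impl FnMut(Slot, FecSetIndex) -> bool) {
        self.pending.retain(|&(slot, fec)| !try_reconstruct(slot, fec));
    }
}
```

Deferring only the failures (rather than only the out-of-order arrivals) would also sidestep the in-order case above, since in-order FEC sets reconstruct eagerly and never enter the retry set.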
Please run the test suite; the new changes decode fewer transactions than before.
Overview
In this PR, I fix multiple issues causing deshredding to fail:

- Replaced `try_send` with `send` to ensure that shreds are delivered to the reconstructor thread. (Without this, I got a lot of missing shreds in the reconstructor thread.)

Tests
Verified locally that we didn't get any missing transactions for over 1000 slots after the two fixes above.