PR Reviewer Guide 🔍

Here are some key observations to aid the review process:
```typescript
let steps = 0
const maxSteps = v2 ? Math.ceil(receiverGroupSize / sendGroupSize) : Infinity
while (v2 ? steps < maxSteps : targetNumber < receiverGroupSize) {
  //send all payload to this node
  const destinationNode = wrappedIndex
```
Suggestion: `destinationNode` is always set to `wrappedIndex`, which may produce duplicate or out-of-range indices if `wrappedIndex` is not updated within the loop. Ensure that `wrappedIndex` is updated before pushing to `destinationNodes` to avoid incorrect node assignments, especially when `v2` is true. [possible issue, importance: 6]
New proposed code:

```diff
 let steps = 0
 const maxSteps = v2 ? Math.ceil(receiverGroupSize / sendGroupSize) : Infinity
 while (v2 ? steps < maxSteps : targetNumber < receiverGroupSize) {
   //send all payload to this node
+  if (wrappedIndex >= transactionGroupSize) {
+    wrappedIndex = wrappedIndex - transactionGroupSize
+  }
   const destinationNode = wrappedIndex
   destinationNodes.push(destinationNode)
 }
```
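The wrap-around guard in the suggestion can be illustrated in isolation. The sketch below is not the actual PR code; `wrapIntoGroup` is a hypothetical helper showing how a running index is folded back into the transaction group's range before being recorded as a destination:

```typescript
// Hypothetical sketch of the wrap-around guard, not the PR's actual code.
// Folds a running index back into [0, transactionGroupSize), mirroring the
// single-subtraction guard in the suggestion (which assumes the index has
// overrun the group size by less than one full group length).
function wrapIntoGroup(index: number, transactionGroupSize: number): number {
  let wrappedIndex = index
  if (wrappedIndex >= transactionGroupSize) {
    wrappedIndex = wrappedIndex - transactionGroupSize
  }
  return wrappedIndex
}

console.log(wrapIntoGroup(3, 5)) // 3: already in range
console.log(wrapIntoGroup(7, 5)) // 2: wrapped back into the group
```

Note that a single subtraction only covers one wrap; a modulo would be needed if the index could overshoot by more than one group length.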
```diff
-if (configContext.p2p.factv2) senderIndex = queueEntry.transactionGroup.findIndex((node) => node.id === Self.id)
+if (configContext.p2p.factv2) senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
```
Suggestion: Overwriting `senderIndex` here may conflict with the earlier assignment from `wrappedIndex`, potentially causing inconsistency if both conditions are true. Add a check so that only one assignment path is taken, or clarify precedence to avoid logic errors. [possible issue, importance: 7]
```diff
-if (configContext.p2p.factv2) senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
+if (configContext.p2p.factv2 && wrappedIndex == null) {
+  senderIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
+}
```
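The precedence the suggestion asks for can be sketched as a guarded assignment. Everything below is illustrative: `resolveSenderIndex`, the `Node` shape, and the sample group are stand-ins, not the real queue-entry types:

```typescript
// Illustrative stand-in for the real node shape.
interface Node { id: string }

// Only take the executionGroup lookup when no earlier assignment
// (via wrappedIndex) has already produced a senderIndex.
function resolveSenderIndex(
  factv2: boolean,
  wrappedIndex: number | null,
  executionGroup: Node[],
  selfId: string,
  currentSenderIndex: number
): number {
  if (factv2 && wrappedIndex == null) {
    return executionGroup.findIndex((node) => node.id === selfId)
  }
  return currentSenderIndex
}

const group: Node[] = [{ id: 'a' }, { id: 'b' }, { id: 'c' }]
console.log(resolveSenderIndex(true, null, group, 'b', 0)) // 1: executionGroup lookup taken
console.log(resolveSenderIndex(true, 4, group, 'b', 0))    // 0: earlier wrappedIndex path wins
```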
```typescript
if (configContext.p2p.factv2) {
  senderNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === senderNodeId)
  targetNodeIndex = queueEntry.transactionGroup.findIndex((node) => node.id === Self.id)
  targetEndIndex = targetEndIndex - 1
} else {
```
Suggestion: Assigning `senderNodeIndex` from `executionGroup` and `targetNodeIndex` from `transactionGroup` may cause mismatched indices if the groups are not aligned. Derive both indices from the same group to prevent logic errors in subsequent processing. [possible issue, importance: 8]
```diff
-if (configContext.p2p.factv2) {
-  senderNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === senderNodeId)
-  targetNodeIndex = queueEntry.transactionGroup.findIndex((node) => node.id === Self.id)
-  targetEndIndex = targetEndIndex - 1
-} else {
+if (configContext.p2p.factv2) {
+  senderNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === senderNodeId)
+  targetNodeIndex = queueEntry.executionGroup.findIndex((node) => node.id === Self.id)
+  targetEndIndex = queueEntry.executionGroup.length - 1
+} else {
   if (queueEntry.isSenderWrappedTxGroup[senderNodeId] != null) {
```
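The alignment concern can be sketched as a helper that derives all three indices from a single group, as the suggestion proposes. The `Node` shape and `indicesInGroup` below are illustrative stand-ins, not the PR's actual types:

```typescript
// Illustrative stand-in for the real node shape.
interface Node { id: string }

// Derive sender and target indices from the SAME group, so that
// positional arithmetic between them (ranges, offsets) is meaningful.
function indicesInGroup(
  executionGroup: Node[],
  senderNodeId: string,
  selfId: string
): { senderNodeIndex: number; targetNodeIndex: number; targetEndIndex: number } {
  return {
    senderNodeIndex: executionGroup.findIndex((n) => n.id === senderNodeId),
    targetNodeIndex: executionGroup.findIndex((n) => n.id === selfId),
    targetEndIndex: executionGroup.length - 1,
  }
}

const group: Node[] = [{ id: 'n1' }, { id: 'n2' }, { id: 'n3' }]
console.log(indicesInGroup(group, 'n1', 'n3'))
// { senderNodeIndex: 0, targetNodeIndex: 2, targetEndIndex: 2 }
```

If the two lookups came from different groups with different orderings, the computed offsets between sender and target could point at the wrong nodes even when both lookups individually succeed.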
PR Type

Bug fix

Description

- Corrects group indexing logic for factv2 mode in the transaction queue
- Fixes loop bounds in getCorrespondingNodes for v2 logic
- Ensures sender/receiver group calculations use the correct group arrays
- Prevents gaps and out-of-bounds errors in node correspondence
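The loop-bound fix can be sketched in isolation. The function below is illustrative only (it is not the actual `getCorrespondingNodes` signature): each sender covers `ceil(receiverGroupSize / sendGroupSize)` steps, striding by the send group size, so every receiver index is reached exactly once with no gaps or overruns:

```typescript
// Illustrative sketch of the v2 loop bound, not the actual
// getCorrespondingNodes implementation. A sender at senderIndex
// strides through the receiver group in sendGroupSize-sized steps,
// bounded by maxSteps = ceil(receiverGroupSize / sendGroupSize).
function correspondingReceivers(
  senderIndex: number,
  sendGroupSize: number,
  receiverGroupSize: number
): number[] {
  const maxSteps = Math.ceil(receiverGroupSize / sendGroupSize)
  const receivers: number[] = []
  for (let step = 0; step < maxSteps; step++) {
    const target = senderIndex + step * sendGroupSize
    if (target < receiverGroupSize) receivers.push(target)
  }
  return receivers
}

// 3 senders covering 7 receivers: together they hit indices 0..6 exactly once.
console.log(correspondingReceivers(0, 3, 7)) // [0, 3, 6]
console.log(correspondingReceivers(1, 3, 7)) // [1, 4]
console.log(correspondingReceivers(2, 3, 7)) // [2, 5]
```

With an unbounded loop (the old `Infinity` path under v2), the final stride could run past the receiver group; the `maxSteps` cap bounds it exactly.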
Changes walkthrough 📝

TransactionQueue.ts (src/state-manager/TransactionQueue.ts)
- Fix group array usage and indexing for factv2 mode

fastAggregatedCorrespondingTell.ts (src/utils/fastAggregatedCorrespondingTell.ts)
- Fix v2 loop bounds in getCorrespondingNodes function