[W.I.P] Enable passive squelching #5358
base: develop
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## develop #5358 +/- ##
=========================================
- Coverage 78.2% 78.1% -0.1%
=========================================
Files 790 790
Lines 67738 67901 +163
Branches 8177 8227 +50
=========================================
+ Hits 52962 53024 +62
- Misses 14776 14877 +101
@@ -45,8 +45,8 @@ static constexpr auto IDLED = std::chrono::seconds{8};
 // of messages from the validator. We add peers who reach
 // MIN_MESSAGE_THRESHOLD to considered pool once MAX_SELECTED_PEERS
 // reach MAX_MESSAGE_THRESHOLD.
-static constexpr uint16_t MIN_MESSAGE_THRESHOLD = 9;
-static constexpr uint16_t MAX_MESSAGE_THRESHOLD = 10;
+static constexpr uint16_t MIN_MESSAGE_THRESHOLD = 19;
Why were the *_MESSAGE_THRESHOLD values changed?
The threshold is used to identify the fastest peers for delivering messages from a validator. In other words, the first N peers to reach MIN_MESSAGE_THRESHOLD are considered sources for messages from a given validator.
I increased these values to account for network lag variations. For example, a peer that is generally on the fastest path from a validator to us might experience temporary networking issues, causing delays. Without adjustment, we might switch to a slower peer as the message source, even though the original peer’s lag is only temporary. Increasing these values provides more samples, allowing the server to make a more informed decision about which peers are reliable message sources.
headers_,
FEATURE_VPRR,
app_.config().VP_REDUCE_RELAY_ENABLE))
, vpReduceRelayEnabled_(app_.config().VP_REDUCE_RELAY_ENABLE)
I'm a bit concerned about abandoning the handshake negotiation. A peer that doesn't support this feature is basically indicating that it doesn't want to receive squelch messages. It's sort of an honor system, because a peer doesn't check whether reduce relay was negotiated when it receives squelch messages.
Without this feature, squelching would only provide network-wide benefits if all servers enabled it simultaneously, which is too risky.
By allowing squelch messages to be submitted to peers—even if they don’t run squelching themselves—we can achieve the optimization’s benefits without requiring full network adoption. This enables safe testing and verification of squelching across the network.
I agree that this enables safe testing and gradual deployment.
I'm just saying that we are not honoring the handshake feature negotiation. Though in this particular case, squelching a peer lets that peer send fewer messages. Still, we are kind of breaking a contract.
@@ -2651,16 +2648,6 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMSquelch> const& m)
     }
     PublicKey key(slice);

-    // Ignore non-validator squelch
We don't expect this message with a non-validator key. Doesn't this open a door for spamming?
Huh, my reply wasn't submitted for some reason...
The majority of validators are untrusted; by only listening to squelches for trusted validators, we would seriously limit the impact squelching may have.
Is your concern that the memory footprint of the underlying map, where squelched validators are stored, might grow out of control?
Yes, the resource usage. We currently charge a fee for a reason. Basically, we don't expect this kind of message and will disconnect a peer if too many such messages are received.
We could add another mechanism, for example one that rate limits squelch messages?
This makes it complicated. Say I'm sending legitimate VP messages with a valid validator key to a peer, but the peer tells me to squelch messages from an invalid validator, which should not happen. If a peer doesn't want to see messages from me, then I'll eventually disconnect that peer if I keep receiving invalid squelch messages from it. Problem solved: the peer is not going to get any more messages from me.
I'm not sure what you're suggesting as an alternative.
Do nothing. We don't want to reward a misbehaving peer by accepting its squelch request. We disconnect this peer and it will not receive messages from us.
Enable passive squelching.
In other words, if server A supports squelching, and server B does not support squelching, but A sends a squelch message to B, B will act on that message.
High Level Overview of Change
Context of Change
Type of Change
.gitignore, formatting, dropping support for older tooling)
API Impact
libxrpl change (any change that may affect libxrpl or dependents of libxrpl)