
[W.I.P] Enable passive squelching #5358

Open · wants to merge 1 commit into base: develop
Conversation

Tapanito

Enable passive squelching.

In other words, if server A supports squelching and server B does not, and A sends a squelch message to B, B will still act on that message.

High Level Overview of Change

Context of Change

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Refactor (non-breaking change that only restructures code)
  • Performance (increase or change in throughput and/or latency)
  • Tests (you added tests for code that already exists, or your new feature included in this PR)
  • Documentation update
  • Chore (no impact to binary, e.g. .gitignore, formatting, dropping support for older tooling)
  • Release

API Impact

  • Public API: New feature (new methods and/or new fields)
  • Public API: Breaking change (in general, breaking changes should only impact the next api_version)
  • libxrpl change (any change that may affect libxrpl or dependents of libxrpl)
  • Peer protocol change (must be backward compatible or bump the peer protocol version)


codecov bot commented Mar 19, 2025

Codecov Report

Attention: Patch coverage is 50.00000% with 1 line in your changes missing coverage. Please review.

Project coverage is 78.1%. Comparing base (ab44cc3) to head (7e53808).
Report is 20 commits behind head on develop.

Files with missing lines Patch % Lines
src/xrpld/overlay/detail/PeerImp.h 0.0% 1 Missing ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff            @@
##           develop   #5358     +/-   ##
=========================================
- Coverage     78.2%   78.1%   -0.1%     
=========================================
  Files          790     790             
  Lines        67738   67901    +163     
  Branches      8177    8227     +50     
=========================================
+ Hits         52962   53024     +62     
- Misses       14776   14877    +101     
Files with missing lines Coverage Δ
src/xrpld/overlay/detail/PeerImp.cpp 3.7% <100.0%> (-0.1%) ⬇️
src/xrpld/overlay/detail/PeerImp.h 13.0% <0.0%> (+0.3%) ⬆️

... and 499 files with indirect coverage changes


@@ -45,8 +45,8 @@ static constexpr auto IDLED = std::chrono::seconds{8};
 // of messages from the validator. We add peers who reach
 // MIN_MESSAGE_THRESHOLD to considered pool once MAX_SELECTED_PEERS
 // reach MAX_MESSAGE_THRESHOLD.
-static constexpr uint16_t MIN_MESSAGE_THRESHOLD = 9;
-static constexpr uint16_t MAX_MESSAGE_THRESHOLD = 10;
+static constexpr uint16_t MIN_MESSAGE_THRESHOLD = 19;
Collaborator

Why were the *_MESSAGE_THRESHOLD values changed?

Author

The threshold is used to identify the fastest peers for delivering messages from a validator. In other words, the first N peers to reach min_message_threshold are considered sources for messages from a given validator.

I increased these values to account for network lag variations. For example, a peer that is generally on the fastest path from a validator to us might experience temporary networking issues, causing delays. Without adjustment, we might switch to a slower peer as the message source, even though the original peer’s lag is only temporary. Increasing these values provides more samples, allowing the server to make a more informed decision about which peers are reliable message sources.

headers_,
FEATURE_VPRR,
app_.config().VP_REDUCE_RELAY_ENABLE))
, vpReduceRelayEnabled_(app_.config().VP_REDUCE_RELAY_ENABLE)
Collaborator

I'm a bit concerned about abandoning the handshake negotiation. A peer that doesn't support this feature has effectively indicated that it doesn't want to receive squelch messages. It's sort of an honor system, because a peer doesn't check whether it supports reduce-relay when it receives squelch messages.

Author

Without this feature, squelching would only provide network-wide benefits if all servers enabled it simultaneously, which is too risky.

By allowing squelch messages to be submitted to peers—even if they don’t run squelching themselves—we can achieve the optimization’s benefits without requiring full network adoption. This enables safe testing and verification of squelching across the network.

Collaborator

I agree that this enables safe testing and gradual deployment.
I'm just saying that we are not honoring the handshake feature negotiation. In this particular case, squelching a peer lets that peer send fewer messages. But still, we are in a sense breaking a contract.

@@ -2651,16 +2648,6 @@ PeerImp::onMessage(std::shared_ptr<protocol::TMSquelch> const& m)
}
PublicKey key(slice);

// Ignore non-validator squelch
Collaborator

We don't expect this message with a non-validator key. Doesn't this open a door for spamming?

Author

Huh, my reply wasn't submitted for some reason...

The majority of validators are untrusted; if we only listen to squelches for trusted validators, we seriously limit the impact squelching can have.

Is your concern that the memory footprint of the underlying map, where squelched validators are stored, might grow out of control?

Collaborator

Yes, the resource usage. We are currently charging a fee for a reason. Basically we don't expect this kind of message and will disconnect a peer if too many messages are received.

Author

We could add another mechanism, for example one that rate limits squelch messages?

Collaborator

This makes it complicated. Say I'm sending legitimate VP messages with a valid validator key to a peer, but the peer tells me to squelch messages from an invalid validator, which should not happen. If a peer doesn't want to see messages from me, then I'll eventually disconnect that peer if I keep receiving invalid squelch messages from it. Problem solved: the peer is not going to get any more messages from me.

Author

I'm not sure what it is that you're suggesting as an alternative.

Collaborator

Do nothing. We don't want to reward a misbehaving peer by accepting its squelch request. We disconnect this peer and it will not receive messages from us.

@vlntb vlntb self-requested a review March 26, 2025 13:23