GATEWAYS-4306: exporting metrics for conntrack per zone #137
base: master
Conversation
Please provide a descriptive commit message.

Sure! For now I updated the description.
Pull Request Overview
This PR implements a new connection tracking monitoring system that leverages netlink to directly access kernel conntrack data. The system provides zone-based monitoring capabilities for granular network traffic analysis and DDoS detection.
- Adds a new ConntrackService that uses netlink to query kernel conntrack entries
- Integrates the conntrack service into the main OVS client with proper lifecycle management
- Updates dependencies to support the new conntrack functionality
Reviewed Changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| ovsnl/conntrack.go | New service implementing conntrack entry retrieval and conversion from kernel data |
| ovsnl/client.go | Integration of ConntrackService into the main client with initialization and cleanup |
| go.mod | Dependency updates, including the ti-mo/conntrack library and a Go version upgrade |
```go
	"syscall"

	"github.com/digitalocean/go-openvswitch/ovsnl/internal/ovsh"
	"github.com/ti-mo/conntrack"
```
Copilot AI (Aug 13, 2025):
The conntrack import is added but there's no corresponding entry in the go.mod require section. The ti-mo/conntrack dependency should be explicitly listed in the require section rather than just indirect.
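After `go mod tidy`, a directly imported module is promoted out of the indirect block into its own require section. A sketch of what the tidied go.mod could look like, using only the module path and versions visible in this diff (the exact grouping is up to the tool):

```
module github.com/digitalocean/go-openvswitch

go 1.23.0

require github.com/ti-mo/conntrack v0.5.2

require (
	github.com/mdlayher/socket v0.5.1 // indirect
	github.com/pkg/errors v0.9.1 // indirect
)
```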
ovsnl/conntrack.go (outdated):
```go
// List lists all conntrack entries from the kernel.
// datapathName is not used in this direct Netlink query, as it's a global dump.
// List lists all conntrack entries from the kernel.
```
Copilot AI (Aug 13, 2025):
This comment is duplicated on lines 94 and 95. The duplicate comment about datapathName on line 94 should be removed as it's incomplete and confusing.
ovsnl/conntrack.go (outdated):
```go
// Handle TCP state specifically
if f.TupleOrig.Proto.Protocol == unix.IPPROTO_TCP {
	entry.State = parseConntrackStateFlags(uint32(f.ProtoInfo.TCP.State))
```
Copilot AI (Aug 13, 2025):
The TCP state parsing only handles TCP protocol, but other protocols like UDP and ICMP may also have state information. Consider adding state parsing for other protocols or documenting why only TCP state is handled.
Suggested change (replacing the single TCP-only assignment):

```go
// Handle protocol-specific state.
switch f.TupleOrig.Proto.Protocol {
case unix.IPPROTO_TCP:
	entry.State = parseConntrackStateFlags(uint32(f.ProtoInfo.TCP.State))
case unix.IPPROTO_UDP:
	// UDP state is less detailed, but we can indicate if it's "UNREPLIED" or "REPLIED".
	// See https://www.netfilter.org/projects/conntrack-tools/manpage.html
	if f.ProtoInfo.UDP.State == 1 {
		entry.State = "UNREPLIED"
	} else if f.ProtoInfo.UDP.State == 2 {
		entry.State = "REPLIED"
	} else {
		entry.State = "UNKNOWN"
	}
case unix.IPPROTO_ICMP, unix.IPPROTO_ICMPV6:
	// ICMP state is not tracked in the same way, but we can indicate type/code.
	entry.State = fmt.Sprintf("TYPE_%d_CODE_%d", f.TupleOrig.Proto.ICMPType, f.TupleOrig.Proto.ICMPCode)
default:
	// For other protocols, state is not tracked or not relevant.
	entry.State = "UNTRACKED"
}
```
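If the UDP branch in the suggestion is adopted, the numeric-state-to-string mapping could live in a small helper next to parseConntrackStateFlags. A minimal sketch; the 1/2 state values and the helper name mirror the suggestion above and are assumptions, not verified kernel constants:

```go
package main

import "fmt"

// udpStateString maps an illustrative numeric UDP conntrack state to a
// label. The values 1 (unreplied) and 2 (replied) come from the review
// suggestion and are assumptions, not kernel header constants.
func udpStateString(state uint8) string {
	switch state {
	case 1:
		return "UNREPLIED"
	case 2:
		return "REPLIED"
	default:
		return "UNKNOWN"
	}
}

func main() {
	fmt.Println(udpStateString(1)) // UNREPLIED
	fmt.Println(udpStateString(2)) // REPLIED
	fmt.Println(udpStateString(9)) // UNKNOWN
}
```

Keeping the mapping in one helper makes it trivial to unit-test independently of a live netlink connection.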
go.mod (outdated):
```
github.com/mdlayher/ethtool v0.0.0-20210210192532-2b88debcdd43 // indirect
github.com/mdlayher/socket v0.5.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/ti-mo/conntrack v0.5.2 // indirect
```
Copilot AI (Aug 13, 2025):
The ti-mo/conntrack dependency is marked as indirect but it's directly imported in ovsnl/conntrack.go. This should be moved to the direct require section.
Suggested change (remove this line from the indirect block):

```
github.com/ti-mo/conntrack v0.5.2 // indirect
```
ovsnl/conntrack.go (outdated):
```go
// Start dump in goroutine
go func() {
	defer close(flowChan)
	flows, err := s.client.Dump(nil)
```
I am curious how well Dump scales, and whether you should be using DumpFilter or DumpExpect instead (or maybe even some new variant that just counts, if needed).
How many entries did you scale to in your test setup?
---
I did a POC but have not tested under heavy traffic yet. Will check DumpFilter and DumpExpect as well. Let me get back to you.
---
You won't need heavy traffic. Just scale up to like a million or two Conntrack entries and see if it performs well.
---
Updated the code. Checked with 1 million conntrack entries and it is working. The code still needs a lot of cleanup; just a heads up.
---
That's encouraging. While you are at it, could you try the max conntrack limit as well?
---
Looking at the snapshot from your scaled run, it does indicate an IRQ plateau for the duration of the run. I am assuming there were no CPU lockup messages from the kernel during this run, correct?
Did you get a chance to optimize the frequency of metrics collection?
On another note, this collection should be controlled with a config knob, and we should slow-roll this carefully.
Also cc @jcooperdo for another pair of eyes.
---
@do-msingh was working on the issue you caught from the screenshot: it was not properly refreshing the conntrack count. It looks under control now. So far I tested by seeding conntracks to a specific droplet on a hypervisor. In this screenshot you will see only around a 400K jump because, for easy testing, I kept the timeout at 10 minutes, but I actually created 2.6 million conntracks. I will test scenarios like a 1-hour timeout and seeding conntracks to multiple droplets (10-20), and see how the system performs. Will keep you posted here.
---
While testing with a 1-hour timeout and 2.6M conntracks created against 1 droplet in a single zone, there are some small discrepancies due to processing delay (for example, we missed 12K events while running the sync for 2.6M conntracks).
I can fix this later as an improvement task.
---
Similarly, when I tested creating conntracks without my changes in openvswitch_exporter, the graph looks like the above.
```go
// NewZoneMarkAggregator creates a new aggregator with its own listening connection.
func NewZoneMarkAggregator(s *ConntrackService) (*ZoneMarkAggregator, error) {
	log.Printf("Creating new conntrack zone mark aggregator...")
```
Could you remove these logs?
ovsnl/conntrack.go (outdated):
```go
	return nil, fmt.Errorf("failed to create listening connection: %w", err)
}

log.Printf("Successfully created conntrack listening connection")
```
Same as above.
ovsnl/conntrack.go (outdated):
```go
// Start dump in goroutine
go func() {
	defer close(flowChan)
	flows, err := s.client.Dump(nil)
```
This script would be nice to integrate with Chef, and maybe export some metrics using node exporter so we can build some dashboards around it. In your tests, could you run at scale for an extended period, like a couple of hours, and check average CPU utilization? Do you only see CPU spikes around the time metrics are collected? For how long? Also, @jcooper had a suggestion to reduce the frequency of collecting the metrics, or maybe optimize it to reduce load.
Lastly, can you check the dmesg output at scale as well to make sure we are not missing anything?
go.mod (outdated):
```diff
 module github.com/digitalocean/go-openvswitch

-go 1.16
+go 1.23.0
```
The build is failing due to the Go version on my local machine; this repository uses an older version. Once the code is signed off I will install the older version and push. Kept it like this for now.
It's probably time we bump this. Let's try to get tests to pass with a recent version.
ovsnl/client.go (outdated):
```go
package ovsnl

import (
	"context" // Used in commented aggregator code
```
nit: I'm not seeing this used anywhere, can it be removed?
ovsnl/conntrack.go (outdated):
```go
}

// ForceSync performs a manual sync (disabled for large tables)
func (a *ZoneMarkAggregator) ForceSync() error {
```
This method doesn't appear to be used, is it needed?
---
Removed it.
ovsnl/conntrack.go (outdated):
```go
}

// IsHealthy checks if the aggregator is in a healthy state
func (a *ZoneMarkAggregator) IsHealthy() bool {
```
This method doesn't appear to be used, is it needed?
---
Removed it; missed cleaning it up. I had added it for debugging purposes.
ovsnl/conntrack.go (outdated):
```go
	CPUs int
}

// ConntrackService manages the connection to the kernel's conntrack via Netlink.
```
How does ConntrackService do what the comment suggests? The only references I can find are no-op constructors/closers.
---
Refactored.
ovsnl/conntrack.go (outdated):
```go
// primary counts (zone -> mark -> count)
mu     sync.RWMutex
counts map[uint16]map[uint32]int
```
Do we gain any benefit from mapping by zone->mark? Could we instead simplify this by mapping by zmKey?
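A sketch of the flattening the reviewer is suggesting: keying by a single comparable struct instead of nesting maps. The zmKey field names here are illustrative, not taken from the PR:

```go
package main

import "fmt"

// zmKey flattens the zone->mark nesting into one comparable map key,
// replacing map[uint16]map[uint32]int with map[zmKey]int.
type zmKey struct {
	Zone uint16
	Mark uint32
}

func main() {
	counts := make(map[zmKey]int)

	// Counting becomes a single-level increment; no lookup-or-create
	// dance for the inner map on every conntrack event.
	counts[zmKey{Zone: 1, Mark: 0x10}]++
	counts[zmKey{Zone: 1, Mark: 0x10}]++
	counts[zmKey{Zone: 2, Mark: 0x20}]++

	fmt.Println(counts[zmKey{Zone: 1, Mark: 0x10}]) // 2
	fmt.Println(counts[zmKey{Zone: 2, Mark: 0x20}]) // 1
}
```

The trade-off: per-zone aggregation then requires a scan over all keys, so the nested form can still make sense if zone-level rollups dominate reads.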
ovsnl/conntrack.go (outdated):
```go
	return out
}

// GetTotalCount returns the total counted entries (best-effort)
```
Is this used by anything? Would it return the same as nf_conntrack_count?
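For comparison, the kernel's own global total is readable from procfs at /proc/sys/net/netfilter/nf_conntrack_count. A hedged sketch; the file is absent when the nf_conntrack module is not loaded, so the error path matters:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readConntrackCount returns the kernel's global conntrack entry count
// from procfs. It returns an error on systems where the nf_conntrack
// module is not loaded and the file does not exist.
func readConntrackCount() (int, error) {
	b, err := os.ReadFile("/proc/sys/net/netfilter/nf_conntrack_count")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(b)))
}

func main() {
	n, err := readConntrackCount()
	if err != nil {
		fmt.Println("nf_conntrack_count unavailable:", err)
		return
	}
	fmt.Println("conntrack entries:", n)
}
```

Note the kernel value is the global table size; an aggregator's per-zone sum can lag behind it while a dump or event sync is in flight, which is one way to quantify the missed-events discrepancy discussed above.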
---
Removed it; missed cleaning it up. I had added it for debugging purposes.
Co-authored-by: jcooperdo <[email protected]>
* test 1 * test 2 * test 2 * test 3 * test 4 * test 5 * test 6 * test 7 * test 8 * test 9 * test 10 * test 12 * test 13 * test 14 * test 15 * test 16 * Update go.yml * clean up