
Conversation


@shrouti1995 shrouti1995 commented Aug 13, 2025

This PR implements a new connection tracking monitoring system that leverages netlink to directly access kernel conntrack data. The implementation provides zone-based monitoring capabilities, allowing for more granular network traffic analysis.

  • Tested with 1M and 2M conntrack entries created on s2r7node11.

@armando-migliaccio
Contributor

_No description provided._

Please provide a descriptive commit message.

@shrouti1995
Author

_No description provided._

Please provide a descriptive commit message.

Sure! I've updated the description for now.

@shrouti1995 shrouti1995 requested a review from Copilot August 13, 2025 10:38

@Copilot Copilot AI left a comment


Pull Request Overview

This PR implements a new connection tracking monitoring system that leverages netlink to directly access kernel conntrack data. The system provides zone-based monitoring capabilities for granular network traffic analysis and DDoS detection.

  • Adds a new ConntrackService that uses netlink to query kernel conntrack entries
  • Integrates the conntrack service into the main OVS client with proper lifecycle management
  • Updates dependencies to support the new conntrack functionality
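
For orientation, here is a minimal, illustrative sketch of the kind of netlink dump and per-zone counting the new service performs. This is not the PR's actual ConntrackService code; it only relies on the ti-mo/conntrack API as pinned in go.mod (Dial, Dump, and the Flow.Zone field), and the per-zone aggregation shown here is a simplification of what the PR adds.

```go
package main

import (
	"fmt"
	"log"

	"github.com/ti-mo/conntrack"
)

func main() {
	// Open a netlink connection to the kernel's conntrack subsystem.
	c, err := conntrack.Dial(nil)
	if err != nil {
		log.Fatalf("dialing conntrack: %v", err)
	}
	defer c.Close()

	// Dump the full conntrack table; the PR's service wraps a call like this.
	flows, err := c.Dump(nil)
	if err != nil {
		log.Fatalf("dumping conntrack entries: %v", err)
	}

	// Aggregate entries per conntrack zone for zone-based monitoring.
	counts := make(map[uint16]int)
	for _, f := range flows {
		counts[f.Zone]++
	}
	for zone, n := range counts {
		fmt.Printf("zone %d: %d entries\n", zone, n)
	}
}
```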

Reviewed Changes

Copilot reviewed 3 out of 4 changed files in this pull request and generated 5 comments.

File | Description
ovsnl/conntrack.go | New service implementing conntrack entry retrieval and conversion from kernel data
ovsnl/client.go | Integration of ConntrackService into main client with initialization and cleanup
go.mod | Dependency updates including ti-mo/conntrack library and Go version upgrade


"syscall"

"github.com/digitalocean/go-openvswitch/ovsnl/internal/ovsh"
"github.com/ti-mo/conntrack"
Copilot AI Aug 13, 2025

The conntrack import is added but there's no corresponding entry in the go.mod require section. The ti-mo/conntrack dependency should be explicitly listed in the require section rather than just indirect.
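
For reference, once promoted out of the indirect block, the dependency would sit in the main require section of go.mod, roughly like this (version taken from the existing go.mod; other entries elided):

```
require (
	github.com/ti-mo/conntrack v0.5.2
	// ... other direct dependencies ...
)
```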



// List lists all conntrack entries from the kernel.
// datapathName is not used in this direct Netlink query, as it's a global dump.
// List lists all conntrack entries from the kernel.
Copilot AI Aug 13, 2025

This comment is duplicated on lines 94 and 95. The duplicate comment about datapathName on line 94 should be removed as it's incomplete and confusing.



// Handle TCP state specifically
if f.TupleOrig.Proto.Protocol == unix.IPPROTO_TCP {
entry.State = parseConntrackStateFlags(uint32(f.ProtoInfo.TCP.State))
Copilot AI Aug 13, 2025

The TCP state parsing only handles TCP protocol, but other protocols like UDP and ICMP may also have state information. Consider adding state parsing for other protocols or documenting why only TCP state is handled.

Suggested change
entry.State = parseConntrackStateFlags(uint32(f.ProtoInfo.TCP.State))
// Handle protocol-specific state.
switch f.TupleOrig.Proto.Protocol {
case unix.IPPROTO_TCP:
	entry.State = parseConntrackStateFlags(uint32(f.ProtoInfo.TCP.State))
case unix.IPPROTO_UDP:
	// UDP state is less detailed, but we can indicate if it's "UNREPLIED" or "REPLIED".
	// See https://www.netfilter.org/projects/conntrack-tools/manpage.html
	if f.ProtoInfo.UDP.State == 1 {
		entry.State = "UNREPLIED"
	} else if f.ProtoInfo.UDP.State == 2 {
		entry.State = "REPLIED"
	} else {
		entry.State = "UNKNOWN"
	}
case unix.IPPROTO_ICMP, unix.IPPROTO_ICMPV6:
	// ICMP state is not tracked in the same way, but we can indicate type/code.
	entry.State = fmt.Sprintf("TYPE_%d_CODE_%d", f.TupleOrig.Proto.ICMPType, f.TupleOrig.Proto.ICMPCode)
default:
	// For other protocols, state is not tracked or not relevant.
	entry.State = "UNTRACKED"


go.mod Outdated
github.com/mdlayher/ethtool v0.0.0-20210210192532-2b88debcdd43 // indirect
github.com/mdlayher/socket v0.5.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/ti-mo/conntrack v0.5.2 // indirect
Copilot AI Aug 13, 2025

The ti-mo/conntrack dependency is marked as indirect but it's directly imported in ovsnl/conntrack.go. This should be moved to the direct require section.

Suggested change
github.com/ti-mo/conntrack v0.5.2 // indirect


// Start dump in goroutine
go func() {
defer close(flowChan)
flows, err := s.client.Dump(nil)
Contributor

I am curious how well Dump scales, and whether you should be using DumpFilter or DumpExpect instead (or maybe even some new variant that just counts, if needed).
How many entries did you scale to in your test setup?

Author

I did a POC but have not tested under heavy traffic yet. I will check DumpFilter and DumpExpect as well. Let me get back to you.

Contributor

You won't need heavy traffic. Just scale up to like a million or two Conntrack entries and see if it performs well.

Author

Updated the code and checked with 1 million conntrack entries; it is working. Just a heads up: the code still needs a lot of cleanup.

Contributor

That's encouraging. While you are at it, could you try the max conntrack limit as well?

Contributor

Looking at the snapshot from your scaled run, it does indicate the irq plateau for the duration of the run. I am assuming there were no CPU lockup messages from the kernel during this run, correct?

Did you get a chance to optimize the frequency of metrics collection?

On another note, this collection should be controlled with a config knob and we should slow roll this carefully.

Also cc @jcooperdo for another pair of eyes.

Author

@shrouti1995 shrouti1995 Sep 23, 2025

@do-msingh was working on the issue you caught from the screenshot; the conntrack count refresh was not being done properly. It looks under control now. So far I have tested seeding conntracks to a specific droplet on a hypervisor. In this screenshot you will see only around a 400K jump because, for easy testing, I kept the timeout at 10 minutes, even though I actually created 2.6 million conntracks. I will test scenarios like a 1-hour timeout and seeding conntracks to multiple droplets (10-20), and see how the system performs. Will keep you posted here.

Author

@shrouti1995 shrouti1995 Sep 23, 2025

While testing with a 1-hour timeout and 2.6M conntracks created against 1 droplet in a single zone, there are some small discrepancies due to processing delay (for example, we missed counting 12k events while running a sync for 2.6M conntracks).
I can fix this later as an improvement task.

Author

@shrouti1995 shrouti1995 Sep 26, 2025

Tested with 3 droplets in three different zones, each receiving 2.6 million events over 4 hours.
(Screenshots from the scaled run attached.) Did not see any OOM kill errors.

Author

@shrouti1995 shrouti1995 Sep 29, 2025

For comparison, when I created conntracks the same way without my changes to openvswitch_exporter, the graph looks like the above.

@shrouti1995 shrouti1995 requested a review from do-msingh August 28, 2025 16:14
@shrouti1995 shrouti1995 marked this pull request as ready for review September 2, 2025 20:00
@shrouti1995
Author

The build is failing due to the Go version on my local machine; this repository uses an older version. Once the code is signed off, I will install the older version and push. I've kept it like this for now.


// NewZoneMarkAggregator creates a new aggregator with its own listening connection.
func NewZoneMarkAggregator(s *ConntrackService) (*ZoneMarkAggregator, error) {
log.Printf("Creating new conntrack zone mark aggregator...")
Contributor

Could you remove these logs?

return nil, fmt.Errorf("failed to create listening connection: %w", err)
}

log.Printf("Successfully created conntrack listening connection")
Contributor

same as above

// Start dump in goroutine
go func() {
defer close(flowChan)
flows, err := s.client.Dump(nil)
Contributor

This script would be nice to integrate with Chef, and maybe export some metrics using node exporter so we can build some dashboards around it. In your tests, could you run at scale for an extended period, like a couple of hours, and check average CPU utilization? Do you only see CPU spikes around the time metrics are collected, and for how long? Also, @jcooper had a suggestion to reduce the frequency of collecting the metrics, or maybe to optimize it to reduce load.
Lastly, can you check the dmesg output at scale as well to make sure we are not missing anything?
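
One way to handle the node exporter part mentioned above is its textfile collector: the monitoring process periodically writes per-zone counts in Prometheus exposition format to a .prom file that node_exporter scans. A hedged sketch follows; the metric name, label, and output path are illustrative and not from this PR.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// writeZoneCounts renders per-zone conntrack counts in Prometheus text format
// and writes them to a file picked up by node_exporter's textfile collector.
func writeZoneCounts(path string, counts map[uint16]int) error {
	var b strings.Builder
	b.WriteString("# HELP conntrack_zone_entries Conntrack entries per zone.\n")
	b.WriteString("# TYPE conntrack_zone_entries gauge\n")
	for zone, n := range counts {
		fmt.Fprintf(&b, "conntrack_zone_entries{zone=\"%d\"} %d\n", zone, n)
	}
	// Write to a temp file first, then rename, so node_exporter never reads a partial file.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(b.String()), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Example values only; real counts would come from the conntrack dump.
	if err := writeZoneCounts("/var/lib/node_exporter/textfile/conntrack_zones.prom", map[uint16]int{1: 42, 7: 10}); err != nil {
		log.Fatal(err)
	}
}
```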

go.mod Outdated
module github.com/digitalocean/go-openvswitch

go 1.16
go 1.23.0
Contributor

The build is failing due to go version in my local machine. In this repository we are using older version. When the code is signed off I will try to install the older version and push it. Kept it like this for now.

It's probably time we bump this. Let's try to get tests to pass with a recent version.

ovsnl/client.go Outdated
package ovsnl

import (
"context" // Used in commented aggregator code
Contributor

nit: I'm not seeing this used anywhere, can it be removed?

}

// ForceSync performs a manual sync (disabled for large tables)
func (a *ZoneMarkAggregator) ForceSync() error {
Contributor
The reason will be displayed to describe this comment to others. Learn more.

This method doesn't appear to be used, is it needed?

Author

@shrouti1995 shrouti1995 Oct 3, 2025

removed it.

}

// IsHealthy checks if the aggregator is in a healthy state
func (a *ZoneMarkAggregator) IsHealthy() bool {
Contributor

This method doesn't appear to be used, is it needed?

Author

Removed it; I missed cleaning it up. It was added for debugging purposes.

CPUs int
}

// ConntrackService manages the connection to the kernel's conntrack via Netlink.
Contributor

How does ConntrackService do what the comment suggests? The only references to it that I can find are no-op constructors/closers.

Author

refactored

Comment on lines 96 to 98
// primary counts (zone -> mark -> count)
mu sync.RWMutex
counts map[uint16]map[uint32]int
Contributor

Do we gain any benefit from mapping by zone->mark? Could we instead simplify this by mapping by zmKey?
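
To illustrate the flattening suggested here, a sketch of keying a single map by a zone+mark composite rather than nesting maps; the zmKey name follows this comment, but the exact type in the PR may differ.

```go
package aggregator

import "sync"

// zmKey is a composite key combining conntrack zone and mark.
type zmKey struct {
	Zone uint16
	Mark uint32
}

// zoneMarkCounts keeps one flat map instead of zone -> mark -> count nesting.
type zoneMarkCounts struct {
	mu     sync.RWMutex
	counts map[zmKey]int
}

func (c *zoneMarkCounts) inc(zone uint16, mark uint32) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.counts == nil {
		c.counts = make(map[zmKey]int)
	}
	c.counts[zmKey{Zone: zone, Mark: mark}]++
}

func (c *zoneMarkCounts) get(zone uint16, mark uint32) int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.counts[zmKey{Zone: zone, Mark: mark}]
}
```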

return out
}

// GetTotalCount returns the total counted entries (best-effort)
Contributor

Is this used by anything? Would it return the same as nf_conntrack_count?
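
For a quick cross-check against the kernel's own counter, nf_conntrack_count can be read straight from procfs; a minimal sketch (the procfs path is the standard kernel location, not something introduced by this PR):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readConntrackCount returns the kernel's global conntrack entry count.
func readConntrackCount() (int, error) {
	data, err := os.ReadFile("/proc/sys/net/netfilter/nf_conntrack_count")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	n, err := readConntrackCount()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("nf_conntrack_count:", n)
}
```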

Author

Removed it; I missed cleaning it up. It was added for debugging purposes.

Co-authored-by: jcooperdo <[email protected]>