WIP: Feature/double hbone #1429
base: master
Conversation
Skipping CI for Draft Pull Request.
@@ -107,7 +112,7 @@ impl Outbound {
        debug!(component="outbound", dur=?start.elapsed(), "connection completed");
    }).instrument(span);

    assertions::size_between_ref(1000, 1750, &serve_outbound_connection);
How did we get these numbers?
by looking at the current size and adding a small amount of buffer.
I think we covered this live on the WG call, but just to close the loop here - this must not grow (beyond a trivial amount), else it means every connection will use that much additional memory. Typically the fix here is to Box::pin the futures.
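For reference, a minimal sketch of that pattern (generic code, not this PR's): boxing moves the inner future's state to the heap, so the per-connection future stays small.

```rust
// Hypothetical example: `handle_connection` stands in for a per-connection
// future like `serve_outbound_connection`.
async fn handle_connection() {
    // Unboxed, `large_inner()`'s whole state machine would be embedded in
    // this future; boxing keeps only a pointer here.
    Box::pin(large_inner()).await;
}

async fn large_inner() {
    // Large state held across an await point.
    let buf = [0u8; 16 * 1024];
    tokio::task::yield_now().await;
    let _ = buf;
}
```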
src/proxy/pool.rs
Outdated
@@ -83,6 +83,26 @@ struct ConnSpawner {

// Does nothing but spawn new conns when asked
impl ConnSpawner {
    async fn new_unpooled_conn(
Anything here we can do higher up, I think, but things might change if we decide to implement pooling in this PR.
Yeah, if we want double-hbone conns to be unpooled and thus need ~none of this surrounding machinery, then I'd be inclined to just start proxy/double-hbone.rs and use that directly, rather than complicating the purpose of this file. (Could also just have a common HboneConnMgr trait or something too.)
src/proxy/outbound.rs
Outdated
// This always drops ungracefully
// drop(conn_client);
// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
// drain_tx.send(true).unwrap();
// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
drain_tx.send(true).unwrap();
let _ = driver_task.await;
// this sleep is important, so we have a race condition somewhere
// tokio::time::sleep(std::time::Duration::from_secs(1)).await;
res
Does anybody have any info on how to properly drop/terminate H2 connections over streams with nontrivial drops (e.g. shutting down TLS over an HTTP/2 CONNECT)? Right now, I'm just dropping things/aborting tasks randomly until something works.
Are you asking about how to clean up after, for example, a RST_STREAM to the inner tunnel? Or something else?
Kinda. I mostly mean the outer TLS stream because that's what I've looked at. It seems like if I drop conn_client before terminating driver_task, the TCP connection will close without sending close notifies. So yes, I'm asking if there is a way to explicitly do cleanup rather than relying on implicit drops.
I see the code changed; do you still need help figuring this out?
I'm still not confident in it. It works (on my machine), but I couldn't find any docs on proper connection termination/dropping.
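For reference, one explicit teardown order looks roughly like this (a sketch only; the helper and exact types are assumptions, with tokio-rustls assumed for the inner stream so shutdown() emits close_notify):

```rust
use tokio::io::{AsyncWrite, AsyncWriteExt};

// Hypothetical helper illustrating the ordering; `drain_tx` / `driver_task`
// mirror the names in the diff above, but the types here are guesses.
async fn shutdown_in_order<S: AsyncWrite + Unpin>(
    mut inner_tls: S,                           // e.g. a tokio-rustls client stream
    drain_tx: tokio::sync::watch::Sender<bool>, // watched by the outer H2 driver
    driver_task: tokio::task::JoinHandle<()>,   // task polling the outer connection
) -> std::io::Result<()> {
    // 1. Shut down the inner stream; for tokio-rustls this sends close_notify.
    inner_tls.shutdown().await?;
    // 2. Ask the outer HTTP/2 driver to wind down.
    let _ = drain_tx.send(true);
    // 3. Wait for the driver so the TCP FIN follows the TLS close.
    let _ = driver_task.await;
    Ok(())
}
```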
@@ -217,12 +267,12 @@ impl OutboundConnection {
        copy::copy_bidirectional(copy::TcpStreamSplitter(stream), upgraded, connection_stats).await
    }

    async fn send_hbone_request(
    fn create_hbone_request(
Git merge is getting confused here
src/config.rs
Outdated
@@ -70,7 +70,7 @@ const IPV6_ENABLED: &str = "IPV6_ENABLED";

const UNSTABLE_ENABLE_SOCKS5: &str = "UNSTABLE_ENABLE_SOCKS5";

const DEFAULT_WORKER_THREADS: u16 = 2;
const DEFAULT_WORKER_THREADS: u16 = 40;
I may have missed it in the description, but why the change here?
I was hoping it would make debugging async Rust easier (it didn't).
(if you haven't already found it, tokio-console can be helpful)
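A minimal hookup sketch, assuming the console-subscriber crate and a build with RUSTFLAGS="--cfg tokio_unstable":

```rust
#[tokio::main]
async fn main() {
    // Registers the tokio-console instrumentation layer; connect with the
    // `tokio-console` CLI afterwards.
    console_subscriber::init();
    // ... start the proxy as usual ...
}
```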
src/proxy/outbound.rs
Outdated
// Inner HBONE
let upgraded = TokioH2Stream::new(upgraded);
// TODO: dst should take a hostname? and upstream_sans currently contains E/W Gateway certs
let inner_workload = pool::WorkloadKey {
Will reorganize later.
src/proxy/outbound.rs
Outdated
    Protocol::TCP => None,
};
let (upstream_sans, final_sans) = match us.workload.protocol {
My understanding from talking to @keithmattix is that Upstream.service_sans will be repurposed to contain the identities of remote pods/waypoints, so I should change the logic of the other protocols to only use us.workload.identity instead of us.workload_and_services_san.
Yes, I think this is correct; only the double hbone codepath needs to be added/changed because there are two SANs being considered: the E/W gateway SAN and the SANs of the backends. So what you have looks right to me.
src/proxy/pool.rs
Outdated
@@ -511,10 +578,10 @@ impl WorkloadHBONEPool {
#[derive(Debug, Clone)]
// A sort of faux-client, that represents a single checked-out 'request sender' which might
// send requests over some underlying stream using some underlying http/2 client
struct ConnClient {
    sender: H2ConnectClient,
pub struct ConnClient {
fixme
Not stale
@@ -50,6 +52,7 @@ async fn main() {
        .await
        .unwrap();
    sender.wait_forever().await.unwrap();
    tokio::time::sleep(Duration::from_secs(99999999)).await;
remove me
src/proxy/socks5.rs
Outdated
@@ -107,7 +107,7 @@ impl Socks5 {
        debug!(component="socks5", dur=?start.elapsed(), "connection completed");
    }).instrument(span);

    assertions::size_between_ref(1000, 2000, &serve);
    assertions::size_between_ref(1000, 10000, &serve);
Was erroring at 5600.
If we went from 2000 to 5600, we have a problem we need to address. The size here is the per-connection memory overhead, so increases here directly lead to increased memory utilization.
I looked into this a bit more and "solved" it by boxing the futures. However, I don't know if this makes a difference in the resulting binary, because the size of futures can change based on compiler optimizations. For example, if I leave proxy_to_double_hbone unboxed and restore the limit to 2000, the tests will pass if I build them with --release, but fail otherwise. So the size of serve seems more like a red herring.
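For illustration, a standalone sketch (not the repo's assertion helper) that shows the effect: the outer future's size includes every unboxed inner future it holds across an await, while a boxed one contributes only a pointer.

```rust
async fn inner() {
    // Large state held across an await point.
    let buf = [0u8; 8 * 1024];
    tokio::task::yield_now().await;
    let _ = buf;
}

fn main() {
    let plain = async { inner().await };
    let boxed = async { Box::pin(inner()).await };
    // Exact sizes are compiler/optimization dependent, as noted above, but
    // the boxed variant stays small regardless of what `inner` holds.
    println!("plain: {} bytes", std::mem::size_of_val(&plain));
    println!("boxed: {} bytes", std::mem::size_of_val(&boxed));
}
```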
/// If there is a network gateway, use <service hostname>:<port>. Otherwise, use
/// <ip address>:<port>. Fortunately, for double-hbone, the authority/host is the same
/// for inner and outer CONNECT request, so we can reuse this for inner double hbone,
/// outer double hbone, and normal hbone.
Suggested change:
/// The authority header value for hbone requests. It is the same for normal hbone and the inner/outer
/// requests of double hbone.
    buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
    let pinned = std::pin::Pin::new(&mut self.0.read);
    copy::ResizeBufRead::poll_bytes(pinned, cx).map_ok(|bytes| buf.put(bytes))
This seems like there is no protection against ReadBuf's rules: "Panics if self.remaining() is less than buf.len()."
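A guard along these lines would respect that rule; this is a generic sketch with made-up names (ChunkReader, poll_chunk), not this file's types: buffer the chunk and copy at most buf.remaining() bytes per call.

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use bytes::Bytes;
use tokio::io::{AsyncRead, ReadBuf};

// Made-up wrapper: `poll_chunk` stands in for whatever source yields whole
// `Bytes` chunks (like the poll_bytes call above).
struct ChunkReader<F> {
    poll_chunk: F,
    leftover: Bytes, // bytes received but not yet handed to a caller
}

impl<F> AsyncRead for ChunkReader<F>
where
    F: FnMut(&mut Context<'_>) -> Poll<std::io::Result<Bytes>> + Unpin,
{
    fn poll_read(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        buf: &mut ReadBuf<'_>,
    ) -> Poll<std::io::Result<()>> {
        let this = self.get_mut();
        // Fetch a new chunk only once the previous one is fully drained.
        if this.leftover.is_empty() {
            match (this.poll_chunk)(cx) {
                Poll::Ready(Ok(chunk)) => this.leftover = chunk,
                Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
                Poll::Pending => return Poll::Pending,
            }
        }
        // Copy at most `buf.remaining()` bytes; `put_slice` panics if the
        // slice is larger than the space left in `buf`.
        let n = this.leftover.len().min(buf.remaining());
        buf.put_slice(&this.leftover.split_to(n));
        // An empty chunk (n == 0) reads as EOF, matching AsyncRead semantics.
        Poll::Ready(Ok(()))
    }
}
```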
Can we plausibly write a test around this like we have in copy.rs, wrapping in WeirdIO? It detects these types of bugs.
Yes, I'm holding back on unit tests until there is some agreement on the implementation
let _ = drain_tx.send(true);
let _ = driver_task.await;
// Here, there is an implicit, drop(conn_client).
There is no conn_client in scope, so this is a bit confusing.
src/proxy/outbound.rs
Outdated
        .as_ref()
        .and_then(|wl| wl.network_gateway.as_ref()),
) {
    (_, Some(_)) => {
I don't think there is a guarantee that something that has a network_gateway supports HBONE. Or that something that has a network_gateway is even in a different network
So we should check that the protocol is HBONE to do double hbone? What about the inner tunnel? I guess my question is how we should explicitly encode protocol information. Maybe have it be a field in network_gateway?
@howardjohn does WDS/network gateway need a new protocol field? Or are we good just checking that the protocol is HBONE and returning an Error in the rest of the cases? cc @keithmattix
Sorry, GitHub hid this comment, so I ended up saying something similar but with slightly more context in #1429 (comment). I think the WDS protocol has enough information; we just need to use the context of (src, dst) to determine whether a cross-network traversal is required.
So just looking at the source/dest networks should give enough info on whether to use the network gateway, and I can put this logic next to the waypoint logic in build_request (I think that's what the function is called).
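Roughly, as a sketch with placeholder types (not the actual WDS/ztunnel structs):

```rust
// Placeholder types for illustration only.
struct Workload {
    network: String,
    network_gateway: Option<Gateway>,
}

struct Gateway {
    address: String,
    hbone_mtls_port: u16,
}

/// Use the destination's E/W gateway only when the request crosses a network
/// boundary; a gateway being defined does not by itself mean we go through it.
fn gateway_for<'a>(src: &Workload, dst: &'a Workload) -> Option<&'a Gateway> {
    if src.network != dst.network {
        dst.network_gateway.as_ref()
    } else {
        None
    }
}
```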
let tls_stream = connector
    .connect(
        upgraded,
        rustls::pki_types::ServerName::IpAddress(wl_key.dst.ip().into()),
The servername we pass has no meaning in the underlying library, so I think it's confusing to go to the effort of picking a specific IP here vs. always hardcoding a dummy IP and hiding it in the OutboundConnector like we do today.
Right now, we use the peer address of the TCP stream, but if it has no meaning, I agree that it's better to just have a dummy value and get rid of the argument.
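A sketch of the hardcoded-dummy option (assuming the rustls pki-types API used above; where it lives, e.g. inside OutboundConnector, is up to the refactor):

```rust
use std::net::{IpAddr, Ipv4Addr};
use rustls::pki_types::ServerName;

// The value is never used for verification here, so any fixed, valid
// ServerName works; keeping it in one place avoids threading a per-connection
// IP through the call sites.
fn dummy_server_name() -> ServerName<'static> {
    ServerName::IpAddress(IpAddr::V4(Ipv4Addr::UNSPECIFIED).into())
}
```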
    )
    .await;

let _ = drain_tx.send(true);
Do we need this? Doesn't dropping all references to the connection already make it close?
It closes it, but in my testing it was not graceful (e.g. the proper close notifies weren't sent): for example, the outer HTTP/2 CONNECT stream can close before the inner TLS stream has a chance to send the close notify.
But since there have been some refactors, I'll take another look.
@howardjohn I took another look and it seems like connections aren't being closed particularly gracefully. Namely, I'm seeing FIN being sent before close notifies, but I do have a janky setup. How do we generally test for this?
        Some(network_gateway.hbone_mtls_port),
    ),
    Destination::Hostname(_) => (
        self.resolve_workload_address(&wl, source_workload, original_target_address)
This should do a lookup in the service registry
Fixes #1423
Protocol::HBONE => {
    let res = match (
        req.protocol,
        req.actual_destination_workload
The presence of a network gateway does not imply we need to reach it through the network gateway. For example, a local workload in a cluster will have a network gateway defined.
The WDS protocol is not contextual. A workload w/ a network gateway does not mean "To reach this workload, always go through this network gateway [and the control plane will dynamically set or unset this for you]"; it means "if you need to traverse a network boundary, here is how to do it".
More generally, I wonder if the code might be simplified to have a new Protocol::DoubleHbone (maybe we make a new enum like RequestProtocol though)
Agreed.
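As a rough sketch of that shape (names and the error case are assumptions framing the open question above):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RequestProtocol {
    Tcp,
    Hbone,
    DoubleHbone,
}

// Decide the request protocol once, up front, instead of re-checking the
// gateway in each branch. Whether the (non-HBONE, cross-network) case should
// be an error is still open, as discussed above.
fn request_protocol(
    dst_supports_hbone: bool,
    crosses_network: bool,
) -> Result<RequestProtocol, &'static str> {
    match (dst_supports_hbone, crosses_network) {
        (true, true) => Ok(RequestProtocol::DoubleHbone),
        (true, false) => Ok(RequestProtocol::Hbone),
        (false, false) => Ok(RequestProtocol::Tcp),
        (false, true) => Err("cross-network destination does not support HBONE"),
    }
}
```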
@@ -453,7 +551,7 @@ fn baggage(r: &Request, cluster: String) -> String {
    )
}

#[derive(Debug)]
#[derive(Debug, Clone)]
Not needed
Initial double HBONE implementation
Right now, the inner HBONE connection will only hold one CONNECT tunnel. Once the inner tunnel terminates, so will the outer tunnel (but not the outer HBONE). So when ztunnel receives its first connection to a double HBONE host (E/W gateway), it will perform two TLS handshakes. Subsequent connections to the same host will perform one TLS handshake.
This behavior is not great, but if we put the inner HBONE in the connection pool, then we pin ourselves to a pod in the remote cluster since ztunnel performs connection pooling, but is not aware of the E/W gateway's routing decision.
That being said, I think this is a good place to stop and think about control plane implementation and get some feedback on how I'm approaching this.
NOTE: The TLS/certificate-related code changes are just for my testing.
Tasks:
Some open questions:
N inner HBONE connections per E/W or per remote cluster.
References: