Replies: 6 comments 8 replies
-
I've been doing a little more digging and found this: https://github.com/bashi/early-hints-explainer/blob/main/explainer.md#fetch-and-html-integration. Then I found this: https://bugs.chromium.org/p/chromium/issues/detail?id=1361262 - apparently browsers ignore all but the first set of early hints. The complexity is pretty astounding.
I started to think about what a middleware would look like that could deal with early hints:

```ruby
class MyMiddleware
  def call(request)
    # Return a 103 informational response; the `next:` callback produces the final response.
    return Response.new(103, headers, next: proc { Response.new(200, final_headers, actual_body) })
  end
end
```

This would change the entire model for response handling and, in addition, make things like caching much more complex to implement correctly. Maybe it's viable? Or maybe there is an alternative way to do this that I'm just not considering?
-
The spec is quite clear: headers returned via early hints are an indication of what the final headers MAY contain. This allows clients to act on them early (e.g. preload assets), but only the headers in the final response are, well, final. So if a 103 early hint advertises a header (such as `link`) and the final response omits it, then the final response simply doesn't have that header. The final response shouldn't include anything from the previous 103 early hint responses.
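For example, an illustrative wire exchange (the specific headers and values are assumptions for illustration): the 103 hints at a stylesheet, and the final 200 doesn't repeat the `link` header, so the client must not treat the hint as part of the final headers:

```
HTTP/1.1 103 Early Hints
link: </style.css>; rel=preload; as=style

HTTP/1.1 200 OK
content-type: text/html
content-length: 1234

...
```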
-
I don't understand what you mean by stream cancellation. The main use of 100 Continue is for slow uploads or processing that can often time out. It's a way for the server to notify the client that everything is still OK and that it can continue to send data or wait. It's also a way for the server to make sure the client is still reading. In short, it's useful for slow synchronous endpoints, e.g. holding a lock, resizing an image / video, or whatever. Rather than making the client poll to wait for the operation to be done and then download the response, you can do it all in a single query.
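For example, here is a minimal client-side sketch using Ruby's stdlib (assuming Net::HTTP's `continue_timeout` behaves as documented; the endpoint and payload are placeholders): the client announces the body with `Expect: 100-continue` and gives the server a moment to respond before uploading.

```ruby
require 'net/http'

uri = URI("https://example.com/upload") # placeholder endpoint

Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  # Wait up to 2 seconds for `100 Continue` before sending the body anyway.
  http.continue_timeout = 2

  request = Net::HTTP::Post.new(uri)
  request['Expect'] = '100-continue'
  request['Content-Type'] = 'application/octet-stream'
  request.body = "large payload..." # stand-in for the real upload

  response = http.request(request)
  puts response.code
end
```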
-
For the interface, I think you listed the two possible designs: either a callback, or returning an iterator. For the iterator solution however, I'd suggest making it opt-in so as to avoid breaking calling code that doesn't expect it. If the caller didn't opt in, it's best to just drop the 1xx and keep reading, possibly resetting the read timeout. Similarly with the callback solution, if it isn't passed, the client should just drop them. As for naming specifically: `client.get("/index", early_hints: proc{|headers| ...})` - IMO it is a bad idea to add something for a specific status code; the naming and API should handle the entire 1xx range so that future additions are also automatically handled. So more something like a generic callback for any informational response (see the sketch below).
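A minimal sketch of what that opt-in, 1xx-wide callback might look like (the `informational:` keyword, the callback signature, `client`, and `prefetch` are all illustrative assumptions, not an existing async-http or Faraday API):

```ruby
# Hypothetical opt-in callback covering the whole 1xx range.
# `client` stands in for whatever HTTP client object is in use.
response = client.get("/index",
  informational: ->(status, headers) {
    case status
    when 103
      # Early hints: kick off preloading for the advertised resources.
      Array(headers["link"]).each { |link| prefetch(link) } # `prefetch` is an assumed helper
    else
      # Other 1xx (e.g. 100 Continue): nothing to surface, though receiving one
      # is a reasonable cue to reset the read timeout.
    end
  }
)

# Callers that don't pass the callback never see 1xx responses at all;
# the client silently drops them and keeps reading.
```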
-
On the whole, I'm also of the mind that there's only minimal benefit and potentially high complexity to handling 1xx responses. My lean is to just discard them. Even for my use case in a reverse proxy, I don't presently think it's that important to implement. Others may have a different perspective though. On the client side, sending `Expect: 100-continue` already has to be treated as best-effort, and the idea of having a proxy itself respond to it on the application's behalf only adds complexity. Related, async-http would need to always discard 1xx responses for HTTP/1.0 requests, even if it is willing to send them on h1.1 and h2. The backwards compatibility concerns mean that both client and server have to treat 1xx as optional and discardable anyway, which leaves room to argue for simply discarding all the time.
-
There is another weird edge case: for the sake of clients doing websockets, `101 Switching Protocols` is also in the 1xx range, but it effectively acts as the final response of the upgrade handshake, so a client can't blindly discard every 1xx it sees.
-
In general, I (personally) see the complexity introduced by 1xx informational responses as significantly outweighing the benefits, especially given that it feels to me most of the advantages can be had by handling them internally, as outlined under "Ignorance is Bliss" below.
The Complexity of Informational Responses
I would like to state clearly that I think informational responses break the clean request/response semantics of HTTP. Specifically, the complexity shows up in a few places, and while we can talk about those specific cases, I also don't think that "HTTP the abstract semantic" should care about the specifics too much. The idea is that client and server behaviour tied to specific status codes should not be encouraged: HTTP is an abstract interface, and we shouldn't need specific code paths for specific status fields. In practice, this is a lot worse with HTTP/1.1 and significantly better in HTTP/2+, which focuses more on the abstract protocol.
100 Continue
Internally, if the client makes a request with `expect: 100-continue`, the server can transparently handle that and write an informational response when it starts trying to read the body. This hides most of the practical value within the protocol itself, but this design won't work across proxies that aren't aware of it, e.g. a browser talking to the Ruby web application through a reverse proxy. In that setup, if 100-continue is considered purely at the protocol level, it won't be forwarded to the Ruby web application. Probably this could be okay, but the semantics of the original request are somewhat lost and essentially converted to normal stream semantics.
I could imagine that, because the proxy may choose to buffer the request before sending it to the web application, we end up with sub-optimal request/response handling, which is entirely the problem we were trying to avoid.
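To make "transparently handle that" a bit more concrete, here is a rough sketch (purely illustrative; `write_interim_response` and the surrounding names are assumptions, not the actual async-http internals) where the protocol layer defers the `100 Continue` until the application first reads the body:

```ruby
# Illustrative only: a lazy body wrapper that emits `100 Continue` the first
# time the application actually tries to read the request body.
class ContinueBody
  def initialize(stream, connection)
    @stream = stream          # the underlying request body stream
    @connection = connection  # something that can write interim responses
    @continued = false
  end

  def read
    unless @continued
      # Only now do we tell the client to go ahead and send the body.
      @connection.write_interim_response(100) # assumed helper, not a real API
      @continued = true
    end
    @stream.read
  end
end
```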
103 Early Hints
A client may make a request to an application, and that application may choose to return an informational response with details like `link` headers to preload certain resources, or perhaps other information. The client in theory should receive that information as early as possible so as to start preloading those resources if needed. This design works quite okay for browsers, but presents challenges to existing interfaces, e.g. `Async::HTTP::Client` and `Faraday`. It's not clear to me how to best expose this. In addition, it's not clear to me how those headers should be exposed to the user in the final response. This can impact the complexity of caching responses, as now we potentially have multiple sets of headers: should they be merged, or should we ignore them?
The only ideas I have about how to expose this are to change the response to potentially contain multiple responses. However, I feel like this potentially introduces a huge amount of complexity to existing usage. It might be possible to do something along the lines of the sketch below, where at least in some cases the semantics are preserved. The reason why such a feature is desirable is that it seems like the only way to preserve the general request/response symmetry required by any kind of middleware and/or proxy. In other words, every single middleware would need to correctly deal with informational responses, and I don't believe there is any way to hide that (although I'd like to be proven wrong or be inspired by a different approach).
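A minimal sketch of what such a multi-part response might look like (illustrative only, not the actual protocol-http/async-http API):

```ruby
# Illustrative sketch: the final response carries the informational (1xx)
# responses that preceded it, so middleware and proxies can forward them.
class Response
  attr_reader :status, :headers, :body, :informational

  def initialize(status, headers, body, informational: [])
    @status = status
    @headers = headers
    @body = body
    # Array of earlier [status, headers] pairs, e.g. [[103, {"link" => [...]}]].
    @informational = informational
  end
end

# A caching middleware now has to decide what to do with these:
# store them alongside the final response, merge them, or drop them, e.g.
# response.informational.select { |status, _headers| status == 103 }
```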
Ignorance is Bliss
Perhaps one way we can solve the problem is to simply hide all the informational responses from users, but process them where possible. In other words, 100 Continue is handled at the protocol layer, and 103 Early Hints is probably ignored and/or merged into the final headers for the response.
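A rough sketch of that approach on the client side (assuming a `read_response` helper that parses one response from the wire; the names are illustrative, not existing protocol-http code):

```ruby
# Illustrative client-side read loop: 1xx responses are consumed at the
# protocol layer and never surface to the caller.
def read_final_response(connection, hinted_headers = [])
  loop do
    response = connection.read_response # assumed helper: parse one response from the wire

    case response.status
    when 103
      # Either ignore the hints, or remember them for merging into the final headers.
      hinted_headers.concat(Array(response.headers["link"]))
    when 100..199
      # 100 Continue etc. are handled (or dropped) here; nothing to expose.
    else
      return response # the real, final response
    end
  end
end
```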
@zarqman do you have any thoughts on the above?