Timeouts not honored #362
Timeout implementations are (for better or worse) up to the underlying HttpStack. If you're using HurlStack, the timeout is passed to setConnectTimeout and setReadTimeout on the HttpURLConnection. These are notably not equivalent to a timeout on the overall request: setConnectTimeout sets the timeout for opening the initial connection, and then, when reading data, as I understand it, the read timeout applies to each individual read from the InputStream, so it can take effect an arbitrary number of times.

This is definitely confusing, especially given that it behaves inconsistently depending on the HttpStack. The upcoming Cronet stack ignores the timeouts altogether, since Cronet itself doesn't support setting timeouts.

At the moment, my recommendation would be to apply your own timeout at a higher level if aborting the request after a certain amount of time is the desired experience. But I'll leave this open to consider whether we should make this possible within Volley directly; we're considering tweaks to the RetryPolicy interface in #80.
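A minimal sketch of the behavior described above, using only `java.net` (the helper name `open` is mine; HurlStack's actual logic lives in its `openConnection`):

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of roughly what HurlStack does with the retry policy's single
// timeout value: the same number is applied as both the connect timeout
// and the (per-read) read timeout. The helper name "open" is mine.
final class HurlTimeouts {
    static HttpURLConnection open(String url, int timeoutMs) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(timeoutMs); // bounds the TCP connect only (not DNS)
        conn.setReadTimeout(timeoutMs);    // bounds each read(), not the whole body
        return conn;
    }
}
```

Because setReadTimeout bounds each read rather than the whole response, a slow server trickling bytes can keep a request alive well past the nominal timeout.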
That's still good news for me, though. I already have to fiddle with the underlying stack to set up custom SSL, so I guess I'll have to fiddle some more for timeouts :) Thank you!
I have to say it's a bit complicated to override the behavior of the …
It seems like the core problem here is that we do expose createConnection as a protected method, where you could theoretically customize the connection being returned, but then the timeouts are applied on top of it in openConnection, which clobbers anything you might have done there.

Perhaps a quick fix would be to call getConnectTimeout/getReadTimeout to see if they're already set. I'm not sure offhand whether they're guaranteed to be unset on a fresh connection object, or whether they might still hold valid default values.
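That "check whether they're set" idea can be sketched as follows. Per the java.net.URLConnection documentation, both timeouts default to 0 (meaning infinite) on a fresh connection, so 0 can serve as the "unset" sentinel; the helper name applyDefaults is mine, not Volley's:

```java
import java.net.HttpURLConnection;

// Only apply the policy's timeouts when the connection still has its
// defaults. Per java.net.URLConnection's docs, a fresh connection reports 0
// (meaning infinite) for both timeouts, so 0 works as the "unset" sentinel.
// The helper name applyDefaults is mine, not Volley's.
final class TimeoutIfUnset {
    static void applyDefaults(HttpURLConnection conn, int timeoutMs) {
        if (conn.getConnectTimeout() == 0) {
            conn.setConnectTimeout(timeoutMs); // a subclass didn't touch it
        }
        if (conn.getReadTimeout() == 0) {
            conn.setReadTimeout(timeoutMs);
        }
    }
}
```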
This whole timeout business is leaving me sour.

You do already set the timeout as both the connect timeout and the read timeout. Granted, some implementations may need them separate. I think the best way to solve this would be to add another protected method, invoked right before the connection is used, so the implementation can do something with it and/or override whatever it wants. Having a protected …

But that won't help with the original issue I encountered. The connect timeout is really just that: a timeout on establishing the TCP connection to the remote. That (a) doesn't include DNS, and (b) if the hostname resolves to multiple IPs, connecting to each of those IPs will be attempted (as stated in Android's documentation).

What I was probably observing were timeouts due to DNS, because this was tested on a device that had its network cable removed, and the results may be a mixture of DNS cache hits and misses, since the times varied.

I don't want to retract this bug, as there is room to give users more control over initiated connections, as uncovered here; however, as the issue description originally stands, this is probably just not a bug. All I got out of it was implementing a BaseHttpStack over okhttp3, which is cool but probably wouldn't help even a little with my timeouts problem :)
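Since the connect timeout doesn't cover DNS, one way to bound the lookup yourself is to run the blocking resolution on a worker thread and cap the wait (a sketch; the class and method names are mine, not a Volley or Android API):

```java
import java.net.InetAddress;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: bound the DNS lookup yourself, since setConnectTimeout only covers
// the TCP connect. Run the blocking resolution on a worker thread and cap
// the wait. Names are mine; this is not a Volley API.
final class DnsWithTimeout {
    static InetAddress resolve(String host, long timeoutMs) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        try {
            Future<InetAddress> lookup = worker.submit(() -> InetAddress.getByName(host));
            // Throws TimeoutException if the resolver hangs past the deadline.
            return lookup.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            worker.shutdownNow(); // note: the lookup thread itself may linger
        }
    }
}
```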
+1 to the implementation of different HTTP stack timeouts being quite confusing. We had similar issues investigating Apache HTTP's timeout behavior in our own apps. Each stack has its own set of steps, which may or may not expose settings for timeouts.

That's why I would lean towards a simpler API of a top-level request timeout, at which point the request is deemed to have failed regardless of what's happening in the underlying HTTP stack. (It's already the case that a request "canceled" by Volley can still complete successfully; we just guarantee that the callbacks are not invoked. This timeout would behave similarly.) If from there it's possible to set corresponding timeouts in the underlying HTTP stack so that unneeded requests don't linger beyond the high-level timeout, that's fine, but it wouldn't affect the hard stop Volley could enforce.

No promises, of course, that we'll do this, given the work involved and the need for new APIs, but I think it would provide a significantly simpler interface. In the meantime, I would implement this in an application by layering my own timeout logic between Volley and the application.
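A sketch of that "layer your own timeout logic" approach in plain Java: arm a deadline that fires a cancel action (for example, calling Request.cancel() and delivering your own timeout error) unless the request completes first. The names here are mine; Volley has no such API today:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hard top-level deadline layered on top of Volley: after timeoutMs the
// cancel action runs unless complete() was called first. Names are mine;
// this is not a Volley API.
final class RequestDeadline {
    interface Handle { void complete(); }

    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "request-deadline");
                t.setDaemon(true);
                return t;
            });

    /** Arms a deadline; exactly one of cancelAction / complete() wins. */
    static Handle arm(long timeoutMs, Runnable cancelAction) {
        AtomicBoolean settled = new AtomicBoolean(false);
        ScheduledFuture<?> timer = SCHEDULER.schedule(() -> {
            if (settled.compareAndSet(false, true)) {
                cancelAction.run(); // e.g. request.cancel() + your own error
            }
        }, timeoutMs, TimeUnit.MILLISECONDS);
        return () -> {
            if (settled.compareAndSet(false, true)) {
                timer.cancel(false);
            }
        };
    }
}
```

This mirrors the cancel semantics described above: the underlying request may still finish, but your callbacks treat it as timed out.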
I'm facing this problem on 1.1.0. I'll upgrade to 1.1.1, but I don't see any commits after 1.1.0 that would have addressed it. The request times out significantly later than the prescribed timeout.
There was one commit to subtract queue waiting time from the total request time, but as I understand it, the log above indicates that the request was taken into the queue 1 ms after being submitted. I have my own retry policy class (I created the simplest possible one while I was struggling with this problem):
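The commenter's class itself wasn't captured above. A "simplest possible" policy along those lines might look like the following; Volley's RetryPolicy interface is redeclared here (with plain Exception in place of VolleyError) so the sketch is self-contained:

```java
// Minimal retry policy mirroring Volley's RetryPolicy interface. The
// interface is redeclared (Exception instead of VolleyError) so this
// sketch compiles without the Volley dependency.
interface RetryPolicy {
    int getCurrentTimeout();      // current timeout in ms, per Volley's contract
    int getCurrentRetryCount();
    void retry(Exception error) throws Exception;
}

// Fixed timeout, zero retries: retry() rethrows so the request fails at once.
final class NoRetryPolicy implements RetryPolicy {
    private final int timeoutMs;

    NoRetryPolicy(int timeoutMs) { this.timeoutMs = timeoutMs; }

    @Override public int getCurrentTimeout() { return timeoutMs; }
    @Override public int getCurrentRetryCount() { return 0; }
    @Override public void retry(Exception error) throws Exception {
        throw error; // never retry
    }
}
```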
It's not consistent either, though the numbers are somewhat suspicious. In one session (consecutive requests), for the same specified 30s timeout, I get:
70,309ms
120,342ms
160,641ms
120,442ms
160,467ms