Opened 16 years ago
Closed 14 years ago
#8622 closed defect (bug) (maybelater)
HTTP API Fallover & non-blocking order doesn't appear to be working
Reported by: | DD32 | Owned by: | jacobsantos |
---|---|---|---|
Milestone: | | Priority: | normal |
Severity: | normal | Version: | 2.7 |
Component: | HTTP API | Keywords: | has-patch tested 2nd-opinion |
Focuses: | | Cc: | |
Description
It seems that the _getTransport() and _postTransport() functions are not falling over to the next available transport upon error, as the if block only ever matches a single transport.
That also has the effect of making the non-blocking priority list useless, since whichever transport is matched will be the only one available.
The attached patch changes it to test each object. This could have the negative downside of making requests take even longer in cases where slowdowns are being experienced (i.e. 4x 2-second delays, one for each transport, instead of only 1x 2-second delay).
Perhaps some caching needs to be done to not use a transport if it has failed in the last 12 hours, much like the fsockopen class attempts to do.
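In rough terms, the loop the patch is aiming for looks something like the following (a minimal sketch of the idea, not the attached patch itself; the wrapper function name and error message are made up, the transport classes are the ones dumped below):

function wp_http_request_with_fallover( $url, $args ) {
    // Try every transport whose test() passes, in priority order,
    // and only give up once all of them have returned a WP_Error.
    $transports = array( new WP_Http_Curl(), new WP_Http_Streams(), new WP_Http_Fopen(), new WP_Http_Fsockopen() );
    $response   = new WP_Error( 'http_failure', 'No working HTTP transports found.' );

    foreach ( $transports as $transport ) {
        if ( ! $transport->test( $args ) )
            continue; // this transport reports itself unusable, skip it

        $response = $transport->request( $url, $args );
        if ( ! is_wp_error( $response ) )
            return $response; // the first transport that succeeds wins
    }

    return $response; // the last error seen (or the "none found" error)
}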
Before:
array
  0 => & object(WP_Http_Curl)[241]
transports: WP_Http_Curl
Requesting http://dd32.id.au/ using WP_Http_Curl took 0.000s
WP_Http_Curl has failed with error: test
(curl::request is returning a dummy error object here; in reality, if it fails, it'll be timing out after 2-4 seconds)
(Aside: I believe this is the root issue on #8590, as it'll attempt to fire cron on every page load, and doesn't appear to be going into non-blocking mode in the details on that ticket)
After:
array
  0 => & object(WP_Http_Curl)[241]
  1 => & object(WP_Http_Streams)[242]
  2 => & object(WP_Http_Fopen)[243]
  3 => & object(WP_Http_Fsockopen)[246]
transports: WP_Http_Curl, WP_Http_Streams, WP_Http_Fopen, WP_Http_Fsockopen
Requesting http://dd32.id.au/ using WP_Http_Curl took 0.001s
WP_Http_Curl has failed with error: test
Requesting http://dd32.id.au/ using WP_Http_Streams took 0.710s
WP_Http_Streams has failed with error: Could not open handle for fopen() to http://dd32.id.au/
Requesting http://dd32.id.au/ using WP_Http_Fopen took 1.214s
(that's a var_dump of the available transports at the start there)
jacobsantos: Any feedback?
Attachments (5)
Change History (29)
@
16 years ago
(hey, I just realised I didn't need that var_dump() of transports in the debug, it's late :))
#2
@
16 years ago
I think using the same method as fsockopen is the wrong path, because it is a totally hackish way of going about it. I think it might be better, instead, to return the entire array and cycle through each one. The reason I didn't do this before was that a great deal of the API changed and I was unsure of where in the execution to put the code. The second reason is that the worst-case performance would be entirely terrible.
Also, the HTTP API shouldn't fail where other transports succeed. If one fails, then it should be reasonable to say that all of the others will also fail.
More hooks should be introduced to allow for a plugin to take hold and allow for more control. The issue is that anything done will introduce more performance overhead and slow the API down.
#4
@
16 years ago
Also, the HTTP API shouldn't fail where other transports succeed. If one fails, then it should be reasonable to say that all of the others will also fail.
Very true, except in this case it appears that cURL IS failing, or at least timing out very quickly.
More hooks should be introduced to allow for a plugin to take hold and allow for more control.
I went to make a plugin that logged that sort of thing, only to find there weren't enough hooks. Not sure what hooks would achieve it, or if they'd even be useful though.
#8
@
15 years ago
That's a method I thought of: once a transport fails, blacklist it; if it fails again after a blacklist times out, blacklist it for twice as long, etc.
The problem with that patch is that the ::test() functions use very little processor time; that's not the place where transports fail. They fail when the request is made and it errors out (which could be for any number of network-related issues, and not just transport issues).
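Something like the following could express the doubling blacklist described above, called from wherever the request actually errors out (the helper name, option/transient keys and the 12-hour starting duration are all made up for illustration):

function wp_http_blacklist_transport( $transport_name ) {
    $key      = 'blacklist_' . $transport_name . '_transport';
    $duration = (int) get_option( $key . '_duration', 43200 ); // start at 12 hours

    // Sideline the transport so its test() can refuse to run for a while...
    set_transient( $key, true, $duration );

    // ...and double the penalty for the next failure after this blacklist expires.
    update_option( $key . '_duration', $duration * 2 );
}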
#9
@
15 years ago
So, something like:
on cURL fail: set_transient( 'blacklist_curl_transport', true, 86400 /* 1 day */ );
and then in the test() function:
if ( get_transient( 'blacklist_curl_transport' ) ) return false;
makes sense.
#10
@
15 years ago
Sure, something like that would work.
I believe fsockopen() already has a blacklisting bit in it due to problems it was having.
#11
@
15 years ago
- Keywords needs-patch added; has-patch needs-testing removed
- Milestone changed from 2.8 to Future Release
let's move this to future, pending the "correct" patch
#14
@
14 years ago
- Keywords has-patch tested added; needs-patch removed
- Owner set to jacobsantos
- Status changed from new to accepted
Should not need testing. Transients can be implemented later for caching, but I'm not concerned with it right now.
I can't seem to set the milestone.
@
14 years ago
Heavy-handed approach. Assumes fopen is gone (#13915), therefore Get/Post can be combined.
#16
@
14 years ago
This also prevents unit testing of the class: even if you filter for which class is to be used, it may use a different transport (i.e. whatever transport is the default on the system).
#17
@
14 years ago
- Resolution set to invalid
- Status changed from accepted to closed
As of #11613 being closed, this is mostly fixed.
The non-blocking order is no longer important, as the other ticket takes care of it.
The only remaining issue is that if a transport fails, that's it, end of story. The only place this causes problems is in cases where the transport itself has a problem, i.e. #16870.
I'm closing this as invalid now that _getTransport() and _postTransport() have been removed in #11613, and we can track the failures in #16870.
#18
@
14 years ago
- Resolution invalid deleted
- Status changed from closed to reopened
The point of this ticket and the patch was not _getTransport() and _postTransport(); it was the ability to allow fail-over to another transport when the transport itself has a problem. True, given the small number of people who have problems with it, it is safe to ignore this as a personal problem for those edge cases. However, it is a small thing to add support for this, since it has always been the original intention for even having the foreach in the code.
As for unit testing, well, you can't really unit test WP_Http::request(); functional testing or acceptance testing, sure. However, unit testing has to be done for each individual class, and they weren't written with unit testing in mind, so you are still going to have issues unit testing the transports.
#19
follow-ups:
↓ 20
↓ 21
@
14 years ago
As for unit testing,
Very simple to do. Filter the ::test() methods to select which transport you wish to test, test, remove filters, rinse repeat.
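For example, assuming the per-transport filters that the test() methods already pass their result through ('use_curl_transport', 'use_fsockopen_transport', etc.), a test run could force the streams transport with something like:

// Force requests through the streams transport for this test run.
add_filter( 'use_curl_transport',      '__return_false' );
add_filter( 'use_fsockopen_transport', '__return_false' );

$response = wp_remote_get( 'http://example.com/' );
// ... assertions against $response go here ...

// Remove the filters so the next run can target a different transport.
remove_filter( 'use_curl_transport',      '__return_false' );
remove_filter( 'use_fsockopen_transport', '__return_false' );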
My suggestion here is to push the changes needed through #16870; specifically, when a transport fails, progressively mark it as unusable.
The alternative is to add an extra parameter, similar to the block of code starting at line 206 in this patch, which would be purely for plugin usage.
In core we're striving for an HTTP API which just works: plugin authors shouldn't have to do anything more than make the request and handle an error condition being returned. But at the same time, it shouldn't cause unnecessary slowdowns for the user; failing over to the next transport over and over on a plugin's request should not be possible.
#20
in reply to:
↑ 19
@
14 years ago
Replying to dd32:
As for unit testing,
Very simple to do. Filter the ::test() methods to select which transport you wish to test, test, remove filters, rinse repeat. [...]
In core we're striving for an HTTP API which just works: plugin authors shouldn't have to do anything more than make the request and handle an error condition being returned. But at the same time, it shouldn't cause unnecessary slowdowns for the user; failing over to the next transport over and over on a plugin's request should not be possible.
That you've got a design problem doesn't come to your mind while you're making that many words, right? (Not meant to sound rude, it's an actual question, not an attack, we need to deal with what we have so far)
#21
in reply to:
↑ 19
@
14 years ago
Replying to dd32:
As for unit testing,
Very simple to do. Filter the ::test() methods to select which transport you wish to test, test, remove filters, rinse repeat.
If you are writing your tests this way, then you are doing functional testing. If you aren't unit testing the individual transports as well, then you are doing it wrong.
Replying to dd32:
My suggestion here is to push the changes needed through #16870; specifically, when a transport fails, progressively mark it as unusable.
That might just work; however, it depends on what you mean by failure. If there is a network issue preventing any of the transports from working, then you'll blacklist all of the transports from being used. There would have to be a way to check which errors are considered the transport not working.
Also, given that the first transport should normally almost always work, you'd only ever be using the first transport. The failure might be network-related, in which case the next transport might be unaffected by it. If you wish, you may fall back to two transports, either fsockopen or streams, with fsockopen as the preferred alternative if available.
Replying to dd32:
The alternative is to add an extra parameter, similar to the block of code starting at line 206 in this patch, which would be purely for plugin usage.
In core we're striving for an HTTP API which just works: plugin authors shouldn't have to do anything more than make the request and handle an error condition being returned. But at the same time, it shouldn't cause unnecessary slowdowns for the user; failing over to the next transport over and over on a plugin's request should not be possible.
Failing when the first transport fails and then conditionally preventing that transport from being used, as opposed to just dropping to the next available transport. I don't know, perhaps both might be better. I would say that if you are doing an HTTP request that is non-blocking, performance and time don't really matter, and what you want is for the request to work regardless of how long it takes.
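One speculative way to draw the line between a network failure and a transport failure before blacklisting anything (the helper name, the message checks and the 12-hour duration are illustrative only, not anything in core or the patches here):

function wp_http_maybe_blacklist_transport( $transport_name, $response ) {
    if ( ! is_wp_error( $response ) )
        return; // the request worked, nothing to do

    $message = $response->get_error_message();

    // A DNS failure or an unreachable network will hit every transport the same
    // way, so don't blame (or blacklist) an individual transport for those.
    if ( false !== stripos( $message, 'resolve host' ) || false !== stripos( $message, 'network is unreachable' ) )
        return;

    // Anything else is treated as the transport itself misbehaving.
    set_transient( 'blacklist_' . $transport_name . '_transport', true, 43200 ); // 12 hours
}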
#22
follow-up:
↓ 23
@
14 years ago
- Keywords 2nd-opinion added
That you've got a design problem doesn't come to your mind while you're making that many words, right? (Not meant to sound rude, it's an actual question, not an attack, we need to deal with what we have so far)
No, I agree with you there. It's a design problem for unit testing that you have to filter which transport to use; it is not, however, a design problem for users of the API, as they should never attempt to (or have a need to) use a transport directly.
That might just work; however, it depends on what you mean by failure. If there is a network issue preventing any of the transports from working, then you'll blacklist all of the transports from being used. There would have to be a way to check which errors are considered the transport not working.
Which is exactly why we do not have fallover in core; it hasn't been there since day 1 (even though it was intended) and there has been no need to add it.
In most cases where a transport has failed, it's failed consistently (cURL and fsockopen) with the same error condition (DNS and no connectivity through that transport).
Adding fallover, enabled by default, will increase the timeouts that users experience.
Whilst a user should never be waiting on an HTTP request to complete in general, we do force that upon them in the Updaters/Installers as well as a few other locations related to media. Waiting for all the transports (or a 2nd transport) to fail would in many cases lead to a bad user experience ("It's slow!").
#23
in reply to:
↑ 22
@
14 years ago
Replying to dd32:
It's a design problem for unit testing that you have to filter which transport to use; it is not, however, a design problem for users of the API, as they should never attempt to (or have a need to) use a transport directly.
+1. Often it is difficult to test code properly due to how it is architected, but that doesn't mean the code is at fault. This is exactly how the class should be tested, given its structure.
Adding fallover, enabled by default, will increase the timeouts that users experience.
Whilst a user should never be waiting on an HTTP request to complete in general, we do force that upon them in the Updaters/Installers as well as a few other locations related to media. Waiting for all the transports (or a 2nd transport) to fail would in many cases lead to a bad user experience ("It's slow!").
Consistent failures are going to be more common, and falling over in a non-persistent way doesn't make much sense from a performance standpoint. Ideally we should be deciding which transport to use and using it, not trying each transport until one decides to work.
#24
@
14 years ago
- Milestone Future Release deleted
- Resolution set to maybelater
- Status changed from reopened to closed
Based on the previous discussion here, and striving for everything to just work as streamlined as possible, I'm re-closing as maybelater instead.
If a need for fallover comes up down the line (after the previously mentioned tickets are implemented) we can revisit.
E.g. 7s before the page would load instead of 3.5s (a second transport timing out doubles the wait).