Opened 9 years ago

Last modified 5 months ago

#17301 assigned enhancement

Keep the connection open when doing upgrades or long-running operations

Reported by: markjaquith
Owned by: dd32
Milestone:
Priority: normal
Severity: normal
Version:
Component: Upgrade/Install
Keywords: has-patch needs-testing needs-refresh
Focuses:
Cc:
PR Number:

Description

When we do long-running operations like upgrades-over-FTP, we may go a while without sending any data, which may cause the connection to be closed. We should investigate whether we can send some "hey, still here" dummy data down the pipe to keep the connection from being unceremoniously closed on us.

For example, Rackspace Cloud Sites runs behind load balancers that cut the connection after 30 seconds of no data. I was able to defeat it by manually flushing some dummy data like so:

<?php
// Print a counter and flush the output roughly once per second for 45
// seconds, so the load balancer sees activity and doesn't cut the
// connection at its 30-second idle limit.
for ( $i = 1; $i < 46; $i++ ) {
	echo "$i...<br />\r\n";
	flush();
	sleep( 1 );
}
echo "DONE";

Attachments (1)

17301.diff (3.0 KB) - added by pento 6 years ago.


Change History (14)

#2 @sorich87
8 years ago

  • Type changed from defect (bug) to enhancement

#3 @hubertnguyen
8 years ago

Small note: the "Smush.it bulk" process handles this really well. I tried it on Rackspace Cloud Sites and it never timed out; I left the process running for more than 45 minutes. I hope WordPress core and plugin updates will stop timing out in the near future :)

/wp-admin/upload.php?page=wp-smushit-bulk

#4 in reply to: ↑ 1 @hubertnguyen
8 years ago

Replying to Denis-de-Bernardy:

see #10522, #10407, #11588 and #14049

Denis, I think that while #17301 may help with #14049, the two tickets are otherwise unrelated.

#5 @nacin
6 years ago

  • Component changed from General to Upgrade/Install
  • Milestone changed from Awaiting Review to 3.7

#6 @dd32
6 years ago

My initial thoughts on this were that long-running functions (such as copy_dir() and unzip_file()) could fire an action on every iteration, such as upgrader_tick.
A function hooked to that could then echo <!-- some data --> every x seconds. If we had a method that left the FTP connection idle for a decent amount of time, we could also send an FTP NOOP command every x seconds.

Further to that last point, an FTP NOOP command would also be useful during the long-running HTTP downloads in bulk upgrades, to keep the connection open.
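
A minimal sketch of what a hooked keepalive function might look like (the upgrader_tick action is the hook proposed above and is not a core action; the function name and the 5-second throttle are illustrative assumptions):

<?php
// Hypothetical keepalive: echo a harmless HTML comment and flush the
// output buffer whenever the proposed 'upgrader_tick' action fires,
// but at most once every 5 seconds so a tight loop doesn't flood output.
function keepalive_on_upgrader_tick() {
	static $last_tick = 0;

	if ( time() - $last_tick < 5 ) {
		return;
	}
	$last_tick = time();

	echo "<!-- keepalive -->\r\n";
	flush();
}
add_action( 'upgrader_tick', 'keepalive_on_upgrader_tick' );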

@pento
6 years ago

#7 @pento
6 years ago

attachment:17301.diff adds a keepalive tick to some long-running functions, as suggested by dd32.
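
For illustration only (this is a sketch of the approach dd32 describes, not the contents of 17301.diff), the producer side could fire the tick from inside a long-running copy loop along these lines:

<?php
// Hypothetical example: fire the proposed 'upgrader_tick' action once per
// file copied, giving any hooked keepalive function a chance to emit output.
function copy_files_with_tick( $files ) {
	foreach ( $files as $source => $destination ) {
		copy( $source, $destination );
		do_action( 'upgrader_tick' );
	}
}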

#8 @pento
6 years ago

  • Keywords has-patch needs-testing added

#9 @nacin
6 years ago

  • Owner set to dd32
  • Status changed from new to assigned
  • Type changed from enhancement to task (blessed)

Leaving this open for dd32.

#10 @nacin
6 years ago

  • Milestone changed from 3.7 to 3.8
  • Type changed from task (blessed) to enhancement

Per conversation with dd32.

#11 @dd32
6 years ago

  • Milestone changed from 3.8 to Future Release

#12 @chriscct7
4 years ago

  • Keywords needs-refresh added

#13 @hubertnguyen
4 years ago

Since I originally had this issue with Cloud Sites, I wanted to update this thread. The reason it was slow is that the updates were done via FTP, with the FTP user/pass defined in wp-config.php.

What happened is that every single file I/O operation was transiting over the public internet, because Cloud Sites' FTP server isn't located on the local machine (and even if it were, I'm not sure it would be faster).

With define( 'FS_METHOD', 'direct' ) in wp-config.php, the problem goes away, but the update process is much more likely to fail because of permission issues. Plugin or WP core files uploaded via FTP may not have the proper group write permission. That's not an issue (for me) via FTP, but it is an issue when PHP wants to write the files directly.

Anyway, the bottom line is: people with slow FTP uploads can use the direct method, at the risk of running into permission issues.
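
For reference, switching between the two filesystem methods is a small wp-config.php change (the FTP values below are placeholders, not real credentials):

// Option A: write files directly as the PHP user.
// Fast, but requires the PHP user to have write permission on the files.
define( 'FS_METHOD', 'direct' );

// Option B: route filesystem writes through FTP instead (placeholders shown).
// define( 'FS_METHOD', 'ftpext' );
// define( 'FTP_HOST', 'ftp.example.com' );
// define( 'FTP_USER', 'username' );
// define( 'FTP_PASS', 'password' );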

Regardless, it would be nice if the update never timed out before completing.
