Opened 5 weeks ago
Last modified 2 weeks ago
#64411 new enhancement
Media Library - Large Image Size Uploads
| Reported by: | | Owned by: | |
|---|---|---|---|
| Milestone: | Awaiting Review | Priority: | normal |
| Severity: | normal | Version: | 6.9 |
| Component: | Media | Keywords: | |
| Focuses: | performance | Cc: | |
Description
WordPress needs to natively handle large image uploads more robustly across varied server configurations.
To improve the Media Library's stability with 10MB+ assets, we need to revisit the default plupload_init configuration. Currently, the uploader often defaults to a monolithic POST request, which subjects large binaries to strict upload_max_filesize and max_execution_time limits. By enforcing a chunk_size (e.g., 1024KB) within the plupload parameters filter, we can segment the upload into smaller HTTP requests. This approach isolates failures to individual chunks rather than the entire file and effectively bypasses per-request size limits, significantly reducing failure rates on unstable client connections.
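A minimal sketch of the suggested change, assuming the core `plupload_default_settings` filter as the hook; the 1MB chunk size and retry count are illustrative values, not recommendations. Note that a matching server-side handler for reassembling chunks would also be required, since core's async-upload handler does not currently perform chunk reassembly:

```php
add_filter( 'plupload_default_settings', function ( $settings ) {
	$settings['chunk_size']  = '1024kb'; // split each upload into ~1 MB POST requests
	$settings['max_retries'] = 3;        // retry an individual failed chunk before aborting
	return $settings;
} );
```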
Regarding post-processing, the big_image_size_threshold introduced in 5.3 creates a bottleneck for high-resolution uploads. When core immediately attempts to decompress and scale a 10MB+ image (often exceeding the 2560px threshold), the memory footprint of the GD resource can spike instantly, leading to OOM (Out of Memory) fatal errors before the attachment metadata is even generated. We should consider handling these failures more gracefully, or allowing the threshold to be bypassed programmatically via the big_image_size_threshold filter when memory is detected to be insufficient, preventing the "hanging" upload experience.
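A sketch of what a memory-aware bypass could look like. The `big_image_size_threshold` filter signature and `wp_convert_hr_to_bytes()` are core; the bytes-per-pixel estimate and overhead multiplier are rough assumptions, not a core heuristic:

```php
add_filter( 'big_image_size_threshold', function ( $threshold, $imagesize, $file, $attachment_id ) {
	list( $width, $height ) = $imagesize;
	// Rough estimate: ~4 bytes per pixel for the uncompressed bitmap,
	// with headroom for the scaled copy (multiplier is a guess).
	$needed    = $width * $height * 4 * 1.8;
	$limit     = wp_convert_hr_to_bytes( ini_get( 'memory_limit' ) );
	$available = $limit - memory_get_usage( true );
	if ( $needed > $available ) {
		return false; // skip scaling and keep the original, rather than risk an OOM fatal
	}
	return $threshold;
}, 10, 4 );
```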
Finally, we should look at the prioritization of WP_Image_Editor implementations. GD struggles with large pixel buffers compared to ImageMagick; preferring Imagick where available would mitigate memory exhaustion during resizing operations. Additionally, for the chunk reassembly phase, we need to ensure the environment's max_execution_time is sufficient (ideally 300s+) to handle the I/O overhead of stitching the file parts back together on the server side.
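For the reassembly phase, a best-effort guard along these lines could be applied before stitching chunks together; `set_time_limit()` is disabled or a no-op under some host configurations, hence the `function_exists()` check and error suppression:

```php
if ( function_exists( 'set_time_limit' ) ) {
	@set_time_limit( 300 ); // 300s, matching the suggestion above; not guaranteed to take effect
}
```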
Change History (5)
#1
@
5 weeks ago
- Component changed from General to Media
- Focuses: javascript, php-compatibility removed
- Keywords: needs-testing removed
#3
@
5 weeks ago
Hi @irgordon - thanks for the ticket... left some questions and comments below:
> By enforcing a chunk_size (e.g., 1024KB) within the plupload parameters filter, we can segment the upload into smaller HTTP requests.
Interesting idea. What are the pros and cons of this approach? Does it work for all servers/clients? (I wonder why we chose the current approach originally; was the chunked approach suggested/discussed?)
> We should consider handling these exceptions more gracefully or allowing the threshold to be bypassed programmatically via big_image_size_threshold when memory is detected to be insufficient, preventing the "hanging" upload experience.
You can bypass the resizing behavior already, try this filter:
add_filter( 'big_image_size_threshold', '__return_false' );
> Finally, we should look at the prioritization of WP_Image_Editor implementations. GD struggles with large pixel buffers compared to ImageMagick; preferring Imagick where available would mitigate memory exhaustion during resizing operations.
We already prefer Imagick when both GD and Imagick are available to handle a given upload format. So I'm not sure what change you are suggesting here.
#4
@
5 weeks ago
The primary benefit of enforcing chunking is bypassing restrictive upload_max_filesize limits on shared hosting without requiring users to edit php.ini. It also improves reliability on unstable mobile networks (resumability).
However, the typical downsides are aggressive server-side WAFs (e.g., ModSecurity) blocking rapid sequential POST requests, and the need for garbage collection of orphaned chunks if an upload is abandoned.
Agreed that the filter exists for developers. However, our suggestion was aimed at UX resilience: automatically detecting if the environment has insufficient memory to handle the scaling operation and dynamically bypassing the threshold for that specific upload.
Currently, the process simply fatal errors or hangs, leaving the user confused. We are suggesting a 'fail-safe' where if the resize fails or is predicted to fail (memory check), WP skips the scaling and keeps the original, rather than terminating the upload entirely.
That clarifies the prioritization, thank you. The suggestion might be better phrased as adding a memory-check guardrail for GD specifically.
Since GD uncompresses the entire image into RAM (bitmap) to process it, it is the most common cause of a blank screen during uploads on overburdened servers. If Imagick is unavailable, could we check memory_get_usage() against the estimated memory needed for the image dimensions before attempting the GD operation? If it's going to crash, we could skip the resize (similar to the big_image_size_threshold logic) rather than crashing the process.
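A sketch of the proposed GD guardrail. The function name `maybe_can_gd_load()` is hypothetical; `wp_getimagesize()` and `wp_convert_hr_to_bytes()` are core, while the per-pixel estimate and the 1.65 fudge factor are assumptions drawn from common community heuristics for GD memory use:

```php
function maybe_can_gd_load( $file ) {
	$size = wp_getimagesize( $file );
	if ( ! $size ) {
		return false; // unreadable image; don't attempt the GD operation
	}
	// Estimate the decoded bitmap size: width * height * (channels + 1),
	// scaled by an empirical overhead factor.
	$channels = isset( $size['channels'] ) ? $size['channels'] : 4;
	$needed   = $size[0] * $size[1] * ( $channels + 1 ) * 1.65;
	$limit    = wp_convert_hr_to_bytes( ini_get( 'memory_limit' ) );
	return ( memory_get_usage( true ) + $needed ) < $limit;
}
```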
#5
@
2 weeks ago
> If it's going to crash, we could skip the resize (similar to the big_image_size_threshold logic) rather than crashing the process.
I'm not sure we can reliably predict this. That said, we could fail more gracefully, catching and reporting the error. The original upload could be preserved, but my assumption is that sub-size image creation would also fail in this instance, so I'm doubtful how useful this is for users. Serving only the giant original upload on the front end would provide a terrible experience for site visitors.
Perhaps a more ideal solution is enabling client-side media processing, as is currently being worked on in Gutenberg, to avoid the server limitations entirely.
I agree there would be some advantage to chunked uploads.
I'd be hesitant to use the process to work around server upload limits as there's no way for WordPress to know the intent of the limit. The safe assumption is that the limit is set intentionally to prevent large uploads, either by the hosting provider or the site developer. The majority of sites are on low-end hosting that includes limited disk space.