#33885 closed defect (bug) (invalid)
meta_form performs a potentially expensive query
Reported by: jorbin | Owned by:
Milestone: | Priority: normal
Severity: normal | Version: 4.3.1
Component: Administration | Keywords: has-patch needs-testing needs-unit-tests needs-refresh close
Focuses: administration, performance | Cc:
Description
This is a follow-up to #24498 and the research that @tollmanz did on the query used in meta_form. To summarize:
The query in meta_form can be incredibly slow on sites with a lot of post meta.
The solution, to me, seems to be to cache the query result and update the cached list of meta keys every time a new piece of post meta is saved. This will prevent the query from needing to be run on sites with a persistent cache.
Attachments (10)
Change History (121)
This ticket was mentioned in Slack in #core by tollmanz. View the logs.
9 years ago
#4
@
9 years ago
Submitted to MariaDB folks: https://mariadb.atlassian.net/browse/MDEV-8872
#5
@
9 years ago
I've been following this ticket and have done some testing on a dataset from a large WordPress install that is currently unusable on 4.3 because of this bug. I can confirm that changing the field to VARCHAR(191) does result in the query using the index and brings performance to near 4.2 levels.
Here are the results of my testing:
- CentOS 7
- MariaDB 5.5.40
- postmeta table has ~3.4 million rows and is 1.4 GB in size
4.1 query, UTF8mb4 meta_key:
SELECT meta_key
FROM wp_5_postmeta
GROUP BY meta_key
HAVING meta_key NOT LIKE '\_%'
ORDER BY meta_key
LIMIT 30;
14.47 seconds
4.3 query, UTF8mb4 meta_key:
SELECT DISTINCT meta_key
FROM wp_5_postmeta
WHERE meta_key NOT BETWEEN '_' AND '_z'
HAVING meta_key NOT LIKE '\_%'
ORDER BY meta_key
limit 30;
167 seconds
4.1 query, UTF8 meta_key (collation and encoding):
801 milliseconds
4.3 query, UTF8 meta_key (collation and encoding):
860 milliseconds
4.1 query, UTF8mb4, VARCHAR(191)
1.49 seconds
4.3 query, UTF8mb4, VARCHAR(191)
1.5 seconds
Doing an EXPLAIN on these queries confirms that before the VARCHAR(191) change, the query would not use the index when UTF8mb4 formatted. After the change, it uses the correct meta_key index (just as it did when UTF8 formatted).
From reading the MySQL documentation, this is not how I would expect indexes to work. The behavior described in the documentation seems to say that for a VARCHAR(255), the index would simply use the first 191 characters for the index and ignore the rest. I have not yet seen anything that suggests changing the field format itself is necessary.
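For anyone reproducing this, the check is straightforward: prefix the 4.3 query with EXPLAIN (the table name here assumes the default wp_ prefix) and look for the meta_key index in the key column of the output:

EXPLAIN SELECT DISTINCT meta_key
FROM wp_postmeta
WHERE meta_key NOT BETWEEN '_' AND '_z'
HAVING meta_key NOT LIKE '\_%'
ORDER BY meta_key
LIMIT 30;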
@tollmanz's testing suite is pretty solid, but I've got a number of real sites with large postmeta tables impacted by this problem that I can test patches on if that would be helpful.
#6 in reply to: ↑ description
@
9 years ago
- Severity changed from normal to critical
- Version set to 4.3.1
Hi there,
I'm afraid I'm no genius with this stuff, so I can't contribute much other than saying that this is still an issue in WP 4.3.1.
Is there a quick and dirty (i.e. temporary) fix to easily adjust this (whether in MySQL or by editing a core file)?
We've got 10M+ rows (WooCommerce-enabled site) and this decidedly expensive query is crippling the back end completely...
The front end seems to be OK, although it's touch and go because of the back-end issue.
I've been looking for a temporary solution and trying to find out where this expensive query sits, but I can't find anything that works (not even in wp-admin/includes/template.php).
I also already have applied these two solutions:
// Remove Custom Fields box from the Post edit screen.
function remove_post_custom_fields_now() {
    $post_types = get_post_types( '', 'names' );
    foreach ( $post_types as $post_type ) {
        remove_meta_box( 'postcustom', $post_type, 'normal' );
    }
}
add_action( 'admin_menu', 'remove_post_custom_fields_now' );
and
// Fix Custom Field slow issue.
function remove_metaboxes() {
    remove_meta_box( 'postcustom', 'page', 'normal' );
}
add_action( 'admin_menu', 'remove_metaboxes' );
It's getting so bad that I'll really be in trouble soon...
Any temporary fixes or suggestions are much, much appreciated (hopefully others can benefit too).
Thanks, CM
#7
@
9 years ago
We also had a lot of performance problems because of this query (2 million records; query ran in 8-10 seconds). Just wanted to say that the updated index fixes the problem!
#8
@
9 years ago
- Keywords has-patch needs-testing added
- Milestone changed from Awaiting Review to 4.4
I've been playing around a bit with tweaking the query, with not much luck. The MySQL optimizer just seems to have some issues with this particular prefix index in utf8mb4.
I'm wondering if it would be faster to do the sorting in PHP instead. 33885.diff is a first pass at that: grabbing a small chunk of rows, filtering them, then going back for more as needed.
For those of you with large data sets you can test on, could you please apply this patch, and tell me:
- What does performance look like to you?
- Does it give the same results as the current query?
This ticket was mentioned in Slack in #core by jorbin. View the logs.
9 years ago
#10
@
9 years ago
Here's a little nudge for all of you - we need to get this solved pretty soon (as in, in the next day or two, with 4.4 RC 1 due on the 18th), if this change is going to make it into 4.4.
Please test the patch and see if it helps.
#11
@
9 years ago
The expensive query bit me last Friday when it took a client site down. We're not using custom fields in the site, so the meta box had been disabled pretty much everywhere except for on Pages. Someone was editing a page and this triggered several instances of the query stacking up on the database layer and, eventually, locking things up entirely.
The first change I attempted was altering the database to set the size of the meta_key column to the same as the index. That by itself went a long way to help - before the change, the original query took > 140s (s as in seconds); after the change, the same query took only 0.015s.
So this is definitely a change I'd like to see us make.
However, the ask was about this specific patch. I've run both pre- and post-patch tests on Pages (again, the only post type utilizing custom fields):
Trunk: 96.7036ms
Patched: 39.9126ms
This is showing significant improvement. I like that we're doing the filtering in PHP, as it prevents running a super long query in the database that could lock things up. However, the looping behavior of the patch could result in running this new query multiple times. In my situation, it was only running once. If meta is built in such a way that this were to run more than twice, the overall performance here would actually be 'worse' than with the existing query.
For comparison, running EXPLAIN on the existing query (without the DB schema change) shows the query is scanning 1.2 million rows in order to generate our list of 30 meta keys (this is a somewhat large database). Running EXPLAIN on the patched version of the query (again, without the DB schema change) shows that we're scanning even more rows (2.5 million) to generate 115 records, and then filtering them down to just the 30 we wanted.
Trunk:
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | wp_postmeta | range | meta_key | meta_key | 767 | | 1244198 | Using where; Using temporary; Using filesort
Patched:
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | wp_postmeta | ALL | | | | | 2488394 | Using temporary
So, in this one case, a single run of the query feels faster. But it ultimately grabs far more information and loads far more results into memory in order for PHP to do the filtering outside of the database. I'm convinced we have a problem, but I'm not sure this is the right solution. Great contender, but still raises some red flags for me.
This ticket was mentioned in Slack in #core by ericmann. View the logs.
9 years ago
This ticket was mentioned in Slack in #core by wonderboymusic. View the logs.
9 years ago
#15
@
9 years ago
- Milestone changed from 4.4 to Future Release
With no commits, I'm moving this out of 4.4.
@
9 years ago
Add a filter to set predefined keys (or override with a different query) if necessary.
#16
@
9 years ago
As an alternative approach, patch 33885.2.diff adds a filter earlier in the function's execution to allow short-circuiting everything if necessary. If custom fields are a finite set, they can be hard-coded into an array. If developers instead want to pull them with a different query (or from a provider like Elasticsearch, for performance), they can hook into the same filter to grab the data as necessary.
This has the advantage of leaving existing functionality in place while still giving affected sites the ability to override the defaults in favor of performance.
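For illustration, a minimal sketch of the hard-coded short-circuit described above (the key names here are hypothetical examples, not defaults from the patch):

add_filter( 'postmeta_form_keys', function ( $keys ) {
    // Returning a non-null array skips the expensive DISTINCT meta_key query.
    return array( 'subtitle', 'source_url', 'sponsor' );
} );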
#17
@
9 years ago
So this is pretty terrible.
Really what I'd like to do here is just kill the damn meta box. Perhaps in 4.5 we can try to nuke it for all new installs.
If a site is affected, and they would like to use this filter, can't they just remove the meta box entirely instead? What's the need for the filter, in that case? The Venn diagram of (1) sites that will apply this filter and (2) sites that use custom fields is, I'm pretty sure, just wired.com (per @tollmanz, who, like @helen, is sitting next to me at php[world]). As a workaround, they could either intercept the query, copy-paste the meta box, or build a better user experience. I will lose zero sleep over this very rare situation. Thanks @helen for getting me riled up about this.
But if we're going to do the filter, let's have it be null by default; that way, someone can return an empty array.
#18
@
9 years ago
- Milestone changed from Future Release to 4.4
I agree that this meta box is terrible and that the general goal should be to remove it entirely, but I think in the meantime, a short-circuit is smart and the idea of populating the dropdown with values that aren't necessarily existing meta keys is a pretty interesting one. I wouldn't advertise that as a good use, though, since we don't want to actually encourage using the default meta box :)
Strongly agree about using null. Going to commit this as a fix for 4.4 - longer term, we shouldn't have the meta box registered by default. Not sure what can realistically be done about the actual underlying DB problem.
#20 follow-up: ↓ 21
@
9 years ago
Should we also pass the post type to the postmeta_form_keys filter, as #18979 suggests?
#21 in reply to: ↑ 20
@
9 years ago
Replying to SergeyBiryukov:
Should we also pass the post type to the postmeta_form_keys filter, as #18979 suggests?
Maybe the whole $post object, then.
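To make the eventual flow concrete, the short-circuit being discussed looks roughly like this inside meta_form() (a paraphrase for this ticket, not the exact core source):

$keys = apply_filters( 'postmeta_form_keys', null, $post );
if ( null === $keys ) {
    // No filter intervened: fall back to the expensive query.
    $keys = $wpdb->get_col( "
        SELECT DISTINCT meta_key
        FROM $wpdb->postmeta
        WHERE meta_key NOT BETWEEN '_' AND '_z'
        HAVING meta_key NOT LIKE '\\_%'
        ORDER BY meta_key
        LIMIT 30" );
}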
This ticket was mentioned in Slack in #core by wonderboymusic. View the logs.
9 years ago
#24 follow-up: ↓ 38
@
9 years ago
- Milestone changed from 4.4 to Future Release
Moving to future release aka punting, this isn't really "fixed" for 4.4 so much as you can just get around it for now without having to alter the table.
#25
@
9 years ago
Just my $0.02.
Found this slow query while testing WP 4.4.2.
Returns 0.5 million rows (of 1.9 million). Using where; Using temporary; Using filesort. 2.22 seconds.
After modifying the meta_key field to varchar(191), everything seems a lot better:
Returns a couple of rows. Using where; Using index for group-by. Taking 0.0010 seconds.
Waiting for a final fix for this. Thank you.
#26
@
9 years ago
I needed this! I have WordPress Multisite and one of the subsites has a large bbPress forum.
But I'm a little confused by the posts here. I'm on WordPress 4.4 and I need to know how I can implement this temporary fix.
I swapped out the code in template.php with the code changes in 33885.2.2.diff. Then, I ran "ALTER TABLE wp_##_postmeta MODIFY meta_key varchar(191);" (where "##" is the blog ID of my subsite).
Things are definitely faster! I went from 80-second load times on the editor to only a few seconds. But did I do this correctly?
#27 follow-up: ↓ 28
@
9 years ago
@DJosephDesign I've got a large WooCommerce-filled DB running on WP 4.4.2. Did your solution work to speed up the admin, or do any issues persist?
#28 in reply to: ↑ 27
@
9 years ago
Replying to robscott:
@DJosephDesign I've got a large WooCommerce-filled DB running on WP 4.4.2. Did your solution work to speed up the admin, or do any issues persist?
It's working for me and it definitely sped up my post editor. But I don't think this affects the rest of the admin, because this only addresses post meta.
#29
@
9 years ago
@DJosephDesign - the admin is peppered with slow queries, mainly based around the wp_postmeta table, which is hit by such things as admin reporting for WooCommerce and a variety of other not-so-necessary but nonetheless loaded features.
In staging I tested these updates and got some mild performance improvements. Queries are still slow, just not quite as painful.
#30 in reply to: ↑ 2
@
9 years ago
Spot on, I can confirm this is fixing critical issues.
I updated a client site with 20+ million rows in wp_postmeta and it didn't respond at all after the update - just waiting 200+ seconds on each page load. After changing the field length to 191 on meta_key, it's back to its original speed and returning pages in a few milliseconds again.
As a side note, I don't think it's good practice to have very long keys for meta_key anyway, so the best solution would probably be to shorten the field in an upcoming release.
It would be nice if we could add some proactive scripts to the WP upgrade wizard warning the user about breaking changes before an update. This kind of info would fit there, and I can think of a lot more good cases where both plugins and core could detect issues before a core update to avoid downtime.
Replying to tollmanz:
I think I found the issue with this query. While the index was updated to have a length of 191, the field itself was not updated to varchar(191). When I change the field to varchar(191), the index works on the field and the query is fast (i.e., a few ms).
This test run (https://travis-ci.org/tollmanz/utf8mb4-query-time/jobs/82861471) shows the results with 1 million rows, using the following table:
CREATE TABLE `wp_postmeta` (
  `meta_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `post_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `meta_key` varchar(191) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `meta_value` longtext COLLATE utf8mb4_unicode_ci,
  PRIMARY KEY (`meta_id`),
  KEY `post_id` (`post_id`),
  KEY `meta_key` (`meta_key`(191))
) ENGINE=InnoDB AUTO_INCREMENT=212732257 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The results are:
Query_ID | Duration | Query
1 | 0.00040875 | SELECT DISTINCT meta_key FROM wp_postmeta WHERE meta_key NOT BETWEEN '_' AND '_z' HAVING meta_key NOT LIKE '\\_%' ORDER BY meta_key LIMIT 30
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | wp_postmeta | range | meta_key | meta_key | 767 | NULL | 519624 | Using where; Using index
I think we just need to update the schema to change the field to varchar(191) and unfortunately write some upgrade routines.
#33 in reply to: ↑ 32
@
9 years ago
Replying to patzo:
Replying to ocean90:
#36762 was marked as a duplicate.
Has anyone been able to resolve this issue? I believe my ticket was a duplicate.
It's just really confusing when I read this page.
There are a number of factors at play, almost all related to sites with a lot stored in wp_postmeta.
Performance improvements can be found by implementing some of the solutions posted above, namely this one:
https://core.trac.wordpress.org/ticket/33885#comment:2
The SQL query which will get this done for you may be similar to this one:
https://core.trac.wordpress.org/ticket/33885?replyto=32#comment:26
If you're not using MU, and you have the standard prefix, it will be very similar to the following:
ALTER TABLE wp_postmeta MODIFY meta_key varchar(191);
I highly recommend you do this in a test environment, and back up the DB before running any ALTER TABLE stuff. At the very least, back up wp_postmeta.
Then see if the slow queries are resolved. There are a few other things we needed to look at on the sites which had slow queries - mainly very large installs with a lot of things in meta.
#34
@
8 years ago
Awesome! Our WordPress 4.5.3 + WPML + WooCommerce site (running a MariaDB 5.5.47 database on a server with 12 cores + 64 GB RAM) was facing several performance issues for a long time.
The ALTER TABLE command fixed the utf8mb4 index issue and our server is now working as expected. Thanks!
Btw, why is this fix not yet available in WordPress core?
#35
@
8 years ago
I'm running WP 4.5.3 + WooCommerce (60,000 products) on Nginx 1.9.14, MariaDB 10.0.24, PHP 5.4.45, with 8 cores and 6 GB RAM.
Before I test this out, is there any particular information I can provide that will be of use to those working on this fix?
As I seem to be the exact intended audience for this fix, I thought I'd chip in if possible.
#36 in reply to: ↑ 2
@
8 years ago
Yes, this solution worked for me. I have a wp_postmeta table with 600M rows and was experiencing the exact same slow query (60-80 sec). I used this handy command-line utility from the Percona Toolkit to modify the meta_key column from varchar(255) to varchar(191) without affecting the current site:
pt-online-schema-change --alter "MODIFY COLUMN meta_key VARCHAR(191)" D=mydatabase,t=wp_postmeta --execute
Page load time of the Edit Post screen improved by over 50x.
Replying to tollmanz:
I think I found the issue with this query. While the index was updated to have a length of 191, the field itself was not updated to varchar(191). When I change the field to varchar(191), the index works on the field and the query is fast (i.e., a few ms).
This test run (https://travis-ci.org/tollmanz/utf8mb4-query-time/jobs/82861471) shows the results with 1 million rows, using the following table:
CREATE TABLE `wp_postmeta` (
  `meta_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `post_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `meta_key` varchar(191) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `meta_value` longtext COLLATE utf8mb4_unicode_ci,
  PRIMARY KEY (`meta_id`),
  KEY `post_id` (`post_id`),
  KEY `meta_key` (`meta_key`(191))
) ENGINE=InnoDB AUTO_INCREMENT=212732257 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
The results are:
Query_ID | Duration | Query
1 | 0.00040875 | SELECT DISTINCT meta_key FROM wp_postmeta WHERE meta_key NOT BETWEEN '_' AND '_z' HAVING meta_key NOT LIKE '\\_%' ORDER BY meta_key LIMIT 30
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | wp_postmeta | range | meta_key | meta_key | 767 | NULL | 519624 | Using where; Using index
I think we just need to update the schema to change the field to varchar(191) and unfortunately write some upgrade routines.
#37
@
8 years ago
I was consistently seeing a 15s query time. I ran ALTER TABLE wp_postmeta MODIFY meta_key varchar(191); on the table and now it's down to 0.3s.
The wp_postmeta table is about 2M rows.
TY @tollmanz - hoping someone can get this into core soon?
#38 in reply to: ↑ 24; follow-up: ↓ 39
@
8 years ago
Replying to helen:
Moving to future release aka punting, this isn't really "fixed" for 4.4 so much as you can just get around it for now without having to alter the table.
Why avoid altering the table here? Is the concern that someone could be using a 191+ char key and updating could truncate the key? Could we check the max field length and then update? Or are we hoping for an upstream fix from MariaDB?
#39 in reply to: ↑ 38; follow-up: ↓ 42
@
8 years ago
Replying to jb510:
Why avoid altering the table here? Is the concern that someone could be using a 191+ char key and updating could truncate the key? Could we check the max field length and then update? Or are we hoping for an upstream fix from MariaDB?
Ideally MySQL/MariaDB/whomever else fixes whatever the bug is that we're running into.
We can't really decrease the length of the column, mostly due to problems arising from truncating keys, but also because it adds unexpected limits to key lengths which have existed this way for quite some time. Having a shorter index is really a band-aid to allow it to work across all hosts.
Ideally I think we're probably best off removing this meta box query entirely and moving towards something else. We could leverage the meta registration process, for example, to pre-fill the meta keys without a query (or just remember the last X meta_keys used by users in a transient or option).
#40
@
8 years ago
Hey,
Just posting back here because I often refer to this link (I just sent someone here), to add 2 cents. On most managed WP hosts, when the meta table is somewhat large (WARNING: non-specific language!), this can produce a big speed increase. This is most often (for me) on quite big WooCommerce installations. It almost always speeds up admin and uncached (logged-in user) page load times significantly.
So 8s down to 1.5s loading time on admin load.
This is on real live sites in the wild! Usually hosted on a dedicated managed WP host of one flavour or another.
I check for the existence of meta keys longer than 190 characters; if there are none, I then run the query:
ALTER TABLE wp_postmeta MODIFY meta_key varchar(191);
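For reference, that pre-flight check can be as simple as this (assuming the default wp_ table prefix):

-- Must return 191 or less before it is safe to shorten the column:
SELECT MAX(LENGTH(meta_key)) FROM wp_postmeta;
-- Then:
ALTER TABLE wp_postmeta MODIFY meta_key varchar(191);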
#42 in reply to: ↑ 39
@
8 years ago
Replying to dd32:
Ideally MySQL/MariaDB/whomever else fixes whatever the bug is that we're running into.
This is not a bug; it's well-documented and expected behaviour (https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html).
The query is slow because MySQL can't use a partial index on meta_key to optimize the ORDER BY clause (at least for now). There is a workaround (innodb_large_prefix) mentioned in the documentation I linked to, but it requires configuration changes to MySQL, so it's not a general solution.
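For anyone who does control their server, that workaround looks roughly like this on MySQL 5.6/5.7 (assumptions: SUPER privilege, InnoDB, and the table must use a DYNAMIC or COMPRESSED row format for the larger index prefix to take effect):

-- Allow index key prefixes up to 3072 bytes:
SET GLOBAL innodb_large_prefix = ON;
SET GLOBAL innodb_file_format = Barracuda;
-- Rebuild the table with a row format the large prefix applies to:
ALTER TABLE wp_postmeta ROW_FORMAT=DYNAMIC;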
#43 follow-up: ↓ 44
@
7 years ago
Unfortunately, I don't see MySQL/MariaDB allowing sorting by partial indexes any time soon (and certainly not back-porting it to older versions), which means we need to implement something in WordPress.
I don't like the option of making meta_key shorter, as we'll either end up truncating existing data or (if we check for longer values before truncating) have some sites with meta_key as VARCHAR(191) and some with it as VARCHAR(255). Diverging database structures make ongoing maintenance much more difficult, which I would prefer to avoid. That leaves the remaining options of more explicitly recommending the postmeta_form_keys filter, removing the meta box, or caching the query result.
I don't think that the postmeta_form_keys filter is the right solution, particularly for sites that have regularly changing meta keys. We'd effectively be asking folks to write their own caching solution.
Removing the meta box is probably a big task that will involve a lot of ongoing work unrelated to this particular ticket. Ultimately, that's the right solution, but I'm not sure if it's something we should tackle to solve this bug.
That leaves caching the query result. Now, we can either cache using WP_Object_Cache (so the cache is really only valuable to sites with a persistent caching setup), or cache the results in an option, so that it's valuable to any site. I lean towards using the object cache, as the usual caching solution, but I'm open to arguments either way.
#44 in reply to: ↑ 43
@
7 years ago
Replying to pento:
I find myself agreeing with @pento here. Unfortunately, I don't think there's some magical quick fix which we've missed.
Diverging DB structures and shortening field lengths are pretty much off the table IMHO, if only to prevent data destruction now or later (in a few years, when the reason for some people having different field lengths has been long forgotten).
Removing the meta box is probably a big task that will involve a lot of ongoing work unrelated to this particular ticket. Ultimately, that's the right solution, but I'm not sure if it's something we should tackle to solve this bug.
Personally I see the need for the meta box, but I also think it could benefit greatly from a re-think, which will probably happen as part of a new post interface at some point in the future - probably reliant upon meta keys being registered and some form of UI being assigned - but that isn't for this ticket :)
That leaves caching the query result. Now, we can either cache using WP_Object_Cache (so the cache is really only valuable to sites with a persistent caching setup), or cache the results in an option, so that it's valuable to any site. I lean towards using the object cache, as the usual caching solution, but I'm open to arguments either way.
A transient seems like the best of both worlds here - memory-cached when available, DB when not. The downside is that some people will be clearing all transients more often, and I fear those who are impacted by this the most are probably the same users. Despite that, I still think it's the best option.
I'm thinking something along the logic lines of (see the sketch below):
- if the transient doesn't exist, create it from the unique meta_keys on initial DB upgrade
- queue a single cron task to regenerate the transient on new meta_key addition
- append new meta_keys to the transient for display
- merge registered meta_keys with the transient and display
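A rough sketch of that transient approach, for discussion (the function and transient names are hypothetical, not from any patch here):

function wp33885_postmeta_form_keys() {
    $keys = get_transient( 'postmeta_form_keys' );
    if ( false === $keys ) {
        global $wpdb;
        $keys = $wpdb->get_col( "
            SELECT DISTINCT meta_key
            FROM $wpdb->postmeta
            WHERE meta_key NOT BETWEEN '_' AND '_z'
            HAVING meta_key NOT LIKE '\\_%'
            ORDER BY meta_key
            LIMIT 30" );
        set_transient( 'postmeta_form_keys', $keys, DAY_IN_SECONDS );
    }
    return $keys;
}

// Invalidate whenever a new key may have appeared.
add_action( 'added_post_meta', function () {
    delete_transient( 'postmeta_form_keys' );
} );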
#45
@
7 years ago
This seems to be a lot of work when the simplest solution would be to remove the query which populates the meta keys dropdown entirely. The default behaviour would fall back to a text input for the meta key, and the postmeta_form_keys filter could still be used on sites which want to provide a list of keys.
#46
@
7 years ago
Agreed - though it may be as easy to make this an option which defaults to "off", rather than using a filter. It's calling the expensive query when it wasn't (necessarily) needed that's the problem, imo.
Is it not an option to update the database to a shorter meta_key (varchar(191)) after confirming the non-existence of keys longer than that? I don't see how that could destroy any data, ever.
#47
@
7 years ago
Looking at this more, we are lucky that ordering by meta_key is not very popular. This appears to be the only place in core where that happens, and none of the more popular plugins do this either.
In addition, WP_Meta_Query does not support ordering by meta_key - any attempts to do so are ignored.
The simplest option is @pento's original patch. Removing the ORDER BY clause (or using ORDER BY NULL) prevents MySQL from Using filesort on the result set, which does result in an overall performance improvement.
Unfortunately, it isn't until the VARCHAR length matches the meta_key index length that Using temporary stops and the meta_key index is fully utilized.
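For clarity, the ORDER BY NULL variant mentioned above looks like this - it returns an arbitrary, unsorted 30 keys, which is exactly why it can skip the filesort:

SELECT DISTINCT meta_key
FROM wp_postmeta
WHERE meta_key NOT BETWEEN '_' AND '_z'
HAVING meta_key NOT LIKE '\_%'
ORDER BY NULL
LIMIT 30;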
As far as directly altering _postmeta is concerned:
ALTER TABLE wp_postmeta MODIFY meta_key varchar(191);
I'd like to caution against doing this for a few reasons:
- Doing this only to _postmeta and not the other meta-data tables goes against the normalization of the meta API. It's a change you will quickly forget you made here, and it will inevitably cause a problem years from now in your application.
- You'll want to run ANALYZE TABLE wp_postmeta; afterwards, or visit /wp-admin/maint/repair.php and follow the on-screen instructions. This is because ALTERing the database table alone does not rebuild the key distributions.
- On tables with millions of rows, both the ALTER and the above ANALYZE TABLE queries will take several seconds (or minutes) to complete.
- On installations with replication, you will want to monitor your secondaries, making sure their key distributions match the primary.
- Running this ALTER has the hidden consequence of also updating the meta_key index to assume the length of the meta_key column, so future ALTERs on this column will directly affect the index too:
  KEY `meta_key` (`meta_key`(191))
  // Becomes:
  KEY `meta_key` (`meta_key`)
  // Because column & key share the same length, the key length definition is dropped.
My conclusions:
- Future versions of MySQL and MariaDB will default to innodb_large_prefix being ON. This won't directly avoid temporary tables & filesorts, but it will generally improve query performance the way @matt_fw alluded to earlier.
- For anyone who has added an index to their meta_value columns (not uncommon for sites that do complex JOINs), you'll want to keep a similar eye on your customizations.
- Because of how rare & weird it is to ORDER BY meta_key, I do think it's OK not to optimize the meta database tables for this behavior.
- Thankfully, even when their lengths are misaligned, the meta_key index does still function as intended for all queries other than ORDER BY meta_key, so the meta_key index is not wholly ineffective - only for ORDER BY queries.
- @dd32's transient approach in #44 is a sound work-around for this isolated problem, for now.
- For later, we should consider what a redux of this meta-box experience looks like.
Tangent ideas:
- Like we have wp_is_large_network(), we could consider a wp_is_large_site() companion (a rough sketch follows after this list). In theory, it could be used by plugins like WooCommerce or bbPress to announce to the rest of the environment "hey, maybe the posts table will have more rows than is considered typical" so other aspects of the system (i.e. this meta-box query) can respond accordingly.
- Pippin and I have started working on a Meta Manager plugin, which includes a WP_Meta_Data_Query class for querying meta tables directly (without JOINing their paired object table). I'd written those classes to eventually make into a core ticket. It's a bit premature to mention them now, but they still might be useful if we need to build a caching strategy around meta-only queries.
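To illustrate that first tangent, a hypothetical wp_is_large_site() might look something like this (nothing like it exists in core; the name, threshold, and filter are all made up here):

function wp_is_large_site() {
    global $wpdb;
    // Use the cheap row estimate from table status rather than a COUNT(*),
    // which would itself be slow on a huge InnoDB table.
    $status = $wpdb->get_row( $wpdb->prepare( 'SHOW TABLE STATUS LIKE %s', $wpdb->postmeta ) );
    $rows   = $status ? (int) $status->Rows : 0;
    return (bool) apply_filters( 'wp_is_large_site', $rows > 1000000 );
}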
#48
@
7 years ago
@johnjamesjacoby nice points, well made :)
However, your caution aside, in reality, the sites which exhibit this issue (like the wp_is_large_site() WooCommerce sites you mention - great idea, btw) are managed by people who are seeing 8-second load times every time they do some action in the admin of their site. This is not okay for such users, who may go to some other software because of it...
Now, as WooCommerce is moving towards using "other places" to store data, in addition to the other forthcoming updates, I suspect this issue will wash through in time. BUT, where this is a problem right now, it is a big one, which can drive site admins pretty crazy!
The solution - while a band-aid - of performing some optimizations on the database does work, today, to make admin faster.
That solution being ALTER TABLE wp_postmeta MODIFY meta_key varchar(191);
"It's a change you will quickly forget you made here, and will inevitably cause a problem years from now in your application."
What, specifically, are the inevitable problems here? (assuming I don't forget changes made to the db!)
#49
@
7 years ago
Hey @robscott, thanks for the reply. I’ll cover my concern quickly here, and I’m happy to chat more with anyone more outside of this ticket (don’t want to pollute anyone’s inboxes, and don’t want to leave you hanging either.)
My concerns with one-off database alters are:
- Future attempts to alter may fail without warning, and nobody will remember why that might be
- Database alters aren’t in version control, so there is no history or log of that event having occurred to refer back to
- One table performs differently than the others, using the same procedural API on top of it, again, without an obvious reason why to other people who work on the site
- Reverting this change later (once WordPress itself is improved) may not be desirable, or forgotten completely
TL;DR - It’s confusing for other people if something else goes wrong. Improve without compromising future improvements.
#50
@
7 years ago
If we can't/shouldn't truncate the column, it seems like the best answer is to not make the query in the first place - either by reverting to a text field as John Billion suggested above, or by not loading the meta box itself unless explicitly wanted.
Would lazy loading that box, or just lazy loading the dropdown, be viable?
Are we past the "everything must work in a no-JS world" era? Maybe Ajax/lazy-load the dropdown when focused, but for no-JS just leave a text input?
FWIW we just disable that box via filter on all sites globally now and have had no issues.
#51
@
7 years ago
FWIW we just disable that box via filter on all sites globally now and have had no issues.
This is probably my permanent recommended fix for existing installations, because by the time you have this much metadata in that table, you will not derive very much joy from trying to choose a meta key from a drop-down anyway.
Add this to an mu-plugin of your choosing:
add_action( 'add_meta_boxes', function() {
    remove_meta_box( 'postcustom', null, 'normal' );
} );
Another approach:
// Could use 'load-post-new.php' instead of 'admin_init'.
add_action( 'admin_init', function() {
    foreach ( get_post_types_by_support( 'custom-fields' ) as $type ) {
        remove_post_type_support( $type, 'custom-fields' );
    }
} );
Long running queries are long running queries. Lazy loading the box or drop-down options only defers the problem, and "hiding" the meta-box doesn't help either because the same code still executes – it's just not visible. The meta-box needs to be removed after it's been added.
#52
@
7 years ago
- Keywords needs-patch added; has-patch needs-testing removed
Let's just remove the query from this meta box. If a site really wants it back, the existing postmeta_form_keys filter can be used to populate the list via whatever means is desired, including running the query that is currently in place.
#53
@
7 years ago
- Keywords has-patch added; needs-patch removed
- Milestone changed from Future Release to 4.8.1
#54
@
7 years ago
Removing the query will be a problem for the 99% of sites that aren't running into this performance issue.
The best option still seems to be caching the query result.
#55
@
7 years ago
The best option still seems to be caching the query result.
I still agree with @pento, what @dd32 proposed is the best overall option.
@johnbillion's patch approach would be fine for a new meta box, but the existing one should really keep working as-is.
This ticket was mentioned in Slack in #core by jeffpaul. View the logs.
7 years ago
#57
@
7 years ago
- Milestone changed from 4.8.1 to 4.9
Per today's bug scrub and based on what appears to be lack of agreement on approach, we're punting this to 4.9.
This ticket was mentioned in Slack in #core by jjj. View the logs.
7 years ago
#59
@
7 years ago
@johnbillion @lopwal are either of your latest patches in line with the approach described by @pento @dd32 @johnjamesjacoby? If so, then let's get this tested and moved along. If not, then it would be great to get a patch early in the 4.9 cycle so we can get solid testing/feedback... thanks!
#60
@
7 years ago
@jbpaul17 I will put my hand up to test and benchmark this stuff when any patches are forthcoming. Most of my 'solutions' are not core enough to warrant any commentary here... but I have gone far too long (10 actual human years) without any meaningful contribution to WP core. I am convinced I can assist here! Particularly with some monster Woo sites causing all manner of postmeta fun.
#61
@
7 years ago
33885.5.diff caches the result of the post meta key query, and naively clears the cache when a meta key is added or deleted.
If this approach works for folks, the cache clearing could be made a bit smarter.
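In outline, the patch's approach is along these lines (a simplified sketch, not the actual diff; the cache key and group names here are illustrative):

function wp33885_cached_meta_keys() {
    $keys = wp_cache_get( 'meta_form_keys', 'post_meta' );
    if ( false === $keys ) {
        global $wpdb;
        $keys = $wpdb->get_col( "
            SELECT DISTINCT meta_key
            FROM $wpdb->postmeta
            WHERE meta_key NOT BETWEEN '_' AND '_z'
            HAVING meta_key NOT LIKE '\\_%'
            ORDER BY meta_key
            LIMIT 30" );
        wp_cache_set( 'meta_form_keys', $keys, 'post_meta' );
    }
    return $keys;
}

// The naive cache clearing described above.
foreach ( array( 'added_post_meta', 'deleted_post_meta' ) as $hook ) {
    add_action( $hook, function () {
        wp_cache_delete( 'meta_form_keys', 'post_meta' );
    } );
}

Note that without a persistent object cache backend, this only caches within a single request.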
#63
@
7 years ago
@robscott want to give 33885.5.diff a test to see how it compares?
#64
@
7 years ago
@jbpaul17 sure thing. It will take me a little while to set up & test, but I will give it a spin and see how it works in a few setups (primarily testing "small WP" and "monster WooCommerce", as these are the two ends of the spectrum).
@johnjamesjacoby you mentioned database alterations not being in version control. This is something that I've been deliberating over for a while on my side. It feels like a place where a game-changing tool is needed: (proper, working, bidirectional) version control for MySQL (and other) databases. Until then, we have documenting changes and best practices. None of which has anything to do with core conversations really (in my opinion), except to suggest how to make running repairs in the wild for frustrated big-WP hosters and maintainers. Yes - would love to discuss this stuff elsewhere.
#66 follow-ups: ↓ 67, ↓ 68
@
7 years ago
For folks who want to see this in 4.9: October 30 is the last possible date for it to be committed, but I'll need to know if 33885.5.diff works for you well before then, so it can be polished and tested.
#67 in reply to: ↑ 66
@
7 years ago
Replying to pento:
For folks who want to see this in 4.9: October 30 is the last possible date for it to be committed, but I'll need to know if 33885.5.diff works for you well before then, so it can be polished and tested.
I'm happy to try it on my large WordPress Multisite (post-new.php currently takes 12.65 seconds to load).
But please forgive my ignorance, I don't know the proper way to implement that diff file. I'm comfortable enough in SSH, but I don't know the proper commands.
#68 in reply to: ↑ 66
@
7 years ago
Replying to pento:
For folks who want to see this in 4.9: October 30 is the last possible date for it to be committed, but I'll need to know if 33885.5.diff works for you well before then, so it can be polished and tested.
(I did some digging and found https://make.wordpress.org/core/handbook/tutorials/working-with-patches/#applying-a-patch to apply the patch.)
I now have the patch, but I'm not seeing any performance improvement.
This ticket was mentioned in Slack in #core by jeffpaul. View the logs.
7 years ago
#70 follow-up: ↓ 73
@
7 years ago
This needs commit in the next few hours if it is going to be part of 4.9. It should be part of beta4 for testing.
@DJosephDesign Do you have an external object cache on your install? You should have something like Memcached Object Cache installed.
#71
@
7 years ago
Hey guys, I had a busy time last week, and this one got punted on my side :) I am pulling together some nice big WooCommerce DBs for this, but I think I'll struggle to benchmark anything meaningful; I will try to (at least) get something concrete in the next 1-2 days - i.e. works / doesn't work; improvement shown. Ideally, I'd like to test InnoDB vs MyISAM and various PHP versions, and also "big WP" vs "small WP" in particular, to see what this throws up re improvements - though from what I can see, we're really looking for "any good reason we cannot push this live"?
One of my issues is that, having been working around this for some time using a variety of fixes, I need to make sure I'm testing a virgin DB which we have not been tuning :)
#73 in reply to: ↑ 70
@
7 years ago
Replying to westonruter:
@DJosephDesign Do you have an external object cache on your install? You should have something like Memcached Object Cache installed.
I have Zend OpCache. My server is running Centminmod LEMP stack.
#74 follow-up: ↓ 75
@
7 years ago
@DJosephDesign OK, having OPcache wouldn't help in this case. OPcache just improves the performance of PHP; it does not cache DB queries, which is what this ticket is about. You need a persistent object caching plugin that uses something like Memcached or Redis.
#75 in reply to: ↑ 74
@
7 years ago
Replying to westonruter:
@DJosephDesign OK, having OPcache wouldn't help in this case. OPcache just improves the performance of PHP; it does not cache DB queries, which is what this ticket is about. You need a persistent object caching plugin that uses something like Memcached or Redis.
I can investigate doing that. But as you might see from months ago, one of the other patches, or maybe the DB restructure, _did_ give me a noticeable performance increase.
#76
@
6 years ago
- Milestone changed from 5.0 to Future Release
Switching milestone due to the focus on the new editor (Gutenberg) for WordPress 5.0.
#77
@
6 years ago
I'm just sharing observations.
I just migrated the MySQL server of a website with around 1.4M postmeta rows to MariaDB 10.3 and came across this same issue. The following query took 30+ seconds to execute.
SELECT DISTINCT meta_key FROM #REDACTED#_postmeta WHERE meta_key NOT BETWEEN '_' AND '_z' HAVING meta_key NOT LIKE '\\_%' ORDER BY meta_key LIMIT 30
EXPLAIN shows the problem as
Using where; Using temporary; Using filesort
After checking the max length of the existing meta_key values and finding it to be below 191 characters, I altered the table and changed the meta_key length to 191 characters as suggested earlier:
SELECT max(length(meta_key)) FROM #REDACTED#_postmeta
This brought the query time down to 0.021 s.
P.S. Please note, this is an older version of WordPress.
#78
@
6 years ago
I, too, am running into this problem now ... if it gets worse I will be unable to work. I have two questions:
- this ticket is 3 years old. Is there a chance that the problem gets resolved in the near future?
- how safe is ALTER TABLE wp_postmeta MODIFY meta_key varchar(191); - and can it be reversed to its former length without problems?
EDIT: This did it for me:
https://9seeds.com/wordpress-admin-post-editor-performance/
Best regards,
Alex
#79
@
6 years ago
I have to ask the same thing as @faglork: is there going to be a fix for this after 3 years?
I had to follow the 9seeds.com guide to fix this!
#80
@
6 years ago
Quick question: does this problem affect utf8mb3 or latin1? Looking at this doc (https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-conversion.html), I infer that utf8mb3 is fine, and thus latin1 is fine as well.
#81 follow-up: ↓ 82
@
5 years ago
This is causing really bad perf issues for Yoast SEO and people with large wp_postmeta tables. For us, with 11M rows, the query is taking 40s on a pretty beefy machine.
#82 in reply to: ↑ 81; follow-up: ↓ 83
@
4 years ago
Replying to archon810:
This is causing really bad perf issues for Yoast SEO and people with large wp_postmeta tables. For us, with 11M rows, the query is taking 40s on a pretty beefy machine.
How did you resolve the issue? We also have the same problem with Yoast and this query.
#83 in reply to: ↑ 82
@
4 years ago
Replying to Uranbold:
Replying to archon810:
This is causing really bad perf issues for Yoast SEO and people with large wp_postmeta tables. For us, with 11M rows, the query is taking 40s on a pretty beefy machine.
How did you resolve the issue? We also have the same problem with Yoast and this query.
The solution is here: https://github.com/Yoast/wordpress-seo/issues/12894#issuecomment-611902940.
#84
@
4 years ago
I had issues with a site where we had more than 4 million rows in the postmeta table, and what fixed it was indeed setting the meta_key column to VARCHAR(191).
However, what is puzzling me is that I've since set the type of meta_key back to VARCHAR(255), and it's still loading fast.
I applied the changes with phpMyAdmin, so maybe phpMyAdmin made other changes I'm not aware of when updating the type of the column... honestly, I don't know. But now the type is set back to VARCHAR(255) and there is no noticeable slowdown.
If somebody has a valid explanation for that, I'm all ears!
#85
@
3 years ago
It's unbelievable that this ticket is still open. Can we tag someone so this gets a higher priority or gets merged into a new WordPress version asap? The performance gains from the smaller meta_key field might fix some serious climate issues ;)
#86
@
3 years ago
Coming late to this tix, with some MySQL / MariaDB optimization experience.
tl;dr: Don't change the MySQL column definitions in core. Address performance issues by changing keys (indexes) instead.
Here's an explanation of the situation from my perspective. Key/value tables like postmeta are among the trickiest to optimize in the world of SQL. But, of course, they allow wide-open extension and creativity. If every custom field required a database schema change, well, we probably wouldn't have heard of WordPress in 2021. WordPress wouldn't be as wildly successful as it is without postmeta (and usermeta, termmeta). So our challenge as WordPress.org site operators and developers is to make those tables work well at scale. (They already work fine for smaller installations.)
The slow query that's the topic of this tix is
SELECT DISTINCT meta_key FROM wp_postmeta WHERE meta_key NOT BETWEEN '_' AND '_z' HAVING meta_key NOT LIKE '\\_%' ORDER BY meta_key LIMIT 30
When the table index ("table key" in common WP parlance) is a prefix index such as wp_postmeta(meta_key(191)), it's useful for satisfying WHERE clauses like WHERE meta_key NOT BETWEEN '_' AND '_z': it serves to locate the correct rows of the table to use. MySQL uses the index to grab the rows and then double-checks the filter condition against the contents of the row.
But when the index is like wp_postmeta(meta_key) (not a prefix index), the index *covers* the query in question (https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_covering_index). That is, the query can be satisfied directly from the index, which makes this particular query almost miraculously fast. These BTREE indexes are already in the right order for ORDER BY. ORDER BY ... LIMIT ... is a notorious way to make your MySQL query slow, except in cases like this where it can exploit a covering index.
Here's the thing: This query is a special case: it only handles one column of one table. So, making a covering index just for it then saying "wow that's fast! problem solved!" isn't sufficient justification for a core table change.
I do not believe it's a good idea to change the definition of the table from meta_key VARCHAR(255) to 191. Others have pointed out this will break some of the zillions of WordPress installations out there, and I agree. Plus, the discipline of database administration says "don't change the data; change the indexes".
Good covering indexes often are *compound indexes*, indexing multiple columns. And they often serve to optimize multiple query patterns, not just one.
Recent MySQL versions (specifically those with the InnoDB Barracuda file format) don't have the 767-byte limit on index prefixes: it's 3072 (https://dev.mysql.com/doc/refman/5.7/en/innodb-limits.html). Barracuda has been available since MySQL 5.7, with some versions of 5.6 also supporting it. MariaDB 10.2 and beyond supports Barracuda too. So the prefix indexes on the VARCHAR(255) columns aren't necessary any more. If you want indexes on the meta_value LONGTEXT columns, you still absolutely need prefix indexes; but core doesn't have any indexes like that.
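As a concrete example of what Barracuda makes room for: a full-column index on meta_key fits comfortably under the 3072-byte limit (255 chars × 4 bytes = 1020), so replacing the prefix index becomes an option:

ALTER TABLE wp_postmeta
    DROP KEY meta_key,
    ADD KEY meta_key (meta_key);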
If any change is necessary in core, it's to the indexes, not the tables. Rick James and I have been working on a plugin to reindex the *meta tables and a few others: https://wordpress.org/plugins/index-wp-mysql-for-speed/. It's still pretty new, but it seems to help. It helps users upgrade from MyISAM to InnoDB, then adds some compound indexes. You can read about the theory of operation here: https://www.plumislandmedia.net/wordpress/speeding-up-wordpress-database-operations/
We hope our experience can inform the core developers' decisions about any future changes to the database schema.
#87
@
3 years ago
- Milestone changed from Future Release to 6.0
Catching up with the latest comments here, the Index WP MySQL For Speed plugin suggested in comment:86 looks quite impressive, and the theory behind it is also a recommended reading.
This is also being discussed in the new Performance team repo: Revisit indexes for DB performance. With enough eyes and testing, I believe it could be one of the team's focuses in the near future.
In the meantime, I would like to revisit 33885.5.diff as suggested in comment:61, which should improve performance for those using persistent object caching like Memcached or Redis. This could also use some testing.
#88
@
3 years ago
I like the idea of this plugin very much - I will certainly check it out and contribute some help, with a view to using it in flight!
I just noticed @OllieJones's comment - which is very well reasoned indeed - however, I would like to point out that a meta_key of more than 191 characters would be considered pretty absurd, AND if we changed the definition of the table we would very likely, in 99.99% of cases, not be TOUCHING the data... at all.
And in fact, we could have core update the table only if it did not contain data of more than 191 chars. It's borderline "wrong" to have a meta KEY that long; I can't think of any circumstances where this would happen on purpose.
So how many of the "zillions of sites" would we kill if we updated this? A number close to zero would be my guess! I've implemented this potentially thousands of times, with a query which first checks for any meta_key greater than 191 chars, and it has literally never happened.
I agree about indexes - absolutely. It's just the underlying logic: who is using a meta key which is between 191 and 255 characters in length? That's longer than a full Tweet?!
EDIT - I should add that in the 4 years we've been debating this, the fix has broken zero sites and has saved them from scaling dreadfully. Usually indexes are also required - and this plugin looks conceptually great.
#90
@
3 years ago
- Component changed from Query to Administration
Changing the component back, as this is related to the admin UI. The Query component is generally for tickets related to WP_Query and other WP_*_Query classes.
This ticket was mentioned in Slack in #core by chaion07. View the logs.
3 years ago
This ticket was mentioned in Slack in #core by hilayt24. View the logs.
3 years ago
#93
@
3 years ago
Thanks @jorbin for reporting this. This ticket was reviewed during a recent bug scrub session by the core team. Here's the feedback received:
- change the JOIN part of the query, the approach is similar to #49278
- change the meta_key length
Cheers!
Props to @hilayt24 & @craigfrancis
This ticket was mentioned in Slack in #core by chaion07. View the logs.
3 years ago
#95
@
3 years ago
Per the discussion in the bug scrub, it's recommended to pursue 33885.5.diff as mentioned in Sergey's comment and Ollie's comment, as the Performance team will be looking at indexes, which should be handled holistically.
#96
@
3 years ago
- Keywords needs-refresh added
- Severity changed from critical to normal
33885.5.diff no longer applies; three of the four hunks fail. Using object caching makes sense to me.
Lowering the severity, as custom fields are hidden away on fresh WP installs and the queries are bypassed.
This ticket was mentioned in Slack in #core by costdev. View the logs.
3 years ago
#99 follow-up: ↓ 107
@
3 years ago
- Keywords close added
I think that the removal of the form in Gutenberg might actually be the best solution to this. If a site is large enough to be facing this issue, they should be on Gutenberg or working their way towards it.
I think we can close this as wontfix, with the goal of deprecating meta_form.
#100 follow-ups: ↓ 101, ↓ 103
@
3 years ago
But what if more than half of the existing WordPress sites will never migrate to Gutenberg?
It is actually very possible.
#101 in reply to: ↑ 100; follow-up: ↓ 102
@
3 years ago
Replying to uzumymw:
But what if more than half of the existing WordPress sites will never migrate to Gutenberg?
It is actually very possible.
And in my case, the problem is from thousands of bbPress posts. And I suspect that bbPress will _never_ use Gutenberg.
#102 in reply to: ↑ 101; follow-up: ↓ 104
@
3 years ago
Replying to DJosephDesign:
I suspect that bbPress will _never_ use Gutenberg.
Never say never? 😆
Aside: if this doesn’t get addressed in WordPress via this ticket, I think it’s perfectly acceptable to port a workaround to bbPress.
#103 in reply to: ↑ 100
@
3 years ago
Replying to uzumymw:
But what if more than half of the existing WordPress sites will never migrate to Gutenberg?
It is actually very possible.
I remember this logic being used to not fix the problems with the menu edit screen, which fails on very large menus due to max_input_vars. The retort to that for a long time was "use the brand new customizer", because it didn't have this problem. Only, the customizer was and still is awful for managing large menus.
These types of issues need to get fixed, not kicked down the road for another 10 years.
@jorbin in my experience it's way more likely that large existing sites won't be moving to Gutenberg soon, if ever. They're less likely to do so because their large existing content library doesn't really "convert to Gutenberg". Some have a lot of custom code rendered completely useless by Gutenberg's markup. I think a more accurate phrasing would be "large, well-funded sites are moving to Gutenberg", but not all large sites have the resources to make that transition. This still needs to get fixed for them, and for all the corners of WP that still use the classic editor.
#104 in reply to: ↑ 102
@
3 years ago
Replying to johnjamesjacoby:
Aside: if this doesn’t get addressed in WordPress via this ticket, I think it’s perfectly acceptable to port a workaround to bbPress.
That could be the solution: demonstrate a workaround that works for most cases. This could be implemented by plugins that provide custom post types with a lot of postmeta. If it then seems to work very well in almost all cases, it can be suggested for the Classic Editor plugin, too.
Personally I don't expect WordPress core to fix every edge case, when that is not needed for core.
#105
@
3 years ago
- Milestone changed from 6.0 to Future Release
As we're approaching RC1 and discussion is still ongoing (albeit appears to be approaching a conclusion of sorts), I'm going to move this ticket to the Future Release milestone.
#106
@
3 years ago
In the meantime (until the Future Release), for people who have performance problems with this query, may I, in shameless self-promotion, suggest:
- If necessary and possible, upgrade MySQL to, at a minimum, 5.7.
- Use the https://wordpress.org/plugins/index-wp-mysql-for-speed/ plugin to set up more efficient indexes on the postmeta and other tables.
Doing these things should mitigate this problem without changes to core.
Why? Please see my earlier response to this tix at https://core.trac.wordpress.org/ticket/33885#comment:86
If you don't want to add yet another plugin to your site, and you're on a recent MySQL version, try using this SQL statement to add the more efficient indexes you need. (But the plugin automatically deals with older MySQL versions as best it can.)
ALTER TABLE wp_postmeta
    ADD UNIQUE KEY meta_id (meta_id),
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (post_id, meta_key, meta_id),
    DROP KEY meta_key,
    ADD KEY meta_key (meta_key, meta_value(32), post_id, meta_id),
    ADD KEY meta_value (meta_value(32), meta_id),
    DROP KEY post_id;
#107 in reply to: ↑ 99
@
2 years ago
- Milestone Future Release deleted
- Resolution set to invalid
- Status changed from new to closed
Replying to jorbin:
I think that the removal of the form in Gutenberg might actually be the best solution to this.
Yes, agreed. This ticket has become invalid as the UI it refers to is no longer used (still can be enabled by plugin, but is "legacy" and with very limited support).
Please open new tickets with any ideas and/or patches for DB modernization and optimizations resulting from the discussion here.
#108
@
8 months ago
Hi there, I also converted my meta_key field type to varchar(191), hoping this would also reduce the table size (which did not happen). I have a 1 GB wp_postmeta table - the size causes problems with backups, update operations, etc. The table is bloated because of my variations, which is of course the root of the problem, and which I cannot solve. So I thought about converting the meta_value longtext type to mediumtext. In my staging environment, postmeta shrank to 400 MB. Did anybody do such a thing already? Is this recommended, or too critical? Thanks for the help. Frank
#109 follow-up: ↓ 110
@
8 months ago
It seems unlikely that a conversion of LONGTEXT to MEDIUMTEXT would result in a 60% table data size reduction; both are BLOB data types, and really long meta_value values are ordinarily quite rare.
The same is true of conversion of meta_key from VARCHAR(255) to VARCHAR(191), for similar reasons.
I wonder whether the contents of that table, or its on-disk structure, are somehow anomalous?
(I am happy I am not in the budget hosting business, by the way. It must be painful to have your most successful customers start having trouble because of their success. That's a long way of saying you, @perlfan, may need to upgrade hosting.)
#110 in reply to: ↑ 109
@
8 months ago
Well, try it for yourself if you don't believe me :-) - the problem on my side is caused by variations, as they really bloat the wp_postmeta table. I have ca. 40 products, some of them with 400 variations. This leads to millions of attribute entries. This problem is also discussed in many posts (e.g. https://wordpress.stackexchange.com/questions/248207/simple-sql-query-on-wp-postmeta-very-slow, https://github.com/woocommerce/woocommerce/issues/15869). So I don't see a solution to increase the site performance (other than by altering the table structure), as I can't get rid of the variation entries in that table. And BTW, it's not guaranteed that switching hosting will help me, as I already have good server performance. Frank
Replying to OllieJones:
It seems unlikely that a conversion of LONGTEXT to MEDIUMTEXT would result in a 60% table data size reduction; both are BLOB data types, and really long meta_value values are ordinarily quite rare.
The same is true of conversion of meta_key from VARCHAR(255) to VARCHAR(191), for similar reasons.
I wonder whether the contents of that table, or its on-disk structure, are somehow anomalous?
(I am happy I am not in the budget hosting business, by the way. It must be painful to have your most successful customers start having trouble because of their success. That's a long way of saying you, @perlfan, may need to upgrade hosting.)
#111
@
8 months ago
Frank, unfortunately you're doing it all wrong. 1) You're asking for support on Trac, and Trac isn't for support. 2) You don't seem to have read this Trac ticket well enough to understand what it's about, which is specifically the meta form query run by the classic post editor, not postmeta queries in general. 3) You're using 400 variations on products, and there are invariably (pun) better ways to handle whatever it is you're trying to do there. Kindly stop bothering everyone who worked on this ticket with your support requests.