#59592 closed enhancement (fixed)
Store last changed value in key instead of using it as a salt for query caches
Reported by: | | Owned by: | |
---|---|---|---|
Milestone: | 6.9 | Priority: | normal |
Severity: | normal | Version: | |
Component: | Cache API | Keywords: | has-patch has-unit-tests needs-dev-note |
Focuses: | performance | Cc: |
Description
In WordPress core, each query cache key currently incorporates a 'last changed' timestamp (see wp_cache_get_last_changed()). This 'last changed' timestamp serves as a salt: when this value changes, the cache key also changes, leading to the creation of a new cache entry. This method of cache invalidation has proven effective over the years but presents a drawback - it generates an excessive number of caches.
Imagine a high-traffic website constantly generating new content with new posts added every 10 minutes. This results in the 'last changed' cache being updated every time a new post or revision is created. Consequently, the WP_Query for the homepage could potentially generate a new cache each time a post is updated. This could lead to the creation of hundreds or even thousands of caches. The concept behind an object cache is to allow unused keys to expire and be removed from the cache. However, this process is not swift and may take hours or even days for these keys to naturally fall out of the cache. Furthermore, there's a risk that the object cache could become bloated and remove actively used cache keys.
The solution lies in reusing cache keys. Instead of using 'last changed' as a salt, store the 'last changed' values within the cache value itself. Then, retrieve the cache value and compare the 'last changed' value stored in the cache with the actual 'last changed' value. If they don't match, mark the cache as invalid, retrieve the values from the database, and resave the cache with an updated 'last changed' value.
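A minimal sketch of this compare-and-refresh pattern, using a plain array in place of the object cache (the helper names cache_query_get/cache_query_set are illustrative only, not the eventual Core API):

```php
<?php
// Store the 'last changed' salt inside the cache value rather than in the key.

function cache_query_set( array &$cache, $key, array $ids, $last_changed ) {
	$cache[ $key ] = array(
		'ids'          => $ids,
		'last_changed' => $last_changed,
	);
}

function cache_query_get( array $cache, $key, $last_changed ) {
	if ( ! isset( $cache[ $key ] ) ) {
		return false; // genuine miss
	}

	// Compare the salt stored in the value with the current one.
	if ( $cache[ $key ]['last_changed'] !== $last_changed ) {
		return false; // stale: caller re-queries and overwrites the SAME key
	}

	return $cache[ $key ]['ids'];
}

$cache = array();
cache_query_set( $cache, 'home-query', array( 1, 2, 3 ), '100.001' );

var_dump( cache_query_get( $cache, 'home-query', '100.001' ) ); // fresh: array( 1, 2, 3 )
var_dump( cache_query_get( $cache, 'home-query', '100.002' ) ); // stale: false
```

On a stale read the key count stays constant: the caller re-runs the query and calls the setter again with the new salt, overwriting the old entry in place.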
This approach offers several benefits:
- Cache keys are reused, ensuring that the number of cached queries remains stable and that the query caches clean up after themselves.
- Predictable cache keys allow caches to be primed efficiently. For instance, a caching plugin can discern the types of queries on a page and prime multiple cache keys in a single request.
However, there are some downsides to consider:
- Third-party plugins may be reliant on salted cache keys.
- When comparing the last changed value, the entire cache value must be retrieved. This means that the first user after cache invalidation may experience a slightly larger request.
Change History (59)
This ticket was mentioned in PR #5458 on WordPress/wordpress-develop by @spacedmonkey.
2 years ago
#1
- Keywords has-patch added
This ticket was mentioned in Slack in #core-performance by mukeshpanchal27. View the logs.
2 years ago
#4
@
2 years ago
I think this is an excellent solution to the problem and would "truly" resolve #57625.
#5
follow-up:
↓ 6
@
2 years ago
Perhaps I'm missing something fundamental, but it seems to me that (at least part of) the conversation here and in #57625 is focusing on the wrong thing - largely how the cache is configured. Specifically, focusing on cache key TTL rather than having an appropriate cache eviction policy for the cache data (e.g. LRU, which I believe is the default for Redis). https://redis.io/docs/reference/eviction/
If you use something like allkeys-lru, then the only way actively-used keys could get evicted is if your cache is too small - a hosting error, not WP. If you do want to use TTL/expire on specific keys, then perhaps volatile-lru could be used. It seems to me that this decision should be up to the site owner, not WP Core.
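For reference, an LRU eviction policy is a two-line server-side setting (in redis.conf, or at runtime via CONFIG SET); the 256mb limit below is only an example value:

```
maxmemory 256mb
maxmemory-policy allkeys-lru
```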
This all seems to roughly echo what Peter Wilson said in his initial comment: https://core.trac.wordpress.org/ticket/57625#comment:3
The proposed solution here seems to add unnecessary complexity and processing overhead. So, perhaps a solution the present problems is to give users and plugins more control over caching policies such as whether to apply a TTL or not to a specific cache key? Also, to provide documentation/guidance on proper server cache (redis) config - both for users who manage their own servers, as well as for hosting providers to do a better job at.
This article from Ruby on Rails goes into detail about using key-based cache expiration, so it seems worth reviewing for inspiration.
https://signalvnoise.com/posts/3113-how-key-based-cache-expiration-works
I hope this is helpful rather than irrelevant/a distraction!
#6
in reply to:
↑ 5
@
2 years ago
Replying to nickchomey:
Perhaps I'm missing something fundamental, but it seems to me that (at least part of) the conversation here and in #57625 is focusing on the wrong thing - largely how the cache is configured. Specifically, focusing on cache key TTL rather than having an appropriate cache eviction policy for the cache data (e.g. LRU, which I believe is the default for Redis). https://redis.io/docs/reference/eviction/
As I was the original poster of the aforementioned ticket, I wanted to clarify (if it was not clear from my description) that I had a custom TTL for the query cache groups (24-72h) and an eviction policy - and this did the job for me.
Nevertheless, I still believed that the way Core handles keys for these groups was not proper. On busy sites, last_changed could be regenerated tens or hundreds of times per day (as I showed in the original post, it was enough to open the add-new-post screen to regenerate it twice, without even clicking save). That could cause the number of query cache keys to grow crazy fast, and then the TTL would be just some magic number which you set based on trial and error (to balance cache efficiency vs memory levels).
Also I think (that's my personal opinion) that operating on the redis max memory all the time and relying on the eviction is not a healthy state and that's just covering the problem.
All of the above doesn't change the fact that I understand that the problem is not trivial to solve and query caches nature make it more difficult :)
This ticket was mentioned in Slack in #core-performance by spacedmonkey. View the logs.
9 months ago
This ticket was mentioned in Slack in #core-performance by spacedmonkey. View the logs.
6 months ago
This ticket was mentioned in PR #8728 on WordPress/wordpress-develop by @spacedmonkey.
6 months ago
#9
- Keywords has-unit-tests added
This update introduces a wp_is_valid_last_changed() function to validate cache data against the last_changed value, improving cache reliability and removing the timestamp dependency. Cache keys have been simplified by removing last_changed from their structure, and cache values now store last_changed for consistency. Additionally, functions across core query classes are updated to implement these changes.
Trac ticket: https://core.trac.wordpress.org/ticket/59592
@spacedmonkey commented on PR #5458:
6 months ago
#10
Closing in favour of https://github.com/WordPress/wordpress-develop/pull/8728
@tillkruess commented on PR #8728:
6 months ago
#11
LGTM! I'd approve this if I had the permissions.
@tillkruess commented on PR #8728:
6 months ago
#12
I think we should consider implementing two helper functions to read and write cache keys with $last_changed awareness, so that the logic is abstracted and doesn't need to be duplicated in all these places.
Excellent idea!
#13
follow-up:
↓ 14
@
6 months ago
Cross-posting here from LinkedIn: an issue we hit with the recent core cache changes around option caching was really detrimental to performance, because we're using an object cache running on another server - a Redis instance in our case. This isn't uncommon for larger sites.
Each cache lookup is still a remote request, so while often faster than a heavy db query ideally core would also optimise for not making too many cache requests when we know the data is likely to be stale.
This approach sounds like it might cause a lot of unnecessary cache requests, and cause us the same problem so I’m curious about that consideration and whether you have benchmarks for our kind of setup too?
#14
in reply to:
↑ 13
@
6 months ago
Replying to sanchothefat:
Each cache lookup is still a remote request, so while often faster than a heavy db query ideally core would also optimise for not making too many cache requests when we know the data is likely to be stale.
This approach sounds like it might cause a lot of unnecessary cache requests, and cause us the same problem so I’m curious about that consideration and whether you have benchmarks for our kind of setup too?
It does not cause unnecessary cache requests. Please open a dedicated ticket for your problem, it's irrelevant to this ticket.
#15
@
5 months ago
Thanks for the feedback, @sanchothefat.
This ticket was created following a discussion with your CTO, @rmccue. The remote cache was taken into consideration during its creation. Importantly, this patch does not increase the number of cache gets.
To clarify how the current and proposed approaches work:
Current Behaviour
- WP_Query is called.
- A cache lookup is performed, using the last changed value in the cache key.
- Cache is found.
- A post is updated.
- Last changed is updated.
- A new cache lookup is performed, now using the updated last changed value.
- Cache is not found (as the key has changed).
- A new cache is generated using the new key.
Proposed Behaviour
- WP_Query is called.
- A cache lookup is performed, comparing the last changed value in memory with the one stored in cache.
- Cache is found and deemed valid.
- A post is updated.
- Last changed is updated.
- A second cache lookup is performed, again comparing last changed.
- Cache is now considered invalid due to the change.
- The same cache key is reused and updated with new values.
Both approaches perform two cache lookups. The only difference lies in the second lookup: instead of a cache miss (which requires regenerating and storing new data), the proposed method performs an in-memory comparison. This is a very lightweight operation, as the cache in question typically contains only an array of post IDs—amounting to just a few bytes.
Moreover, in some caching systems (such as memcached, in my experience), a successful cache get is actually faster than a miss.
While it’s true that the first request in the proposed model may involve retrieving slightly more data, the ability to reuse cache keys is a significant improvement. It prevents the object cache from being filled with stale or rarely-used keys and allows query caches to persist longer, improving overall performance and cache efficiency.
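A rough sketch of the second lookup under each scheme (illustrative pseudocode, not the patch itself; $key, $ids and $last_changed are assumed to be already computed):

```php
<?php
// Current: last_changed is part of the key, so the lookup after an
// update is a miss and a brand-new key is written.
$found = wp_cache_get( $key . ':' . $last_changed, 'post-queries' ); // miss
// ...re-run the query to get $ids, then:
wp_cache_set( $key . ':' . $last_changed, $ids, 'post-queries' );

// Proposed: same number of cache gets; validity is a cheap in-memory
// comparison, and the stale value is overwritten under the SAME key.
$value = wp_cache_get( $key, 'post-queries' );
if ( ! is_array( $value ) || $value['last_changed'] !== $last_changed ) {
	// ...re-run the query to get $ids, then:
	wp_cache_set( $key, array( 'ids' => $ids, 'last_changed' => $last_changed ), 'post-queries' );
}
```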
I agree with @tillkruess that improving database queries and remote caches is out of scope for this ticket and should be discussed elsewhere. Changing existing queries would be very hard, as these queries have filters that have to be maintained.
#16
@
5 months ago
Thanks for the clarification. I'm not talking about modifying any db queries fwiw, but the impact on remote cache servers does still feel relevant.
Definitely a smart move where it's only a small amount more data, just IDs retrieved at the same time, though for user, post or term queries returning a lot of rows (not good practice, but it happens) it could be a lot more data. I'll see if I can pull some stats for that in case it's helpful.
#17
@
5 months ago
Ok, I definitely misunderstood what's going into the cache there in some instances. Apologies @spacedmonkey - thanks for your work on this, looks like a great improvement :)
#18
follow-up:
↓ 19
@
5 months ago
I think I had originally suggested this when chatting with @spacedmonkey (although I'm not the CTO :D), primarily because the introduction of this key-based caching started causing cache "churn" for us, with data getting pushed out of Redis.
Most of WP is designed not to use key-based invalidation generally, so this was a pretty big conceptual change to the design of the cache system as a whole; moving back to regular keying I think makes the most sense and keeps it consistent. I can see a case for key-based invalidation, but WP was never really designed for that, so switching it has caused some issues.
(For reference, we use allkeys-lfu as the eviction policy.)
@sanchothefat Note that the data being stored here is only a list of IDs, as those then get pushed to _prime_{post,term,comment}_caches() and the various populate functions - so the typical size here is under a kilobyte.
#19
in reply to:
↑ 18
@
5 months ago
Replying to rmccue:
I think I had originally suggested this when chatting with @spacedmonkey (although, I'm not the CTO :D),
Sorry for getting your job title wrong.
Most of WP is designed not to use key-based invalidation generally, so this was a pretty big conceptual change to the design of the cache system as a whole; moving back to regular keying I think makes the most sense and keeps it consistent. I can see a case for key-based invalidation, but WP was never really designed for that, so switching it has caused some issues.
So are you for or against this change? I find this message unclear.
BTW, I understand this is a big change, which is why I want the feedback from the community.
This ticket was mentioned in Slack in #core-performance by spacedmonkey. View the logs.
5 months ago
#21
@
5 months ago
Sorry, to be clear, I'm in favour of the change conceptually, as it avoids us needing to use extensive key-based invalidation.
#22
follow-up:
↓ 23
@
5 months ago
As a concept I like this idea for the *-queries groups, taking @rmccue, @tillkruess, and @sanchothefat's word on the improved efficiency for large sites.
On the proposed implementation, I am wondering if it would be better to introduce new wp_cache functions to do the checking of the last changed value, as there are enough occurrences of the pattern in Core that it could be worthwhile (18, based on a quick check).
Something along the lines of:
<?php
wp_cache_get_query( ... $group ... ) {
	$query_group  = "$group-queries";
	$result       = wp_cache_get( ... $query_group ... );
	$last_changed = wp_cache_get_last_changed( $group );

	if ( $result['last_changed'] !== $last_changed ) {
		return false;
	}

	return $result['data'];
}

wp_cache_set_query( ... $group ... ) {
	$last_changed = wp_cache_get_last_changed( $group );

	wp_cache_set( $key, [ 'data' => $data, 'last_changed' => $last_changed ], "$group-queries" );
}
#23
in reply to:
↑ 22
@
5 months ago
Replying to peterwilsoncc:
On the proposed implementation, I am wondering if it would be better to introduce new wp_cache functions to do the checking of the last changed value, as there are enough occurrences of the pattern in Core that it could be worthwhile (18, based on a quick check).
A getter/setter function would be great, we'd just need to account for N timestamps, because some keys have multiple (posts+terms) for example.
#24
@
5 months ago
@peterwilsoncc Have you had a chance to look at https://github.com/WordPress/wordpress-develop/pull/8728?
This introduces a new function, wp_is_valid_last_changed. This is a very similar idea to what you mentioned in your comment.
#25
@
5 months ago
@spacedmonkey Yes, I saw that. I'm just wondering whether it would be better to write the code once rather than duplicating it for each instance of a cache key using last_changed. If it were one or two locations, I'd be happy to copy-paste, but with 18-ish occurrences in core it seems better to me to unify the code in a single location.
#26
@
5 months ago
How about the following functions:
function wp_query_cache_get( $cache_key, $group, $last_changed )
function wp_query_cache_set( $cache_key, $data, $group, $last_changed ) {
	wp_cache_set( $cache_key, [ 'data' => $data, 'last_changed' => $last_changed ], $group );
}
I am also considering wp_cache_get_query_data and wp_cache_set_query_data for the naming. Thoughts @peterwilsoncc @flixos90?
I want to avoid doing another wp_cache_get_last_changed, as this might result in a remote cache get (like in the case of @rmccue). That is why I am passing $last_changed into the function instead of passing $cache_group.
Update:
$last_changed has to be passed in, as there is the following code in WP_Query:
$last_changed = wp_cache_get_last_changed( 'posts' );
if ( ! empty( $this->tax_query->queries ) ) {
	$last_changed .= wp_cache_get_last_changed( 'terms' );
}
Reviewing feedback, I see the need for multiple calls, so wp_cache_get_multiple_query_data or wp_query_cache_get_multiple.
#27
follow-up:
↓ 28
@
5 months ago
@spacedmonkey I'm happy if last changed needs to be passed to the functions given your and Till's comments.
As to naming, I think naming things is very hard. I can see a few plugins using the last changed pattern for cache invalidation, so I think my original suggestion of using the keyword query isn't great. Note: there are a few false positives in the link provided.
Canonically, I think we'd want the naming convention to sit with the cache API (ie, wp_cache_ prefixed) rather than wp_query prefixed, as we'll eventually be using this elsewhere.
Maybe it's as simple as wp_cache_last_changed_set|get|etc but others will have opinions, I am sure.
#28
in reply to:
↑ 27
@
5 months ago
Just my two cents, feel free to ignore.
Canonically, I think we'd want the naming convention to be with the cache API (ie, wp_cache_ prefixed) rather than wp_query prefixed as we'll eventually be using this elsewhere.
Agreed on sticking with wp_cache_*() prefixes for these functions.
Maybe it's as simple as wp_cache_last_changed_set|get|etc but others will have opinions, I am sure.
That might be confusing with the existing wp_cache_(get|set)_last_changed()?
I also consider wp_cache_get_query_data and wp_cache_set_query_data for the naming.
I don't mind wp_cache_(get|set)_query().
As for the signature, to accommodate multiple timestamps and an optional expiration, this would be my suggestion. That way the implementation can decide how to handle multiple $last_changed values.
<?php
function wp_cache_set_query( string $key, mixed $data, string $group, float|float[] $last_changed, int $expire = 0 ) {
}
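Under that suggested signature, a call site might look like this (hypothetical usage of a proposed function, not committed code; HOUR_IN_SECONDS is the standard WP constant):

```php
<?php
// Two timestamps (e.g. posts + terms), cached for an hour.
wp_cache_set_query(
	$cache_key,
	$post_ids,
	'post-queries',
	array( $posts_last_changed, $terms_last_changed ),
	HOUR_IN_SECONDS
);
```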
#29
@
5 months ago
wp_cache_set_query etc works for me. If other suggestions come in, we can decide what colour to paint the bike shed at that time.
#30
@
5 months ago
- Milestone changed from Future Release to 6.9
- Owner set to spacedmonkey
- Status changed from new to assigned
- Version set to 6.1
After feedback from @peterwilsoncc @tillkruess @sanchothefat @flixos90 I have rewritten the PR at https://github.com/WordPress/wordpress-develop/pull/8728.
This new PR adds the following functions.
wp_cache_get_query_data( $cache_key, $group, $last_changed )
wp_cache_set_query_data( $cache_key, $data, $group, $last_changed )
wp_cache_set_multiple_query_data( $data, $group, $last_changed )
wp_cache_get_multiple_query_data( $cache_keys, $group, $last_changed )
The updated PR does not have unit tests or docs yet; it is just a sense check on the approach. I will get the PR into a final state to merge.
I am happy with the new functions; they make it easier to update the code and to add these style caches to your own code.
I have also moved this ticket into the milestone, as it looks like it is going ahead.
This ticket was mentioned in Slack in #core-performance by spacedmonkey. View the logs.
4 months ago
This ticket was mentioned in Slack in #core-performance by spacedmonkey. View the logs.
4 months ago
This ticket was mentioned in Slack in #core-performance by spacedmonkey. View the logs.
4 months ago
@peterwilsoncc commented on PR #8728:
3 months ago
#35
@spacedmonkey This got lost behind a resolved comment:
I discussed passing the last changed groups to the functions with Till and we think that would be more helpful. I know this will result in more calls to wp_cache_get( 'last_changed' ... ) but I don't think that's a big problem as they will be in the local, PHP, memory on the second and subsequent calls rather than need to connect to the persistent caching server each time.
@tillkruess commented on PR #8728:
3 months ago
#36
I discussed passing the last changed groups to the functions with Till and we think that would be more helpful. I know this will result in more calls to wp_cache_get( 'last_changed' ... ) but I don't think that's a big problem as they will be in the local, PHP, memory on the second and subsequent calls rather than need to connect to the persistent caching server each time.
@peterwilsoncc Theoretically, it would be ideal to pass in the groups and their timestamps to the function to make it easy to parse. But since we're removing $last_changed from the keys, there is no more need for "cleanup", and IMO the group timestamp is thus a lot less relevant. So I consider this resolved.
#37
@
3 months ago
Moving discussion here as @peterwilsoncc requested:
Should we rename these new functions to wp_cache_(g|s)et_versioned(), as well as the last_changed parameter?
#38
@
3 months ago
Names that have been suggested in the pull request:
wp_cache_*_salted()
wp_cache_*_versioned()
wp_cache_*_hashed()
wp_cache_*_salted_data()
I've put them in order of my preference but am not too fussed as to whether it's salted or versioned. I could live with any of them if others have a strong preference.
#39
@
3 months ago
I like wp_cache_*_salted_data; it is the clearest to me.
I could live with wp_cache_*_salted.
#41
follow-up:
↓ 42
@
3 months ago
I think we should also change the last_changed parameter to salt, to make it consistent.
#42
in reply to:
↑ 41
@
3 months ago
Replying to spacedmonkey:
I think we should also change the last_changed parameter to salt, to make it consistent.
Yes, that makes sense.
#43
@
3 months ago
Updated the function names to wp_cache_*_salted() and changed all parameters to salt. I also made it so that the salt can be an array. That makes the logic a lot easier to follow.
Can I get another review @peterwilsoncc @tillkruess @flixos90 @rmccue ?
#44
@
3 months ago
- Keywords needs-dev-note added
- Version 6.1 deleted
This will need a dev note:
- alerting devs checking/setting the query caches directly
- advice on how to use them in older versions of WP (ie, copy-pasta wrapped in a function_exists() check)
- noting the cache invalidation upon upgrade due to the cache key changes
I'm sure we'll think of more in the time between now and 6.9.
I've unset the version, as they apply to bugs rather than enhancements.
@spacedmonkey Enormous props for your work on this, thanks for your patience as architecture and naming was worked out.
@peterwilsoncc commented on PR #8728:
3 months ago
#45
I've pushed a couple of minor changes for the following:
- https://github.com/WordPress/wordpress-develop/pull/8728/commits/bd142f123ad7abb336b6a6fed0aa8d401d25a022 - CS alignment standards in various docs.
- https://github.com/WordPress/wordpress-develop/pull/8728/commits/12a45c716072e6750f0db5417635a742e9059356 - a functional change to return the results of setting the cache/s to match the existing APIs for wp_cache_set and wp_cache_set_multiple
@peterwilsoncc commented on PR #8728:
2 months ago
#46
I've committed @felixarntz and @mukeshpanchal27's suggestions and merged in trunk.
I'm still a little concerned about passing in the salt values rather than the last changed names.
It requires a developer to remember to maintain order when generating the salted value; for example, the following will result in different underlying cache keys being hit.
$salts = array(
	wp_cache_get_last_changed( 'group1' ),
	wp_cache_get_last_changed( 'group2' ),
);
$cached = wp_cache_get_salted( 'key', 'group', $salts );

$salts = array(
	wp_cache_get_last_changed( 'group2' ),
	wp_cache_get_last_changed( 'group1' ),
);
$cached = wp_cache_get_salted( 'key', 'group', $salts );
If instead we pass the group names for the $salt parameter, we can sort them prior to getting the last changed values.
sort( $salt );
$real_salt = array();
foreach ( $salt as $s ) {
	$real_salt[] = wp_cache_get_last_changed( $s );
}
A compromise possibility may be to accept the salts with keys and do a ksort, using WP_Query as an example:
$salt = array(
	'posts' => wp_cache_get_last_changed( 'posts' ),
	'terms' => wp_cache_get_last_changed( 'terms' ),
);
I don't know that I'm invested enough to consider this a blocker, but if it's something we can optimise while working on this ticket, it would be preferable to doing so in the future.
@mukesh27 commented on PR #8728:
2 months ago
#47
If instead we pass the group names for the $salt parameter, we can sort them prior to getting the last changed values.
+1
@tillkruess commented on PR #8728:
2 months ago
#48
@peterwilsoncc I can see an argument for a helper function to make the salt specifically for wp-core queries, but sorting $salt seems risky, and passing group names to wp_cache_get_salted() seems incorrect to me. Just my ¢2.
$salt   = wp_cache_get_last_changed_salt( [ 'users', 'comments' ] );
$cached = wp_cache_get_salted( 'some-query', 'user-queries', $salt );

function wp_cache_get_last_changed_salt( $groups ) {
	if ( ! is_array( $groups ) ) {
		$groups = [ $groups ];
	}

	sort( $groups );

	return array_map( function ( $group ) {
		return wp_cache_get_last_changed( $group );
	}, $groups );
}
@spacedmonkey commented on PR #8728:
2 months ago
#49
I can see an argument for a helper function to make the salt specifically for wp-core queries, but sorting $salt seems risky and passing group names to wp_cache_get_salted() seems incorrect to me. Just my ¢2.
I think we might be overthinking this a little. My preference is to just commit this, and we can look at key sorting later if we feel it is needed.
If we do add keys to the salts, then we could just do this.
$salt = array(
	'posts' => wp_cache_get_last_changed( 'posts' ),
	'terms' => wp_cache_get_last_changed( 'terms' ),
);
wp_cache_get_salted( $cache_key, $group, $salt );

if ( is_array( $salt ) ) {
	ksort( $salt );
	$salt = array_values( $salt );
	$salt = implode( ':', $salt );
}
The above could be added later without any sort of breakage.
#50
follow-up:
↓ 51
@
2 months ago
I am really struggling with the commit message for this change. This is what I have so far.
Caching API: Add new helper functions for query-based caches
Query-based caches are now improved by reusing cache keys. On high-traffic sites, the last_changed value can change frequently for cache groups such as posts or terms. Previously, cache keys for query caches were generated using the last_changed value as part of the key. This meant that whenever last_changed was updated, a new cache key (and cache entry) was generated, while the old cache was left to expire naturally in the object cache.
That approach relied on the object cache backend efficiently evicting stale caches. This worked well with memcache, but with other backends such as Redis, eviction behaviour is not always reliable.
To address this, the following new helper functions have been introduced:
- wp_cache_get_salted
- wp_cache_set_salted
- wp_cache_get_multiple_salted
- wp_cache_set_multiple_salted
These functions provide a consistent way to get/set query caches. Instead of using the last_changed value as part of the cache key, it is now stored inside the cache value as a “salt”. This allows cache keys to be reused, with values updated in place rather than relying on eviction of outdated entries.
I'd love feedback, @peterwilsoncc @tillkruess @flixos90.
#51
in reply to:
↑ 50
@
2 months ago
Replying to spacedmonkey:
That approach relied on the object cache backend efficiently evicting stale caches. This worked well with memcache, but with other backends such as Redis, eviction behaviour is not always reliable.
Instead I'd say:
That approach relied on the object cache backend evicting stale keys, which is not always enabled by default (Redis/Valkey). Regardless of key eviction, the old behaviour caused tons of cache keys to be generated, while the new approach will not.
#52
@
2 months ago
The previous approach depended on the object cache backend automatically evicting stale keys, which is not always enabled by default (e.g. in Redis or Valkey). Even when eviction occurred, it still led to a large number of cache keys being generated. The new approach avoids this issue by reusing keys instead.
#53
@
8 weeks ago
Brain dump follows. I think any of the suggestions are good enough.
Caching API: Use consistent cache keys for query groups.
Query-based caches are now improved by reusing cache keys. Previously, cache keys for query caches were generated using the last_changed value as part of the key. This meant that whenever last_changed was updated, all the previously cached values for the group became unreachable.
The new approach allows WordPress to replace previously cached results that are known to be stale. The previous approach relied on the object cache backend evicting stale keys, which is done at various levels of efficiency.
To address this, the following new helper functions have been introduced:
wp_cache_get_salted
wp_cache_set_salted
wp_cache_get_multiple_salted
wp_cache_set_multiple_salted
These functions provide a consistent way to get/set query caches. Instead of using the last_changed value as part of the cache key, it is now stored inside the cache value as a “salt”. This allows cache keys to be reused, with values updated in place rather than relying on eviction of outdated entries.
#54
@
7 weeks ago
- Resolution set to fixed
- Status changed from assigned to closed
In 60697:
Caching API: Use consistent cache keys for query groups.

Query-based caches are now improved by reusing cache keys. Previously, cache keys for query caches were generated using the `last_changed` value as part of the key. This meant that whenever `last_changed` was updated, all the previously cached values for the group became unreachable.

The new approach allows WordPress to replace previously cached results that are known to be stale. The previous approach relied on the object cache backend evicting stale keys which is done at various levels of efficiency.

To address this, the following new helper functions have been introduced:
* wp_cache_get_salted
* wp_cache_set_salted
* wp_cache_get_multiple_salted
* wp_cache_set_multiple_salted

These functions provide a consistent way to get/set query caches. Instead of using the last_changed value as part of the cache key, it is now stored inside the cache value as a "salt". This allows cache keys to be reused, with values updated in place rather than relying on eviction of outdated entries.

Props spacedmonkey, peterwilsoncc, flixos90, sanchothefat, tillkruess, rmccue, mukesh27, adamsilverstein, owi, nickchomey.
Fixes #59592.
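For reference, a call site using the committed helpers follows the pattern discussed above (a sketch only; see r60697 / PR #8728 for the final signatures; $cache_key, $sql and $wpdb are assumed to be in scope):

```php
<?php
$salt = array(
	wp_cache_get_last_changed( 'posts' ),
	wp_cache_get_last_changed( 'terms' ),
);

$ids = wp_cache_get_salted( $cache_key, 'post-queries', $salt );
if ( false === $ids ) {
	$ids = $wpdb->get_col( $sql ); // regenerate from the database
	wp_cache_set_salted( $cache_key, $ids, 'post-queries', $salt );
}
```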
@spacedmonkey commented on PR #8728:
7 weeks ago
#55
Committed in r60697.
Trac ticket: https://core.trac.wordpress.org/ticket/59592