Opened 12 years ago
Closed 12 years ago
#23167 closed task (blessed) (fixed)
Cache incrementors for get_pages
Reported by: | nprasath002 | Owned by: | westi |
---|---|---|---|
Milestone: | 3.6 | Priority: | normal |
Severity: | normal | Version: | 3.5 |
Component: | Cache API | Keywords: | has-patch needs-unit-tests |
Focuses: | Cc: |
Description
Make use of a cache incrementor and fetch pages from their individual cache buckets, to avoid memory exhaustion errors in PHP scripts.
Attachments (6)
Change History (23)
#2 @ 12 years ago
- Keywords needs-unit-tests added
- Milestone changed from Awaiting Review to 3.6
- Owner set to westi
- Status changed from new to reviewing
#4 @ 12 years ago
We're thinking alike. #23173 can be an umbrella ticket for all changes like this.
#5 @ 12 years ago
Since wp_insert_post() calls clean_post_cache(), it doesn't need to do its own last_changed bumping.
#6 @ 12 years ago
Unlike posts, pages are less likely to have a hot cache. Caching just the IDs and then running get_post() on them could result in a lot of extra queries when a persistent cache backend is not being used. We definitely need to investigate that. If we go with caching IDs, we should consider introducing and using wp_cache_get_multi().
#7 @ 12 years ago
Looks like caching only IDs will be just fine since get_pages() calls update_post_cache(). This ensures the pages are cached should we do the same get_pages() query later in the same page load. On the next page load the cache is empty when there is no persistent cache so get_pages() does a full pages query and cache set. Query counts are looking fine so far in testing.
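A minimal sketch of the ID-only caching approach described above. The wp_cache_get()/wp_cache_set() stubs below are hypothetical in-memory stand-ins for the WordPress object cache API (so the example is self-contained), and cache_page_ids()/get_cached_pages() are illustrative helper names, not core functions:

```php
<?php
// Hypothetical in-memory stand-ins for the WP object cache API.
$GLOBALS['object_cache'] = array();

function wp_cache_set( $key, $value, $group = '' ) {
	$GLOBALS['object_cache'][ "$group:$key" ] = $value;
	return true;
}

function wp_cache_get( $key, $group = '' ) {
	$id = "$group:$key";
	return isset( $GLOBALS['object_cache'][ $id ] ) ? $GLOBALS['object_cache'][ $id ] : false;
}

// Instead of caching full page objects under one huge key, cache only
// the matching IDs. The objects themselves live in the 'posts' bucket,
// one entry per page (primed by update_post_cache() in real WordPress).
function cache_page_ids( $cache_key, array $pages ) {
	$ids = array();
	foreach ( $pages as $page ) {
		wp_cache_set( $page->ID, $page, 'posts' ); // per-object bucket
		$ids[] = $page->ID;
	}
	wp_cache_set( $cache_key, $ids, 'get_pages' ); // small: IDs only
}

function get_cached_pages( $cache_key ) {
	$ids = wp_cache_get( $cache_key, 'get_pages' );
	if ( false === $ids ) {
		return false; // cache miss: caller runs the full pages query
	}
	$pages = array();
	foreach ( $ids as $id ) {
		$pages[] = wp_cache_get( $id, 'posts' ); // get_post() in core
	}
	return $pages;
}
```

With a persistent backend the per-object lookups are cache hits; without one, the get_pages bucket is empty on the next page load anyway, so the full query runs once and re-primes both buckets.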
@ 12 years ago
Remove the last_changed bumps from wp_insert_post(), since clean_post_cache() handles that.
#11 @ 12 years ago
$last_changed = wp_cache_set( 'last_changed', 1, 'posts' );
The default cache returns boolean true from wp_cache_set(), which works out since true is cast to '1'. Some cache backends, however, return void, which is cast to an empty string. If the same query is run again later in the page load, it uses a different cache key, resulting in another database query instead of a hit on the already cached result. The keys look something like this:
get_pages:3577d9c3933b07d4dc753e71a0a49d72:
get_pages:3577d9c3933b07d4dc753e71a0a49d72:1
To accommodate non-compliant caches and to make the code clearer, let's explicitly assign 1 to $last_changed.
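A sketch of the fix, again using hypothetical in-memory cache stubs (note the stub's wp_cache_set() deliberately returns nothing, mimicking a non-compliant backend); get_last_changed() and build_pages_cache_key() are illustrative helper names:

```php
<?php
// Hypothetical in-memory stand-ins for the WP object cache API.
// This wp_cache_set() returns void, like some non-compliant backends.
$GLOBALS['object_cache'] = array();

function wp_cache_set( $key, $value, $group = '' ) {
	$GLOBALS['object_cache'][ "$group:$key" ] = $value;
}

function wp_cache_get( $key, $group = '' ) {
	$id = "$group:$key";
	return isset( $GLOBALS['object_cache'][ $id ] ) ? $GLOBALS['object_cache'][ $id ] : false;
}

// Fragile: $last_changed becomes whatever wp_cache_set() returns,
// which here is null and would cast to '' in the cache key.
// $last_changed = wp_cache_set( 'last_changed', 1, 'posts' );

// Safer: assign the value explicitly, then set it, ignoring the
// backend's return value entirely.
function get_last_changed() {
	$last_changed = wp_cache_get( 'last_changed', 'posts' );
	if ( ! $last_changed ) {
		$last_changed = 1;
		wp_cache_set( 'last_changed', $last_changed, 'posts' );
	}
	return $last_changed;
}

// The cache key now stays stable for repeated queries in a page load.
function build_pages_cache_key( $query_hash ) {
	return "get_pages:$query_hash:" . get_last_changed();
}
```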
#14 @ 12 years ago
- Cc kurtpayne added
r23320 changed 'comment' to 'posts', and now the unit test Tests_Comment_Query::test_get_comments_for_post is failing.
#16 @ 12 years ago
Resetting $last_changed to 1 is wrong. It is very unlikely, but if $last_changed got purged from the cache while the contents of the buckets remained (say it lived on a different memcached server which failed), you could end up with a key that points to stale data still sitting in the cache.
In other words, this increases the possibility of key collisions.
IMO it is better to use a random initial seed every time it needs to be calculated.
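One way to implement this suggestion is to seed with microtime() rather than a fixed 1, so a reseeded incrementor can essentially never reproduce a key from a previous incarnation whose buckets survived. A sketch, again with hypothetical in-memory cache stubs and an illustrative get_last_changed() helper:

```php
<?php
// Hypothetical in-memory stand-ins for the WP object cache API.
$GLOBALS['object_cache'] = array();

function wp_cache_set( $key, $value, $group = '' ) {
	$GLOBALS['object_cache'][ "$group:$key" ] = $value;
}

function wp_cache_get( $key, $group = '' ) {
	$id = "$group:$key";
	return isset( $GLOBALS['object_cache'][ $id ] ) ? $GLOBALS['object_cache'][ $id ] : false;
}

// Seed last_changed with a per-reset unique value instead of 1.
// If the incrementor is purged but the data buckets survive, the new
// seed cannot collide with keys built from the old incarnation.
function get_last_changed() {
	$last_changed = wp_cache_get( 'last_changed', 'posts' );
	if ( ! $last_changed ) {
		$last_changed = microtime(); // e.g. "0.43577800 1358864915"
		wp_cache_set( 'last_changed', $last_changed, 'posts' );
	}
	return $last_changed;
}
```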
This looks great. It would be helpful to see a little more detail here about the testing you did, so other people can understand the issue and how this patch resolves it.
Some background for everyone:
The basic issue, as I recall, is that the data cached by get_pages() can get very large for big hierarchies, because full "post" data is cached for each node; we saw an example where the memcached object cache backend had to split the get_pages cache into 62 one-megabyte chunks.
The patch addresses this by only caching the IDs as the post objects are already cached.
Some questions for nprasath002: