Make WordPress Core

Opened 14 years ago

Closed 14 years ago

Last modified 14 years ago

#16893 closed enhancement (fixed)

Stop or reduce crawling of comment reply ?replytocom URLs

Reported by: joelhardi Owned by: nacin
Milestone: 3.2 Priority: normal
Severity: normal Version: 3.1
Component: Comments Keywords: has-patch dev-feedback
Focuses: Cc:

Description

(For full background, you can check out the off-topic comments at the ends of #10550 and #16881.)

r16230 (quite appropriately) removed the rel="nofollow" attribute from the <a href="?replytocom">Reply to this comment</a> links in the comments display. Since then, users have reported that search engines are now crawling these pages (as one would expect). This means unnecessary server overhead (these pages are almost always dynamically generated, even when using a caching plugin) and may reduce how often search engines crawl "real" pages, since there are so many of these dummy URLs to index.

Additionally, there may be SEO-related reasons why this is bad. Although that may be largely mitigated by rel="canonical" and the fact that contents of these pages are 99.9% a duplicate of their canonical versions, search engines are also known to penalize sites for having many pages with duplicate content. And, the specification for the canonical attribute states that its use in page-ranking calculations is at the discretion of indexers (i.e., using canonical is no deterministic guarantee of anything).

I'm attaching 2 patches to be considered/discussed separately:

1: robots meta tag

The first applies the robots exclusion standard to these URLs.

For individual site admins, putting Disallow: *?replytocom in robots.txt is the obvious fix. #11918 would have added this rule to the output of do_robots(), but it was dropped (well before r16230). I would support putting it back in.
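In the meantime, individual sites can append the same rule through the 'robots_txt' filter that do_robots() applies to its output. A minimal sketch (the callback name here is purely illustrative and not part of either attached patch):

// In a theme's functions.php or a small plugin: append the rule to the
// virtual robots.txt that do_robots() serves when no physical file exists.
function replytocom_disallow_rule( $output, $public ) {
    if ( '0' != $public ) // only relevant when the site is public
        $output .= "Disallow: *?replytocom\n";
    return $output;
}
add_filter( 'robots_txt', 'replytocom_disallow_rule', 10, 2 );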

My patch general-template.php.17522.diff would add the <meta name='robots' content='noindex,nofollow' /> tag (already used when the WordPress privacy setting is enabled) to the ?replytocom URLs. The pages would still be hit by crawlers but would no longer be indexed -- fully addressing the duplicate-indexing issue and rendering moot any debate over the effectiveness of "canonical."
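The effect is roughly equivalent to the following sketch, which emits the tag from a wp_head callback whenever the replytocom parameter is present (the attached diff presumably wires this into the existing noindex() logic in general-template.php rather than adding a standalone function; the function name here is illustrative only):

// Illustrative only: emit the robots meta tag on ?replytocom requests.
function replytocom_noindex() {
    if ( isset( $_GET['replytocom'] ) )
        echo "<meta name='robots' content='noindex,nofollow' />\n";
}
add_action( 'wp_head', 'replytocom_noindex', 1 );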

Compared to robots.txt, this is an imperfect solution to the crawling/server-overhead problem, because these pages would still have to be at least partially retrieved for the crawler to read the meta tag. I expect it will still have a beneficial effect here, because well-behaved crawlers will reduce how often they retrieve these noindex URLs.

I've got this code deployed on some live sites to test this idea out (but they previously had a robots.txt rule blocking ?replytocom, so I don't expect instantly useful info).

Even if it's not 100% effective, I think this is an improvement and a trivial enhancement that could go into core soon. (This is the dev-feedback item.)

2: change links to forms

Picking up on filosofo's idea of replacing the <a> links with forms, I've done that in comment-template.php.17522.diff.

Functionally, this is a drop-in replacement, since these are GET forms that produce the same HTTP request as the current <a> tags (please, try them out!).
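To make this concrete, each reply link becomes a small GET form along these lines (a simplified sketch, not the literal markup from the attached diff; the diff presumably builds it inside the reply-link functions in comment-template.php, and the exact attributes and escaping may differ):

// Simplified illustration; $comment_id stands in for the comment being replied to.
printf(
    '<form class="comment-reply-form" action="%s" method="get"><input type="hidden" name="replytocom" value="%d" /><button type="submit">Reply</button></form>',
    esc_url( get_permalink() ),
    $comment_id
);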

Although Google has been crawling forms for some time, it has only done so on an experimental basis, and the practice isn't widespread. It also does not affect the crawler's page selection or search-engine ranking in any way. So changing these links to GET forms should sharply reduce (but possibly not eliminate) crawling of these URLs. I don't have any data for this, so it merits further discussion.

One implementation issue is that since the <a> tags become <button> elements, it's going to affect themes. That alone could be a deal-killer! I chose the button element over input because it's easier to style in a cross-browser way (i.e. to look just like a link), but I defer to the UI folks on that.

So ... this 2nd idea isn't necessarily fully cooked and would need broader support.

And, someone may have an even better way to address the overall issue.

Attachments (3)

general-template.php.17522.diff (501 bytes) - added by joelhardi 14 years ago.
adds robots meta tag to ?replytocom pages
comment-template.php.17522.diff (1.5 KB) - added by joelhardi 14 years ago.
changes "reply" links in comments list to form buttons
16893.diff (664 bytes) - added by nacin 14 years ago.


Change History (12)

@joelhardi
14 years ago

adds robots meta tag to ?replytocom pages

@joelhardi
14 years ago

changes "reply" links in comments list to form buttons

#1 follow-up: @hakre
14 years ago

Thanks for the initiative.

Why nofollow?

#2 in reply to: ↑ 1 @joelhardi
14 years ago

Replying to hakre:

Why nofollow?

I'm assuming you mean in the robots meta tag?

Well, a distinction should be made here between this and rel="nofollow" -- the robots version predates the rel attribute and has a different meaning. (The potential for confusion has been noted in the rel=nofollow spec since forever.) The robots meta version actually means "do not scan this page for links to follow" whereas the rel attribute means "leave this link out of your link-value calculations" (i.e. PageRank). You probably knew that, I just thought I'd explain for whoever comes along since these tickets have veered way OT.

So, in the robots version, we could do just "noindex" but leave out "nofollow" and the page would not be indexed, but the search engine crawler would still scan it for links to other pages to index.

Anyway, I thought about whether to include it, because right now there are no "bad" links on the ?replytocom pages. All the links are identical to those on the regular post/comment page, with the exception of cancel-comment-reply-link, whose href is only "${postURL}#respond".

But, for the same reason there's also no reason to not include it -- the links that would be followed have already been spidered one hop before getting to the ?replytocom page.

I decided to include it because the goal is to eliminate or reduce crawling of these pages -- so the hope is that if they're "noindex,nofollow," smart search engines like Googlebot will adjust their crawl frequency of these URLs down, since they're of zero value. Possibly they could also cancel page downloading midstream when the HTTP response is larger than the TCP window size, reducing transfer (although this is so minor I don't think it's worth a big investigation).

Whereas, if the pages aren't nofollow, bots can/should still scan them for links, so they're going to be crawled perhaps just as frequently as before (only not indexed).

Hypothetically, if in the future WordPress were to add some new, unique link to these ?replytocom pages (unlikely, I know), we would have to revisit this issue. However, I think such a link is more likely to be another functional, app-controller-style URL (like "cancel comment reply") that we don't want crawled than something with unique, high-value content that we do want followed. So that also argues in favor of including "nofollow".

#3 @joelhardi
14 years ago

Reporting back after running attachment:general-template.php.17522.diff (which adds the robots noindex,nofollow meta tag to ?replytocom URLs) on two live sites for the couple of weeks since this ticket was opened.

It's worked as well as (or better than) I expected and I'd recommend adding this functionality to a future release.

?replytocom pages have not been indexed by Google, and there's been no increase in googlebot crawling of these sites (previously I'd had robots.txt block access to these URLs). So even the hypothesis that googlebot intelligently stops recrawling these URLs once it encounters the meta tag has been borne out.

Also, Google Webmaster Tools has a "crawl errors" section that normally lists URLs blocked by robots.txt. These URLs aren't included (in fact they don't show up anywhere in Webmaster Tools) since they're blocked by the meta tag. So the end-user goal of not having these URLs litter the screen when logging into Webmaster Tools is also achieved. I think this is a good improvement to quiet the complaints on the other thread about Google now crawling these pages since the rel="nofollow" attribute was dropped from the <a> tags, and I don't see any potential downsides.

#4 @nacin
14 years ago

This patch seems a little low in the stack. Maybe this somewhere:

if ( isset( $_GET['replytocom'] ) )
    add_filter( 'pre_option_blog_public', '__return_zero' );

#5 follow-up: @joelhardi
14 years ago

Thanks, I agree about it being too low in the stack, I just couldn't think of a better way.

I looked for someplace obvious to add a filter and didn't see one, and didn't know what filter to add or about __return_zero (how useful!). So you have solved 95% of it!

I had thought that, to group the filter addition with the other replytocom code, it would have to go into one of the functions in comment-template.php unless there was a more serious refactor. 'replytocom' is just a magic string in that file, and about three functions branch on isset($_GET['replytocom']).

The problem is that none of these functions is called until after wp_head() so that doesn't work.

It would definitely work to put it in default-filters.php but then it's even lower in the stack. Could put the replytocom check in a new function and hook it to wp_head, but don't think that's better since it adds overhead and is basically equivalent to how noindex() is called.

Could make replytocom into a public query var, and then add a filter to 'query_vars' or similar so that it's called inside class WP when the request is parsed.

Anyway, those are my ideas, somebody like you who knows the code 10x better may have a much better one.

#6 in reply to: ↑ 5 @nacin
14 years ago

  • Milestone changed from Awaiting Review to 3.2

Replying to joelhardi:

It would definitely work to put it in default-filters.php but then it's even lower in the stack. Could put the replytocom check in a new function and hook it to wp_head, but don't think that's better since it adds overhead and is basically equivalent to how noindex() is called.

Actually, I was thinking default-filters.php. Putting it in a function and hooking it is no different, since undoing that is unhooking a filter, while undoing this is hooking a filter. Dropping it into default-filters simply avoids an extra layer. It's not lower in the stack, as it keeps the code outside of noindex(), which would still feel like a generic function.
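Spelled out, that means the check sits directly in wp-includes/default-filters.php, along these lines (a sketch of the idea; the exact form is whatever lands in the eventual changeset):

// With blog_public short-circuited to 0 for these requests, the existing
// noindex() callback on wp_head prints the noindex,nofollow meta tag,
// and a plugin that wants the old behavior can simply remove this filter.
if ( isset( $_GET['replytocom'] ) )
    add_filter( 'pre_option_blog_public', '__return_zero' );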

Moving to 3.2 for review.

@nacin
14 years ago

#7 @joelhardi
14 years ago

Works for me!

#8 @nacin
14 years ago

  • Owner set to nacin
  • Resolution set to fixed
  • Status changed from new to closed

In [17891]:

Don't allow indexing of replytocom URLs. fixes #16893.

#9 @hakre
14 years ago

Thanks, I can't wait to see this in public testing!
