
What's the best way to make an infinitely-scrolling page crawlable by Google?

  • In a project I'm working on, we have photo pages that use infinite scrolling. The photos don't have a traditional separate permalink page; instead, you can link directly to a photo's position in the main page using a hash fragment in the URL. I'm trying to figure out the best way to make these pages and photos crawlable by Google and other search engines. The three approaches I'm considering are:

    1. Present the entire photo page to Googlebot. I don't know how this squares with Google's "don't present different content to search engines" guideline, even though it's essentially the same thing a user would see after scrolling all the way down.

    2. Do nothing: the page degrades gracefully to regular pagination when JavaScript is unavailable, so Googlebot shouldn't have a problem navigating the content. However, I don't want page 2, page 3, etc. indexed, because they're not something users will typically see.

    3. Follow the Ajax crawling guidelines (http://code.google.com/web/ajaxcrawling/docs/getting-started.html) and list *every* photo's URL (page/1/#!/photo/1, page/1/#!/photo/2, page/2/#!/photo/15, etc.) in the XML sitemap. I'm leaning towards this option, but I'm not sure whether I should set each photo's main page (without the hash fragment) as the rel="canonical" link, so that Google lists the page from the top rather than each photo's individual position.

    Any thoughts, or any other ideas?
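If option 3 were chosen, the sitemap could be generated mechanically from the list of photo permalinks. Here is a minimal sketch using only the Python standard library; the base URL and the hash-bang paths are the hypothetical ones from the question, not a real site:

```python
# Sketch: emit one <url> sitemap entry per photo's hash-bang permalink.
# Base URL and paths are hypothetical examples from the question above.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(base_url, photo_paths):
    """Build a sitemap XML string listing every photo's #! URL."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for path in photo_paths:
        url = ET.SubElement(urlset, "url")
        loc = ET.SubElement(url, "loc")
        loc.text = base_url + path
    return ET.tostring(urlset, encoding="unicode")

photos = ["page/1/#!/photo/1", "page/1/#!/photo/2", "page/2/#!/photo/15"]
xml = build_sitemap("http://example.com/", photos)
print(xml)
```

Whether the `<loc>` entries should carry the hash fragment, or the fragment-free canonical page, is exactly the open question — this sketch just shows the mechanical part.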

  • Answer:

    There is a specific Ajax crawling scheme designed for exactly this problem. Basically, Google will replace any #! it sees in a link with ?_escaped_fragment_=, so if you use #! and also handle the escaped-fragment syntax on your server, everything will work smoothly. Twitter uses this with "new Twitter", so it isn't some minor feature and you can be confident it will be well-supported. For more detail: http://code.google.com/web/ajaxcrawling/

Kevin Lacker at Quora


Other answers

I wouldn't redirect bots, but I would put in some visible (albeit obscure for users) pagination that would enable the creation of pages with unique elements (page titles, headings, meta data, content). I like your option 2 but am puzzled that you wouldn't want them indexed. If you only ever have one page in an index, then your ability to rank for many things will be drastically limited. Your option 3 seems to contradict your option 2, however! I'd go w/ unique page creation via noscript or some other hidden div approach, creating pages w/ illustrative breadcrumbs that enable people to see where they are in any structure. You could always overlay old pages with a light box or div that shouted about the homepage in some way.

Rob Watts

You should use progressive enhancement instead. For example, render the page with the standard pagination links (only showing the first page), and then use JS to replace the pagination links and inject the next set of results. That way, search engines will still be able to crawl your pages as before, while real users will experience infinite scrolling behavior.

Avishai Weiss
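The server side of that progressive-enhancement approach might look like the sketch below: every page is plain, crawlable HTML with ordinary pagination links, and client-side JS (not shown) would intercept the "next" link and append the fetched photos for the infinite-scroll experience. The URL scheme and page size here are assumptions for illustration:

```python
# Sketch: server renders each page as plain HTML with standard
# rel="prev"/rel="next" pagination links, so crawlers can follow them.
# URL scheme (/photos/page/N/) and PAGE_SIZE are hypothetical.
PAGE_SIZE = 2

def render_page(photos, page):
    """Render 1-based page `page` of `photos` with pagination links."""
    start = (page - 1) * PAGE_SIZE
    chunk = photos[start:start + PAGE_SIZE]
    items = "\n".join(f'<li><img src="{src}"></li>' for src in chunk)
    links = []
    if page > 1:
        links.append(f'<a rel="prev" href="/photos/page/{page - 1}/">Previous</a>')
    if start + PAGE_SIZE < len(photos):
        links.append(f'<a rel="next" href="/photos/page/{page + 1}/">Next</a>')
    return f'<ul class="photos">\n{items}\n</ul>\n<nav>{" ".join(links)}</nav>'

html = render_page(["a.jpg", "b.jpg", "c.jpg"], 1)
```

This keeps one canonical, crawlable version of the content; the infinite scroll is purely a client-side enhancement layered on top.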

You could direct Google (and other search bots) to a different page, with or without pagination. Bear in mind that Google only indexes a certain amount of content from each page, so you may need to break them up anyway.

Thomas Edwards
