Pagination SEO: Rel Next/Prev, Load More, and Infinite Scroll
Still haunted by pagination SEO? This guide dissects the ghost of `rel="next/prev"` and provides clear, actionable strategies for modern pagination.
Let’s Get This Over With: An Introduction to Pagination SEO
Let’s be direct: pagination SEO is a solved problem that most of the internet still gets wrong. It’s the practice of ensuring search engines can efficiently crawl and understand content split across a series of component pages. When done poorly, it wastes crawl budget, dilutes link equity, and creates a swamp of duplicate or thin content.
For years, we had `rel="next"` and `rel="prev"` to signal the relationship between these pages. Then, in 2019, Google casually mentioned they hadn’t actually used it as an indexing signal for years. The SEO community, predictably, lost its collective mind.
But the fundamentals haven’t changed. Search engines still need to discover your deep-linked products and articles. This guide cuts through the noise and provides the definitive, modern approach to pagination SEO, from classic numbered pages to JavaScript-heavy infinite scroll.
Why Pagination SEO Still Haunts Us
If `rel="next/prev"` is a ghost, why are we still talking about this? Because the underlying problems it tried to solve are very much alive and well. A poor pagination strategy is a direct attack on your site’s technical health.
The primary villain is crawl budget waste. Search engines allocate finite resources to crawl your site. If Googlebot spends its time hopping through thousands of low-value paginated URLs, it has less time to find and index your critical new content or update existing pages.
Next is index bloat. Without clear signals, search engines might index every paginated variant, creating a mess of near-duplicate pages in the SERPs. This can dilute the authority of the main category page and make it harder for your most important content to rank.
Finally, there’s the issue of link equity distribution. Paginated pages often link to the deepest products or posts on a site. If these pages aren’t crawled effectively or their links are `nofollow`ed (a cardinal sin we’ll discuss later), that authority never flows to the pages that actually make you money.
The ‘Right’ Way to Handle Standard Pagination SEO
Forget the old ways. The current best practice for standard pagination (`/category?page=2`, `/category?page=3`, etc.) is simple, logical, and effective. The goal is to treat each paginated page as a unique, indexable entity that is part of a larger whole.
First, every paginated page should have a self-referencing canonical tag. Page 2 should canonicalize to Page 2. Page 3 should canonicalize to Page 3. Do not, under any circumstances, canonicalize all paginated pages back to the first page. That’s an explicit signal to Google to ignore the content on pages 2 and beyond, effectively hiding those product links from discovery.
Here is what the canonical tag on `https://www.example.com/widgets?page=2` should look like:

```html
<link rel="canonical" href="https://www.example.com/widgets?page=2" />
```

Second, you must use standard links for your pagination controls. Search engines need to discover these URLs to crawl them. Hiding them behind JavaScript functions without a fallback `href` attribute is a recipe for disaster. Make the links clear and crawlable.
This approach allows Google to index the component pages, discover the unique content (and links) on each, and understand through internal linking that they form a cohesive series. For a deeper dive on canonicalization, see our guide to canonicals.
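As a sketch, crawlable pagination controls are just plain anchors with real `href` attributes (the paths and class names here are illustrative, not prescriptive):

```html
<!-- Every control is a real link a crawler can follow,
     even if JavaScript later intercepts the clicks. -->
<nav class="pagination" aria-label="Pagination">
  <a href="/widgets?page=1">1</a>
  <a href="/widgets?page=2" aria-current="page">2</a>
  <a href="/widgets?page=3">3</a>
  <a href="/widgets?page=3">Next</a>
</nav>
```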
JavaScript Pagination: ‘Load More’ and Infinite Scroll
Now for the modern challenges: ‘load more’ buttons and infinite scroll. While they offer a slick user experience, they are often implemented in a way that is completely opaque to search engine crawlers. The core issue is that content is loaded dynamically upon a user action (a click or scroll) without a corresponding change in a crawlable URL.
The solution is progressive enhancement. Build a foundation of standard, crawlable pagination first. Your site should function perfectly with accessible URLs like `?page=2`, `?page=3`, etc. Then, layer your JavaScript functionality on top for users whose browsers support it.
When a user clicks ‘Load More’, your JavaScript can fetch the content from the `?page=2` URL and append it to the current view, while also updating the URL in the browser bar using the History API (`pushState`). This gives you the best of both worlds: a seamless UX and a crawlable architecture.
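A minimal sketch of that pattern follows. The selectors (`.load-more`, `.product-grid`) are illustrative assumptions, not part of any real site:

```javascript
// Pure helper: compute the next page's crawlable URL from the current one.
function nextPageUrl(href) {
  const url = new URL(href);
  const current = Number(url.searchParams.get('page')) || 1;
  url.searchParams.set('page', String(current + 1));
  return url.toString();
}

// Browser-only wiring: fetch the next paginated URL, append its items,
// and record the new state in the address bar with the History API.
if (typeof document !== 'undefined') {
  const button = document.querySelector('.load-more');
  button?.addEventListener('click', async () => {
    const next = nextPageUrl(window.location.href);
    const html = await fetch(next).then((r) => r.text());
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const grid = doc.querySelector('.product-grid');
    if (grid) {
      document.querySelector('.product-grid').append(...grid.children);
      history.pushState({}, '', next); // URL bar now reflects ?page=N
    }
  });
}
```

Because the button fetches the same `?page=N` URL a crawler would, there is a single source of truth for each page's content.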
You can easily audit this with a tool like ScreamingCAT. Crawl your site with JavaScript rendering disabled, then crawl it again with JS rendering enabled. If the crawler discovers significantly fewer product or article links in the non-JS crawl, you have a discovery problem.
Good to know
With infinite scroll, be mindful of performance. Loading an endless stream of products can bog down the user’s browser. Consider converting to a ‘load more’ button after a few scrolls to give the user more control.
- Build a crawlable foundation: Ensure a complete set of `?page=N` URLs exists and is linked with standard `<a href>` tags.
- Use the History API: When new content is loaded via JS, update the URL in the browser bar to reflect the current state (e.g., from `/category` to `/category?page=2`).
- Avoid fragment identifiers: Don’t use hash-based URLs (`#page=2`) for pagination. Google often ignores them for indexing purposes.
- Ensure discoverability: The initial page load must contain links to subsequent pages so crawlers can find them without executing JavaScript.
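The 'convert to a load-more button after a few scrolls' advice can be sketched like this. The selector names, the page limit, and the assumption that a hidden `.load-more` button already handles fetching are all illustrative:

```javascript
// Auto-load the next page while the user scrolls, then hand control
// back to a 'Load More' button after a few pages.
const MAX_AUTO_PAGES = 3; // illustrative threshold

// Pure helper: decide whether scrolling should still trigger a load.
function shouldAutoLoad(pagesLoaded, maxAuto = MAX_AUTO_PAGES) {
  return pagesLoaded < maxAuto;
}

if (typeof document !== 'undefined') {
  let pagesLoaded = 1;
  const sentinel = document.querySelector('.scroll-sentinel');
  const button = document.querySelector('.load-more'); // hidden initially

  const observer = new IntersectionObserver((entries) => {
    if (!entries[0].isIntersecting) return;
    if (shouldAutoLoad(pagesLoaded)) {
      pagesLoaded += 1;
      button.click(); // reuse the button's fetch-and-append logic
    } else {
      observer.disconnect(); // stop auto-loading
      button.hidden = false; // let the user opt in from here on
    }
  });
  if (sentinel) observer.observe(sentinel);
}
```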
A Rogues’ Gallery of Pagination SEO Mistakes
Over the years, we’ve seen some truly creative ways to mismanage pagination. Here are the greatest hits of pagination malpractice and why you must avoid them.
The most common error is canonicalizing all paginated URLs to the first page of the series. This tells search engines that pages 2, 3, and 100 are just duplicates of page 1. Google will likely obey, de-index those pages, and never discover the unique products linked from them.
Another classic is using `nofollow` on pagination links. The logic is usually to ‘stop wasting PageRank’ on these pages. This is flawed thinking. It prevents link equity from flowing to deeper pages and tells crawlers not to even bother discovering what’s on the other side.
Using `noindex, follow` on paginated pages (from page 2 onwards) is a slightly more nuanced, but still problematic, strategy. While it keeps the pages out of the index, it relies on Google continuing to honor the `follow` directive over the long term, which is not guaranteed. It’s a temporary fix that creates long-term uncertainty.
Finally, there’s the ‘View All’ page. In theory, canonicalizing every paginated page to a single page containing all items is a great idea. In practice, it’s almost always a performance disaster, leading to slow load times and a poor user experience. Unless you have a small number of items, avoid this approach.
Warning
Never block your paginated URLs with `robots.txt`. Blocking crawling prevents search engines from seeing any signals on the page, including links to your products or articles. This is the SEO equivalent of shooting yourself in the foot.
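To be concrete, this is the kind of rule to hunt down and remove, not to copy (the path pattern is illustrative):

```text
# DON'T do this: blocking crawling hides every signal and link
# on the paginated pages from search engines.
User-agent: *
Disallow: /*?page=
```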
Auditing Your Pagination with ScreamingCAT
Theories are nice, but data is better. You need to audit your implementation to find what’s actually happening. ScreamingCAT is the perfect tool for dissecting your pagination strategy.
Start by running a crawl on your site. Once it’s complete, use the search bar in the top right to filter for your pagination parameter, such as `?page=`. This will immediately isolate all your paginated URLs.
With these URLs filtered, click on the ‘Canonicals’ tab. In the ‘Canonical Link Element 1’ column, you should see self-referencing URLs. If you see page 1’s URL listed for every paginated page, you’ve found a major issue.
Next, check the ‘Directives’ tab. Look for any unexpected `noindex` or `nofollow` directives that could be sabotaging your crawl. For JavaScript-heavy sites, run a second crawl with JavaScript rendering enabled (Configuration > Spider > Rendering) and compare the number of discovered URLs. A large discrepancy points to a reliance on client-side actions for link discovery.
This same process is invaluable when auditing faceted navigation, another common source of index bloat and crawl budget waste. The principles are the same: ensure every important URL is crawlable, indexable, and sending the right signals.
You can’t fix what you can’t see. A thorough crawl is the first and most critical step in diagnosing any technical SEO issue, especially pagination.
ScreamingCAT Team
Key Takeaways
- The `rel="next/prev"` link markup is dead and no longer used by Google for indexing.
- The best practice for standard pagination is to use self-referencing canonical tags on each paginated page.
- For ‘load more’ and infinite scroll, use progressive enhancement. Build a crawlable, paginated series first, then add JavaScript functionality.
- Never canonicalize all paginated pages to page 1 or use `nofollow` on pagination links.
- Use an SEO crawler like ScreamingCAT to regularly audit your pagination for incorrect canonicals, directives, or JavaScript rendering issues.
Ready to audit your site?
Download ScreamingCAT for free. No limits, no registration, no cloud dependency.