Engineering

Why your React site isn't ranking — the prerender gap nobody explains

Most React SPAs ship an empty body to Google. Here's what the crawler actually sees, why the JS-render queue can take weeks, and the three fixes ranked by effort.

By Mr. Gill

Every couple of weeks someone emails me with a variation of the same question. "My React site looks great. Lighthouse is green. But Google won't index the pages. Why?"

The answer is almost always the same. Googlebot isn't seeing what you're seeing. You shipped a beautiful SPA to a crawler that doesn't wait for it to hydrate. And nobody on your team noticed because when you visit the page in a browser, everything is fine.

This is the gap nobody explains well. So let's fix that.

What Googlebot actually sees on a CSR React site

Open the source of a typical create-react-app or default Vite build. Here's the body.

<body>
  <div id="root"></div>
  <script type="module" src="/assets/index-abc123.js"></script>
</body>

That's the page. When a crawler hits it, that's all it gets: an empty div, a script tag, and nothing else. Your title is set by react-helmet at runtime. Your article body is fetched from an API after hydration. Your meta description doesn't exist yet.

A human browser happily runs the JS, fetches the data, and renders the page in about four hundred milliseconds. A crawler doesn't wait. At least, not in the way you think.
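To make the first-pass view concrete, here's a small check you can run against a page's raw HTML, the same HTML a non-JS crawler evaluates. The function name, the specific checks, and the 200-character threshold are all illustrative, not real crawler heuristics.

```javascript
// Illustrative sketch: would the raw HTML, before any JavaScript runs,
// give a first-pass crawler anything to index? The checks and the
// 200-character threshold are made up for demonstration.
function looksPrerendered(html) {
  const hasTitle = /<title>[^<]+<\/title>/i.test(html);
  const hasDescription = /<meta\s+name=["']description["']/i.test(html);
  const bodyMatch = html.match(/<body[^>]*>([\s\S]*)<\/body>/i);
  const visibleText = (bodyMatch ? bodyMatch[1] : "")
    .replace(/<script[\s\S]*?<\/script>/gi, "") // scripts aren't content
    .replace(/<[^>]+>/g, "")                    // strip remaining tags
    .trim();
  return hasTitle && hasDescription && visibleText.length > 200;
}

// The CSR shell from above fails every check:
const csrShell = `<body><div id="root"></div><script type="module" src="/assets/index-abc123.js"></script></body>`;
console.log(looksPrerendered(csrShell)); // false
```

Run it against your own production HTML (fetched with no JS execution) and you'll know in one function call which side of the gap you're on.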

The rendering queue nobody mentions

Googlebot has two passes. The first one reads your HTML. If there's nothing interesting (empty body, no meta, no content), the URL goes into a second queue for JavaScript rendering. That queue is processed when Google has the capacity, which is, and I'm quoting roughly from their own public talks, "usually within a few hours but sometimes weeks."

Until it clears that queue, your page is effectively invisible. Worse, some other crawlers (the AI ones, specifically GPTBot, ClaudeBot, PerplexityBot) don't run JavaScript at all. If your content isn't in the first response, it doesn't exist for them.

  • 2 passes: Googlebot's render pipeline
  • Hours to weeks: queue wait for the JS pass
  • 0: the amount of JavaScript most AI crawlers execute

CSR vs SSR vs static prerender

There are three rendering strategies worth knowing. Each has a real trade-off. Pick the one that matches your site's shape.

Client-side rendered (CSR)

Ship an empty shell, hydrate in the browser. Cheap to host. Terrible for SEO. Fine for logged-in app dashboards, not fine for anything you want discovered.

Static prerender (SSG)

At build time, render each route to a finished HTML file. Crawler gets a complete page. Serves from a CDN. Best choice for content sites where routes don't change per-user.

The third option, server-side rendering (SSR) on every request, sits between those two. You get fresh content and great SEO, but you're now paying for an always-on server and you have to think about cache invalidation. Next.js, Remix, and SvelteKit all land here by default. Great tools. Sometimes overkill for a marketing site.

The prerender pattern that worked for us

PIXIPACE is a Vite + React SPA with about 195 routes (articles, services, location pages). We host on Vercel, which could run Next.js, but we didn't want framework lock-in for this site, so we wrote a small prerender script instead.

The shape is simple. Build the app normally. Then, for each known route, boot a headless renderer, render the React tree with the route in the URL, and write the result to dist/<route>/index.html. Vercel serves those static files directly.

import { writeFile } from "node:fs/promises";
// server-entry.js exports render(route), which calls renderToString internally
import { render } from "../dist-ssr/server-entry.js";

// template, ALL_ROUTES, outPath, and dedupeHead are defined earlier in the script
for (const route of ALL_ROUTES) {
  const { html, head } = await render(route);
  const full = template
    .replace("<!--app-head-->", dedupeHead(head)) // per-route <title> and meta tags
    .replace("<!--app-html-->", html);            // rendered React markup
  await writeFile(outPath(route), full);
}
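The dedupeHead call matters more than it looks: nested components can each set a title or description, and you only want one of each in the final HTML. Our actual helper isn't shown here, but a plausible sketch (assuming head arrives as a string of tags, and that the last-set tag should win, matching react-helmet's behavior) looks like this:

```javascript
// Hypothetical sketch of a dedupeHead(head) helper, not our exact code.
// Assumes `head` is a string of <title>/<meta>/<link> tags and keeps only
// the last tag for each logical slot (later tags win).
function dedupeHead(head) {
  const tags = head.match(/<[^>]+>(?:[^<]*<\/[^>]+>)?/g) || [];
  const keyFor = (tag) => {
    const m = tag.match(/\b(?:name|property)=["']([^"']+)["']/);
    if (m) return `meta:${m[1]}`;       // e.g. meta:description, meta:og:title
    if (/^<title/i.test(tag)) return "title";
    return tag;                          // unique tags keep themselves
  };
  const seen = new Map();
  for (const tag of tags) seen.set(keyFor(tag), tag); // later entries overwrite
  return [...seen.values()].join("\n");
}
```

Keyed on `name`/`property`, a page-level description silently replaces the app-shell default instead of shipping both.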

Dynamic routes that only exist in the database (like individual Journal articles) can't be baked at build time, so we route those through a Firebase Function that renders on-demand and caches at the edge. Fast enough. Googlebot-friendly. Cheap.
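For the on-demand path, the handler shape is roughly this. This is a sketch, not our production function: it assumes an Express-style `(req, res)` handler (the shape Firebase HTTPS functions expose) and a `render(route)` like the build script's; `makePrerenderHandler` is an illustrative name. The important part is the `Cache-Control` header, which tells the CDN edge to keep the rendered page so you only pay for one render per route per day.

```javascript
// Sketch of an on-demand prerender endpoint. makePrerenderHandler and its
// arguments are illustrative; the Cache-Control directives are standard.
function makePrerenderHandler(template, render) {
  return async (req, res) => {
    const { html, head } = await render(req.path);
    res.set(
      "Cache-Control",
      // Browsers revalidate every time; the CDN edge caches for a day and
      // may serve a stale copy for a week while refreshing in the background.
      "public, max-age=0, s-maxage=86400, stale-while-revalidate=604800"
    );
    res.status(200).send(
      template.replace("<!--app-head-->", head).replace("<!--app-html-->", html)
    );
  };
}
```

First hit renders; every hit after that is a static file from the edge, which is why "fast enough, Googlebot-friendly, cheap" all hold at once.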

Got a React site Google won't index?

We've migrated a few out of the CSR rendering queue. Send us the URL and we'll tell you which of the three strategies fits.

See our web work →

What to do Monday morning

If you think your site has this problem, the first diagnostic takes sixty seconds. Load the page, then right-click, then View Page Source. Not DevTools. Source. That's what the crawler sees.

If your title, h1, and article body are in there, you're fine. If it's an empty div and a script tag, you have the gap. From there, the options, in rough order of effort:

  1. Switch to a framework that prerenders by default (Next.js, Astro, Remix). Biggest lift, biggest payoff.
  2. Add a static prerender step to your existing Vite build. Two to five days of work for most SPAs.
  3. Use a rendering proxy (Rendertron, Prerender.io). Quickest to set up, but you're adding a third-party dependency in the critical path.

One of these three options fixes this. None of them are hard. The reason it doesn't get fixed is that the symptom — "Google won't index us" — looks like an SEO problem, so people hire SEO consultants to fix a rendering problem. That rarely ends well.

Key takeaways
  • Right-click > View Page Source is the sixty-second test. If your content isn't there, crawlers don't see it.
  • Googlebot's JS-render queue can take hours or weeks. AI crawlers skip JS entirely.
  • Static prerender is the sweet spot for content sites. SSR for dynamic. CSR only for authenticated apps.
  • This is a rendering problem, not an SEO problem. Fix the rendering first, the rankings follow.