Something interesting is happening in AI search that most SEOs haven't caught up with yet: Google rank position matters less than it used to.

A page sitting on page 3 of Google can appear as the first cited source in a ChatGPT response — because AI systems re-rank based on answer quality, authority, and content clarity, not on who paid for placement or who has the most backlinks. The playing field has genuinely shifted.

But here's the catch: that re-ranking advantage only matters if AI crawlers can read your content in the first place. And for a significant share of React and Next.js sites, they can't.

The core problem: AI crawlers — GPTBot, PerplexityBot, ClaudeBot, anthropic-ai — are plain HTTP crawlers. They request your page, read the initial HTML response, and leave. If your content lives in JavaScript that executes after page load, there is nothing for them to read. You are effectively invisible to AI search, even if Google indexes you perfectly.

The Two-Minute Diagnosis

Before we go deeper, run these two tests right now. They take about two minutes and tell you exactly whether you have a problem.

Test 1: The No-JavaScript Test

This is the most intuitive check. Open your site in Chrome, open DevTools (F12), press F1 to open the DevTools settings, and under Preferences → Debugger check Disable JavaScript. Reload the page.

What renders without JavaScript is exactly what AI crawlers see. If your content disappears, shows a spinner, or displays a blank white page — AI crawlers are seeing the same thing.

Test 2: The Curl Test

This simulates how GPTBot actually crawls your site from the command line:

curl -s -A "GPTBot" https://yoursite.com/your-key-page | grep "expected headline"

If the grep returns nothing, your headline isn't in the initial HTML response. AI crawlers have nothing to work with.

Run the same command with your main value proposition, a key product name, or a blog post title. If none of them appear in the curl output, you have a visibility problem that's silently cutting you off from AI search traffic.
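To check several phrases at once, the curl test can be scripted. Here is a minimal sketch in TypeScript (Node 18+, which ships a built-in fetch); the URL and phrase list in the usage comment are placeholders, not real endpoints:

```typescript
// Report which expected phrases are missing from the raw HTML
// that an AI crawler receives (no JavaScript execution).
function missingPhrases(html: string, phrases: string[]): string[] {
  const haystack = html.toLowerCase();
  return phrases.filter((p) => !haystack.includes(p.toLowerCase()));
}

// Fetch the page roughly the way GPTBot does: one HTTP request,
// no rendering, then check the response body.
async function audit(url: string, phrases: string[]): Promise<void> {
  const res = await fetch(url, { headers: { 'User-Agent': 'GPTBot' } });
  const html = await res.text();
  const missing = missingPhrases(html, phrases);
  if (missing.length === 0) {
    console.log('All key phrases are present in the initial HTML.');
  } else {
    console.log('Missing from initial HTML:', missing);
  }
}

// audit('https://yoursite.com', ['expected headline', 'key product name']);
```

If any phrase shows up as missing, that content is invisible to plain-HTTP crawlers no matter how well it renders in a browser.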

Important: A clean Google Search Console is not evidence that AI crawlers can read your site. Googlebot renders JavaScript. AI crawlers don't. They're entirely separate systems operating by different rules.

Why React and Next.js Sites Have This Problem

Client-side rendering: what AI crawlers actually receive

A React Single-Page Application (SPA) — anything built with Create React App, Vite, or a bare React setup without SSR — sends the browser a near-empty HTML file like this:

<!DOCTYPE html>
<html>
<head>
  <title>My App</title>
</head>
<body>
  <div id="root"></div>
  <script src="/static/js/main.chunk.js"></script>
</body>
</html>

The browser receives this, downloads the JavaScript bundle, executes it, and only then does the content appear. Googlebot replicates this entire process. GPTBot does not — it reads the HTML above, finds an empty <div id="root"></div>, and considers the page empty.
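You can reproduce that emptiness directly. A quick sketch: strip the markup from the SPA shell above, the way a crude non-rendering text extractor would, and see how much readable text survives:

```typescript
// Crude text extraction, roughly what a non-rendering crawler gets:
// drop script/style bodies, then strip the remaining tags.
function visibleText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();
}

const spaShell = `<!DOCTYPE html>
<html>
<head><title>My App</title></head>
<body>
  <div id="root"></div>
  <script src="/static/js/main.chunk.js"></script>
</body>
</html>`;

console.log(visibleText(spaShell)); // "My App" — the <title> is all that survives
```

The page title is the only extractable text. Every word of actual content is locked inside the JavaScript bundle.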

Why Next.js doesn't automatically fix this

Next.js is often cited as the solution to AI crawler visibility, and it can be — but only if you use it correctly. The framework has two distinct rendering models that behave very differently:

| Configuration | Initial HTML has content? | AI crawlers see it? |
|---|---|---|
| App Router — Server Component (default) | Yes | Yes |
| App Router — 'use client' with useEffect data fetch | No | No |
| Pages Router — getStaticProps or getServerSideProps | Yes | Yes |
| Pages Router — no data fetching function | No | No |
| Vite / Create React App (no SSR) | No | No |

The most common trap for Next.js App Router users: wrapping content components in 'use client' for interactivity (forms, toggles, animations) and also fetching content inside them with useEffect. That data fetches after render, lives only in JavaScript memory, and never reaches the initial HTML. AI crawlers miss it entirely.

The Fix: Making Your Content Server-Rendered

Next.js App Router: keep content in Server Components

The App Router defaults to Server Components — components that render on the server and include their output in the initial HTML. This is ideal for AI visibility. The rule is simple: content goes in Server Components, interactivity goes in Client Components.

// ✅ GOOD — Server Component (default, no directive needed)
// File: app/blog/[slug]/page.tsx

import ShareButton from './ShareButton'; // Client Component for interactivity
import { getPost } from '@/lib/posts';   // your data-access helper

export default async function BlogPost({ params }) {
  const post = await getPost(params.slug); // runs on the server

  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.content }} />
      <ShareButton />  {/* This is a Client Component — just the button */}
    </article>
  );
}
// ❌ BAD — Content fetched in useEffect (invisible to AI crawlers)
// File: app/blog/[slug]/page.tsx
'use client';

import { useState, useEffect } from 'react';

export default function BlogPost({ params }) {
  const [post, setPost] = useState(null);

  useEffect(() => {
    fetch(`/api/posts/${params.slug}`).then(r => r.json()).then(setPost);
  }, [params.slug]);

  if (!post) return <div>Loading...</div>; // AI crawlers see this

  return <article>...</article>; // AI crawlers never reach this
}

Next.js Pages Router: add getStaticProps or getServerSideProps

Pages Router requires explicit data fetching functions. Pages without them default to pure client-side rendering.

// ✅ GOOD — Static generation (content in initial HTML)
// Dynamic routes in the Pages Router also need getStaticPaths
export async function getStaticPaths() {
  const slugs = await getAllSlugs(); // your data-access helper
  return { paths: slugs.map((slug) => ({ params: { slug } })), fallback: 'blocking' };
}

export async function getStaticProps({ params }) {
  const post = await getPost(params.slug);
  return { props: { post }, revalidate: 3600 }; // ISR: regenerate at most hourly
}

export default function BlogPost({ post }) {
  return <article><h1>{post.title}</h1>...</article>;
}

Vite / Create React App: migrate to Next.js or add SSR

For plain React SPAs (Create React App, Vite), the fastest path to AI visibility is migrating key pages to Next.js. If a full migration is impractical short-term, consider:

  • React Router with a pre-rendering step — tools like react-snap or vite-plugin-ssr (now maintained as Vike) generate static HTML for each route
  • A CDN-level edge worker that serves pre-rendered HTML to known bot user agents
  • Selective SSR — migrate only the highest-traffic landing pages and blog posts first
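The second option above, serving pre-rendered HTML to bots at the edge, comes down to user-agent detection plus a routing decision. A minimal sketch using the standard Request/Response API available in most edge runtimes and Node 18+; PRERENDER_ORIGIN is a hypothetical placeholder for wherever your pre-rendered pages live:

```typescript
// Known AI crawler user-agent substrings (not an exhaustive list).
const AI_BOTS = ['GPTBot', 'PerplexityBot', 'ClaudeBot', 'anthropic-ai'];

function isAIBot(userAgent: string): boolean {
  return AI_BOTS.some((bot) => userAgent.includes(bot));
}

// Hypothetical origin that serves pre-rendered static HTML.
const PRERENDER_ORIGIN = 'https://prerender.example.com';

// Edge handler sketch: bots get pre-rendered HTML, everyone else
// gets the normal SPA response.
async function handleRequest(request: Request): Promise<Response> {
  const ua = request.headers.get('user-agent') ?? '';
  if (isAIBot(ua)) {
    const url = new URL(request.url);
    return fetch(`${PRERENDER_ORIGIN}${url.pathname}`);
  }
  return fetch(request); // pass through to the SPA origin
}
```

Serving bots a pre-rendered copy of the same content users see is generally considered acceptable dynamic rendering rather than cloaking, but the content must match what users get.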

The robots.txt Piece: Necessary but Not Sufficient

One fix that's easier than SSR but often overlooked: make sure AI crawlers are explicitly allowed in your robots.txt. Many sites block all unknown bots by default.

# Allow all major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: anthropic-ai
Allow: /

User-agent: Applebot
Allow: /

User-agent: Google-Extended
Allow: /

Our own robots.txt explicitly allows GPTBot, ClaudeBot, and PerplexityBot. We use this as a signal to AI systems that we welcome their crawls — and we verify it regularly as part of our own GEO audit process.

But allowing bots in robots.txt has no effect if your HTML is empty. The bot will crawl your page, find nothing, and move on. You need both: access and content.
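Checking the access half can also be automated. A simplified robots.txt check in TypeScript — note this is a sketch, not a spec-compliant parser: it flags any matching user-agent group (or the * wildcard group) that contains a blanket Disallow, and it does not implement multi-agent groups or group-precedence rules from RFC 9309:

```typescript
// Simplified check: does a user-agent group matching `bot`
// (or the * group) contain "Disallow: /"?
function botBlocked(robotsTxt: string, bot: string): boolean {
  let applies = false;
  let blocked = false;
  for (const raw of robotsTxt.split('\n')) {
    const [field, ...rest] = raw.trim().split(':');
    const value = rest.join(':').trim();
    if (/^user-agent$/i.test(field.trim())) {
      applies = value === '*' || value.toLowerCase() === bot.toLowerCase();
    } else if (applies && /^disallow$/i.test(field.trim()) && value === '/') {
      blocked = true;
    }
  }
  return blocked;
}
```

Run it against the fetched body of https://yoursite.com/robots.txt for each AI bot you care about; any true result means the crawl stops before content is even a question.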

The AI Re-Ranking Opportunity

This is the business case that makes fixing this worth your time. Google's ranking algorithm blends hundreds of signals, many of which are slow to influence: backlinks take months to build, and domain authority accumulates over years. AI search is different.

ChatGPT, Perplexity, and Google AI Overviews select sources based on answer quality, topical authority, and structured content clarity. A well-written, authoritative page on a niche topic — even on a young domain — can outrank established players in AI responses if it answers the question better. We've observed pages ranked #3 on Google appearing as primary sources in AI-generated answers because the content structure was cleaner and more directly answerable.

That opportunity only exists if AI crawlers can read your content. Right now, for every React site with invisible CSR content, a competitor with a WordPress blog or a plain HTML page is getting the AI citation. The fix is technical and finite; the benefit is ongoing.

For a deeper look at the GEO fundamentals behind AI citation behavior, see our guides on What is GEO and GEO vs SEO in 2026.

Quick Checklist: AI Visibility for React/Next.js Sites

  1. Run the no-JS test. Disable JavaScript in DevTools, reload each key page. Content should appear without JS.
  2. Run the curl test. curl -A "GPTBot" https://yoursite.com | grep "your headline" — confirm the match returns.
  3. Audit your robots.txt. Verify GPTBot, PerplexityBot, ClaudeBot, anthropic-ai are all allowed.
  4. Fix App Router CSR. Move data fetching out of 'use client' components using async Server Components or fetch() in the page.
  5. Fix Pages Router CSR. Add getStaticProps or getServerSideProps to content pages that lack them.
  6. Add Article schema. Add Article JSON-LD to blog posts and key pages. AI systems use structured data to understand content and context.
  7. Verify with a GEO audit. Run a full audit to confirm AI crawlers can index your content and score your overall AI visibility.
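Item 6 on the checklist takes only a few lines in Next.js. Here is a sketch of building Article JSON-LD server-side so it lands in the initial HTML; the `post` shape is hypothetical, so adapt the fields to your own data model:

```typescript
// Build an Article JSON-LD object for a blog post.
// The `post` shape here is a hypothetical example.
function articleJsonLd(post: {
  title: string;
  description: string;
  url: string;
  datePublished: string;
  authorName: string;
}) {
  return {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: post.title,
    description: post.description,
    url: post.url,
    datePublished: post.datePublished,
    author: { '@type': 'Person', name: post.authorName },
  };
}

// In a Server Component, render it into the initial HTML:
// <script type="application/ld+json"
//   dangerouslySetInnerHTML={{ __html: JSON.stringify(articleJsonLd(post)) }} />
```

Because this renders on the server, the structured data is visible in the raw HTML response, exactly where AI crawlers look for it.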

See Exactly What AI Crawlers See on Your Site

Our free GEO Audit simulates GPTBot and PerplexityBot crawls against your live URL and scores your AI visibility across 10 technical and content signals — including JavaScript rendering, robots.txt configuration, schema markup, and content structure.

Run Free GEO Audit →