
If you run marketing at a B2B SaaS company, you've probably noticed something weird over the last year. The blog posts that used to get 4,000 monthly visits now get 600. The "ultimate guide" your team spent a quarter writing is getting outranked by a three-sentence summary inside Google's AI Overview.
Welcome to AI SEO. It's less a new channel and more the floor moving under the old one.
This is the beginner's version of what we tell PNP clients on their first call. No jargon stacking, no "paradigm shifts," no 47-point checklist. Just what actually changed, what to do about it, and what to ignore.
What AI SEO actually is (and isn't)
AI SEO is the practice of writing and structuring your content so that large language models — ChatGPT, Gemini, Perplexity, Claude, Google's AI Overview — pick you as the source they cite when a B2B buyer asks them a question.
That's it. The tactics are new. The job is the same job SEO always had: be the thing a buyer trusts when they're making a decision.
What it isn't: a way to churn out more articles faster. Every agency on LinkedIn is selling "AI content at scale." Most of them are building what we call content landfills — thousands of near-identical pages that pollute your domain and teach Google that you have nothing to say. Landfills don't rank in blue links anymore. They definitely don't get cited in AI Overviews.
Why the old playbook broke
For about a decade, the B2B SEO formula was: pick a keyword, write a "What is [keyword]" guide, hit 2,000 words, throw in an FAQ schema, rank. The top three results got most of the clicks. The rest got scraps.
Two things changed that.
First, Google's 2023 Helpful Content system started actively demoting the kind of generic, template-driven content that used to do fine. Not because an AI wrote it — Google doesn't care who wrote it — but because there was no information gain. Nothing you couldn't already find in the first five results.
Second, AI Overviews now answer the question inside the search page. The buyer reads the summary and moves on. Recent data puts click-through on informational queries down by roughly half compared to pre-Overview days. For a lot of top-of-funnel terms, "ranking #1" no longer means getting the traffic — it means getting quoted.
The job shifted from winning the blue link to being the source the AI quotes in the answer.
SEO vs. GEO vs. AEO — what each one actually covers
Three acronyms, one ecosystem. Here's the short version we use internally.
SEO is still the foundation. Crawlable site, clean URL structure, fast pages, sensible internal links, real backlinks. If your CRM landing page isn't indexable, nothing else on this list matters — Google can't cite what it can't read.
GEO (Generative Engine Optimization) is about making sure LLMs cite you when they synthesize an answer. That comes down to two things: do you have a take the model can't invent on its own (a proprietary number, a framework, a contrarian argument), and is your content structured cleanly enough that the model can lift a paragraph and attribute it?
AEO (Answer Engine Optimization) is the narrow version of GEO aimed at direct-answer modules — AI Overviews, Perplexity cards, ChatGPT citations. Tables, short definitions at the top of sections, clear headings. The kind of content a synthesizer can quote in three lines without losing the meaning.
You don't pick one. You do all three, in order. SEO before GEO before AEO. Skip the foundation and the rest is vanity.
Is AI-written content bad for SEO?
No. And yes. The nuance matters.
Google has been clear: they reward useful content regardless of how it was produced. A human who writes regurgitated fluff ranks worse than an AI-assisted piece that surfaces a real insight. The tool isn't the problem.
The problem is what most teams do with AI. They ask ChatGPT to "write an article about X," lightly edit the output, and publish. That output has no point of view, no first-party data, no sense of which of the ten possible framings is the right one for their buyer. It's a summary of the existing internet, republished.
LLMs were trained on the existing internet. They don't need your summary of it.
Where AI works is in the middle of the workflow, not the ends. Use it to cluster 500 keywords. Use it to find gaps in a competitor's documentation. Use it to draft the bones of a section where you already know what you want to say. Don't use it to decide what you think. That's the only part that's yours.
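That clustering step is also scriptable. A minimal sketch, assuming a keywords.txt export with one keyword per line and the openai and scikit-learn packages installed; the embedding model and the cluster count of 12 are placeholders, not recommendations:

```python
# Sketch: cluster exported keywords by semantic similarity.
# Assumes keywords.txt (one keyword per line); model and
# n_clusters below are illustrative, not recommendations.
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()  # reads OPENAI_API_KEY from the environment

keywords = [line.strip() for line in open("keywords.txt") if line.strip()]

# Embed every keyword in one batch call.
resp = client.embeddings.create(model="text-embedding-3-small", input=keywords)
vectors = [item.embedding for item in resp.data]

# Group into rough topic clusters; tune n_clusters to taste.
labels = KMeans(n_clusters=12, n_init="auto", random_state=0).fit_predict(vectors)

clusters = {}
for kw, label in zip(keywords, labels):
    clusters.setdefault(label, []).append(kw)

for label, group in sorted(clusters.items()):
    print(f"Cluster {label}: {', '.join(group[:8])}")
```

The clusters still need a strategist to name them and throw out the junk, but that's an hour of review instead of a week of spreadsheet work.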
How to actually rank (or get cited) in AI Overviews
Four things, in rough order of impact.
Say something only you can say. If your article could have been written by a generalist who'd never used your product, the LLM has no reason to cite you over the ten other pages saying the same thing. Product-led content, customer data, internal benchmarks, opinionated frameworks: that's the moat. A DevOps platform shouldn't write "What is CI/CD." It should publish the deploy-failure rate across its 5,000 customers and the patterns behind those failures.
Structure for extraction. Put a one-paragraph answer at the top of every major section. Use tables for comparisons. Use clear H2/H3 hierarchy. The model is going to try to lift a 30-80 word chunk; make sure the chunk it lifts is the one you want it to lift.
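If your drafts live as markdown, you can audit for that pattern automatically. A rough sketch, assuming H2 headings written as "## " and using the 30-80 word window above as the target; treat flags as editing prompts, not grades:

```python
# Sketch: flag sections whose opening paragraph is too short or too
# long to serve as a liftable answer kernel. Assumes markdown drafts
# with "## " section headings; 30-80 words is the target chunk size.
import re
import sys

def audit(path, lo=30, hi=80):
    text = open(path, encoding="utf-8").read()
    # Split the draft into sections at each H2 heading.
    sections = re.split(r"^## +", text, flags=re.M)[1:]
    for section in sections:
        lines = section.splitlines()
        heading = lines[0].strip()
        # First paragraph after the heading is the would-be kernel.
        body = "\n".join(lines[1:]).strip()
        first_para = body.split("\n\n")[0] if body else ""
        words = len(first_para.split())
        status = "ok" if lo <= words <= hi else "REVIEW"
        print(f"{status:6} {words:3d} words  ## {heading}")

if __name__ == "__main__":
    audit(sys.argv[1])
```

Run it as python audit.py draft.md and fix anything flagged REVIEW before publish.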
Get the technical foundations right. Schema markup (Article, FAQ, SoftwareApplication where relevant). Internal links that form a real topic cluster, not a random web. A reasonable Core Web Vitals score. None of this is sexy. All of it is table stakes.
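Schema, at least, doesn't need a plugin. A hedged sketch that emits FAQ markup as JSON-LD; the question-answer pairs are placeholders, and the output belongs in a script tag of type application/ld+json in the page head:

```python
# Sketch: emit FAQPage JSON-LD from a list of Q&A pairs. The pairs
# below are placeholders; paste the output into a
# <script type="application/ld+json"> tag in the page head.
import json

faqs = [
    ("What is GEO?", "Generative Engine Optimization: structuring content "
     "so LLMs cite you when they synthesize an answer."),
    ("Does AI content get penalized?", "Not for being AI-made; unhelpful "
     "content gets demoted regardless of who wrote it."),
]

schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": q,
            "acceptedAnswer": {"@type": "Answer", "text": a},
        }
        for q, a in faqs
    ],
}

print(json.dumps(schema, indent=2))
```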
Build the citation graph. LLMs weigh sources partly by how often other credible sources already cite them. Guest posts on your top three industry publications, a quoted stat in a Gartner piece, a podcast appearance — these now do double duty as both backlinks and training-signal anchors.
Using ChatGPT without wrecking your rankings
The highest-leverage thing to do with ChatGPT in an SEO workflow isn't drafting. It's research.
Give it the top 10 SERP results for a keyword and ask what every article missed. Give it a competitor's help center and ask which pain points they don't address. Give it 500 keywords from Ahrefs and ask it to cluster by buyer-journey stage. Give it your last quarter of sales call transcripts and ask what language prospects use to describe the problem your product solves.
That's the work a senior strategist used to spend a week on. You can now get a first pass in an hour.
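Here's the first of those as a script rather than a chat session. A sketch, assuming you've saved the scraped SERP articles as text files in a serp/ folder; the model name is illustrative, so swap in whatever you have access to:

```python
# Sketch: ask an LLM what the current top-ranking articles all missed.
# Assumes scraped SERP results saved as plain text files in serp/;
# the model name is a placeholder, not a recommendation.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# One file per ranking article, e.g. serp/01.txt ... serp/10.txt.
articles = [p.read_text(encoding="utf-8") for p in sorted(Path("serp").glob("*.txt"))]

prompt = (
    "Here are the top-ranking articles for a keyword, separated by '---'.\n"
    "List the buyer questions and subtopics NONE of them answer.\n\n"
    + "\n---\n".join(articles)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```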
The draft itself is the easy part, and also the part where AI-ness shows up most. The opening, the opinions, and any line that makes a claim about your business: write those yourself. Let AI fill in the connective tissue. Then read it aloud; if it sounds like a LinkedIn thought leader who's never done the job, cut it.
Three misconceptions that cost teams money
"AI SEO is set-and-forget." It isn't. Every agency pitching "1,000 articles a month" is pitching you a future penalty. The teams winning on AI SEO publish less, not more. They publish fewer, better things, then maintain them.
"AI content triggers a Google penalty." Not directly. What gets penalized is unhelpful content, and unhelpful content is what you get when you skip the human step. If a subject matter expert didn't read it before publish, don't publish it.
"GEO replaces traditional SEO." It doesn't. LLMs rank citations partly on traditional SEO signals — backlinks, domain authority, indexability. If your SEO fundamentals are broken, GEO can't save you. GEO is the cherry. SEO is the cake.
What this looks like when it works
I'll skip the made-up percentages. Two quick patterns we've seen.
A compliance-focused FinTech we worked with was getting hammered on generic "GDPR vs CCPA" queries by larger brands. Instead of chasing those, they built a comparison matrix across six regional frameworks, with citations to the source regulations and annotations from their in-house counsel. That one page now shows up as the primary citation in AI Overview answers for that whole cluster of queries, because no other source had the same structured depth. Pipeline from that single page beats the rest of their blog combined.
A cybersecurity client was sitting on 80 whitepapers nobody read. We didn't rewrite them. We broke them into answer-sized chunks with proper H2/H3 structure, added FAQ schema, and interlinked them. Same content, repackaged for extraction. Perplexity citations went from near zero to consistently landing in the top three sources within two months.
Neither is magic. Both are applied fundamentals.
The checklist, shortened
If you read this far and want a weekend action list:
- Audit your top 10 traffic pages. For each, ask: does this have a proprietary insight, number, or framework? If no, it's a landfill candidate. Rewrite or consolidate.
- Add an answer kernel. Two or three sentences at the top of each major section, written as if Perplexity is about to quote them word-for-word.
- Ship schema. Article, FAQ, Organization, Product. Use Google's Rich Results Test to validate.
- Pick one proprietary asset to build. Customer benchmark, original survey, internal framework. One is enough. Publish it, pitch it, watch it become the thing AI cites.
- Stop tracking rank as your only KPI. Start tracking citations and share of voice in AI answers. Tools like Ahrefs Brand Radar and Profound are catching up fast; a crude do-it-yourself version is sketched below.
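That last item doesn't have to wait for tooling. A crude sketch, assuming Perplexity's OpenAI-compatible chat endpoint and that responses carry a top-level citations list of URLs; verify both against the current API docs before relying on either:

```python
# Sketch: rough share-of-voice check. Ask an answer engine your buyers'
# questions and count how often your domain shows up in the citations.
# Assumes Perplexity's OpenAI-compatible chat endpoint and a top-level
# "citations" list in the response; check current docs before relying on it.
import os
import requests

QUESTIONS = [
    "What is the best way to structure B2B SaaS content for AI Overviews?",
    "How do I measure GEO performance?",
]
DOMAIN = "example.com"  # your domain

hits = 0
for q in QUESTIONS:
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={"model": "sonar", "messages": [{"role": "user", "content": q}]},
        timeout=60,
    )
    citations = resp.json().get("citations", [])
    cited = any(DOMAIN in url for url in citations)
    hits += cited
    print(f"{'CITED ' if cited else 'absent'}  {q}")

print(f"Share of voice: {hits}/{len(QUESTIONS)} questions cite {DOMAIN}")
```

Run it weekly with the same question set and you have a trend line, which is more than most teams have today.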
The takeaway
In the blue-link era, your content was one of ten options on a results page. In the AI era, it's either the source the model quotes, or it's background noise the model trained on and then forgot.
The goal isn't to write more. It's to be the version of the answer that's worth citing.