Planning content around search volume data alone, while LLMs run numerous queries on the backend, is the fastest way to miss content opportunities.
Seer Interactive found that 95% of the queries that answer engines actually execute have zero tracked search volume. Zero.
So every time you open your keyword tool and filter by volume, you’re systematically ignoring what ChatGPT, Perplexity, and Gemini are searching for.
Discovery is moving from keywords and rankings to AI-driven chat experiences. And the real advantage isn't getting AI to recognize your brand in these experiences; it's getting AI to recommend it.
Alex Birkett recently walked through how to extract topics from customer data, reverse-engineer citation patterns, and use a three-part validation model to create an effective content strategy for AI search.
Table of contents
- If Google went dark tomorrow, would your content strategy survive?
- AI search optimization: The missing layer between keywords and conversations
- Start with questions, rather than search volume
- Map customer research to five prompt patterns
- Reverse engineer citation sources to find content gaps
- The three-part validation model that actually works
- Mine Reddit and review sites for untracked questions
- Rank prompts by frequency and intent
- What to do this week
- Build tomorrow’s AI search content strategy on today’s customer research
If Google went dark tomorrow, would your content strategy survive?
LLMs don’t process queries the way Google does.
When someone asks an answer engine, “I need a CRM that integrates with Slack and my email,” the system doesn’t search for that exact phrase. It fans out into multiple sub-queries: CRM features, Slack integration capabilities, email sync requirements, and use cases for teams using both tools.
Most of those fanout queries never appear in keyword tools.
They have no volume data because humans don’t type them into search boxes. Instead, LLMs generate them on the backend to match user intent. And these fanout queries are expanding into longtail variations specific to the context in which the user is searching.
Seer Interactive’s research found that fanout queries averaged 6.7 words, but could range anywhere from 2 to 17 words.

This is why keyword planning alone is ineffective when creating a content strategy for AI search.
You’re optimizing for the question users ask, rather than the 10 questions AI generates to answer it. Keyword tools show search volume for single queries. LLMs use query fanout. The gap between the two is where your content strategy is failing.
Customer research catches what keyword tools miss. Sales call transcripts, for instance, reveal the specific, multi-part problems prospects ask about. Someone saying "we need to scale our ABM campaigns but keep them personalized" on a call triggers different content needs than "ABM software" in a keyword tool.
AI search optimization: The missing layer between keywords and conversations
Search volume doesn’t tell the whole story anymore.
LLMs rewrite queries on the fly, decide what content gets surfaced, and skip traditional rankings altogether.
Here’s how to build an answer engine content strategy that keeps you visible.
Start with questions, rather than search volume
Run customer research before you open a keyword tool. Not alongside it. Before it.
Mine three sources: sales call transcripts, support tickets, and NPS survey responses. These contain actual questions prospects ask when they're trying to solve problems. Upload anonymized sales calls to tools like NotebookLM and extract themes. Look for repeated pain points, instead of one-off complaints.
And yes, the format matters.
You need to structure findings as prompts, rather than keywords.
“The channel… is going to include keywords mostly as validation—nowadays it also includes prompts.”
— Alex Birkett
For example, “I need an AI-based CRM that integrates with Slack” instead of “best CRM software.” Buyers use longer, more specific queries in LLMs because they’re conversing, not searching.
This shift changes what you prioritize. A question that appears in five sales calls matters more than a keyword with 1,000 monthly searches. Especially if those five calls represent your ideal customer profile, asking the exact question your product answers.
Map customer research to five prompt patterns
Group customer questions into categories that match how people use answer engines:
- Product discovery: “Best X for Y” queries where buyers are exploring solutions in a category.
- Jobs to be done: “I need to accomplish X” statements that describe desired outcomes rather than product features.
- Pain points: “I’m struggling with Y” complaints that surface when current solutions fail.
- Comparisons: Alternative and versus queries that happen during evaluation stages.
- Brand-aware: Product FAQs and feature questions from people already familiar with your solution.
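If you're working through hundreds of extracted questions, a first-pass sort can be automated. The sketch below is a naive keyword heuristic, not a method from the article: the cue phrases, the brand name, and the fallback bucket are all illustrative assumptions, and anything it tags should still be reviewed by hand.

```python
import re

# Hypothetical cue phrases for each prompt pattern; tune these to the
# language that actually appears in your own transcripts.
PATTERNS = {
    "product_discovery": [r"\bbest\b", r"\btop\b.*\bfor\b"],
    "jobs_to_be_done": [r"\bi need to\b", r"\bhow do i\b", r"\bhow to\b"],
    "pain_point": [r"\bstruggling with\b", r"\bwon't\b", r"\bkeeps? failing\b"],
    "comparison": [r"\bvs\.?\b", r"\bversus\b", r"\balternatives?\b"],
}

def classify_prompt(question: str, brand: str = "acme") -> str:
    """Assign a customer question to one of the five prompt patterns."""
    q = question.lower()
    if brand in q:
        return "brand_aware"  # mentions your product: already familiar
    for pattern, cues in PATTERNS.items():
        if any(re.search(cue, q) for cue in cues):
            return pattern
    return "jobs_to_be_done"  # default bucket for outcome-style statements

print(classify_prompt("Best ABM software for small teams"))       # product_discovery
print(classify_prompt("We're struggling with Slack integration"))  # pain_point
```

A keyword heuristic like this is only a triage step; the grouping that matters still comes from reading the calls.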
“From an AI planning perspective…product comparisons, solution evaluation, solving pressing pain points—those are the ones…to prioritize for prompt tracking.”
— Alex Birkett
Focus on pain point queries first. When users describe specific problems, AI tools surface solutions and product recommendations in responses.
For example, someone searching “my oven won’t stop smoking” is more likely to get product suggestions than just troubleshooting steps.
Complaint-based prompts trigger product recommendations more reliably than generic category searches.
This is more important than most teams realize. Pain point content positions your brand as the solution to a problem the AI just validated, rather than one option in a list of alternatives.
Reverse engineer citation sources to find content gaps
Export citation data from tools like Profound or Peak. Run batch URL analysis in Ahrefs to check which cited pages also rank in traditional search.
Pages that perform in both channels are your highest priorities, since they get AI citations and drive organic traffic. But pages with zero keyword rankings that get frequent AI citations are opportunities, too. They're working in answer engines despite having no SEO value.
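The cross-reference itself is simple set logic once you have both exports. The URLs below are placeholders, and real exports from a citation tracker or rank tracker will need cleaning first, but the comparison works the same way:

```python
# Hypothetical URL sets: one exported from an AI-citation tracker,
# one from a rank tracker. Real exports will differ in format.
cited_by_ai = {
    "site.com/crm-comparison",
    "site.com/slack-integration",
    "site.com/pricing",
}
ranking_in_search = {
    "site.com/crm-comparison",
    "site.com/pricing",
    "site.com/blog/abm-guide",
}

dual_channel = cited_by_ai & ranking_in_search   # cited AND ranking: top priority
ai_only = cited_by_ai - ranking_in_search        # cited, zero rankings: hidden wins
seo_only = ranking_in_search - cited_by_ai       # ranking, never cited: citation gaps

print(sorted(dual_channel))
print(sorted(ai_only))
print(sorted(seo_only))
```

Each of the three buckets maps to a different action: protect the dual-channel pages, study the AI-only pages to see what answer engines like about them, and rework the SEO-only pages toward citable formats.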
Check what page types show up most in citation data. Eight formats are prominent:
- Listicles ranking solutions in a category
- Reviews and competitive comparison pages
- Knowledge base articles answering specific questions
- Product pages with detailed specs
- Persona and use case content for specific buyer types
- Case studies providing validation-stage proof
- Product-led content like calculators and how-tos
- Original research and data studies
Build these formats for your priority topics. They perform for both SEO and LLM citations because they match how AI systems structure answers.
Keep this in mind when choosing page types to include in your LLM citation strategy:
- A comparison page gives the AI clear alternatives to present;
- A knowledge base article provides specific data points to cite; and
- Original research offers unique information that isn’t duplicated across competitor sites.
Get this right, and every AI citation becomes a new entry point to your funnel.
The three-part validation model that actually works
Planning content with a binary yes/no filter for AI search is futile. Every topic needs three validations scored on a 1-10 scale:

1. Buyer data confirms customers ask about it.
How often does this question appear in sales calls, support tickets, or customer research?
2. Product alignment shows your solution solves it.
Can you credibly answer this question with your product or service? Does creating content here reinforce your positioning or dilute it?
3. Channel validation proves people search for it.
Is there search volume, citation data, or discussion forum activity related to this topic?
For example, a prompt like “SEO agencies in New York” scores high on all three if you’re a New York agency. Buyers ask for local providers, your service fits, and search volume exists. Prioritize it.
Meanwhile, a topic like “B2B buying behavior” might score lower on volume but higher on buyer relevance and product fit if your sales calls reveal this is what prospects want to understand. Use scores to prioritize with limited resources.
This model forces honest evaluation. You can’t create content just because it has volume if your product doesn’t solve that problem. There’s no point in chasing buyer questions if nobody’s searching for answers. And ignoring high-intent customer questions just because they lack volume data shows you never really understood intent in the first place.
Mine Reddit and review sites for untracked questions
Reddit is the most cited external source across answer engines. But it’s not enough to just participate in discussions. Analyze threads for content ideas that keyword tools miss.
Use Keywords Everywhere to summarize Reddit threads and extract core themes. Look for topics your audience discusses repeatedly, but competitors haven’t documented. Topics like “how to scale one-to-one LinkedIn ABM campaigns” can show zero search volume but appear in dozens of user discussions.
Review sites like G2 and Capterra work the same way. Complaints and feature requests in reviews reveal content gaps. A pattern of users complaining about a specific integration or workflow, for instance, is a signal that answer engines will need content addressing that pain point.
This is where volume-based planning breaks down completely.
These are real questions from real prospects that your keyword tool will never surface because they’re discussed in forums and review platforms, rather than typed into Google.
Rank prompts by frequency and intent

Frequency shows importance. Cross-reference customer questions with search volume where it exists. If a repeated customer question has 1,000 monthly searches, prioritize it immediately. If it has zero volume, check if it’s a pain point with high intent, as those are still topics worth considering.
Prioritize bottom and middle funnel topics first. Product comparisons, alternatives, and solution evaluation queries are where your brand appears in AI responses. Top funnel content builds authority but rarely drives direct visibility in answer engines.
This inverts traditional content strategy.
You’re not building a funnel from awareness to decision. You’re starting at the decision stage where AI tools make recommendations, then working backward to build authority.
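One rough way to encode both signals, frequency and funnel stage, is to weight observed frequency by stage. The weights and example prompts below are assumptions (the article says only to favor bottom and middle funnel), so treat this as a starting point to adjust:

```python
# Funnel-stage weights are an assumption: bottom and middle funnel get
# heavier weights because that's where AI responses surface brands.
STAGE_WEIGHT = {"bottom": 3.0, "middle": 2.0, "top": 1.0}

prompts = [
    # (prompt, times seen in customer research, funnel stage)
    ("HubSpot vs Salesforce for small teams", 5, "bottom"),
    ("how to scale one-to-one LinkedIn ABM campaigns", 9, "middle"),
    ("what is account-based marketing", 12, "top"),
]

ranked = sorted(prompts, key=lambda p: p[1] * STAGE_WEIGHT[p[2]], reverse=True)
for prompt, freq, stage in ranked:
    print(f"{freq * STAGE_WEIGHT[stage]:>5.1f}  [{stage}] {prompt}")
```

Note how the top-funnel prompt loses here despite having the highest raw frequency, which is exactly the inversion the section describes.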
What to do this week
- Pull three months of sales call transcripts. Upload them to NotebookLM and run a thematic analysis.
- Extract 20-30 questions prospects ask. Not questions you wish they’d ask, but questions they actually voice on calls.
- Format those questions as prompts. Group them by product line and buyer journey stage. Score each on buyer relevance (1-10), product fit (1-10), and channel potential (1-10).
- Pick your top five prompts based on combined scores.
- Create one piece of content for the highest-scoring topic. Use a format AI tools cite: comparison page, how-to guide, or knowledge base article.
Don’t overthink the execution. A 1,200-word comparison article built around a real customer question will outperform a 3,000-word thought leadership piece about industry trends.
“I would highly advise against scaling out thousands of long-tail pages… They might work now, but they risk SEO performance.”
— Alex Birkett
Answer engines cite specific, actionable content that solves defined problems. That’s your focus.
Build tomorrow’s AI search content strategy on today’s customer research
Search volume data optimized for Google’s algorithm doesn’t predict what ChatGPT will cite or recommend.
This takes longer to get right.
You can’t spin up 50 AI-optimized articles in a month by plugging keywords into a content brief generator. You need customer research, citation analysis, and validation scoring for every topic. It takes roughly three months to build the research infrastructure, six months to publish enough content to test what works, and 12 months to see meaningful traffic shifts.
This may seem like a risky long play, but teams that have made this shift are seeing results traditional SEO can’t deliver.
They’re being cited in answer engines for high-intent queries, showing up in AI-generated buying guides for their category, and turning LLM recommendations into qualified pipeline.
Answer engines are already handling 10-15% of search queries in some industries, and that percentage is growing.
Want to learn how to turn AI into outcomes?
→ Enroll in CXL’s LLM Content Strategy by Alex Birkett and learn how to:
- Attribute your work to revenue so you can defend budget
- Map real buyer questions and create content that converts
- Use LLMs to speed up research, drafting, and repurposing content without losing your brand voice
- Set up QA and review so brand and accuracy survive at scale