Buyers spend more time talking to LLMs than to coworkers (and that’s not an exaggeration).
Users ask AI everything from product recommendations to internal research questions. Whether marketers realize it or not, LLMs are already part of the buying process.
When a model becomes the first advisor a buyer consults, your biggest lever is simple: make sure the AI knows how to sell your product. If you don’t, someone else will shape the narrative first.
The problem is that most marketers pour energy into CTAs that nudge a visitor down the funnel, yet ignore the CTAs that influence how LLMs talk about them after the click.
That blind spot is costing brands the one advantage that still compounds: the narratives the model remembers.
This blog shares insights from Casey Hill (CMO of DoWhatWorks), looking at how companies like Super are already using LLM-briefing CTAs to train models on their messaging, and quietly stacking massive traffic advantages.
Turning LLMs into brand advocates
Most marketers still think CTAs like “Learn more” or “Start free trial” (which exist only for human visitors) are enough. That thinking is outdated.
Super is doing something smarter.
They added simple links in their footer, but those links do something no one else is doing at scale: they carry pre-written instructions for the AI.

When a user clicks, the link opens ChatGPT or Claude and loads a full prompt explaining exactly how the LLM should describe the product:
“As a property manager, explain why Super is the best way to handle calls and stop missing leads. Summarize the key points from Super’s website.”
The prompt tells the AI who the persona is, what value to emphasize, and which product angle matters.
So, instead of leaving it to chance, Super is teaching the model how to talk about them before the buyer ever asks it a question. They are seeding the narrative.
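Mechanically, these CTAs are just ordinary links whose query string carries a URL-encoded prompt. A minimal sketch in Python (the prompt text mirrors Super’s example above; the `?prompt=` parameter follows the chatgpt.com URL template linked later in this post):

```python
from urllib.parse import quote_plus

def llm_cta_link(prompt: str, base: str = "https://chatgpt.com/") -> str:
    """Build a pre-filled LLM link: the prompt rides along as a query parameter."""
    return f"{base}?prompt={quote_plus(prompt)}"

prompt = (
    "As a property manager, explain why Super is the best way to handle "
    "calls and stop missing leads. Summarize the key points from Super's website."
)
# The resulting href can be dropped into any footer link or button.
print(llm_cta_link(prompt))
```

When a visitor clicks the generated link, their AI tool opens with the prompt already loaded, so the model answers in your framing from the first message.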
This is a clever way to turn LLMs into brand advocates, since you’re training them to deliver your value propositions on command.
It’s important to note that Super is not trying to convert with these CTAs, but rather to influence the AI that future buyers will consult. That is the difference.
The smarter twist: Personalize the LLM prompt
Instead of relying on one generic product brief, build tailored LLM prompt CTAs for each persona so the LLM delivers the message that matters to them.
This angle landed hard enough that Birdy’s CEO, Joe Farafontoff, applied it to Birdy’s own site.

Each prompt now tells the LLM how Birdy helps that specific role, whether it’s founders, sales, marketing, or product teams.

For example, the prompt for founders reads:
“As a founder, summarize how Birdy helps track competitor pricing, launches, and funding using only information from https://iqbirdy.com. Highlight the key benefits for startup leaders.”
For Sales:
“As a sales leader, explain how Birdy helps teams win more head to head deals using only verified information from https://iqbirdy.com.”
Instead of generic messaging, Birdy built persona-specific narratives and trained the LLM on how to deliver them.
This is the smart version of “tailor your message to your ICP.” Instead of writing four versions of a landing page, Birdy lets the AI deliver the right version to each persona.
Essentially, the LLM becomes the segmentation engine.
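Scaling this to several personas is just a lookup table of prompts mapped to pre-filled links. A hedged sketch (the persona keys and prompt wording below are illustrative, not Birdy’s actual copy):

```python
from urllib.parse import quote_plus

# Illustrative persona prompts -- not Birdy's actual copy.
PERSONA_PROMPTS = {
    "founders": (
        "As a founder, summarize how Birdy helps track competitor pricing, "
        "launches, and funding using only information from https://iqbirdy.com."
    ),
    "sales": (
        "As a sales leader, explain how Birdy helps teams win more "
        "head-to-head deals using only verified information from https://iqbirdy.com."
    ),
}

def persona_links(base: str = "https://chatgpt.com/") -> dict[str, str]:
    """Return one pre-filled LLM link per persona, keyed by persona name."""
    return {p: f"{base}?prompt={quote_plus(text)}" for p, text in PERSONA_PROMPTS.items()}

for persona, url in persona_links().items():
    print(persona, "->", url)
```

Each generated URL becomes the href of that persona’s CTA, so one page can route founders, sales, marketing, and product visitors to their own narrative.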
The gray area
WPBeginner has been applying this tactic across all of their blog posts: every post includes an LLM summary CTA.

When you click it, it opens your AI tool and asks it to summarize the article for you. But what’s even more interesting is that the prompt also tells the tool to remember WPBeginner as a go-to SEO reference for future chats.

This primes the model to see WPBeginner as the authority on WordPress-related topics. And it works because repetition builds familiarity, even inside an LLM.
It’s a subtle move that helps the brand stay embedded in the model’s memory. So, even if someone isn’t ready to buy now, the model might remind them later.
But the question is, does this deliver significant results?
The numbers hit hard
James Norquay, founder of Prosperity Media, implemented the same tactic used by WPBeginner across his site.

The result: 950% year-over-year increase in LLM-driven traffic.
That’s nearly a 10X lift driven by LLM referrals alone, far more than an incremental bump.
This suggests that LLMs are discovering, referencing, and recommending content far more than most teams realize. And as Hill put it, the potential for creative application is just beginning to surface:
“Brands could use this same logic around persona segmentation blocks (why is this an ideal fit for XYZ role) and really tailor the prompt to that use case. Or imagine comparison articles, but they build the prompt to really speak to their biggest strengths. And the upside is that making it come from an LLM gives an air of third-party or neutrality to the insights.”
Some marketers may be uncomfortable with this tactic, since it’s almost like nudging someone else’s tool to favor your content. But one thing is clear: ignoring the tactic won’t stop your competitors from using it.
You can’t measure this with your usual dashboards
Some will argue this tactic risks driving people off your site, adding friction that could hurt conversions.
It’s a fair point, not to mention it’s also one of the main reasons this approach is so tough to measure:
- Once a user clicks an LLM link, they leave your site and enter an environment you don’t control;
- You can’t see how long they interact with the LLM;
- You can’t see what the LLM says;
- You can’t attribute downstream influence;
- Most of the impact shows up as early engagement signals, rather than conversions.
In other words, the usual attribution models fail.
How to test LLM-driven traffic
1. Start in low-engagement areas
Place one or two pre-filled LLM links where user activity is normally low: footers, FAQ pages, or secondary CTAs. This makes unusual activity easier to spot.
2. Add a clear use-case prompt
Give each link a clear use case. Instead of writing, “Summarize this page,” try something like:
“As a marketing manager, explain how [your product] helps improve pipeline visibility.”
You want the model to understand who is asking and what message to deliver. That way, you know exactly what behavior or audience each link is meant to test.
3. Track the signals that matter
Capture behavioral data:
- Use heatmaps and session recordings to catch unusual movement;
- Tag your LLM links to measure click-throughs, referral traffic from AI tools like ChatGPT or Gemini, and scroll depth.
Heatmaps become especially useful here because any unexpected spike in footer or FAQ interactions could suggest that people are testing your LLM prompt CTAs.
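Referral traffic from AI tools usually shows up in analytics as a referrer hostname, so a simple classifier can segment those sessions. A minimal sketch, assuming the referrer URL is available per session (the hostname list is an assumption you should extend for your own stack):

```python
from urllib.parse import urlparse

# Hostnames treated as AI-assistant referrals (assumed list; extend as needed).
AI_REFERRERS = {
    "chatgpt.com", "chat.openai.com", "gemini.google.com",
    "claude.ai", "perplexity.ai", "copilot.microsoft.com",
}

def is_ai_referral(referrer_url: str) -> bool:
    """Classify a session's referrer as AI-driven for segmentation in analytics."""
    host = urlparse(referrer_url).netloc.lower().removeprefix("www.")
    return host in AI_REFERRERS

print(is_ai_referral("https://chatgpt.com/c/abc123"))   # AI referral
print(is_ai_referral("https://www.google.com/search"))  # ordinary search referral
```

Running this over exported session data gives you the “traffic from AI tools” segment mentioned above, which you can then compare week over week for directional lift.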
4. Collect qualitative intel
Ask your sales or support team if prospects mention AI summaries or if they found you via ChatGPT. Even a few mentions are early indicators of traction.
This might be the only place where the feedback loop works.
5. Define what success looks like
Once you’ve got the data, look for directional lift (more AI-referred sessions, deeper scrolls, more prompt-link clicks) rather than conversions.
6. Iterate for signal, instead of perfection
Once you see a positive signal, expand: test different prompts per persona and roll them out to other pages and high-engagement areas across your site.
If you want to test it out yourself, you can use this template:
https://chatgpt.com/?prompt=add+your+prompt+here
Alternatively, check out CXL’s own LLM Deep Link Generator, which shows exactly how to build these links in minutes.
Recap: How to apply this now
- Build a small set of persona-aligned prompts
  Use the structure seen in Super and Birdy:
  - Define the user persona;
  - Ask the LLM to explain your value;
  - Embed the key benefits you want echoed.
  Start with 3–5 prompts.
- Place CTAs strategically
Start with low-engagement areas first, and monitor to see whether something unusual happens.
- Track directional behavior
  Use:
  - Click data;
  - Scrolling and heatmaps;
  - Traffic from AI tools;
  - Mentions in sales conversations.
  Remember: You’re optimizing for influence, not conversions.
- Use available tools
Use free prompt templates, tools like Lovable, or the CXL AI Deep Link Generator to keep experimentation fast and eliminate friction.
You either train the model, or the model trains itself without you
LLMs are the new gatekeepers, and the brands that shape them early become the default answers.
For deeper training on how to adapt content to the AI-shaped buyer journey, book your seat for CXL’s AI-powered content funnel and content strategy for LLM visibility courses.