A few months ago, I took my family to Dinosaur National Monument in Utah. We saw some pretty cool fossilized dinosaur bones as well as ancient petroglyphs and pictographs. There was, however, one stop that disappointed us.
A trail we were following had a clearly worn path through an open field, but we couldn’t find any trail markers beyond it. We wandered around for a while but eventually turned back without finding what we had come to see.
Most hikers know to look for the cairns—little stacks of rocks that let you know you’re on the right trail. Unfortunately, that path didn’t have any cairns. I’ve seen the same thing happen to businesses on the path to “find” personalized experiences for their visitors.
Every business wants to take their audience to that magical place of at-scale personalization. The good news is that there is a tried-and-true path. The bad news is that there are five pernicious pitfalls along the way.
In this post, I’ll cover those pitfalls and show you the six cairns to help identify the right path for your personalization strategy.
Why is personalization so difficult?
Before we talk about the pitfalls and cairns on the personalization path, you need to know a little about the destination itself. Why is personalization such an elusive destination?
In the idealized version of personalization, every visitor who comes to a site is given the perfect experience based on who they are and what they need. It sounds great. It’s difficult to execute. Why?
To personalize correctly, you need the right combination of
- Who, with visitor segmentation;
- What, with content and offers;
- Where, with the location on your site.
An example might be showing first-time visitors (who) a percent-off coupon (what) on your homepage (where). The challenge is that there are an infinite number of possible combinations when you put those three things together.
The who could be first-time visitors, return visitors, browser type, mobile device or desktop, time of day, day of the week, geolocation, DMA, previous customer, campaign responses, time zone, operating system, screen resolution, gender, age, interests, preferences, recency, frequency, etc.
The what could be banners, calls to action, images, videos, GIFs, emails, posts, text, copy, content types, promotions, offers, discounts, layouts, funnels, grids, styles, pricing, fonts, titles, flow, icons, buttons, links, pagination, accounts, borders, etc.
The where could be homepage, blog posts, emails, social media channels, landing pages, search results, content pages, department pages, category pages, help center, billing pages, checkout pages, account pages, sign-up pages, product detail pages, confirmation pages, shopping cart, about us pages, clearance section, store locator, etc.
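To make the combinatorial explosion concrete, here is a hypothetical back-of-the-envelope calculation using only the example dimensions listed above (roughly 19 whos, 27 whats, and 20 wheres):

```python
# Hypothetical counts, taken roughly from the example lists above.
whos = 19    # visitor segments (first-time, return, mobile, geolocation, ...)
whats = 27   # content elements (banners, offers, layouts, pricing, ...)
wheres = 20  # site locations (homepage, checkout, product pages, ...)

# Every who/what/where triple is a distinct personalization decision.
combinations = whos * whats * wheres
print(combinations)  # 10260 combinations before testing a single one
```

And those lists all end in "etc."—the real number of combinations is far larger.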
There is a way through—a path to personalization that any company can follow. Let’s talk about the pitfalls first. As you recognize these traps, you’ll be able to see when you stray from the true path.
5 pitfalls on the path to personalization
1. Tool-first personalization
This pitfall is propagated by all the major software companies. As each develops fancy new features to identify visitors, integrate data, or use artificial intelligence, they claim that their new feature will increase your conversion rates with personalization.
I was at the Adobe Summit this year, and this pitfall came up over and over throughout the conference. Every new feature was a way to personalize better. Every breakout session that talked about a tool was “a great way to personalize.”
Don’t believe it. The tool-first strategy is one of the worst because it puts your strategy in the backseat while your tool drives you off a cliff.
Rather than take a tool-first approach, remember to determine your personalization strategy first, then use the tool to support that strategy.
2. Segment-first personalization
This pitfall is a common trap on the path to personalization because good personalization is all about the audience. Marketers and optimizers want to serve content to a specific audience, so they often take this audience-first approach.
I recently saw an example of this segment-first personalization. Leading up to Mother’s Day, a company wanted to personalize to mothers. So they defined their audience first.
They wanted to personalize to young mothers, not all mothers, just the young ones. Once they defined their audience, they created content for them and launched the campaign.
This audience-first strategy is dangerous because it looks and feels like good personalization. But it has one critical fault: What if we chose the wrong audience?
- What if this campaign would have been more effective for all mothers?
- What if this campaign would have been better with mothers and fathers included?
- What if this campaign would have been better targeted at anyone who has a living mother?
- What if it would have been better just to include everyone?
The segment-first personalization method is dangerous because it limits the sample size too soon. Good personalization starts as broadly as possible, then homes in where value is discovered.
3. Content-first personalization
This pitfall is one we would expect companies to have figured out by now. A content-first strategy is when a company makes a product or piece of content that they really like. Because they love it so much, they personalize the content to the audience that they think would benefit from it.
This is kind of like the Field of Dreams “if you build it, they will come” strategy, but it’s more like “we will build it, then make them come.” The content-first strategy doesn’t seek to learn if the content is good. Its only decision is who sees the content.
Examples of this pitfall are easiest to see with products, but the same lessons apply to content:
- Segway. The classic example of a product-first approach. Segway found no traction initially. It was a cool product that didn’t have the right audience. It wasn’t until the right audience was paired with the product—law enforcement, urban tour guides, and warehouse companies, not the general public—that Segways began to sell.
- New Coke. That time when Coke created something new for the sake of creating something new, and the audience was horribly disappointed.
- Amazon Fire Phone. Amazon built a phone for its existing customer base without first learning whether that audience wanted one. Because they went product first, the phone flopped.
The only way we can know who content should go to is if we test that content in front of all audiences. We should never assume that good content or products will work for the audiences we want them to work for.
4. Correlative data point–first personalization
This is a trap for people who love data but confuse correlation with causation. While everyone can recite the “correlation is not causation” mantra, few understand and apply it.
Marketers who get their hands on data showing a correlation between an audience and a piece of content almost always jump to the conclusion that the audience should be personalized with that piece of content. The flaw is that your analytics show only a correlation—they can’t tell you what’s causing the response.
I recently heard an example of an “expert” promoting this misguided strategy at a conference. He said, “Your data can show you that a visitor was looking at your product, so when they come back to your site after a few days, you should show them the product they were looking at previously. This will give the visitor a unified experience.”
This is a classic example of using correlative data to jump to personalization. How do we know that visitor still wants that product? Are there other products that would be a better fit? It’s easy to jump to conclusions when you have data that hints at what could be happening.
Your analytics data should be used for test ideation and not personalization. Once you have causal testing data, then you can personalize.
5. Machine-first personalization
Digital marketing always has that next sexy thing that people want to do. Lately, it seems like machine learning and AI are all the buzz. Machines will solve all our personalization problems. Supposedly.
That’s a nice story, but every machine or software program has limitations:
- Humans with limited information created the machine to behave a certain way.
- Machines are limited by the inputs they receive.
- It takes a massive amount of data for any truly automated, causal learning to take place.
- To get to causal data, you still have to feed inputs into the machine.
You’ve heard the saying “garbage in, garbage out.” Well, machine learning makes a lot of assumptions based on the programming of the creator and the inputs into the machine. Both are imperfect.
At that same Adobe Summit I attended, I heard a lot of phrases like
- “Trust the AI.”
- “The AI will show you who has a propensity to convert.”
- “Once the algorithm shows their behavioral conversion propensity, you can then create a segment and personalize.”
Machine learning takes all the flaws of Pitfall 4 and tries to automate them. It speeds them up—so it just makes mistakes faster.
Further, since the machine is usually a black box, we aren’t even aware of the garbage. In fact, the machine is programmed to make itself look good and show results even if there are flaws in how it got them.
Now that you know the pitfalls, here’s how to stay on the straight-and-narrow path.
6 cairns on the path to personalization
As I mentioned earlier, there are six guiding cairns on the path to personalization. This path is like a checklist that you can use to make sure you don’t stray into the pitfalls.
1. Check your assumptions.
Years ago, Staples set out to personalize experiences for their audiences because that was what the CEO wanted. The problem was that they didn’t have any causal data to show that they should personalize. They hadn’t even done basic testing yet. As their consultant, I talked them through the implications of the decision and advised them against this course of action.
Unfortunately, they wanted to please the CEO, so they followed their marching orders, determined to personalize to their audiences. Fast forward through two years of arbitrary, non-data-driven personalization, and Staples finally realized that they had no idea if what they were doing was valuable.
They were personalizing based on assumptions, not data. They finally stopped the mad course they were on—but only after two years of wasted effort.
The first cairn on the path to personalization is a sanity check. We must stop and ask ourselves about assumptions that are built into the actions we’re taking. Ask yourself the following questions:
- Do I have causal data showing a change in audience behavior?
- Have I allowed all audiences to see my variations?
- Have I created variations to challenge my personalization assumptions?
- Have I avoided all of the personalization pitfalls mentioned above?
If you can answer “Yes” to all of the questions above, then you should continue on the path. If any of your answers are “No,” then pause here before making further personalization plans.
2. Check your audience size.
All optimization efforts come with opportunity costs. If we choose to personalize to an audience, we’re forgoing other optimization opportunities. What could Staples have accomplished in two years if they hadn’t wasted all that effort?
Before proceeding with any personalization efforts, we need to understand the size of the audience we want to personalize to—and the trade-off of not personalizing. Once you know the audience size, you need to compare that to the potential value of optimizing to the larger population.
Do some simple math. Figure out your average lift per test run on your total audience. Then, compare that to the lift you would need on your smaller, target population for personalization to equal the lift from audience-wide experiments.
Suppose the audience you’re personalizing to is 10% of your total population, and your average lift per test is 7%. To match the impact of a single average sitewide test, you’d need roughly a 70% lift on that 10% segment.
Be cautious of optimizing to an audience that’s so small that it would take phenomenal gains to equal modest improvements with your total population.
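That opportunity-cost math can be sketched in a few lines. The numbers below are hypothetical, and the sketch assumes the segment’s value is proportional to its share of traffic:

```python
# Hypothetical inputs: a 10% target segment, 7% average sitewide lift.
segment_share = 0.10          # target audience as a fraction of all traffic
average_sitewide_lift = 0.07  # average lift per test on the full population

# Lift needed on the segment alone to match one average sitewide test,
# assuming value scales with the segment's share of traffic.
required_segment_lift = average_sitewide_lift / segment_share
print(f"{required_segment_lift:.0%}")  # 70%
```

If your historical tests rarely produce lifts anywhere near that required number, the segment is probably too small to personalize to yet.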
3. Design variations as if you were personalizing.
At this phase, we’re not personalizing yet, but we are creating variations that we would use if we were personalizing. As with all good testing, create variations that are drastically different and have as many options as your traffic will support.
Since we’re going to be challenging the idea of personalization itself, you may also want to design a variation that’s for a generic audience to see how it fares against the personalized versions.
4. Allow all audiences to see all variations.
This step may seem a little counter-intuitive, but it’s important to help us challenge our assumptions. We want to see if the personalized experience is indeed better for the audience it was built for. The only way to prove that is to show that experience to other audiences, and to show the intended audience other experiences.
This is the most critical step to prove a case for personalization, but because it’s a little counter-intuitive, most organizations don’t do it. We have to challenge our assumptions by getting the data that proves (or disproves) the case for personalization. To do that, we have to:
- Show our variations to all audiences, not just the intended target audience.
- Design variations to challenge what we think the best personalized experience is.
5. Look for a change of pattern between audiences.
At this step, we need to analyze our data to understand two things:
- Is the new personalized experience better for our intended target audience?
- Does this experience work equally well for a general audience?
If the personalized experience for the target audience didn’t improve success over the control, then you learned that the variations didn’t accomplish their objective of personalizing.
Even when there is a lift in the new experience, if the general audience likes the new experience just as well as the target audience, then there’s no case for personalization.
This just happened to my organization. We created what we thought would be some good variations for the logged-out homepage experience. Some variations were for visitors who weren’t members; others were for members who were logged out.
After running the test, we found that the engagement pattern was exactly the same for the current members as for those without an account. Since the pattern of engagement was the same, we learned that these variations weren’t a good opportunity for personalization.
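That “same pattern” conclusion boils down to a per-audience lift comparison. Here is a minimal sketch with hypothetical conversion counts (not our actual test data):

```python
# Hypothetical results: (conversions, visitors) per variation per audience.
results = {
    "members (logged out)": {"control": (120, 2000), "personalized": (150, 2000)},
    "non-members":          {"control": (480, 8000), "personalized": (600, 8000)},
}

for audience, variants in results.items():
    control_rate = variants["control"][0] / variants["control"][1]
    personalized_rate = variants["personalized"][0] / variants["personalized"][1]
    lift = (personalized_rate - control_rate) / control_rate
    print(f"{audience}: lift = {lift:.0%}")

# Both audiences show an identical 25% lift, so the engagement pattern is
# the same: that's sitewide optimization, not a case for personalization.
```

A change of pattern would look like one audience lifting strongly while the other stays flat (or prefers a different variation); identical lifts mean the experience should simply go to everyone.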
If your audiences respond the exact same way, you aren’t personalizing—you’re just optimizing. As a follow-up to our first logged-out homepage test, we decided to test nine new variations that tackle the same question in different ways, to see if there’s a better way to personalize that experience.
6. Evaluate learning and make a plan of action.
At this point, you’ve learned what’s working and what isn’t. You’ve learned whether you were able to influence your target audience with a personalized experience or not.
As with any other optimization campaign, you need to identify the most efficient next steps to keep learning about your audience. Some likely actions might be:
- You saw a lift in the personalized experience and want to try and improve it with a follow-up test.
- The general audience liked the personalized experience as much as the target audience, so, moving forward, all audiences will see the new experience.
- Your target audience responded better to one of the other variations you created that wasn’t intended for a personalized experience, so you rethink how to customize an experience for your target audience.
- None of your variations beat the control, so you re-evaluate if this is where you should be spending your efforts.
The goal of personalizing is to improve the experience of visitors. That should, in turn, have a positive impact on your business. The goal should never be personalization in and of itself. We must be visitor-focused and make sure that focus translates into a business impact.
By avoiding these five pitfalls and following these six cairns, any company can successfully move forward on the path of personalization. We would all do well to think of personalization not as a destination we reach but as a journey we’re on.
Visitor attitudes and the market in general are always changing. There are new audiences, new experiences, and new ways of improving the visitor experience. We must test to learn what does and doesn’t work for our audiences.