Google Analytics helps us identify conversion uplift opportunities. Traffic is precious, and we don’t want to waste it on tests that don’t produce learning or uplift. That’s why we want good data on two things: which pages have uplift opportunities, and which specific issues those pages have.
There’s a philosophical statistics debate in the A/B testing world: Bayesian vs. frequentist. This is not a new debate. Thomas Bayes’s “An Essay towards solving a Problem in the Doctrine of Chances” was published (posthumously) in 1763, and it’s been an academic argument ever since.
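To make the debate above concrete, here is a minimal sketch of how the two camps read the same test result. The conversion counts are hypothetical, the priors are uniform Beta(1, 1), and only the Python standard library is used:

```python
import math
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=42):
    """Bayesian reading: with uniform Beta(1, 1) priors, estimate
    P(rate_B > rate_A) by sampling both Beta posteriors."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        > rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        for _ in range(samples)
    )
    return wins / samples

def z_test_p_value(conv_a, n_a, conv_b, n_b):
    """Frequentist reading: two-proportion z-test, two-sided p-value."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

# Hypothetical data: 200/5000 vs. 240/5000 conversions
print(prob_b_beats_a(200, 5000, 240, 5000))
print(z_test_p_value(200, 5000, 240, 5000))
```

The same data can yield a high Bayesian “probability B beats A” while the frequentist p-value still sits just above the conventional 0.05 cutoff, which is exactly where the two interpretations clash in practice.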
When should you use bandit tests, and when is A/B/n testing best? Though bandit testing has some strong proponents (and opponents), there are certain use cases where it may be optimal. The question is: when?
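To see what makes bandits different from a fixed A/B/n split, here is a minimal Thompson sampling sketch. The three arm conversion rates are hypothetical, and the code uses only the Python standard library:

```python
import random

def thompson_bandit(true_rates, pulls=10_000, seed=7):
    """Thompson sampling over Bernoulli arms: each round, sample a
    conversion rate from every arm's Beta posterior and show the arm
    with the highest sample. Traffic shifts toward the winner as
    evidence accumulates, instead of staying at a fixed split."""
    rng = random.Random(seed)
    wins = [0] * len(true_rates)    # conversions per arm
    losses = [0] * len(true_rates)  # non-conversions per arm
    for _ in range(pulls):
        samples = [rng.betavariate(1 + w, 1 + l)
                   for w, l in zip(wins, losses)]
        arm = samples.index(max(samples))
        if rng.random() < true_rates[arm]:
            wins[arm] += 1
        else:
            losses[arm] += 1
    return [w + l for w, l in zip(wins, losses)]  # visitors per arm

# Hypothetical arms converting at 3%, 4%, and 6%
traffic = thompson_bandit([0.03, 0.04, 0.06])
print(traffic)
```

Run it and the best arm ends up with the bulk of the traffic: that is the bandit’s appeal (less regret during the test) and also its cost (the losing arms get too little traffic for a clean significance readout).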
Your website is leaking money. Everybody’s is. The first step toward plugging the leaks is identifying where the leaks are. Which funnel steps, which layers of your site, which specific pages are leaking money? Google Analytics can provide answers.
Customer personas are often talked about in marketing and product design, but they’re almost never done well.
One thing many people forget when dealing with data: outliers. Even in a controlled online A/B test, your data set may be skewed by extreme values. How do you deal with them? Do you trim them out, or is there another way?
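Two common options are trimming (drop the extremes) and winsorizing (clamp the extremes to a percentile instead of dropping them). A small sketch with hypothetical order values, one of which is a whale:

```python
def trimmed_mean(values, pct=0.05):
    """Drop the top and bottom pct of observations, then average."""
    s = sorted(values)
    k = int(len(s) * pct)
    kept = s[k:len(s) - k] if k else s
    return sum(kept) / len(kept)

def winsorized_mean(values, pct=0.05):
    """Clamp extremes to the pct-th percentile values instead of
    dropping them, so every observation still counts."""
    s = sorted(values)
    k = int(len(s) * pct)
    lo, hi = s[k], s[-k - 1]
    clamped = [min(max(v, lo), hi) for v in values]
    return sum(clamped) / len(clamped)

# Hypothetical order values: mostly modest, one whale
orders = [20, 25, 30, 22, 28, 24, 26, 21, 27, 2000]
print(sum(orders) / len(orders))    # raw mean: 222.3, dragged up by the whale
print(trimmed_mean(orders, 0.10))   # 25.375
print(winsorized_mean(orders, 0.10))  # 25.4
```

The single $2,000 order multiplies the raw mean nearly tenfold; both adjusted means land near the typical order value, with winsorizing keeping the whale in the sample at a capped value rather than pretending it never happened.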
A/B testing is fun. With so many easy-to-use tools, anyone can—and should—do it. However, there’s more to it than just setting up a test. Tons of companies are wasting their time and money.
A/B testing tools like Optimizely or VWO make running tests easy, and that’s about it. They’re tools to run tests, not tools designed for post-test analysis. Most testing tools have gotten better at it over the years, but they still lack what you can do with Google Analytics, which covers almost everything.
Customers don’t usually see one ad and then click over to purchase. In reality, the path is much more complex, and usually spans multiple marketing channels: organic and paid search, referral, social media, television.
But if you’re a rigorous and data-driven marketer, the question has to cross your mind: how much credit can I give each channel for this conversion?
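Different attribution models answer that credit question differently. A minimal sketch of two simple rules, last-click and linear, applied to hypothetical converting paths (channel names are illustrative):

```python
from collections import defaultdict

def attribute(paths, model="linear"):
    """Split one unit of conversion credit across each path's channels.
    'last' gives 100% to the final touch; 'linear' splits evenly."""
    credit = defaultdict(float)
    for path in paths:
        if model == "last":
            credit[path[-1]] += 1.0
        else:  # linear
            share = 1.0 / len(path)
            for channel in path:
                credit[channel] += share
    return dict(credit)

# Hypothetical converting paths: one list of touches per customer
paths = [
    ["organic", "social", "paid_search"],
    ["paid_search"],
    ["social", "organic"],
]
print(attribute(paths, "last"))    # paid_search gets 2 of 3 conversions
print(attribute(paths, "linear"))  # credit spreads across all touches
```

Under last-click, paid search looks like the hero; under linear attribution, organic and social recover the credit for the touches that started those journeys. Same data, very different budget implications.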
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
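One statistical reason the question is polarizing: every extra variation is another comparison against the control, and the chance of at least one false positive grows with each. A small sketch of that inflation, plus the Šidák correction that keeps the family-wise rate at your chosen alpha:

```python
def family_wise_error(alpha, k):
    """Chance of at least one false positive across k independent
    comparisons, each run at significance level alpha."""
    return 1 - (1 - alpha) ** k

def sidak_alpha(alpha, k):
    """Per-comparison level that holds the family-wise error rate
    at alpha (Sidak correction)."""
    return 1 - (1 - alpha) ** (1 / k)

for k in (1, 3, 7):
    print(k, round(family_wise_error(0.05, k), 3),
          round(sidak_alpha(0.05, k), 4))
```

At alpha = 0.05, seven variations push the chance of a spurious “winner” to roughly 30%, which is why testing many variations demands either more traffic per arm or a stricter per-comparison threshold.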