A very common scenario: a business runs dozens of A/B tests over the course of a year, and many of them “win.” Some tests show a 25% uplift in revenue, or even higher.
Yet when you roll out the change, revenue doesn’t increase 25%. And 12 months after running all those tests, the conversion rate is still pretty much the same. How come?
Even well-planned, well-strategized A/B tests often produce non-significant results and invite erroneous interpretations. You’re especially prone to errors if you use incorrect statistical approaches.
In this post, we’ll illustrate the 10 most important statistical traps to be aware of and, more importantly, how to avoid them.
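Before diving into the traps, it helps to see the core problem in action. The sketch below (all numbers hypothetical) simulates 100 A/A tests, where both variants share the same true 3% conversion rate, so any declared “winner” is a false positive. Even so, at a 0.05 significance threshold, roughly 5 in 100 tests will “win” by chance alone, which is one reason a year of winning tests can leave your conversion rate unchanged.

```python
import math
import random

random.seed(42)

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference of two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

false_positives = 0
for _ in range(100):
    n = 5000  # visitors per variant
    # Both "variants" convert at the same true rate: 3%
    conv_a = sum(random.random() < 0.03 for _ in range(n))
    conv_b = sum(random.random() < 0.03 for _ in range(n))
    if two_proportion_p_value(conv_a, n, conv_b, n) < 0.05:
        false_positives += 1

print(f"'Winning' A/A tests out of 100: {false_positives}")
```

Run enough tests and some will always “win” without any real effect, which is exactly why roll-outs fail to deliver the promised uplift.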
As a business, your email list is one of the most valuable assets you have. The bigger your list and the more engaged your subscribers, the more money you can make.
Having a well-thought-out plan for A/B testing Facebook ad campaigns is essential if you want to improve your performance reliably and consistently.
And the more you test, the better. A study of 37,259 Facebook ads found that “most companies only have one ad, but the best had hundreds.”
A/B testing Facebook ad campaigns can get complicated quickly (and easily produce invalid results). Spending the time upfront to perfect your testing process and structure will go a long way.
Chances are, you’ve heard of Google Optimize by now. It’s Google’s solution for A/B testing and personalization. It launched in beta in 2016 and left optimizers around the world waiting in line to try it out.
Since it left beta in March 2017, anyone can give it a try without the wait. But what can you expect? How do you configure it properly? How do you run your first experiment?
When should you use multivariate testing, and when is A/B/n testing best?
The answer is both simple and complex.
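Part of the complexity is combinatorial. A multivariate test runs every combination of element variants against each other, so the number of competing versions multiplies quickly, while sequential A/B/n tests examine one element at a time. A quick sketch with hypothetical page elements:

```python
from math import prod

# Hypothetical page elements and how many variants of each we want to test
variants_per_element = {"headline": 3, "hero_image": 2, "cta_button": 4}

# Multivariate: every combination competes at once (3 * 2 * 4)
mvt_combinations = prod(variants_per_element.values())

# Sequential A/B/n: one test per element, variants don't multiply (3 + 2 + 4)
abn_variations = sum(variants_per_element.values())

print(f"Multivariate test: {mvt_combinations} combinations splitting your traffic")
print(f"Sequential A/B/n tests: {abn_variations} variations across 3 tests")
```

With 24 combinations each receiving only a sliver of traffic, the multivariate test needs far more visitors to reach significance, which is why traffic volume is central to choosing between the two.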
Lots of entrepreneurs struggle with pricing. How much to charge? It’s clear that the right price can make all the difference—too low and you miss out on profit; too high and you miss out on sales.
A great deal has been written about whether, in the Internet age, your business should have a phone number on its website.
On one hand, having a phone number can increase the trustworthiness of your website, help sell potential customers who aren’t comfortable buying online, and allow customers to contact support easily.
The flip side? Phone support costs money.
Many anecdotes support both strategies, but we should be asking, “Where’s the data?”
Marketers of all stripes are obsessed with tools.
This obsession has bred comprehensive lists of growth tools, SEO tools, and general online marketing tools. It is no different with us in conversion optimization. We nerd out on testing tools.
Though no optimization program has ever hinged on which tool you used, there are important distinctions between A/B testing tools, from the statistics they use to their pricing and more.
One thing that is often either overlooked or misunderstood is the difference between client-side and server-side testing tools.
When should you use bandit tests, and when is A/B/n testing best?
Bandit testing has some strong proponents (and opponents), but there are certain use cases where it may be optimal. The question is: when?
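To ground the question, here is a minimal epsilon-greedy bandit sketch (all rates and parameters hypothetical). Unlike A/B/n testing, which splits traffic evenly for the whole test, the bandit gradually shifts traffic toward whichever variant is converting best, exploring a small fraction of the time:

```python
import random

random.seed(7)

true_rates = [0.02, 0.04, 0.03]  # true conversion rates, unknown to the algorithm
shows = [0, 0, 0]                # how often each variant was served
wins = [0, 0, 0]                 # conversions per variant
epsilon = 0.1                    # explore 10% of the time

for _ in range(20000):
    if random.random() < epsilon or 0 in shows:
        arm = random.randrange(3)  # explore: pick a random variant
    else:
        # Exploit: pick the variant with the best observed conversion rate
        arm = max(range(3), key=lambda i: wins[i] / shows[i])
    shows[arm] += 1
    if random.random() < true_rates[arm]:
        wins[arm] += 1

best = max(range(3), key=lambda i: shows[i])
print(f"Traffic allocation: {shows}, most-served variant: {best}")
```

The trade-off this illustrates: the bandit wastes less traffic on losing variants (good for short-lived campaigns like headlines or promos), but it sacrifices the clean, fixed-horizon statistics of a classic A/B/n test.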