Why A/A Testing is a Waste of Time

The title may seem a bit controversial, but a fairly common question I get from large (and small) companies is: “Should I run A/A tests to check whether my experiment is working?”
The answer might surprise you.
Even if your A/B tests are well planned and strategized, once they're running they can still produce non-significant results and erroneous interpretations.
You’re especially prone to errors if incorrect statistical approaches are used.
In this post we’ll illustrate the 10 most important statistical traps to be aware of, and more importantly, how to avoid them.
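To make “correct statistical approach” concrete before we get to the traps, here is a minimal sketch in Python of a standard two-proportion z-test on a control and a variation. The visitor and conversion counts are made-up illustration values, and the use of statsmodels is just one convenient option:

```python
# Minimal sketch of a two-proportion z-test for an A/B test.
# Visitor and conversion counts below are made-up illustration values.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 532]   # control, variation
visitors = [10000, 10000]  # traffic per arm

# Two-sided test of H0: both arms share the same conversion rate.
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# Only call the result significant if p is below an alpha chosen *before*
# the test (e.g. 0.05), and only after the planned sample size is reached.
# Peeking at interim p-values inflates the false-positive rate.
```

The design point, rather than the arithmetic, is what matters: the threshold and sample size are fixed up front instead of adjusted while the test runs.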
As a marketer and optimizer, it's only natural to want to speed up your testing efforts. So now the question is: can you run more than one A/B test at the same time on your site?
Let’s look into the “why you shouldn’t” and “why you should” run multiple tests at once.
There’s a philosophical statistics debate in the A/B testing world: Bayesian vs. Frequentist.
This is not a new debate. Thomas Bayes wrote “An Essay towards solving a Problem in the Doctrine of Chances” in 1763, and it’s been an academic argument ever since.
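To make the two camps concrete, here is a minimal sketch that runs both analyses on the same made-up conversion counts: a frequentist two-proportion z-test alongside a Bayesian comparison. The uniform Beta(1, 1) priors are an assumption chosen purely for illustration:

```python
# Same A/B data, two statistical lenses. All counts are illustrative.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conv = np.array([480, 532])     # conversions: control, variation
vis = np.array([10000, 10000])  # visitors per arm

# Frequentist view: p-value for H0 "both rates are equal".
_, p_value = proportions_ztest(conv, vis)

# Bayesian view: with uniform Beta(1, 1) priors, each arm's conversion
# rate has a Beta posterior; sample both and compare.
rng = np.random.default_rng(42)
post_a = rng.beta(1 + conv[0], 1 + vis[0] - conv[0], size=100_000)
post_b = rng.beta(1 + conv[1], 1 + vis[1] - conv[1], size=100_000)
prob_b_beats_a = (post_b > post_a).mean()

print(f"frequentist p-value:        {p_value:.3f}")
print(f"P(variation beats control): {prob_b_beats_a:.3f}")
```

The frequentist output answers “how surprising is this data if there were no difference?”, while the Bayesian output answers “given the data, how likely is the variation to be better?”, which is much of what the debate comes down to in practice.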
Marketers of all stripes are obsessed with tools.
This obsession has bred comprehensive lists of growth tools, SEO tools, and general online marketing tools. It is no different with us in conversion optimization. We nerd out on testing tools.
Though no optimization program has ever hinged on which tool you used, there are important distinctions between A/B testing tools, from the statistics they use to their price and more.
One thing that is often either overlooked or misunderstood is the difference between client-side and server-side testing tools.
A/B testing is no longer a new field. Finding an A/B testing tool isn't the problem anymore. Now, the problem is choosing the right one.
A/B testing splits traffic 50/50 between a control and a variation. A/B split testing is a new term for an old technique—controlled experimentation.
Yet for all the content out there about it, people still test the wrong things and run A/B tests incorrectly.
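As a minimal sketch of what that 50/50 split can look like in code, here is one common approach, deterministic hashing of a visitor ID, so the same visitor always sees the same version. The experiment name and user IDs are hypothetical:

```python
# Minimal sketch of a deterministic 50/50 traffic split.
# The experiment name and user IDs are hypothetical illustration values.
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-hero-test") -> str:
    """Hash the user ID so the same visitor always lands in the same arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99
    return "control" if bucket < 50 else "variation"

for uid in ("user-1001", "user-1002", "user-1003"):
    print(uid, "->", assign_variant(uid))
```

Hashing on the experiment name plus the user ID keeps assignments sticky per visitor while keeping different experiments independent of each other.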
In my experience, teams and organizations report many winning A/B tests with high uplifts, yet somehow those uplifts never seem to materialize in reality. How come?
A/B testing is fun. With so many easy-to-use tools, anyone can—and should—do it. However, there’s more to it than just setting up a test. Tons of companies are wasting their time and money.
If you've ever run a highly trustworthy, positive A/B test, chances are you'll remember it and be inclined to try it again in the future, and rightfully so. Testing is hard work, and many experiments fail or end up insignificant. It makes sense to exploit any existing knowledge for more successes and fewer failures. In our own practice, we started doing just that.