One-tailed tests check for the possibility of an effect in one direction only. Two-tailed tests check for the possibility of an effect in both directions: positive and negative.
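To make the difference concrete, here's a minimal sketch assuming a simple z-test with a normally distributed test statistic (the z-value of 1.8 is purely illustrative, not from any real experiment). For the same observed statistic, the two-tailed p-value is double the one-tailed p-value, so a result can be "significant" one-tailed but not two-tailed:

```python
from math import erf, sqrt

def upper_tail_p(z):
    """P(Z >= z) for a standard normal variable Z."""
    return 0.5 * (1 - erf(z / sqrt(2)))

z = 1.8  # hypothetical z-statistic from an A/B test

# One-tailed: probability of an effect at least this large in one direction.
p_one_tailed = upper_tail_p(z)

# Two-tailed: probability of an effect at least this large in either direction.
p_two_tailed = 2 * upper_tail_p(abs(z))

print(f"one-tailed p = {p_one_tailed:.4f}")  # ~0.036, significant at 5%
print(f"two-tailed p = {p_two_tailed:.4f}")  # ~0.072, not significant at 5%
```

This is exactly why the choice of test matters: the same data clears a 95% significance bar under a one-tailed test but fails it under a two-tailed one.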
Simple as that concept may seem, there’s a lot of controversy around one-tailed vs. two-tailed testing. Articles like this one lambaste one-tailed testing for its shortcomings, claiming that “unsophisticated users love them.”
Let’s set the record straight.
Testing tools are getting more sophisticated. Blogs are brimming with “inspiring” case studies. Experimentation is becoming more and more common for marketers. Statistical know-how, however, lags behind.
This post is filled with clear explanations of A/B testing statistics from top CRO experts. A/B testing statistics aren’t that complicated—but they are that essential to running tests correctly.
Here’s what we’ll cover (feel free to jump ahead):
And just in case you’re uncertain about why A/B testing statistics are so essential…
Years ago, when I first started split-testing, I thought every test was worth running. It didn’t matter if it was changing a button color or a headline—I wanted to run that test.
My enthusiastic, yet misguided, belief was that I simply needed to find aspects to optimize, set up the tool, and start the test. After that, I thought, it was just a matter of awaiting the infamous 95% statistical significance.
I was wrong.