One-tailed tests allow for the possibility of an effect in one direction. Two-tailed tests test for the possibility of an effect in two directions—positive and negative.
Simple as that concept may seem, there’s a lot of controversy around one-tailed vs. two-tailed testing. Articles like this one lambaste the shortcomings of one-tailed testing, saying that “unsophisticated users love them.”
Let’s set the record straight.
Table of contents
- One-tailed vs two-tailed: Differences and use cases
- Does it matter which method you use?
- The case for two-tailed testing
- When can I use one-tailed tests?
- Which tools use which method?
One-tailed vs two-tailed: Differences and use cases
Many people don’t even realize that there are two ways to determine whether an experiment’s results are statistically valid. That’s led to a lot of confusion and misunderstanding about one-tailed and two-tailed testing.
The commotion comes from a justifiable worry: Are my lifts imaginary? As mentioned in this article, sometimes A/A tests will come up with some quirky results, thus making you question the efficacy of your tools and your A/B testing plan.
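Those quirky A/A results aren't necessarily a tool bug: at a 95% significance threshold, roughly 1 in 20 A/A tests will look "significant" purely by chance. A quick simulation sketch (with made-up traffic numbers, using only the Python standard library) shows this:

```python
import random
from statistics import NormalDist

random.seed(42)

def two_tailed_p(conv_a, conv_b, n):
    """Two-tailed p-value for a difference in proportions (pooled z-test)."""
    p_pool = (conv_a + conv_b) / (2 * n)
    se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
    if se == 0:
        return 1.0
    z = (conv_b - conv_a) / (n * se)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Simulate 400 A/A tests: both "variants" share the same true 5% rate,
# so every "significant" result is a false positive
runs, n, rate = 400, 2000, 0.05
false_positives = sum(
    two_tailed_p(
        sum(random.random() < rate for _ in range(n)),
        sum(random.random() < rate for _ in range(n)),
        n,
    ) < 0.05
    for _ in range(runs)
)

print(f"'Significant' A/A results: {false_positives / runs:.1%}")
```

Rerun it with different seeds and the false-positive rate hovers around the nominal 5% — exactly what the significance level promises, no more and no less.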
So when we’re talking about one-tailed vs. two-tailed tests, we’re really talking about whether we can trust the results of our A/B tests and take action based on them.
If you’re just learning about testing, Khan Academy offers a clear illustration of the difference between one-tailed and two-tailed tests:
Why would you choose one over the other? A two-tailed test can show evidence that the control and variation are different, while a one-tailed test can show evidence that the variation is better than the control.
In frequentist tests, you have a null hypothesis. The null hypothesis is what you believe to be true absent evidence to the contrary.
Now suppose you’ve run a test and received a p-value. The p-value is the probability of seeing a result at least as “extreme” as the one observed if the null hypothesis were true. The lower the p-value, the less plausible the null hypothesis becomes.
Now suppose you are A/B testing a control and a variation, and you want to measure the difference in conversion rate between both variants. The two-tailed test takes as a null hypothesis the belief that both variations have equal conversion rates.
The one-tailed test takes as a null hypothesis the belief that the variation is no better than the control (it could be equally good or worse).
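To make the two null hypotheses concrete, here’s a sketch (with invented conversion counts, Python standard library only) that computes both p-values from the same data using a pooled z-test for two proportions:

```python
from statistics import NormalDist

# Illustrative counts (made up for this example, not from any real test)
conv_a, n_a = 200, 5000   # control: 4.0% conversion rate
conv_b, n_b = 245, 5000   # variation: 4.9% conversion rate

p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (conv_b / n_b - conv_a / n_a) / se

sf = lambda x: 1 - NormalDist().cdf(x)                # survival function
p_one_tailed = sf(z)                                  # H1: variation > control
p_two_tailed = 2 * sf(abs(z))                         # H1: variation != control

print(f"z = {z:.2f}")
print(f"one-tailed p = {p_one_tailed:.4f}")
print(f"two-tailed p = {p_two_tailed:.4f}")
```

Because the z distribution is symmetric, the two-tailed p-value is exactly double the one-tailed one (when the observed difference favors the variation) — which is why one-tailed tests reach significance sooner at the same threshold.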
Does it matter which method you use?
Okay, so now that we went over what the tests actually are, we can ask the important question: Does it even matter which you use? Turns out, that’s a complicated question.
According to Kyle Rush, it does:
Pros and cons of each method
Maxymiser (now part of Oracle) laid out some pros and cons of using either test:
Other factors in validity
Statistical validity depends on more than just the choice of tails. Still, there are strong opinions about one-tailed vs. two-tailed testing.
The case for two-tailed testing
Two-tailed tests mitigate Type I errors (false positives) and cognitive bias errors. Furthermore, as Kyle Rush said, “unless you have a superb understanding of statistics, you should use a two-tailed test.”
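One concrete way to see the Type I risk: if you wait to see which variant is ahead and then run a one-tailed test in that direction, you are effectively testing both tails at double the error rate. A simulation sketch (made-up numbers, Python standard library only) illustrates the inflation:

```python
import random
from statistics import NormalDist

random.seed(7)
sf = lambda x: 1 - NormalDist().cdf(x)  # normal survival function

runs, n, rate = 600, 1000, 0.10
post_hoc_rejections = 0
for _ in range(runs):
    # A/A setup: no real difference exists between the "variants"
    a = sum(random.random() < rate for _ in range(n))
    b = sum(random.random() < rate for _ in range(n))
    p_pool = (a + b) / (2 * n)
    se = (p_pool * (1 - p_pool) * (2 / n)) ** 0.5
    z = (b - a) / (n * se) if se else 0.0
    # Post-hoc tailing: look at the sign first, then run a one-tailed
    # test in whichever direction looks good -- equivalent to p = sf(|z|)
    if sf(abs(z)) < 0.05:
        post_hoc_rejections += 1

# With no real effect, this rejects close to 10% of the time:
# double the nominal 5% level
print(f"false-positive rate: {post_hoc_rejections / runs:.1%}")
```

Committing to the direction *before* the test starts keeps the one-tailed error rate at its nominal level; choosing the tail after peeking is what earns one-tailed testing its bad reputation.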
Here’s what Andrew Anderson had to say:
Neal Cole, conversion specialist at a leading online gaming company, agrees:
When can I use one-tailed tests?
According to some, there is a time and a place. It is often contextual and depends on how you intend to act on the data. As Luke Stokebrand said:
One-tailed tests are not always bad; it’s just important to understand their downside. In fact, there are many times when it makes sense to use a one-tailed test to validate your data.
Andy Hunt from UpliftROI acknowledges the faults of one-tailed tests but takes a realistic approach:
Similarly, Jeff Sauro from MeasuringU reiterates that while you should normally use the two-sided p-value, “You should only use the one-sided p-value when you have a very strong reason to suspect that one version is really superior to the other.”
Kyle Rush echoes this:
Which tools use which method?
When you ask the question of which A/B testing software uses which method, you enter a world of murky answers and ambiguity. That’s to say, not many of them list it specifically.
So here’s what I got from research and from asking testing experts (correct me if I’m wrong or need to add something):
Tools that use one-tailed tests
- Conductrics (plus the option to run two-tailed tests via an API, along with bandit options).
Tools that use two-tailed tests:
Of course, certain tools have custom frameworks as well. Kyle Rush explains Optimizely’s Stats Engine:
The issue of using one-tailed vs two-tailed testing is important, though the decision can’t be made with statistics alone. As Chris Stucchio said, “It needs to be decided from within the context of a decision procedure.”
When running an A/B test, the goal is almost always to increase conversions, not to satisfy idle curiosity. To decide whether a one-tailed or two-tailed test is right for you, you need to understand your entire decision procedure, not just the statistics.
If you’d like to learn more about one-tailed and two-tailed testing, there are many resources. Here are a few that are easy to understand:
Otherwise, I’ll close with something Peep said about the subject:
The one- vs. two-tailed issue is minor (it’s perfectly fine to use a one-tailed test in a lot of cases) compared to test sample sizes and test duration. Ending tests too soon is by far the #1 testing sin.
Working on something related to this? Post a comment in the CXL community!