# Top Statistics for A/B Testing

## Statistical Power: What It Is and How To Calculate It in A/B Testing

Years ago, when I first started split-testing, I thought every test was worth running. It didn’t matter if it was changing a button color or a headline—I wanted to run that test.

My enthusiastic, yet misguided, belief was that I simply needed to find aspects to optimize, set up the tool, and start the test. After that, I thought, it was just a matter of awaiting the infamous 95% statistical significance.

I was wrong.
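Waiting for 95% significance isn't enough: without adequate statistical power, a test can run forever without reliably detecting the effect you care about. As a rough illustration (not the original article's method), here is a standard sample-size calculation for a two-sided, two-proportion z-test; the baseline rate of 10% and the hoped-for 12% are made-up numbers for the example.

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_group(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = norm.ppf(power)           # critical value for the desired power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = p_variant - p_base        # minimum detectable effect (absolute)
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

n = sample_size_per_group(0.10, 0.12)  # ~3,839 visitors per variation
```

Note how a small lift (2 percentage points) already demands thousands of visitors per variation, which is why low-traffic tests so often end inconclusively.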

## Outliers in Statistics: How to Find and Deal with Them in Your Data

One thing many people forget when dealing with data: outliers.

Even in a controlled online A/B test, your data set may be skewed by extreme values. How do you deal with them? Do you trim them out, or is there a better way?
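To make the question concrete, one common (though not the only) approach is the 1.5×IQR rule: flag anything far outside the interquartile range and decide what to do with it. The data below is invented for the sketch, with a single extreme user mixed in.

```python
import numpy as np

# Hypothetical per-user order counts; the 40 is one extreme user.
orders_per_user = np.array([1, 2, 1, 0, 3, 2, 1, 2, 1, 40])

q1, q3 = np.percentile(orders_per_user, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # Tukey's fences

mask = (orders_per_user >= lower) & (orders_per_user <= upper)
trimmed = orders_per_user[mask]  # drops only the extreme value
```

Whether trimming is the right call depends on whether the outlier is a measurement error or a real (if rare) customer, which is exactly the judgment the article digs into.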

## One-Tailed vs. Two-Tailed Tests (Does It Matter?)

One-tailed tests allow for the possibility of an effect in only one direction. Two-tailed tests allow for an effect in either direction, positive or negative.

Simple as that concept may seem, there’s a lot of controversy around one-tailed vs. two-tailed testing. Articles like this one lambaste the shortcomings of one-tailed testing, saying that “unsophisticated users love them.”

On the flip side, some articles and discussions take a more balanced approach and say there’s a time and a place for both.

Let’s set the record straight.
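In code, the difference often comes down to a single argument. A minimal sketch with SciPy's `ttest_ind` (the `alternative` parameter requires SciPy 1.6+; the two sample lists are made up for illustration):

```python
from scipy import stats

# Hypothetical per-user metric values for two variations.
control = [10, 12, 9, 11, 10, 13, 12, 11]
variant = [12, 14, 11, 13, 12, 15, 13, 14]

# Two-tailed: is the variant different from control in either direction?
t2, p_two = stats.ttest_ind(variant, control, alternative="two-sided")

# One-tailed: is the variant specifically *better* than control?
t1, p_one = stats.ttest_ind(variant, control, alternative="greater")
```

When the observed effect is in the hypothesized direction, the one-tailed p-value is exactly half the two-tailed one, which is why one-tailed tests reach "significance" sooner and why critics accuse them of being too easy to abuse.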

## A/B Testing Statistics: An Easy-to-Understand Guide

Testing tools are getting more sophisticated. Blogs are brimming with “inspiring” case studies. Experimentation is becoming more and more common for marketers. Statistical know-how, however, lags behind.

This post is filled with clear explanations of A/B testing statistics from top CRO experts. A/B testing statistics aren’t that complicated—but they are that essential to running tests correctly.

Here’s what we’ll cover (feel free to jump ahead):

And just in case you’re uncertain about why A/B testing statistics are so essential…