Confidence intervals are a standard output of many free and paid A/B testing tools. Most A/B test reports contain one or more interval estimates.
Even if you’re simply a consumer of such reports, understanding confidence intervals is helpful. If you’re in charge of preparing and presenting those reports, it’s essential.
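To make the idea concrete, here is a minimal sketch of the kind of interval such reports show: a two-sided 95% confidence interval for a conversion rate, using the normal (Wald) approximation. The function name and the example numbers are illustrative, not taken from any particular tool.

```python
import math

def conversion_ci(conversions, visitors, z=1.96):
    """Normal-approximation (Wald) confidence interval for a conversion rate.

    z = 1.96 gives a two-sided 95% interval; other tools may use
    different approximations (e.g. Wilson score) or confidence levels.
    """
    p = conversions / visitors
    se = math.sqrt(p * (1 - p) / visitors)  # standard error of the proportion
    return p - z * se, p + z * se

# Hypothetical example: 120 conversions out of 2,000 visitors
low, high = conversion_ci(120, 2000)
print(f"95% CI for conversion rate: [{low:.4f}, {high:.4f}]")
```

A report built on these numbers would say the observed 6% conversion rate comes with an interval of roughly 5% to 7%, which is the uncertainty statement the rest of this article is about.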
If you’re invested in improving your A/B testing game, you’ve probably read dozens of articles and discussions on how to do A/B testing.
In reading advice about how long to run a test or what statistical significance threshold to use, you probably saw claims like “Always aim for XX% significance” or “Don’t stop a test until it reaches YYY conversions” – where XX% is usually a number higher than 95%, and YYY is usually a number higher than 100.
You might also have heard it’s best to come up with many variants to test against the control to improve your chance of finding the best option.
No matter what rule is offered, such advice seems to rest on the assumption that a one-size-fits-all solution exists.