If you’re invested in improving your A/B testing game, you’ve probably read dozens of articles and discussions on how to do A/B testing.
In reading advice about how long to run a test or what statistical significance threshold to use, you probably saw claims like “Always aim for XX% significance” or “Don’t stop a test until it reaches YYY conversions” – where XX% is usually a number higher than 95%, and YYY is usually a number higher than 100.
You might also have heard it’s best to come up with many variants to test against the control to improve your chance of finding the best option.
No matter what rule is offered, such advice seems to rest on the assumption that there is a one-size-fits-all solution that works in most situations.
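One reason fixed rules fall apart is that the sample size a test actually needs depends heavily on your baseline conversion rate and the lift you hope to detect. As a rough illustration (using the standard normal-approximation formula for comparing two proportions; the specific rates below are hypothetical, not from the article):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-sided
    test of two proportions (normal approximation).
    baseline: control conversion rate, mde: absolute lift to detect."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# The same "rule" implies wildly different traffic requirements:
print(round(sample_size_per_variant(0.02, 0.004)))  # low-rate site, +0.4% lift
print(round(sample_size_per_variant(0.20, 0.04)))   # high-rate site, +4% lift
```

A site converting at 2% needs roughly ten times the traffic of one converting at 20% to detect a comparable relative lift, which is why "run it to YYY conversions" can't be universal.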
If you ask most marketers, they will tell you that A/B testing and personalization are two completely different things. I respectfully disagree, and I think this disagreement is at the root of how to use them best together.
You run an A/B test, and it’s a winner. Or maybe it’s flat (no difference in performance between variations). Does it mean that the treatments that you tested didn’t resonate with anyone? Probably not.
If you target all visitors with the A/B test, it merely reports overall results – and ignores what happens within segments of your traffic.
As conversion optimization continues to mature and become adopted by more organizations, it’s always interesting to get an A/B testing tutorial from people in your network to see how they’re approaching growth and optimization. That’s especially true for me in the tech startup space, as these companies often live and die by data, and tend to build their organizations around experimentation.
LawnStarter is one such company, so we sat down with their CTO, Jonas Weigert, to learn how they experiment across their product and communication and how they deal with optimization as a company.
Nothing works all the time on all sites. That’s why we test in the first place: to let the data tell us what is actually working.
That said, we have done quite a bit of user experience research on ecommerce sites and have seen some trends in terms of what generates positive experiences from a customer perspective.
This post will outline 16 A/B test ideas based on that data.
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
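Part of what makes this polarizing is the multiple-comparisons problem: if each variant is compared to the control at a 5% significance level, the chance of at least one false positive grows with every variant you add. A minimal sketch (assuming independent comparisons, each at the same alpha):

```python
def false_positive_risk(num_variants, alpha=0.05):
    """Chance of at least one false positive when each of
    num_variants comparisons against control is run at alpha,
    assuming the comparisons are independent."""
    return 1 - (1 - alpha) ** num_variants

for k in (1, 3, 5, 10):
    print(f"{k} variants: {false_positive_risk(k):.1%} false-positive risk")
```

With 10 variants the family-wise risk climbs to roughly 40%, which is why more variants demand either more traffic or a stricter per-comparison threshold (e.g. a Bonferroni-style alpha of 0.05 / k).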
The traditional (and most widely used) approach to analyzing A/B tests is the so-called t-test, a method from frequentist statistics.
While this method is scientifically valid, it has a major drawback: if you only implement significant results, you will leave a lot of money on the table.
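For the curious, here is roughly what that frequentist analysis looks like on conversion data. This is a sketch, not the exact procedure any particular tool uses: it computes Welch's t statistic on 0/1 outcomes and, because samples are large, takes the p-value from the normal approximation. The simulated rates are hypothetical:

```python
import random
from statistics import NormalDist, mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (here: lists of 0/1 conversion outcomes)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

def p_value(t):
    # With thousands of visitors per variant, t is approximately
    # standard normal under the null hypothesis of no difference.
    return 2 * (1 - NormalDist().cdf(abs(t)))

# Simulated test: control converts at 10%, variant at 12%.
random.seed(7)
control = [1 if random.random() < 0.10 else 0 for _ in range(5000)]
variant = [1 if random.random() < 0.12 else 0 for _ in range(5000)]
print(f"p = {p_value(welch_t(variant, control)):.4f}")
```

The frequentist rule is then binary: implement the variant only if p falls below your threshold (commonly 0.05) – which is precisely how near-miss winners end up discarded and money gets left on the table.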
If you’re doing it right, you probably have a large list of A/B testing ideas in your pipeline. Some are good (data-backed or the result of careful analysis), some are mediocre, and some you don’t know how to evaluate.
We can’t test everything at once, and we all have a limited amount of traffic.
You should have a way to prioritize all these ideas so that you test the highest-potential ideas first. And the stupid stuff should never get tested to begin with.
How do we do that?
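One common answer (among several frameworks; the article doesn't prescribe a specific one) is a simple scoring model such as ICE, where each idea is rated 1–10 on impact, confidence, and ease, and the backlog is sorted by the average. The ideas and scores below are hypothetical:

```python
def ice_score(impact, confidence, ease):
    """ICE prioritization: rate each idea 1-10 on expected impact,
    confidence in that estimate, and ease of implementation."""
    return (impact + confidence + ease) / 3

# Hypothetical test backlog
ideas = {
    "rewrite value proposition": ice_score(8, 6, 7),
    "redesign checkout flow":    ice_score(9, 5, 2),
    "change button color":       ice_score(2, 3, 10),
}
for name, score in sorted(ideas.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {name}")
```

The exact weights matter less than the discipline: a consistent score forces every idea through the same filter, so high-effort pet projects and trivial tweaks both have to justify their place in the queue.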
A/B testing is common practice and it can be a powerful optimization strategy when it’s used properly. We’ve written on it extensively. Plus, the Internet is full of “How We Increased Conversions by 1,000% with 1 Simple Change” style articles.
Unfortunately, there are experimentation flaws associated with A/B testing as well. Understanding those flaws and their implications is key to designing better, smarter A/B test variations.
Both your visitors and Google prefer a fast site. Increasing site speed has been shown to increase conversion rates as well as SERP rankings, both resulting in more money for your business.
You’re doing A/B split testing to improve results. But A/B testing tools may actually slow down your site.