As conversion optimization continues to mature and is adopted by more organizations, it’s always interesting to hear how people in your network approach growth and optimization. That’s especially true, for me, in the tech startup space: these companies often live and die by data, and tend to build their organizations around experimentation.
LawnStarter is one such company, so we sat down with their CTO, Jonas Weigert, to learn how they experiment across their product and communication and how they deal with optimization as a company.
Nothing works all the time on all sites. That’s why we test in the first place: to let the data tell us what is actually working.
That said, we have done quite a bit of user experience research on ecommerce sites and have seen some trends in what generates positive experiences from the customer’s perspective.
This post will outline 16 A/B test ideas based on that data.
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
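One statistical wrinkle behind this question: every extra variation adds another comparison against the control, which inflates the odds of a false positive unless you correct for it. A minimal sketch using a Bonferroni correction (the p-values below are made up for illustration):

```python
def significant_after_bonferroni(p_values, alpha=0.05):
    """Return which comparisons stay significant after correction.

    With k variations tested against one control, Bonferroni divides
    alpha by k so the overall false-positive rate stays near alpha.
    """
    threshold = alpha / len(p_values)  # stricter per-comparison bar
    return [p < threshold for p in p_values]

# Three variations vs. control (illustrative p-values):
print(significant_after_bonferroni([0.03, 0.012, 0.20]))
# Only 0.012 clears the corrected bar of 0.05 / 3 ≈ 0.0167,
# even though 0.03 would pass an uncorrected 0.05 threshold.
```

Bonferroni is deliberately conservative; the point is simply that “more variations” is not free from a statistical standpoint.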
The traditional (and most widely used) approach to analyzing A/B tests is the t-test, a method from frequentist statistics.
While this method is scientifically valid, it has a major drawback: if you only implement significant results, you will leave a lot of money on the table.
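For concreteness, here is what that traditional analysis looks like: a pooled two-sample t statistic compared against a critical value from a t table. The revenue-per-visitor numbers are synthetic, and the sketch assumes equal variances for simplicity.

```python
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled two-sample t statistic (assumes equal variances)."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups:
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Synthetic per-visitor revenue for control and variant:
control = [12.1, 9.8, 11.5, 10.2, 12.9, 10.8, 11.1, 9.5]
variant = [12.8, 11.9, 13.2, 11.0, 13.5, 12.2, 12.7, 11.4]

t = t_statistic(variant, control)
# Two-tailed critical value for df = 14 at alpha = 0.05 (from a t table):
print("significant:", abs(t) > 2.145)
```

In practice you would use a library routine (e.g. `scipy.stats.ttest_ind`) rather than a t table, but the decision rule is the same: a binary significant/not-significant call, which is exactly where the “money on the table” problem comes from.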
If you’re doing it right, you probably have a large list of A/B testing ideas in your pipeline: some good ones (data-backed or the result of careful analysis), some mediocre ones, and some you don’t know how to evaluate.
We can’t test everything at once, and we all have a limited amount of traffic.
You need a way to prioritize these ideas so that the highest-potential ones get tested first. And the stupid stuff should never get tested to begin with.
How do we do that?
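One common answer is a scoring framework such as ICE (Impact, Confidence, Ease): rate each idea 1–10 on all three dimensions and rank by the combined score. A minimal sketch (the ideas and scores below are invented for illustration):

```python
# Hypothetical backlog of test ideas, each scored 1-10 on
# Impact, Confidence, and Ease (the ICE framework).
ideas = [
    {"name": "Simplify checkout form", "impact": 8, "confidence": 7, "ease": 5},
    {"name": "New hero headline",      "impact": 4, "confidence": 3, "ease": 9},
    {"name": "Add trust badges",       "impact": 5, "confidence": 6, "ease": 8},
]

def ice_score(idea):
    """Combined priority score: product of the three ratings."""
    return idea["impact"] * idea["confidence"] * idea["ease"]

# Highest-potential ideas first:
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):4d}  {idea['name']}")
```

The exact formula matters less than the discipline: every idea gets scored the same way, so the easy-but-trivial ideas stop jumping the queue ahead of the hard-but-valuable ones.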
A/B testing is common practice, and when used properly it can be a powerful optimization strategy. We’ve written on it extensively. Plus, the Internet is full of “How We Increased Conversions by 1,000% with 1 Simple Change” style articles.
Unfortunately, there are experimentation flaws associated with A/B testing as well. Understanding those flaws and their implications is key to designing better, smarter A/B test variations.
Even well-conceived A/B tests can lead to non-significant results and erroneous interpretations, and this can happen in every phase of testing if incorrect statistical approaches are used.
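One of the most common such flaws is running a test without enough traffic to detect the effect you care about. A back-of-the-envelope sample-size estimate for a two-proportion test, using the standard normal-approximation formula (z-values are the usual 1.96 for two-sided alpha = 0.05 and 0.84 for 80% power; the conversion rates are illustrative):

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variant to detect a lift from p1 to p2
    at alpha = 0.05 (two-sided) with 80% power."""
    pooled_var = p1 * (1 - p1) + p2 * (1 - p2)
    effect = (p2 - p1) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / effect)

# Detecting a lift from a 5% to a 6% conversion rate takes roughly
# eight thousand visitors *per variant*:
print(sample_size_per_variant(0.05, 0.06))
```

Numbers like this are why “we ran it for a week and nothing was significant” is often a statement about sample size, not about the change being tested.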
Both your visitors and Google prefer a fast site. Increasing site speed has been shown to increase conversion rates and improve SERP rankings, both of which mean more money for your business.
You’re doing A/B split testing to improve results. But A/B testing tools may actually slow down your site.
Data should speak for itself, but it doesn’t. After all, humans are involved, too – and we mess things up.
So you ran a test – and you ran it correctly, following A/B testing best practices – and you’ve reached inconclusive results.