Beyond "One Size Fits All" A/B Tests

If you’re invested in improving your A/B testing game, you’ve probably read dozens of articles and discussions on how to run A/B tests.

In reading advice about how long to run a test or what statistical significance threshold to use, you’ve probably seen claims like “Always aim for XX% significance” or “Don’t stop a test until it reaches YYY conversions,” where XX is usually 95 or higher and YYY is usually 100 or more.

You might also have heard it’s best to come up with many variants to test against the control to improve your chance of finding the best option.

No matter what rule is offered, such advice seems to rest on the assumption that there is a one-size-fits-all solution that works in most situations.
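To make that concrete, consider sample size. The sketch below (the function and the example numbers are illustrative, not from the article) uses the standard normal-approximation formula for comparing two conversion rates; it shows how much the traffic a test needs swings with the baseline rate:

# Sketch: the sample size a test needs depends on the baseline rate and
# the lift you care about, so no single conversion-count rule fits all.
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.8):
    """Approximate visitors per variant for a two-sided z-test on
    conversion rates (standard normal-approximation formula)."""
    p1 = baseline
    p2 = baseline * (1 + lift)  # expected rate after a relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2) + 1

# The same 10% relative lift, very different traffic requirements:
for base in (0.02, 0.05, 0.20):
    print(f"baseline {base:.0%}: ~{sample_size_per_variant(base, 0.10):,} per variant")

Under these assumptions, a site converting at 2% needs on the order of ten times the traffic of a site converting at 20% to detect the same relative lift, which is exactly why fixed rules break down.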

Keep reading

How to Segment A/B Test Results to Find Gold

You run an A/B test, and it’s a winner. Or maybe it’s flat (no difference in performance between variations). Does that mean the treatments you tested didn’t resonate with anyone? Probably not.

If you target all visitors with the A/B test, it merely reports overall results and ignores what happens within individual segments of your traffic.
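As a rough illustration (the data and column names below are hypothetical, not from the article), segment-level reporting can be as simple as grouping the same per-visitor results by a visitor attribute:

import pandas as pd

# Hypothetical per-visitor results; in practice you'd export these
# from your testing tool or analytics platform.
df = pd.DataFrame({
    "variant":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    "device":    ["mobile", "mobile", "mobile", "mobile",
                  "desktop", "desktop", "desktop", "desktop"],
    "converted": [0, 1, 0, 1, 1, 0, 1, 0],
})

overall = df.groupby("variant")["converted"].mean()
by_segment = (
    df.groupby(["device", "variant"])["converted"]
      .agg(conv_rate="mean", visitors="count")
)
print(overall)      # the topline looks flat: A and B both convert at 50%
print(by_segment)   # but B wins on mobile while A wins on desktop

Keep in mind that each segment has less traffic than the topline, so treat per-segment differences as hypotheses to verify with a follow-up test rather than as conclusions.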

Keep reading

Jonas Weigert on A/B Testing Beyond the Landing Page (Q&A)

As conversion optimization continues to mature and be adopted by more organizations, it’s always interesting to get an A/B testing perspective from people in your network and see how they’re approaching growth and optimization. For me, that’s especially true in the tech startup space, as these companies often live and die by data and tend to build their organizations around experimentation.

LawnStarter is one such company, so we sat down with their CTO, Jonas Weigert, to learn how they experiment across their product and communication and how they deal with optimization as a company.

Keep reading

PXL: A Better Way to Prioritize Your A/B Tests

If you’re doing it right, you probably have a large list of A/B testing ideas in your pipeline. Some are good (data-backed or the result of careful analysis), some are mediocre, and some you don’t know how to evaluate.

We can’t test everything at once, and we all have a limited amount of traffic.

You need a way to prioritize all these ideas so that the highest-potential ones get tested first. And the stupid stuff should never get tested to begin with.

How do we do that?
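The article’s answer is the PXL framework. As a rough sketch of the general shape of such a framework (objective yes/no questions plus an ease-of-implementation score; the criteria below are illustrative placeholders, not the exact PXL questions), prioritization reduces to a sortable score:

from dataclasses import dataclass, field

@dataclass
class TestIdea:
    name: str
    above_the_fold: bool        # change is visible without scrolling?
    backed_by_user_data: bool   # supported by user research?
    backed_by_analytics: bool   # supported by analytics data?
    high_traffic_page: bool     # runs on a page with enough traffic?
    ease: int                   # 1 (hard to build) .. 3 (trivial)
    score: int = field(init=False, default=0)

def prioritize(ideas):
    """Score each idea on binary criteria plus ease, highest first."""
    for idea in ideas:
        idea.score = (
            idea.above_the_fold
            + idea.backed_by_user_data
            + idea.backed_by_analytics
            + idea.high_traffic_page
            + idea.ease
        )
    return sorted(ideas, key=lambda i: i.score, reverse=True)

backlog = [
    TestIdea("Rewrite hero headline", True, True, False, True, ease=3),
    TestIdea("Redesign checkout flow", False, True, True, True, ease=1),
]
for idea in prioritize(backlog):
    print(f"{idea.score:>2}  {idea.name}")

Scoring on objective yes/no questions rather than gut-feel 1-to-10 ratings is what keeps a backlog like this honest: two people scoring the same idea should land on the same number.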

Keep reading

UX Research and A/B Testing

A/B testing is common practice, and it can be a powerful optimization strategy when used properly. We’ve written about it extensively. Plus, the Internet is full of “How We Increased Conversions by 1,000% with 1 Simple Change” style articles.

Unfortunately, there are experimentation flaws associated with A/B testing as well. Understanding those flaws and their implications is key to designing better, smarter A/B test variations.

Keep reading
