

PXL: A Better Way to Prioritize Your A/B Tests

If you’re doing it right, you probably have a long list of A/B testing ideas in your pipeline: some good ones (data-backed or the result of careful analysis), some mediocre ones, and some you don’t know how to evaluate.

We can’t test everything at once, and we all have a limited amount of traffic.

You need a way to prioritize all these ideas so that the highest-potential ones get tested first. And the stupid stuff should never get tested to begin with.

How do we do that?


ResearchXL

While testing is a critical part of conversion optimization (it’s how we confirm whether we actually made things better, and by how much), it’s also just the tip of the iceberg of the full CRO picture. Testing tools are affordable (even free) and increasingly easy to use, so pretty much any idiot can set up and run A/B tests. That’s not where the difficulty lies. The hard part is testing the right things, with the right treatment.

The success of your testing program comes down to two numbers: how many tests you run (volume) and what percentage of those tests win. Together, those two indicate your execution velocity. Add average sample size and the impact per successful experiment, and you get an idea of the total business impact (sketched in code after the list below).

So in a nutshell, this is how you succeed:

  1. Run as many tests as possible at all times (every day without a test running on a page/layout is regret by default),
  2. Win as many tests as possible,
  3. Have the highest possible impact (uplift) per successful test.
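
To make that math concrete, here’s a minimal sketch (not from the article) of how these three levers multiply into total impact. All the numbers below are hypothetical estimates you’d replace with your own program’s figures.

```python
def testing_program_impact(tests_per_year: int,
                           win_rate: float,
                           avg_uplift: float) -> float:
    """Expected cumulative conversion gain over a year of testing.

    Each winning test's uplift compounds on top of earlier wins,
    so total program impact is (1 + uplift) ** wins - 1.
    """
    wins = tests_per_year * win_rate
    return (1 + avg_uplift) ** wins - 1

# Hypothetical program: 50 tests/year, 25% win rate, 5% average uplift per win.
print(f"{testing_program_impact(50, 0.25, 0.05):.1%}")  # -> 84.0%
```

Notice that the levers multiply: doubling your test volume, your win rate, or your average uplift each roughly doubles the number of compounding wins, which is why all three are worth optimizing rather than just one.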

Executing point #1 is obvious, but how do you do well on points #2 and #3? That comes down to the most important thing in conversion optimization: the discovery of what matters.

