Testing is a critical part of conversion optimization: it tells us whether we actually made things better, and by how much. But it's only the tip of the CRO iceberg. Testing tools are affordable (some are even free) and increasingly easy to use, so pretty much anyone can set up and run A/B tests. That's not where the difficulty lies. The hard part is testing the right things, and testing them with the right treatment.
The success of your testing program comes down to two numbers: how many tests you run (volume) and what percentage of those tests produce a win (win rate). Together, these indicate execution velocity. Add average sample size and the impact per successful experiment, and you get an idea of total business impact.
So in a nutshell, this is how you succeed:
- Run as many tests as possible at all times (every day without a test running on a page/layout is a wasted opportunity),
- Win as many tests as possible,
- Have as high impact (uplift) per successful test as possible.
Executing point #1 is obvious, but how do you do well on points #2 and #3? This comes down to the most important part of conversion optimization: discovering what matters.
Before you get out your pitchforks, I want to stress that this article does not represent Peep’s views.
The easiest lies to believe are the ones we want to be true, and nothing speaks to us more than validation of the work we are doing or of what we already believe. That's why we become naturally defensive when someone challenges that worldview.
The “truth” is that there is no single state of truth: all actions, disciplines, and behaviors can and should be evaluated for growth opportunities. It doesn't matter whether we are designers, optimizers, product managers, marketers, executives, or engineers; we all come from our own disciplines, and we will defend them to the death when we feel threatened, even in the face of overwhelming evidence.
A/B testing is great and very easy to do these days. The tools keep getting better, so people rely on them more and more, and critical thinking has become much less common.
It's not fair to blame only the tools, of course. It's very human to (over)simplify everything. The internet is now flooded with A/B testing posts and case studies full of bullshit data and imaginary wins. Be wary when you read any testing case study, or whenever you hear someone say “we tested that.”
A/B testing is supposed to be straightforward and extremely transparent. It should be easy to see the ROI, especially compared to opaque channels like SEO. But is it really as transparent as we'd like to think?
As more and more people turn to testing and conversion optimization as a consistent, meaningful tool for marketing and other initiatives, it's important to realize that optimization as a discipline is not just an add-on to existing work. Done correctly, testing can and should be the number one driver of revenue for your entire site and organization. And yet, according to three of the major tools on the market, the average testing program sees only 14% of its tests succeed.
As more and more business owners are learning about the benefits of the new version of Google Analytics (known as “Universal Analytics”) and the utility of Tag Management Systems (made even more popular by the release of the free Google Tag Manager), Peep reached out to me to write an article about moving an inline GA implementation to Google Tag Manager. This is work we do often at Analytics Ninja, so I'm more than happy to provide this guide for CXL's readers. There are many benefits to using a Tag Management System, though as my friend Julien Coquet puts it, “it's not a miracle cure.” If you take a quick look at any TMS vendor's benefits page, you'll notice the following big points stick out (here are Google's):
For all the talk about how awesome (and big, don't forget big) big data is, one of the favorite tools in the conversion optimization toolkit, A/B testing, is decidedly small data.
Optimization, winners and losers, Lean this or that: at the end of the day, A/B testing is really just an application of sampling.
You take a couple of alternative options (e.g., “50% off” vs. “Buy One Get One Free”) and try them out on a portion of your users. You see how well each one did, then decide which one you think will give you the most return.
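That sampling idea is easy to see in a simulation. This is a minimal sketch, not a real testing tool: the two “true” conversion rates below are made-up numbers, and each variant is just a series of Bernoulli draws from a portion of (simulated) users.

```python
# Minimal simulation of an A/B test as a sampling exercise.
# The conversion rates here are hypothetical, for illustration only.
import random

def simulate_variant(true_rate, n):
    """Show a variant to n simulated users; return the observed conversion rate."""
    conversions = sum(random.random() < true_rate for _ in range(n))
    return conversions / n

random.seed(42)  # reproducible draws

# Variant A: "50% off", assumed true conversion rate of 10%
# Variant B: "Buy One Get One Free", assumed true rate of 12%
rate_a = simulate_variant(0.10, 5000)
rate_b = simulate_variant(0.12, 5000)

print(f"A: {rate_a:.3f}, B: {rate_b:.3f}")
```

Run it a few times with different seeds and you'll see the observed rates wobble around the true ones; that wobble (sampling error) is exactly why sample size and statistical rigor matter before declaring a “winner.”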