Raise your hand if you’ve ever struggled with a decision between disciplined testing procedures and expedient decision-making.
For example, think of a time when you’ve had to decide between sticking to your A/B test design—namely, the prescribed sample size—and making a decision using what appears to be obvious, or at least very telling, test data. If your arm is waving vigorously in the air right now, this post is for you. Also, put it down and stop being weird.
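That tension has a statistical name: peeking. As a quick illustration (a toy simulation, not something from this post), the sketch below runs A/A tests, where both arms are identical, and checks significance at repeated interim looks. Even though there is no real difference to find, stopping at the first "significant" peek produces false positives far above the nominal 5% rate:

```python
import random
import math

def z_stat(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    return ((conv_a / n_a) - (conv_b / n_b)) / se

def run_aa_test(p=0.05, n_per_arm=5000, peek_every=500, z_crit=1.96):
    """Simulate one A/A test; return True if ANY interim peek looks 'significant'."""
    conv_a = conv_b = 0
    for i in range(1, n_per_arm + 1):
        conv_a += random.random() < p  # both arms share the same true rate
        conv_b += random.random() < p
        if i % peek_every == 0 and abs(z_stat(conv_a, i, conv_b, i)) > z_crit:
            return True  # an early "winner" that is actually a false positive
    return False

random.seed(42)
trials = 2000
false_positives = sum(run_aa_test() for _ in range(trials))
print(f"False-positive rate with peeking: {false_positives / trials:.1%}")
```

With ten peeks per test, the realized false-positive rate lands well above the 5% you would expect from a single, pre-committed look, which is exactly why the prescribed sample size matters.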
You regularly run A/B tests on the design of a pop-up. You have a process, implement it correctly, find statistically significant winners, and roll out winning versions sitewide.
Your tests answer every question except one: Is the winning version still better than never having shown a pop-up?
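One pragmatic way to keep answering that question is to hold out a slice of traffic that never sees the pop-up, and compare every winning variant back to that holdout. A minimal sketch using a two-proportion z-test (the arm names and numbers here are hypothetical, invented purely for illustration):

```python
import math

def two_prop_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic; positive means the first arm converts better."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return ((conv_a / n_a) - (conv_b / n_b)) / se

# Hypothetical results: (conversions, visitors) per arm.
arms = {
    "no_popup_holdout": (480, 10_000),
    "popup_champion":   (545, 10_000),
    "popup_challenger": (590, 10_000),
}

holdout_conv, holdout_n = arms["no_popup_holdout"]
z_scores = {}
for name, (conv, n) in arms.items():
    if name == "no_popup_holdout":
        continue
    z_scores[name] = two_prop_z(conv, n, holdout_conv, holdout_n)
    print(f"{name} vs holdout: z = {z_scores[name]:.2f}")
```

If a "winner" can't clear the no-pop-up holdout (|z| above roughly 1.96 for 95% confidence), the pop-up may be beating other pop-ups while still losing to showing nothing at all.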
The endgame of optimization begins once you've found the local maximum for the most relevant pages on your website. At that point, uncovering further gains becomes much harder.
How can you deal with such a scenario?
“Our A/B testing tool’s visual editor allows marketers to set up tests without needing developers!”
Pretty much every testing tool out there makes a version of this claim, and their websites show off screenshots and videos of visual editors. But the claim is misleading: you shouldn't run your testing program through a visual editor.
If you read this blog regularly, you probably don’t need an introduction to CRO or A/B testing. You know the major players, best practices, and you’ve likely tested your fair share of ideas.
But, as an expert, you also know the persistent frustrations with current approaches. To name just two:
- Testing simply takes time.
- Our best instincts are often wrong.
This is the methodology I have developed over 12 years in the industry, working with more than 300 organizations. It is also the methodology behind a near-perfect test record (6 failed tests in 5.5 years), even if most people don't believe that stat.