A/B testing is highly useful, no question here. But a lot of businesses should not be doing it. They’re not ready yet.
Roughly speaking, if you have fewer than 1,000 transactions (purchases, signups, leads, etc.) per month – you're better off putting your effort elsewhere.
A lot of microbusinesses, startups and small businesses just don’t have that transaction volume (yet).
You might be able to run A/B tests with just 500 transactions per month (read: how many conversions do I need?), but then each experiment needs to produce a much bigger impact before the result is trustworthy.
If it’s like 240 vs 260 conversions (~8% uplift), you actually don’t have enough evidence to know one way or another. I highly recommend reading up on A/B testing statistics to know why that is.
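To make the 240 vs. 260 case concrete, here's a minimal two-proportion z-test sketch in Python. The 10,000 visitors per variation is a hypothetical assumption (the article doesn't specify traffic), but the conclusion holds across realistic volumes:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# 240 vs. 260 conversions, assuming 10,000 visitors per variation
p = two_proportion_p_value(240, 10_000, 260, 10_000)
print(f"p-value: {p:.2f}")  # well above 0.05 – no real evidence either way
```

The p-value lands far above any conventional significance threshold, which is exactly the "you don't actually know" problem.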
You also have to keep in mind the cost of optimization. I don’t just mean the cost of tools you need for a testing program (which can be free), but the cost of the time spent:
- figuring out what to test – using a data-driven approach instead of guessing randomly (which would render the whole testing program useless),
- designing the treatment (provided that you’re testing non-trivial stuff),
- doing QA on the tests.
Now let’s say you get that 8% lift, and it’s a valid winner. You had 125 leads per week, and now you have 135 / week. Is the ROI there? Probably not.
So when you calculate your needed sample sizes before you run the test, do the math on the ROI as well. What would be the value of X% lift in actual dollars?
What about microconversions?
Yeah, you could be measuring clicks on buttons and the like, but there is no causal relationship between higher microconversions and higher revenue. You can entice more people to click by overpromising or setting false expectations. Or you just shift the problem onto the next page – where they have to pay, but they don't.
So if you’re spending all that energy running tests on microconversions – and don’t actually know for sure what the impact on the bottom line is – you’re better off doing other stuff. Like making your product better. Figuring out your unit economics, so you could acquire customers better. Building an audience.
A/B testing does not equal optimization – you should still optimize
Testing is a great way to validate hypotheses. But even if you don't have the traffic for a meaningful amount of quantitative data, you can still do heuristic analysis, usability evaluation, and all the qualitative stuff:
- talk to your customers,
- run user tests,
- survey your clients,
- poll your website visitors.
(Here’s more on how to optimize low traffic websites).
Time is a precious resource. It might be better spent elsewhere than A/B testing when you’re still small – because math.
I agree that the lift on a micro conversion doesn’t necessitate a lift in overall revenue. That said, I think your anti-micro conversion argument is a bit of a slippery slope. It completely ignores the concept of increasing volume on informative pages to your key conversion pages.
There is a hierarchy of goals on every page that isn't directly associated with the conversion page, e.g., the on-page goal, the campaign goal, and the business goal. On a cart page your goals are all in line; however, on a product page the on-page goal is not directly connected to the purchase.
There's a reason we invest in things like SEO and paid traffic: to increase volume to our key conversion pages. Similarly, that is why evaluating microconversions as a volume producer is a valid practice.
Say your cart conversion rate is 2.5% and you run a test to increase volume to that page. If your 2.5% cart conversion rate remains consistent then the micro-conversion on the previous page could have a positive impact on the bottom line.
Just my 2 cents…
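The commenter's funnel argument can be sketched with some hypothetical numbers (all figures below are assumed for illustration, not from the post):

```python
def weekly_orders(product_page_visitors, click_through_to_cart, cart_conversion_rate):
    """Orders = traffic x microconversion rate x cart conversion rate."""
    return product_page_visitors * click_through_to_cart * cart_conversion_rate

before = weekly_orders(10_000, 0.20, 0.025)  # 2,000 reach the cart
after = weekly_orders(10_000, 0.24, 0.025)   # microconversion lift, same 2.5% cart CR
print(before, after)
```

If the downstream cart conversion rate really does hold steady, more cart visits mean more orders – the open question, as the reply below notes, is whether it holds steady.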
I get what you're saying, and I kind of agree – but it's not a given that the number of transactions will go up. So again, it comes down to the cost of optimization: how much of people's time is spent running those tests versus the benefit in dollars.
Kinda curious about your sample size calculations – 1,000 is a huge sample size. Based on power calculations, you could get away with much less.
For confirmatory purposes, you should be repeating your tests anyway. I would much rather run 30 tests with a sample size of 30 than one huge test with a sample of 1,000. I will learn so much more about the sub-groups and probably generate many more hypotheses and theories.
And if you go the Design of Experiments route, you'd get away with an even smaller sample size, as you have "hidden repeats".
Not sure which universe your math is from. Sample size of 30 visitors? Ummmm…
Thanks for the controversial post. I highly recommend Evan's A/B sample size calculator. I've built the equal-sample-size curves, so as you can see, there are cases where an A/B test could run well.
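For reference, the standard two-proportion sample-size formula that calculators like Evan's implement can be sketched like this – the 2.5% baseline and 8% relative lift below are hypothetical numbers chosen to match the article's uplift example:

```python
from math import sqrt, ceil

def sample_size_per_variation(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variation at 95% significance and 80% power."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting an 8% relative lift on a 2.5% baseline conversion rate
n = sample_size_per_variation(0.025, 0.027)
print(n)  # roughly 99,000 visitors per variation
```

With a high baseline and a 50% lift the required n collapses to a few hundred, which is the commenter's scenario; at typical small-business conversion rates and realistic lifts, it doesn't.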
Great chart. Yes – if the baseline is already really high and you get something like a 50% lift on top of that, you can get away with a smaller sample size. However, that is not the reality for the overwhelming majority.
Also, experience shows that even if you have "enough sample size" according to the calculator – say, 20 vs. 40 conversions – it's still too early to call the test. If you ran the test longer, you'd often see it flatten out and the high uplift disappear. After all, calculating the sample size is just an exercise in algebra, not actual reality. I don't trust any A/B test that has less than a couple of hundred conversions per variation EVEN IF the sample size is there.
I totally agree with this article, man, and I thought I was all alone on this subject, haha. Thanks for giving me additional ground, because it will definitely help me make the case to my clients.
Peep, great points here man.
I'm no expert, and I do worry about A/B split-testing, but there really is not enough data to give a crap about this right now – I can spend my time better on something that actually moves ROI, like creating another digital product and copywriting.
Thanks for the great info man.
Well said Peep, thanks for that!
The truth can be hard to hear, but there are more important things for small businesses and low-traffic sites to do and consider than A/B testing.
“…There is no causal relationship between higher microconversions and higher revenue.”
That's A/B testing very well explained for small business owners. I agree with you that time is worth more than A/B testing for a business with fewer than 1,000 transactions per month. I think it helps a lot for startup companies. But I'd like to know more about microconversions. I bet your tips would really work.
Thanks! Our post on micro-conversions: https://cxl.com/should-you-optimize-for-micro-conversions/