
Running conversion optimization experiments the right way with Chad Sanderson

Learn how to run conversion optimization experiments the right way. In this video, I sit down with Chad Sanderson, Program Manager on the Microsoft Experimentation Platform team, to discuss statistical testing, calculating sample size, and selecting the right tools to help you run statistically significant conversion optimization tests.

[Video]
Peep: Hey guys, I'm sitting here with Chad Sanderson from the Microsoft Experimentation Platform, and we were just chatting about statistics and how people get even simple things wrong, like calculating sample sizes. Can you explain?

Chad: So one of the most common errors that I see is that people care about a metric like revenue per visitor or average order value, but they base their experiment sample size off of conversion rate. That's normally because they find an online calculator that doesn't compute these continuous types of metrics. The problem that most people don't realize is that sample size depends on the metric's variance. If the variance is really high, if there are really big swings between the lowest point of your data and the highest point, the sample size is going to be way higher. So if you base it on a conversion rate metric, where the sample size is lower, you might be underpowering your experiment pretty drastically. You run a test, let's say you reach your sample size, you declare B as a winner, and you were measuring both conversion rate improvement and revenue per visitor improvement. But actually you can't look at the RPV; there's not enough sample size there. You weren't even close to being able to see an impact either way.

Peep: So if you want to measure RPV, how would you go about it? Do you calculate sample size differently?

Chad: Yeah, that's right. There are actually some pretty simple calculations to do in order to get those continuous metrics, and you can find them all online. You can just search for continuous metric sample size calculators, or there are also some pretty basic algorithms that'll deliver that. It may take a little bit of legwork, because some things haven't been developed for the marketer yet, but you should still go after it and try to find these calculators or methods anyway, because it's such a big deal.
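To make the variance point concrete, here is a minimal power-analysis sketch in Python, assuming the statsmodels library is available; the baseline conversion rate, the RPV mean and standard deviation, and the target lifts are hypothetical illustration values, not numbers from the interview.

```python
# Per-group sample size for a binary metric (conversion rate) versus a
# continuous metric (revenue per visitor). All inputs are illustrative.
from statsmodels.stats.power import NormalIndPower, TTestIndPower
from statsmodels.stats.proportion import proportion_effectsize

alpha, power = 0.05, 0.80

# Conversion rate: baseline 5.0%, want to detect a lift to 5.5%.
h = proportion_effectsize(0.055, 0.050)  # Cohen's h for two proportions
n_conv = NormalIndPower().solve_power(effect_size=h, alpha=alpha, power=power)

# RPV: mean $3.00 with a $20 standard deviation (high variance),
# want to detect a +$0.30 (10%) lift.
d = 0.30 / 20.0  # Cohen's d = detectable difference / standard deviation
n_rpv = TTestIndPower().solve_power(effect_size=d, alpha=alpha, power=power)

print(f"per-group n for conversion rate: {n_conv:,.0f}")
print(f"per-group n for RPV:             {n_rpv:,.0f}")
```

With these made-up inputs, the RPV test needs roughly twice the visitors per group that the conversion rate test does, which is exactly the underpowering trap Chad describes: a sample sized for conversion rate is nowhere near enough to see an RPV effect.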
Peep: But what about calculating sample sizes for other types of tests, like testing my Facebook ads or doing email split testing?

Chad: Yeah, I think that's a pretty big problem too, or at least there are a lot of issues with it. One thing that some email providers say is that they provide AB testing capabilities, but the reality is that you can't have a true AB test unless you're performing some statistical test, and the majority of these email providers are actually not performing statistical tests. They are simply randomizing recipients into one group or another and then telling you the average, and that's not really anything more than a comparison. The other issue with email testing, I think, is that there are so many variables that are unknown. For example, let's say that I sent out subject line B and it was 10 percent better, or at least that's what I saw on paper. Well, what if I had sent it out on a different day? Would it have still been ten percent better? What if I was actually tracking a different metric? There are a lot of variables in that equation. So... I don't know that it's better.

Peep: Are you saying that it's actually probably not better? Because usually when they do split testing it's like: I send out emails to, say, 10 percent of my email list and find that subject line B is better, 10 percent better. So what would the value then be, like...

Chad: For example, most emails just go out all at once over a single day, and that may not give you a perfect answer. Are we able to extrapolate from that that this was a winner, and even if we do extrapolate from it, what value does that have for the next email?

Peep: Have you seen a tool out there that you could use to calculate stats for an email AB test, something that somehow rolls it up and makes it more accountable?

Chad: Yeah, I mean the stats are basically the same regardless. If you are calculating conversion rate, you can still do it with your traditional online calculators; if you're doing a continuous metric, then you may have to use another method like the one I was describing earlier. But I think the biggest thing around email testing is that people should stop thinking about individual tests, because I'm kind of iffy on the value that adds, and instead start thinking about bigger factors over periods of time. For example, we ran 50 email experiments, and in the vast majority of those it's the emails with the longer subject lines that won. That's an actionable learning that you can then apply to your business.

Peep: Gotcha.

Chad: But sometimes it's not quite enough just to say, "Well, we're doing an AB test," and take it at the word of a system that isn't even performing any statistics that this thing is truly a winner. It's very easy to just look at two base numbers and compare them. What if I actually performed statistics on it and saw that maybe I don't have the sample size to see a true 10 percent difference one way or the other? There are a lot of things that could be more robust around email testing, and email providers, I think, could do a lot better job of fixing those things. Beyond the actual AB testing solutions, I personally don't trust many providers to deliver that, because it's pretty explicit: anytime you're doing any type of true AB testing, there always has to be some type of statistical test going on. So if they're not performing one, you need to do the legwork to actually figure out how to run this data yourself and question: number one, am I calculating the right metrics? Am I performing the right stats? Am I looking at this for a long enough time? There are a lot of things that can go wrong.

Peep: When you do split testing for ads, Google Ads, Facebook Ads, and so on, does the same thing apply? You know, like impressions versus clicks, and calculating sample sizes?

Chad: Yep, exactly the same.

Peep: Because so many people run some sort of test on ads, which makes sense, but actually they don't... I haven't heard that people are doing, let's say, upfront sample size calculations to figure out when the test is done. You know, they're like, "Let's test it. Well, yeah, we have a winner or a loser."

Chad: Yeah, exactly. One of the issues is that a lot of people still haven't embraced some of the more scientific learnings that marketers or CROs have from AB testing, which is a very rigorous science, so that doesn't exist for most people yet. It's a slow learning curve, and getting into statistics is pretty hard. But people will get there.

Peep: And there we go... If you want more interviews like this, subscribe to my channel.
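As a concrete version of "what if I actually performed statistics on this," here is a minimal sketch of a two-proportion z-test for an email split test, again assuming statsmodels; the send and click counts are hypothetical.

```python
# Test whether subject line B's click rate is genuinely different from A's,
# rather than eyeballing two averages. Counts below are made up.
from statsmodels.stats.proportion import (confint_proportions_2indep,
                                          proportions_ztest)

clicks = [540, 600]        # clicks for subject line A, subject line B
sends = [10_000, 10_000]   # emails sent per variant

z, p_value = proportions_ztest(count=clicks, nobs=sends)
low, high = confint_proportions_2indep(clicks[1], sends[1],
                                       clicks[0], sends[0])  # diff = B - A

print(f"z = {z:.2f}, p = {p_value:.3f}")
print(f"95% CI for click-rate difference (B - A): [{low:.4f}, {high:.4f}]")
if p_value >= 0.05:
    print("Not significant at the 5% level: 'B is better' may just be noise.")
```

With these counts, B looks about 11 percent better on paper, yet the test cannot rule out noise at the usual 5 percent level, which is the trap the interview warns about: without a statistical test, you may not have the sample size to see a true difference one way or the other.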

Do you like videos like this? Please subscribe to my channel.

About The Pe:p Show

The Pe:p Show is a series of short and to the point videos. Topics that I’m covering go way past conversion stuff – it’s about optimizing all the things: your life, health, relationships, work, and business. I will also be interviewing industry peers on various topics like digital marketing, growth hacking, and more.


What’s on my mind

Hi, I'm Peep Laja—founder of CXL. I'm a former champion of optimization and experimentation turned business builder.

I do a lot of thinking, reading, and writing around business, strategy, and optimization. I send a weekly newsletter with what's on my mind on this stuff.

Subscribe
