Learn how to run conversion optimization experiments the right way. In this video, I sit down with Chad Sanderson, Program Manager on the Microsoft Experimentation Platform team, to discuss statistical testing, calculating sample size, and selecting the right tools to help you run statistically significant conversion optimization tests.
Hey guys, I'm sitting here with Chad Sanderson from the Microsoft Experimentation Platform, and we were just chatting about statistics and how people get even simple things wrong, like calculating sample sizes. Can you explain?

So one of the most common errors that I see is people care about a metric like revenue per visitor or average order value, but they base their experiment sample size off of conversion rate, and that's normally because they find an online calculator that doesn't compute these continuous types of metrics. The problem that most people don't realize is that sample size depends on the metric.
So if you build a conversion rate metric, where the sample size is lower, you might be underpowering your experiment pretty badly. If the variance is really high, if there are really big swings between the lowest point of your data and the highest point, the sample size is going to be way higher. So you run a test, and let's say you were measuring both conversion rate improvement and revenue per visitor improvement. You reach a sample size and you declare B as a winner, but actually the RPV you can't look at: there's not enough sample size there. You weren't even close to being able to see an impact either.

So if you want to measure RPV, then how would you go about it? How do you calculate the sample size?

Well, there's actually some pretty simple calculations to do in order to get those continuous metrics. You can find them all: you can just search for continuous metric sample size calculators, or there are also just pretty basic algorithms that'll do it as well.

Yeah, that's right.
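For readers who want to try the "pretty basic algorithms" Chad mentions, here is a minimal Python sketch of the standard two-sample size formula. It shows why a continuous metric like RPV typically needs far more visitors than a conversion rate: the variance term drives the answer. The baseline conversion rate, the RPV standard deviation, and the detectable effects below are made-up assumptions, not numbers from the interview.

```python
import math
from statistics import NormalDist


def sample_size_per_group(variance, mde, alpha=0.05, power=0.8):
    """Visitors per variant to detect an absolute difference `mde`
    for a metric with the given variance (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * variance / mde ** 2)


# Binary metric (conversion rate): variance is p * (1 - p).
p = 0.05  # assumed 5% baseline conversion rate
n_conv = sample_size_per_group(p * (1 - p), mde=0.01)  # detect +1 point

# Continuous metric (revenue per visitor): the variance has to be
# estimated from historical data, and it is usually large relative
# to the effect you care about, so n comes out much higher.
rpv_std = 12.0  # assumed std dev of per-visitor revenue, in dollars
n_rpv = sample_size_per_group(rpv_std ** 2, mde=0.25)  # detect +$0.25

print(n_conv, n_rpv)
```

Plugging in a different RPV standard deviation (your own data's) is the "legwork" part; the formula itself is the same one the online calculators use.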
But what about calculating sample sizes for other types of tests, like testing my Facebook ads or doing email split testing?

Yeah, so I think that's kind of a pretty big problem too, or at least there are a lot of issues. So one thing that some email providers say is that they provide AB testing capabilities, but the reality is you can't have a true AB test unless you're performing some statistical tests, and the majority of these email providers are actually not performing statistical tests. They are simply randomising visitors into one group or another and then telling you the average, and that's not really anything; that's just doing a comparison.

And the other issue with email testing, I think, is that there are so many variables. So, for example, let's say that I sent out a subject line B and it was 10 percent better, or at least that's what I saw. Well, what if I had sent it out on a different day? Would it have still been ten percent better? What if I was actually tracking a different metric? I think there are a lot of variables in that equation that somehow roll up and make it so I don't know that it's better.

Are you saying that it's actually probably not better, or...? So what would the value then be? Because usually when they do split testing, it's like: I send out emails to, say, 10 percent of my email list and find that subject line B is better.

For example, most emails just go out all at one time, over a single day. The question is, are we able to extrapolate from that that this was a winner? And even if we do extrapolate from that, what value does that have for the next one? It may not give you a perfect answer.

I think the biggest thing around email testing is that people should stop thinking about individual tests, because I'm kind of iffy on the value that that adds, and instead start thinking around bigger factors over periods of time. For example, we ran 50 e-mail experiments, and in the vast majority of those it's the e-mails with the longer subject lines that won. That's an actionable learning that you can then apply to your business.

Have you seen a tool out there that you could use to calculate stats for an e-mail test?

Yeah, I mean, the stats are basically the same regardless. So if you are calculating conversion rate, you can still go to your traditional online calculators; if you're doing a continuous metric, then you have to maybe use another method, like I was describing earlier. It may take a little bit of legwork, because some things haven't been developed for the marketer yet, but you should still go after it and try to find these calculators or methods anyway, because it's such a big deal.

There are a lot of things that could be more robust around e-mail testing, and e-mail providers, I think, could do a lot better job in fixing those issues. Anytime you're doing any type of true AB testing, there always has to be some type of statistical test going on. I personally don't trust many, besides the actual AB testing solutions, to deliver that, because it's pretty rigorous.

But sometimes it's not quite enough just to say, "Well, we're doing an AB test," and take it at the word of a system that's not even performing any statistics. So if they're not, you need to do the legwork to actually figure out how to run this data yourself and maybe question: OK, number one, am I calculating the right metrics? Am I performing the right stats? Am I looking at this for a long enough time? There are a lot of things that can go wrong. I think it's very easy to just look at two base numbers and say this thing is truly a winner. But what if I actually performed statistics on this and saw that, well, maybe I don't have the sample size to see a true 10 percent difference one way or the other?
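As a sketch of what "doing the legwork yourself" could look like for an email split, here is a simple two-proportion z-test in Python, rather than taking the provider's word for a winner. The send sizes and open counts below are invented for illustration.

```python
import math
from statistics import NormalDist


def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p-value) comparing two observed rates,
    e.g. email opens for subject line A vs subject line B."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical split: each subject line sent to 5,000 subscribers,
# A got 1,000 opens (20%), B got 1,100 opens (22%).
z, p_value = two_proportion_z_test(1000, 5000, 1100, 5000)
print(round(z, 2), round(p_value, 4))
```

If the p-value came back large, that would be exactly Chad's point: "10 percent better" on the surface, but not enough sample to call it a true winner.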
When you do split tests for ads, Google Ads, Facebook Ads, et cetera, do the same things apply? You know, like impressions versus clicks, and calculating sample sizes?

Yep, exactly the same.
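Since the same stats apply to ads, an upfront calculation for a CTR split might look like the sketch below: required impressions per variant, and how many days of traffic that implies. The baseline CTR, the lift worth detecting, and the daily impression volume are all assumptions you would replace with your own account's numbers.

```python
import math
from statistics import NormalDist


def ctr_test_duration(base_ctr, mde, impressions_per_day,
                      alpha=0.05, power=0.8):
    """Impressions per variant and days needed to detect an absolute
    CTR lift of `mde`, splitting daily impressions across two ads."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = base_ctr * (1 - base_ctr)  # clicks are a binary metric
    n_per_variant = math.ceil(2 * z ** 2 * variance / mde ** 2)
    days = math.ceil(2 * n_per_variant / impressions_per_day)
    return n_per_variant, days


# Hypothetical ad: 2% baseline CTR, want to detect a lift to 2.5%,
# with 10,000 impressions per day split across the two variants.
n_per_variant, days = ctr_test_duration(0.02, 0.005, 10_000)
print(n_per_variant, days)
```

Running this kind of number before launching is the "upfront sample size calculation" discussed here: you know in advance when the test will be done, instead of peeking until a winner appears.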
You know, one of the issues is that a lot of people, I think, still haven't embraced some of the more scientific learnings that marketers or CROs have from AB testing, which is a very rigorous science, so that doesn't exist for most people yet. Because so many people run some sort of test on ads, which makes sense, but actually they're like, "let's test"... "well yeah, we have a winner." I haven't heard that people are doing, let's say, upfront sample size calculations to figure out when the test is done. It's a slow learning curve, and getting into statistics is tough, but you know, people will get there.

And, there we go... If you want more interviews like this, subscribe to my channel.
Do you like videos like this? Please subscribe to my channel.
The Pe:p Show is a series of short and to the point videos. Topics that I’m covering go way past conversion stuff – it’s about optimizing all the things: your life, health, relationships, work, and business. I will also be interviewing industry peers on various topics like digital marketing, growth hacking, and more.
Peep Laja is the founder of CXL. He's a renowned conversion optimization champion and was nominated as the most influential CRO expert in the world.
After setting up and running Speero (previously CXL Agency) for five years, he started CXL Institute, where data-driven marketers get trained.
Over the last 20 years, Peep has worked in web development, marketing consulting, B2B sales, SEO, PPC, and SaaS.
Running conversion optimization experiments the right way with Chad Sanderson