You’ve worked all quarter on a new content marketing series and conversions are ticking upwards.
Do you attribute these conversions exclusively to your content? What about the customers who clicked through to your article from your social media page—do you attribute those conversions to socials or to the article (or both)?
When should you use bandit tests, and when is A/B/n testing best?
Though there are strong proponents (and opponents) of bandit testing, there are certain use cases where it may be optimal. The question is: when?
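To make the contrast concrete, here is a minimal sketch of the simplest bandit strategy, epsilon-greedy: instead of splitting traffic evenly for a fixed period (as in A/B/n testing), it shifts most traffic toward the best-performing variant while still exploring. The variant names and tallies below are hypothetical.

```python
import random

def epsilon_greedy(variants, epsilon=0.1):
    """Pick a variant: explore uniformly at random with probability
    epsilon, otherwise exploit the best observed conversion rate."""
    if random.random() < epsilon:
        return random.choice(list(variants))
    # Exploit: highest observed conversion rate so far.
    return max(
        variants,
        key=lambda v: variants[v]["conversions"] / max(variants[v]["visitors"], 1),
    )

# Hypothetical running tallies for two variants.
stats = {
    "A": {"visitors": 1000, "conversions": 30},
    "B": {"visitors": 1000, "conversions": 45},
}
choice = epsilon_greedy(stats, epsilon=0.1)  # mostly "B", occasionally explores
```

With `epsilon=0.1`, roughly 90% of visitors see the current leader, which is why bandits "earn while they learn" but take longer to give you a clean statistical read on the losing variants.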
One thing many people forget when dealing with data: outliers.
Even in a controlled online A/B test, your data set may be skewed by extreme values. How do you deal with them? Do you trim them out, or is there another way?
There’s a philosophical statistics debate in the A/B testing world: Bayesian vs. Frequentist.
This is not a new debate. Thomas Bayes’ “An Essay towards solving a Problem in the Doctrine of Chances” was published in 1763, and it’s been an academic argument ever since.
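In testing terms, the practical difference is the question each approach answers. A frequentist test asks "how surprising is this data if there were no difference?", while a Bayesian analysis asks "what is the probability B beats A?" The latter can be estimated with a short Monte Carlo simulation under Beta(1, 1) priors; the conversion counts below are hypothetical.

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(rate_B > rate_A) under independent
    Beta(1, 1) priors on each variant's conversion rate."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        > rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        for _ in range(draws)
    )
    return wins / draws

# Hypothetical results: 30/1000 conversions for A vs. 45/1000 for B.
p = prob_b_beats_a(30, 1000, 45, 1000)  # a probability B is better, not a p-value
```

The output is a direct statement about the variants ("B is probably better"), which many marketers find easier to act on than a p-value, though it comes at the cost of choosing a prior.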
The title may seem a bit controversial, but a fairly common question I get from large (and small) companies is: “Should I run A/A tests to check whether my experiment is working?”
The answer might surprise you.
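Part of the answer comes down to false positives: even with identical variants, a test run at the 5% significance level will declare a "winner" about 5% of the time. A quick simulation sketch (sample sizes and the baseline rate are hypothetical, using a standard two-proportion z-test):

```python
import math
import random

def fake_aa_test(n=2000, rate=0.05, rng=None):
    """Simulate one A/A test: both arms share the same true conversion
    rate, then run a two-proportion z-test on the observed counts."""
    rng = rng or random
    conv_a = sum(rng.random() < rate for _ in range(n))
    conv_b = sum(rng.random() < rate for _ in range(n))
    p_pool = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n)
    z = abs(conv_a / n - conv_b / n) / se if se else 0.0
    return z > 1.96  # "significant" at the 5% level, despite no real difference

rng = random.Random(0)
false_positives = sum(fake_aa_test(rng=rng) for _ in range(500))
# Roughly 5% of these identical A/A tests will look "significant" by chance.
```

So an A/A test that "wins" doesn't necessarily mean your tool is broken, and an A/A test that doesn't win doesn't prove the tool works; that nuance is what makes the question interesting.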
A very common scenario: a business runs tens of A/B tests over the course of a year, and many of them “win.” Some tests show a 25% uplift in revenue, or even higher.
Yet when you roll out the change, revenue doesn’t increase by 25%. And 12 months after running all those tests, the conversion rate is still pretty much the same. How come?
Even if your A/B tests are well planned and strategized, they can still produce non-significant results and erroneous interpretations. You’re especially prone to errors if you use incorrect statistical approaches.
In this post we’ll illustrate the 10 most important statistical traps to be aware of, and more importantly, how to avoid them.
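One of the most common traps is running an underpowered test. A rough power-calculation sketch shows why small lifts demand serious traffic (the helper below uses the normal approximation for a two-sided two-proportion z-test; the baseline rate and lift are hypothetical, and the hardcoded z-scores assume alpha = 0.05 and 80% power):

```python
import math

def sample_size_per_variant(base_rate, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative
    lift with a two-sided two-proportion z-test (normal approximation).
    Defaults: z_alpha for alpha = 0.05, z_beta for 80% power."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    pbar = (p1 + p2) / 2
    n = (
        (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        / (p2 - p1) ** 2
    )
    return math.ceil(n)

# Hypothetical: 3% baseline conversion rate, 10% relative lift to detect.
n = sample_size_per_variant(0.03, 0.10)  # tens of thousands of visitors per arm
```

Stopping a test far short of a number like this is a recipe for both missed real effects and inflated false "wins."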
As a marketer and optimizer, it’s only natural to want to speed up your testing efforts. So the question is: can you run more than one A/B test at the same time on your site?
Let’s look into the “why you shouldn’t” and “why you should” run multiple tests at once.
Google Analytics helps us identify conversion uplift opportunities. Traffic is precious, and we don’t want to waste it on tests that don’t result in learning or uplifts.
That’s why we want good data for:
Which pages have uplift opportunities;
Specific page issues.
Your website is leaking money. Everybody’s is.
The first step toward plugging the leaks is identifying where the leaks are. Which funnel steps, which layers of your site, which specific pages are leaking money? Google Analytics can provide answers.