One of the most challenging parts of producing high-quality content is finding and sourcing accurate statistics and research. You’ll often go down the sourcing rabbit hole only to discover that a statistic is from 2012 or that the study’s sample size consisted of just a few people—and that’s only after you make an effort to dig deeper.
With many outdated and misleading statistics crowding the first page of Google, how do you know which stats are legitimate? How can you use research to strengthen your content rather than regurgitating the same old stats?
I am the Web Experience Manager for Cisco in EMEAR, and I’ve been working on increasing our capabilities as a business when it comes to conversion optimization for the past three and a half years.
Here’s our story of bringing experimentation into a $51 billion company with 71,000 employees spread across 96 locations worldwide.
Over the last several years, email has been pronounced dead half a dozen times, if not more. The truth is that, even today, email is very much alive and, for most optimizers, far from its proverbial deathbed.
How can there be such a divided opinion? Segmentation and personalization are the answer.
Optimizers who take advantage of it are seeing real ROI. Optimizers who don’t? Well, they’re likely declaring that “the email blast is dead.”
At a certain point, the results from your A/B testing will likely slow down. Even after dozens of small iterations, the needle just won’t move.
Reaching diminishing returns is never fun. But what exactly does that mean? In most cases, you’ve probably hit a local maximum.
So the question is, what do you do now?
It’s easy to get lost down the rabbit hole of metrics for your business. But focusing on the handful of metrics that matter is what will ultimately drive the biggest results.
The trick is figuring out which metrics you should focus on. This article will break down everything you need to know about defining and setting your key performance indicators (KPIs).
You’ve acquired a ton of customers lately for your SaaS company. On the surface, this is awesome. More customers, more money. So you throw your energy into your customer funnel. However, soon after sign-up, they seem to fly right out the back door.
Why is this happening? Why do customers leave—or use the service less—often without saying anything? What you’re experiencing is customer churn.
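Churn is easy to quantify once you name it. As a minimal sketch (the customer counts below are hypothetical, purely for illustration), the standard monthly churn rate is simply customers lost divided by customers at the start of the period:

```python
# Minimal sketch: computing a monthly customer churn rate.
# The figures used here are hypothetical, purely for illustration.

def churn_rate(customers_at_start: int, customers_lost: int) -> float:
    """Fraction of customers lost over the period."""
    return customers_lost / customers_at_start

# e.g. start the month with 2,000 customers and lose 150 of them
rate = churn_rate(2000, 150)
print(f"Monthly churn: {rate:.1%}")  # → Monthly churn: 7.5%
```

A 7.5% monthly churn compounds quickly: at that pace you would lose roughly 60% of a cohort within a year, which is why churn deserves as much attention as acquisition.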
The title may seem a bit controversial, but a fairly common question I get from large (and small) companies is: “Should I run A/A tests to check whether my experiment is working?”
The answer might surprise you.
A very common scenario: a business runs dozens of A/B tests over the course of a year, and many of them “win.” Some tests show a 25% uplift in revenue, or even higher.
Yet when you roll out the change, the revenue doesn’t increase 25%. And 12 months after running all those tests, the conversion rate is still pretty much the same. How come?
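One reason is that some of those “winners” are statistical noise. A quick way to see this is to simulate A/B tests where the variant has no real effect at all and count how often a naive reading of the results still declares a big uplift. This is a minimal sketch under assumed parameters (a 5% baseline conversion rate, 2,000 visitors per arm); the numbers are illustrative, not a model of any real business:

```python
# Sketch: with no true effect, random noise alone can produce
# apparent "25% uplifts" in some fraction of tests.
import random

random.seed(42)

def simulate_null_test(n=2000, base_rate=0.05):
    # One A/B test where A and B share the same true conversion rate.
    conv_a = sum(random.random() < base_rate for _ in range(n))
    conv_b = sum(random.random() < base_rate for _ in range(n))
    return conv_a / n, conv_b / n

winners = []
for _ in range(100):
    rate_a, rate_b = simulate_null_test()
    observed_lift = (rate_b - rate_a) / rate_a if rate_a else 0.0
    if observed_lift > 0.25:        # naive "25% uplift!" call
        winners.append(observed_lift)

print(f"{len(winners)} of 100 no-effect tests showed a >25% apparent uplift")
```

Every test in this simulation has a true uplift of exactly zero, yet sampling noise alone hands some of them an impressive-looking win. Roll those out and revenue stays flat, which matches the experience described above.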
Knowing what your customers want, when they want it, and how they’d like it served up to them is at the core of developing winning test hypotheses.
It’s the why behind the quantitative data that shapes your copy and gives your visitors an easily navigable path to becoming a customer.
Even well-planned, carefully strategized A/B tests can lead to non-significant results and erroneous interpretations once they’re run. You’re especially prone to errors if you use incorrect statistical approaches.
In this post, we’ll illustrate the 10 most important statistical traps to be aware of and, more importantly, how to avoid them.
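One classic trap worth previewing is “peeking”: checking for significance repeatedly as data accumulates and stopping as soon as the test crosses the threshold. The sketch below (my own illustration with assumed parameters, not a reference implementation) runs simulated A/A tests, where any “significant” result is by construction a false positive, and shows how repeated looks inflate the error rate well beyond the nominal 5%:

```python
# Sketch: repeated significance checks ("peeking") inflate the
# false positive rate of an A/A test far above the nominal 5%.
import math
import random

random.seed(0)

def z_stat(conv_a, n_a, conv_b, n_b):
    # Two-proportion z statistic with a pooled rate.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se if se else 0.0

def peeking_test(peeks=10, batch=200, rate=0.05):
    # A/A test: both arms share the same true conversion rate,
    # so any "significant" result is a false positive.
    ca = cb = na = nb = 0
    for _ in range(peeks):
        ca += sum(random.random() < rate for _ in range(batch))
        cb += sum(random.random() < rate for _ in range(batch))
        na += batch
        nb += batch
        if abs(z_stat(ca, na, cb, nb)) > 1.96:  # "significant at 95%"
            return True                          # stopped early, declared a winner
    return False

trials = 500
false_positives = sum(peeking_test() for _ in range(trials))
print(f"False positive rate with 10 peeks: {false_positives / trials:.0%}")
```

With a single, pre-planned look, the false positive rate sits near 5%; giving yourself ten chances to stop early multiplies it several times over. The fix is to commit to a sample size up front, or to use sequential testing methods designed for continuous monitoring.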