advanced - ConversionXL Archives - Page 2 of 3


How to Make More Money With Bayesian A/B Test Evaluation

The traditional (and most commonly used) approach to analyzing A/B tests is the t-test, a method from frequentist statistics.

While this method is scientifically valid, it has a major drawback: if you only implement significant results, you will leave a lot of money on the table.
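The article's Bayesian math isn't reproduced here, but the core computation can be sketched with a Beta-Binomial model: instead of a p-value, you get the probability that the variant beats the control, plus the expected loss if you ship it anyway. All numbers, the Beta(1, 1) prior, and the Monte Carlo setup below are illustrative assumptions, not figures from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical test results (not from the article)
visitors_a, conversions_a = 4000, 200   # control
visitors_b, conversions_b = 4000, 230   # variant

# Beta(1, 1) prior updated with the observed conversions gives the posterior
post_a = rng.beta(1 + conversions_a, 1 + visitors_a - conversions_a, 100_000)
post_b = rng.beta(1 + conversions_b, 1 + visitors_b - conversions_b, 100_000)

# Probability the variant's true conversion rate exceeds the control's
prob_b_beats_a = (post_b > post_a).mean()

# Expected loss: how much conversion rate you give up if B is actually worse
expected_loss = np.maximum(post_a - post_b, 0).mean()

print(f"P(B > A) = {prob_b_beats_a:.3f}")
print(f"Expected loss from choosing B = {expected_loss:.5f}")
```

Even when P(B > A) falls short of the 95% a frequentist test would demand, a tiny expected loss can justify shipping the variant — which is the "money left on the table" argument above.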

Keep reading

Intelligent Agents: An A.I. View of Optimization

As a digital analyst or marketer, you know the importance of analytical decision making.

Go to any industry conference, blog, or meetup – or just read the popular press – and you will see topics like machine learning, artificial intelligence, and predictive analytics everywhere.

Because many of us don’t come from a technical/statistical background, this can be both a little confusing and intimidating.

Keep reading

10 Statistics Traps in A/B Testing: The Ultimate Guide for Optimizers

Even A/B tests with well-conceived test concepts can lead to non-significant results and erroneous interpretations. And this can happen in every phase of testing if incorrect statistical approaches are used.
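One classic trap of this kind is "peeking": checking significance every day and stopping the moment the test looks significant. A minimal simulation shows how badly this inflates false positives — it assumes an A/A test (no real difference), a simple two-proportion z-test, and made-up traffic numbers, none of which come from the guide itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def aa_test_with_peeking(n_per_day=200, days=20, p=0.05, z_crit=1.96):
    """Simulate an A/A test (no true difference) with a daily significance peek.

    Returns True if the test ever looks 'significant' at any interim check.
    """
    conv_a = conv_b = n_a = n_b = 0
    for _ in range(days):
        conv_a += rng.binomial(n_per_day, p)
        conv_b += rng.binomial(n_per_day, p)
        n_a += n_per_day
        n_b += n_per_day
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        if se > 0 and abs(conv_b / n_b - conv_a / n_a) / se > z_crit:
            return True  # would have been (wrongly) declared a winner
    return False

runs = 2000
false_positives = sum(aa_test_with_peeking() for _ in range(runs))
print(f"False positive rate with daily peeking: {false_positives / runs:.1%}")
```

With twenty daily peeks, the realized false positive rate lands far above the nominal 5% — the fix is to fix the sample size in advance or use a sequential testing procedure.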

Keep reading

Running Before You Walk: What You Need to Know About Personalization

Web personalization is all the rage, but are you trying to run before you’ve learned how to walk?

Keep reading

The Hard Life of an Optimizer - Yuan Wright [Video]

Here’s another presentation from CXL Live 2015 (sign up for the 2016 list to get tickets at pre-release prices).

While optimization is fun, it’s also really hard. We’re asking a lot of questions.

Why do users do what they do? Is X actually influencing Y, or is it mere correlation? The test bombed – but why? Yuan Wright, Director of Analytics at Electronic Arts, leads you through an open discussion about the challenges we all face – optimizer to optimizer.

Keep reading

Your Test is Only as Good as Your Hypothesis [Video]

CXL Live 2016 is coming up next March (get on the list to get tickets at pre-release prices). We’re going to publish video recordings of the previous event, and here’s the first one.

You run A/B tests – some win, some don’t. The likelihood of the tests actually having a positive impact largely depends on whether you’re testing the right stuff. Testing stupid stuff that makes no difference is by far the biggest reason for tests that end in “no difference”.

Keep reading

Iterative A/B Testing - A Must If You Lack a Crystal Ball

You have a hypothesis and run a test. Result – no difference (or even a drop in results). What should you do now? Test a different hypothesis?

Keep reading

Can You Run Multiple A/B Tests at the Same Time?

You want to speed up your testing efforts, and run more tests. So now the question is – can you run more than one A/B test at the same time on your site?

Will this increase the velocity of your testing program (and thus help you grow faster), or will it pollute the data since multiple separate tests could potentially affect each other’s outcomes? The answer is ‘yes’ to both, but what you should do about it depends.
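One common way to keep concurrent tests from polluting each other's data is to randomize users independently per test, so the interaction effects average out across variants. A minimal sketch of deterministic per-test bucketing — the hashing scheme and names are illustrative assumptions, not a method prescribed by the article:

```python
import hashlib

def assign(user_id: str, test_name: str, n_variants: int = 2) -> int:
    """Deterministically assign a user to a variant, independently per test.

    Hashing user_id together with the test name makes the assignment for one
    test statistically independent of the assignment for any other test, so
    each concurrent test sees the other tests' variants in equal proportion.
    """
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants

# The same user can land in different buckets for different concurrent tests,
# but always gets the same bucket for the same test.
print(assign("user-123", "headline_test"), assign("user-123", "checkout_test"))
```

The alternative – splitting traffic so each user sees only one test – keeps the data cleanest but slashes testing velocity, which is the trade-off the article weighs.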

Keep reading

How to Come Up with More Winning Tests Using Data [ResearchXL model]

While testing is a critical part of conversion optimization – making sure we actually made things better, and by how much – it’s also just the tip of the iceberg in the full CRO picture. Testing tools are affordable (even free) and increasingly easy to use – so pretty much any idiot can set up and run A/B tests. This is not where the difficulty lies. The hard part is testing the right things, with the right treatments.

The success of your testing program comes down to two numbers: how many tests you run (volume) and the percentage of tests that produce a win (success rate). Together they determine your execution velocity – the number of winning tests per period. Add average sample size and impact per successful experiment, and you get an idea of total business impact.

So in a nutshell, this is how you succeed:

  1. Run as many tests as possible at all times (every day without a test running on a page/layout is regret by default),
  2. Win as many tests as possible,
  3. Have as high impact (uplift) per successful test as possible.

Executing point #1 is obvious, but how do you do well on points #2 and #3? That comes down to the most important thing about conversion optimization – discovering what matters.
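With made-up numbers (none of these figures come from the article), the arithmetic behind those three levers looks like this:

```python
# Hypothetical testing program (illustrative numbers only)
tests_per_year = 50      # lever 1: volume
win_rate = 0.20          # lever 2: share of tests that produce a winner
avg_uplift = 0.05        # lever 3: average relative lift per winning test

winners = tests_per_year * win_rate

# Winning tests compound: each 5% lift multiplies the conversion rate
compound_growth = (1 + avg_uplift) ** winners - 1

print(f"{winners:.0f} winners/year, ~{compound_growth:.0%} compound conversion lift")
```

Doubling any one lever roughly doubles the number of winners per year, and because the lifts compound multiplicatively, the effect on annual growth is even larger.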

Keep reading

Lies Your Optimization Guru Told You

Before you get out your pitchforks, I want to stress that this article does not represent Peep’s views.

The easiest lies to believe are the ones we want to be true, and nothing speaks to us more than validation of the work we are doing or what we already believe. Because of this, we become naturally defensive when someone challenges that worldview.

The “truth” is that there is no single state of truth, and that all actions, disciplines, and behaviors can and should be evaluated for growth opportunities. It doesn’t matter if we are designers, optimizers, product managers, marketers, executives, or engineers – we all come from our own disciplines and will naturally defend them to the death if we feel threatened, even in the face of overwhelming evidence.

Keep reading

How to Achieve More With Less in Content Marketing

Find out how to reach more people and get the most value from your content marketing efforts with content recycling.

Join Paul Boag, UX Marketing Specialist @ Boagworks, live Wed. May 27 @ 11 AM CT.

Register here
