Running Before You Walk: What You Need to Know About Personalization

Web personalization is all the rage, but are you trying to run before you’ve learned how to walk?

Keep reading »

When to Run Bandit Tests Instead of A/B/n Tests

When should you use bandit tests, and when is A/B/n testing best?

Though there are some strong proponents (and opponents) of bandit testing, there are certain use cases where bandit testing may be optimal. Question is, when?
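To get a concrete feel for what a bandit does differently from a fixed-split A/B/n test, here is a minimal sketch of an epsilon-greedy bandit run against two simulated variants. The variant names, conversion rates, and traffic volume are all hypothetical, and epsilon-greedy is just one simple bandit strategy among several.

```python
import random

# Hypothetical true conversion rates for two variants (simulation only).
TRUE_RATES = {"A": 0.10, "B": 0.12}
EPSILON = 0.10  # share of traffic kept aside for exploration

counts = {v: 0 for v in TRUE_RATES}
wins = {v: 0 for v in TRUE_RATES}

def observed_rate(variant):
    return wins[variant] / counts[variant] if counts[variant] else 0.0

def choose_variant():
    # Explore with probability EPSILON, otherwise exploit the current leader.
    if random.random() < EPSILON:
        return random.choice(list(TRUE_RATES))
    return max(TRUE_RATES, key=observed_rate)

for _ in range(10_000):
    variant = choose_variant()
    counts[variant] += 1
    if random.random() < TRUE_RATES[variant]:  # simulated conversion
        wins[variant] += 1

for v in TRUE_RATES:
    print(f"{v}: {counts[v]} visitors, observed rate {observed_rate(v):.3f}")
```

Unlike a 50/50 split, most of the traffic drifts toward the better-performing variant while the test is still running, which is the trade-off the article weighs.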

Keep reading »

Bayesian vs Frequentist A/B Testing - What's the Difference?

There’s a philosophical statistics debate in the optimization world: Bayesian vs Frequentist.

This is not a new debate; Thomas Bayes wrote “An Essay towards solving a Problem in the Doctrine of Chances” in 1763, and it’s been an academic argument ever since.
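To make the contrast concrete, here is a small sketch that reads the same hypothetical test results both ways: a frequentist two-proportion z-test and a Bayesian comparison using Beta(1, 1) priors. All the numbers are made up for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical test results (illustration only): visitors and conversions.
n_a, x_a = 5000, 500   # control: 10.0% conversion rate
n_b, x_b = 5000, 560   # variant: 11.2% conversion rate

# Frequentist reading: two-proportion z-test, report a p-value.
p_pool = (x_a + x_b) / (n_a + n_b)
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (x_b / n_b - x_a / n_a) / se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"z = {z:.2f}, two-sided p-value = {p_value:.4f}")

# Bayesian reading: Beta(1, 1) priors, sample the posteriors and report
# the probability that the variant's true rate beats the control's.
rng = np.random.default_rng(42)
post_a = rng.beta(1 + x_a, 1 + n_a - x_a, size=100_000)
post_b = rng.beta(1 + x_b, 1 + n_b - x_b, size=100_000)
print(f"P(variant > control) = {(post_b > post_a).mean():.3f}")
```

Same data, two different questions: "how surprising is this result if there were no difference?" versus "how likely is it that the variant is actually better?"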

Keep reading »

The Hard Life of an Optimizer - Yuan Wright [Video]

Here’s another presentation from CXL Live 2015 (sign up for the 2016 list to get tickets at pre-release prices).

While optimization is fun, it’s also really hard. We’re asking a lot of questions.

Why do users do what they do? Is X actually influencing Y, or is it a mere correlation? The test bombed – but why? Yuan Wright, Director of Analytics at Electronic Arts, leads you through an open discussion about the challenges we all face – optimizer to optimizer.

Keep reading »

Your Test is Only as Good as Your Hypothesis [Video]

CXL Live 2016 is coming up next March (get on the list to get tickets at pre-release prices). We’re going to publish video recordings of the previous event, and here’s the first one.

You run A/B tests – some win, some don’t. The likelihood of the tests actually having a positive impact largely depends on whether you’re testing the right stuff. Testing stupid stuff that makes no difference is by far the biggest reason for tests that end in “no difference”.

Keep reading »

Why You Should Test on Mobile and Desktop Separately

While running A/B tests on all your traffic at once might seem like a good idea (to get a bigger sample size faster), in reality it’s not. You need to target mobile and desktop audiences separately.

Keep reading »
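To see why a blended test can mislead, here is a toy sketch in Python; every number in it is made up for illustration. The desktop lift and the mobile drop partly cancel out in the combined view, which is exactly what testing the devices separately would surface.

```python
# Hypothetical per-device results (all numbers made up for illustration).
# Each entry is (visitors, conversions).
results = {
    "desktop": {"control": (4000, 480), "variant": (4000, 520)},
    "mobile":  {"control": (6000, 360), "variant": (6000, 300)},
}

def rate(visitors, conversions):
    return conversions / visitors

for device, arms in results.items():
    c, v = rate(*arms["control"]), rate(*arms["variant"])
    print(f"{device}: control {c:.1%}, variant {v:.1%}, relative lift {(v - c) / c:+.1%}")

# Blended view: the desktop lift and the mobile drop partly cancel out,
# hiding two very different stories behind one average.
ctrl = sum(a["control"][1] for a in results.values()) / sum(a["control"][0] for a in results.values())
var = sum(a["variant"][1] for a in results.values()) / sum(a["variant"][0] for a in results.values())
print(f"blended: control {ctrl:.1%}, variant {var:.1%}, relative lift {(var - ctrl) / ctrl:+.1%}")
```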

Iterative A/B Testing - A Must If You Lack a Crystal Ball

You have a hypothesis and run a test. Result – no difference (or even a drop in results). What should you do now? Test a different hypothesis?

Keep reading »

The Discipline Based Testing Methodology

This is the methodology that I have developed over 12 years in the industry, working with over 300 organizations. It is also the methodology that has been used to achieve a near-perfect test streak (6 test failures in 5.5 years), even if most others do not believe that stat.

Keep reading »

Can You Run Multiple A/B Tests at the Same Time?

You want to speed up your testing efforts, and run more tests. So now the question is – can you run more than one A/B test at the same time on your site?

Will this increase the velocity of your testing program (and thus help you grow faster), or will it pollute the data, since multiple separate tests could potentially affect each other’s outcomes? The answer is ‘yes’ to both, but what you should do about it depends.

Keep reading »
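One common way teams keep overlapping tests from contaminating each other is to randomize each test independently, for example by salting a hash of the visitor ID with the test name. This is a general sketch of that idea, not necessarily the approach the article recommends; the IDs and test names are hypothetical.

```python
import hashlib

def assign(visitor_id: str, test_name: str, variants=("control", "variant")) -> str:
    """Deterministically assign a visitor to a variant for a given test.

    Salting the hash with the test name makes assignments across tests
    statistically independent, so visitors in one test's variant spread
    evenly across the other test's variants instead of piling up in one
    combination.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

visitor = "visitor-12345"  # hypothetical visitor id
print(assign(visitor, "headline_test"))
print(assign(visitor, "checkout_test"))
```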

How to Come Up with More Winning Tests Using Data [ResearchXL model]

While testing is a critical part of conversion optimization (it’s how we verify that we actually made things better, and by how much), it’s also just the tip of the iceberg in the full CRO picture. Testing tools are affordable (even free) and increasingly easy to use – so pretty much any idiot can set up and run A/B tests. This is not where the difficulty lies. The hard part is testing the right things, and having the right treatment.

The success of your testing program comes down to two things: the number of tests run (volume) and the percentage of tests that provide a win. Together these indicate your execution velocity. Add average sample size and impact per successful experiment, and you get an idea of total business impact.
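As a rough illustration of how those factors combine, here is a back-of-the-envelope sketch; every number in it is hypothetical, and the model is deliberately crude (it assumes each winning test’s uplift applies to all revenue flowing through the tested pages).

```python
# Back-of-the-envelope program math (all numbers are hypothetical).
tests_per_month = 10                 # volume
win_rate = 0.30                      # share of tests that produce a winner
avg_uplift = 0.05                    # average relative lift of a winning test
monthly_revenue_affected = 200_000   # revenue flowing through tested pages, in $

wins_per_month = tests_per_month * win_rate
estimated_monthly_gain = wins_per_month * avg_uplift * monthly_revenue_affected
print(f"{wins_per_month:.0f} winning tests/month, "
      f"~${estimated_monthly_gain:,.0f} of estimated monthly upside")
```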

So in a nutshell, this is how you succeed:

  1. Run as many tests as possible at all times (every day without a test running on a page/layout is regret by default),
  2. Win as many tests as possible,
  3. Have as high impact (uplift) per successful test as possible.

Executing point #1 is obvious, but how do you do well on points #2 and #3? This comes down to the most important thing about conversion optimization – the discovery of what matters.

Keep reading »