When should you use bandit tests, and when is A/B/n testing best?
Though bandit testing has some strong proponents (and opponents), there are certain use cases where it may be the better choice. The question is: when?
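The core difference is that a bandit algorithm shifts traffic toward the better-performing variant while the test is still running, instead of holding a fixed even split like A/B/n. Here is a minimal epsilon-greedy sketch of that idea; the two conversion rates and the traffic volume are made-up numbers purely for illustration, not a recommendation from the article:

```python
import random

def epsilon_greedy(counts, conversions, epsilon=0.1):
    """Pick a variant: explore at random with probability epsilon,
    otherwise exploit the variant with the best observed conversion rate."""
    if random.random() < epsilon:
        return random.randrange(len(counts))
    rates = [conversions[i] / counts[i] if counts[i] else 0.0
             for i in range(len(counts))]
    return max(range(len(counts)), key=lambda i: rates[i])

# Simulated example: variant B truly converts better (8% vs 2%)
true_rates = [0.02, 0.08]          # hypothetical "true" conversion rates
counts = [0, 0]                    # visitors routed to each variant
conversions = [0, 0]               # conversions observed per variant
random.seed(42)

for _ in range(10_000):
    arm = epsilon_greedy(counts, conversions)
    counts[arm] += 1
    conversions[arm] += random.random() < true_rates[arm]

print(counts)  # most traffic ends up on the better variant
```

The trade-off the debate centers on: the bandit wastes fewer visitors on the losing variant, but the unequal, shifting sample sizes make classical significance testing harder to apply.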
There’s a philosophical statistics debate in the optimization world: Bayesian vs Frequentist.
This is not a new debate; Thomas Bayes wrote “An Essay towards solving a Problem in the Doctrine of Chances” in 1763, and it’s been an academic argument ever since.
Here’s another presentation from CXL Live 2015 (sign up for the 2016 list to get tickets at pre-release prices).
While optimization is fun, it’s also really hard. We’re asking a lot of questions.
Why do users do what they do? Is X actually influencing Y, or is it a mere correlation? The test bombed – but why? Yuan Wright, Director of Analytics at Electronic Arts, will lead you through an open discussion about the challenges we all face – optimizer to optimizer.
CXL Live 2016 is coming up next March (get on the list to get tickets at pre-release prices). We’re going to publish video recordings of the previous event, and here’s the first one.
You run A/B tests – some win, some don’t. Whether a test actually has a positive impact depends largely on whether you’re testing the right stuff. Testing trivial changes that make no difference is by far the biggest reason tests end in “no difference”.
While running A/B tests on all your traffic at once might seem like a good idea (to get a bigger sample size faster), in reality it’s not. You need to target mobile and desktop audiences separately. Keep reading »
This is the methodology I have developed over 12 years in the industry, working with over 300 organizations. It is also the methodology behind a near-perfect test streak (only 6 test failures in 5.5 years), even if most people don’t believe that stat. Keep reading »
You want to speed up your testing efforts, and run more tests. So now the question is – can you run more than one A/B test at the same time on your site?
Will this increase the velocity of your testing program (and thus help you grow faster), or will it pollute the data since multiple separate tests could potentially affect each other’s outcomes? The answer is ‘yes’ to both, but what you should do about it depends. Keep reading »
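One common way to run overlapping tests without one polluting the other is to randomize each user's assignment independently per test, so every combination of variants receives traffic in proportion. A minimal sketch of deterministic per-test bucketing (the test names and user ID are hypothetical placeholders):

```python
import hashlib

def assign(user_id: str, test_name: str, n_variants: int = 2) -> int:
    """Deterministically bucket a user for one test.
    Hashing the test name together with the user ID keeps
    assignments stable per user but independent across tests."""
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_variants

# The same visitor gets an independent variant in each concurrent test
user = "user-123"
headline_variant = assign(user, "headline-test")
checkout_variant = assign(user, "checkout-test")
```

Because the buckets are independent, each test can be analyzed on its own; the caveat in the article still applies when the tested elements genuinely interact (e.g. two changes on the same page), in which case a combined or sequential test is safer.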
While testing is a critical part of conversion optimization – it’s how we confirm we actually made things better, and by how much – it’s also just the tip of the iceberg of the full CRO picture. Testing tools are affordable (even free) and increasingly easy to use, so pretty much any idiot can set up and run A/B tests. That’s not where the difficulty lies. The hard part is testing the right things, and having the right treatment.
The success of your testing program comes down to two factors: the number of tests you run (volume) and the percentage of tests that produce a win. Together, those indicate execution velocity. Factor in average sample size and impact per successful experiment, and you get an idea of total business impact.
So in a nutshell, this is how you succeed:
- Run as many tests as possible at all times (every day without a test running on a page/layout is regret by default),
- Win as many tests as possible,
- Have as high impact (uplift) per successful test as possible.
Executing point #1 is obvious, but how do you do well on points #2 and #3? It comes down to the most important thing in conversion optimization: discovering what matters.