If you read this blog regularly, you probably don’t need an introduction to CRO or A/B testing. You know the major players, best practices, and you’ve likely tested your fair share of ideas.
But, as an expert, you likely know some of the persistent frustrations with current approaches. To name just two:
- Testing simply takes time.
- Our best instincts are often wrong.
Customers don’t usually see one ad and then click over to purchase.
In reality, the path is far more complex and usually spans multiple marketing channels: organic and paid search, referral, social media, even television.
But if you’re a rigorous and data-driven marketer, the question has to cross your mind: how much credit can I give each channel for this conversion?
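To make the question concrete, here's a minimal sketch of one common answer: a linear (equal-credit) attribution model, in which every touchpoint in a converting path receives the same share of the credit. The channel names and paths below are hypothetical:

```python
from collections import defaultdict

def linear_attribution(paths):
    """Split each conversion's credit equally across every
    channel that appeared in the customer's path."""
    credit = defaultdict(float)
    for path in paths:
        if not path:
            continue
        share = 1.0 / len(path)  # equal credit per touchpoint
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Hypothetical conversion paths, one list per converting customer
paths = [
    ["organic search", "social", "paid search"],
    ["referral", "paid search"],
    ["television", "organic search", "social", "paid search"],
]

print(linear_attribution(paths))
# paid search appears in all three paths: 1/3 + 1/2 + 1/4 ≈ 1.08 conversions of credit
```

Linear attribution is only one choice; first-touch, last-touch, and data-driven models will split the same paths very differently, which is exactly why the question matters.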
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
While testing for statistical significance is scientifically valid, it has a major drawback: if you only implement significant results, you will leave a lot of money on the table.
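To see why the number of variations complicates the statistics, here's a rough sketch of a pooled two-proportion z-test with a Bonferroni-adjusted significance threshold, one common (if conservative) correction when several variations are compared against one control. The conversion numbers are hypothetical:

```python
import math

def two_proportion_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical A/B/n test: a control plus three variations
control = (200, 5000)  # (conversions, visitors)
variations = [(230, 5000), (215, 5000), (245, 5000)]

alpha = 0.05 / len(variations)  # Bonferroni-adjusted threshold
for i, (conv, n) in enumerate(variations, start=1):
    p = two_proportion_pvalue(*control, conv, n)
    verdict = "significant" if p < alpha else "not significant"
    print(f"Variation {i}: p = {p:.4f} -> {verdict} at adjusted alpha {alpha:.4f}")
```

In this toy data set, the best variation clears the conventional 0.05 bar but misses the adjusted threshold, which is precisely the kind of ambiguity that makes the "how many variations" question controversial.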
As a digital analyst or marketer, you know the importance of analytical decision making.
Go to any industry conference, blog, or meetup, or even just read the popular press, and you will encounter machine learning, artificial intelligence, and predictive analytics everywhere.
Because many of us don’t come from a technical or statistical background, this can be both confusing and intimidating.
Even well-conceived A/B tests can produce non-significant results and erroneous interpretations. And this can happen in any phase of testing if incorrect statistical approaches are used.
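One of the most common culprits is an underpowered test. As a sanity check before launching, you can estimate how much traffic a test actually needs; here's a minimal sketch using the standard normal-approximation sample-size formula for comparing two conversion rates (the baseline rate and target lift below are hypothetical):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, rel_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative lift over
    baseline rate p_base, via the normal-approximation formula."""
    p_alt = p_base * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_alt - p_base) ** 2)

# Hypothetical: 4% baseline conversion rate, aiming to detect a 10% relative lift
print(sample_size_per_variant(0.04, 0.10))  # roughly 39,000 visitors per variant
```

Numbers like that make the earlier complaint tangible: testing really does take time, and stopping a test before it reaches the required sample size is one of the incorrect approaches that produces misleading results.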
When should you use bandit tests, and when is A/B/n testing best?
Though there are some strong proponents (and opponents) of bandit testing, there are certain use cases where it may be optimal. The question is: when?
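For readers who haven't seen one in action: a bandit algorithm shifts traffic toward better-performing variations while the test is still running, instead of splitting it evenly until a fixed end date. Here's a minimal epsilon-greedy sketch, one of the simplest bandit strategies; the "true" conversion rates are, of course, hypothetical:

```python
import random

def epsilon_greedy(true_rates, epsilon=0.1, visitors=10_000, seed=42):
    """Epsilon-greedy bandit: explore a random variant with probability
    epsilon, otherwise exploit the current best performer."""
    random.seed(seed)
    shows = [0] * len(true_rates)
    wins = [0] * len(true_rates)
    for _ in range(visitors):
        if random.random() < epsilon:
            arm = random.randrange(len(true_rates))  # explore
        else:
            arm = max(range(len(true_rates)),
                      key=lambda i: wins[i] / shows[i] if shows[i] else 0.0)  # exploit
        shows[arm] += 1
        wins[arm] += random.random() < true_rates[arm]  # simulate a conversion
    return shows, wins

# Hypothetical true conversion rates: control plus two variants
shows, wins = epsilon_greedy([0.04, 0.05, 0.045])
print(shows)  # most traffic should end up flowing to the 5% arm
```

The core trade-off is visible right in the loop: a fixed epsilon keeps exploring forever, which is why production systems often decay it over time or use a strategy like Thompson sampling instead.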