A/B testing tools like Optimizely or VWO make running tests easy, and that's about it. They're built to run tests, not for post-test analysis. Most testing tools have improved on that front over the years, but they still can't match the depth of analysis you can do in Google Analytics, which covers nearly everything.
If you read this blog regularly, you probably don’t need an introduction to CRO or A/B testing. You know the major players, best practices, and you’ve likely tested your fair share of ideas.
But as an expert, you also know the persistent frustrations with current approaches. To name just two:
- Testing simply takes time.
- Our best instincts are often wrong.
Customers don’t usually see one ad and then click over to purchase.
In reality, the path is much more complex, and usually includes various marketing channels – organic and paid search, referral, social media, television.
But if you’re a rigorous and data-driven marketer, the question has to cross your mind: how much credit can I give each channel for this conversion?
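One simple answer to that question is linear attribution: split each conversion's credit equally across every channel in the path. Here's a minimal sketch, assuming the conversion paths have already been collected from your analytics tool; the channel names and paths are made up for illustration:

```python
from collections import defaultdict

def linear_attribution(paths):
    """Split each conversion's credit equally across the channels in its path."""
    credit = defaultdict(float)
    for path in paths:
        share = 1 / len(path)  # each channel in the path gets an equal share
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Illustrative conversion paths (one list per converting customer)
paths = [
    ["organic", "paid_search", "email"],
    ["social", "organic"],
    ["paid_search"],
]

credit = linear_attribution(paths)
# Total credit always sums to the number of conversions
```

Other models (first-touch, last-touch, position-based, data-driven) answer the same question with different weighting rules; linear is just the simplest multi-touch option.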
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
While this method is scientifically valid, it has a major drawback: if you only implement significant results, you will leave a lot of money on the table.
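For context, "significant results" here typically means something like a two-proportion z-test comparing the control and variation conversion rates. A minimal sketch, with made-up visitor and conversion counts for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control: 200/5000 conversions (4%); variation: 250/5000 (5%)
z, p = two_proportion_z_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
```

With these numbers the lift is significant at the usual 5% level; with smaller samples the same 1-point lift would not be, which is exactly the "money on the table" dilemma.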
As a digital analyst or marketer, you know the importance of analytical decision making.
Go to any industry conference, blog, or meetup, or even just read the popular press, and you will hear and see topics like machine learning, artificial intelligence, and predictive analytics everywhere.
Because many of us don’t come from a technical/statistical background, this can be both a little confusing and intimidating.
Even well-conceived A/B tests can produce non-significant results and erroneous interpretations. And this can happen at any phase of testing if incorrect statistical approaches are used.
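One classic incorrect approach is "peeking": checking significance repeatedly as data accrues and stopping at the first p < 0.05, which inflates the false positive rate well above the nominal 5%. A small simulation sketch, assuming both variants share the same true conversion rate, so every "significant" result is by construction a false positive:

```python
import math
import random

def z_test_p_value(c_a, n_a, c_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (c_a + c_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:  # no conversions yet: no evidence either way
        return 1.0
    z = (c_b / n_b - c_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
TRIALS, N, PEEK_EVERY, TRUE_RATE = 200, 2000, 100, 0.05

peeking_fp = fixed_fp = 0
for _ in range(TRIALS):
    c_a = c_b = 0
    stopped_early = False
    for i in range(1, N + 1):
        c_a += random.random() < TRUE_RATE  # both arms convert at the same rate
        c_b += random.random() < TRUE_RATE
        # Peek every 100 visitors and "stop the test" at the first p < 0.05
        if i % PEEK_EVERY == 0 and z_test_p_value(c_a, i, c_b, i) < 0.05:
            stopped_early = True
    peeking_fp += stopped_early
    fixed_fp += z_test_p_value(c_a, N, c_b, N) < 0.05  # single look at the end

print(f"false positive rate, fixed horizon: {fixed_fp / TRIALS:.1%}")
print(f"false positive rate, with peeking:  {peeking_fp / TRIALS:.1%}")
```

The fixed-horizon test holds roughly its nominal 5% error rate; the peeking procedure declares a winner far more often, even though no real difference exists.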