If you've ever run a highly trustworthy, positive A/B test, chances are you'll remember it and be inclined to try it again in the future – rightfully so. Testing is hard work, with many experiments failing or ending up insignificant. It only seems sensible to exploit any existing knowledge for more successes and fewer failures. In our own practice, we started doing just that.
You run an A/B test, and it’s a winner. Or maybe it’s flat (no difference in performance between variations). Does it mean that the treatments that you tested didn’t resonate with anyone? Probably not.
If you target all visitors with an A/B test, it merely reports overall results – and ignores what happens within individual segments of your traffic.
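To see how overall results can hide segment-level behavior, here's a minimal sketch with invented numbers (the segments, visitor counts, and conversion counts are all hypothetical, purely for illustration):

```python
# Hypothetical illustration: an overall A/B result can mask opposite
# effects inside individual segments. All numbers are invented.
segments = {
    # segment: (control_visitors, control_conversions,
    #           variant_visitors, variant_conversions)
    "new visitors":       (4000, 200, 4000, 280),  # variant wins here
    "returning visitors": (1000, 120, 1000, 80),   # variant loses here
}

def rate(conversions, visitors):
    return conversions / visitors

total = [0, 0, 0, 0]
for name, (cv, cc, vv, vc) in segments.items():
    total = [t + x for t, x in zip(total, (cv, cc, vv, vc))]
    print(f"{name}: control {rate(cc, cv):.1%} vs variant {rate(vc, vv):.1%}")

cv, cc, vv, vc = total
print(f"overall: control {rate(cc, cv):.1%} vs variant {rate(vc, vv):.1%}")
```

Here the variant wins overall (7.2% vs 6.4%), even though it clearly hurts returning visitors – exactly the kind of pattern an all-visitors report never shows you.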
A/B testing tools like Optimizely or VWO make testing easy, and that's about it. They're tools for running tests, not tools designed for post-test analysis. Most testing tools have gotten better at it over the years, but they still lack what you can do with Google Analytics – which is almost everything.
Value proposition is the #1 thing that determines whether people will bother reading more about your product or hit the back button. It’s also the main thing you need to test – if you get it right, it will be a huge boost.
Chances are, you’ve heard of Google Optimize by now. It’s Google’s solution for A/B testing and personalization. It launched in beta last year, which left optimizers around the world waiting in line to try it out. Now that it’s out of beta, you can give it a try without the wait.
But what can you expect? How do you configure it properly? How do you run your first experiment?
Nothing works all the time on all sites. That’s why we test in the first place: to let the data tell us what actually works.
That said, we have done quite a bit of user experience research on ecommerce sites and have seen some trends in what generates positive experiences from a customer perspective.
This post will outline 16 A/B test ideas based on that data.
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
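Part of what makes the question polarizing is statistical: every extra variation you test against the control is another comparison, and each comparison adds another chance of a false positive. A quick sketch of the arithmetic (using a standard Bonferroni correction as one conservative remedy – the alpha values here are just the conventional 5% example, not a recommendation):

```python
# More variations = more comparisons against the control,
# which inflates the chance of at least one false positive.
# One conservative fix: the Bonferroni correction, which tests
# each variation at alpha / k instead of alpha.

alpha = 0.05  # desired family-wise false-positive rate

for k in (1, 2, 4, 8):  # number of variations tested against the control
    # Chance of >= 1 false positive if every test runs at alpha uncorrected:
    family_error = 1 - (1 - alpha) ** k
    bonferroni_alpha = alpha / k
    print(f"{k} variations: uncorrected family error {family_error:.1%}, "
          f"per-test alpha after Bonferroni {bonferroni_alpha:.4f}")
```

With 8 variations, the uncorrected chance of a spurious "winner" climbs to roughly 34% – which is why the answer to "how many variations?" is rarely "as many as possible."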
A/B testing is common practice, and it can be a powerful optimization strategy when used properly. We’ve written about it extensively. Plus, the Internet is full of “How We Increased Conversions by 1,000% with 1 Simple Change” style articles.
Unfortunately, there are experimentation flaws associated with A/B testing as well. Understanding those flaws and their implications is key to designing better, smarter A/B test variations.
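One well-known example of such a flaw is "peeking": checking significance repeatedly while the test runs and stopping at the first significant result, which inflates the false-positive rate well above the nominal 5%. The Monte Carlo sketch below makes this concrete; the conversion rate, batch sizes, and number of looks are all made-up parameters chosen to keep the simulation fast:

```python
# Monte Carlo sketch of the "peeking" flaw: both arms have the SAME
# true conversion rate (an A/A test), yet stopping at the first
# "significant" look produces far more than 5% false positives.
# All parameters are illustrative.
import math
import random

def z_stat(c1, n1, c2, n2):
    """Absolute z statistic for a two-proportion test."""
    p = (c1 + c2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return abs(c1 / n1 - c2 / n2) / se if se > 0 else 0.0

random.seed(42)
RATE, BATCH, LOOKS, RUNS = 0.05, 200, 10, 500
false_positives = 0
for _ in range(RUNS):
    ca = cb = na = nb = 0
    for _ in range(LOOKS):  # peek at the results after every batch
        ca += sum(random.random() < RATE for _ in range(BATCH))
        cb += sum(random.random() < RATE for _ in range(BATCH))
        na += BATCH
        nb += BATCH
        if z_stat(ca, na, cb, nb) > 1.96:  # "significant" at ~5%
            false_positives += 1
            break
print(f"false-positive rate with peeking: {false_positives / RUNS:.1%}")
```

Even though there is no real difference between the arms, peeking ten times pushes the false-positive rate well past the 5% you thought you were running at – one concrete reason to fix your sample size (or use a sequential testing method) before you start.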