A/B testing tools like Optimizely or VWO make running tests easy, and that’s about it. They’re built to run tests, not for post-test analysis. Most testing tools have gotten better at analysis over the years, but they still can’t match what you can do in Google Analytics. Keep reading »
Value proposition is the #1 thing that determines whether people will bother reading more about your product or hit the back button. It’s also the main thing you need to test; if you get it right, it can be a huge boost. Keep reading »
Chances are, you’ve heard of Google Optimize by now. It’s Google’s solution for A/B testing and personalization. It launched in beta last year, which left optimizers around the world waiting in line to try it out. Now that it’s out of beta, you can give it a try without the wait.
But what can you expect? How do you configure it properly? How do you run your first experiment?
Nothing works all the time on all sites. That’s why we test in the first place: to let the data tell us what is actually working.
That said, we have done quite a bit of user experience research on ecommerce sites and have seen some trends in what generates positive experiences from a customer’s perspective.
This post will outline 16 A/B test ideas based on that data.
Just when you start to think that A/B testing is fairly straightforward, you run into a new strategic controversy.
This one is polarizing: how many variations should you test against the control?
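One statistical angle on that question is worth keeping in mind. The sketch below is not from the article; it uses standard formulas to show how testing more variants against the control inflates the chance of at least one false positive, and one common correction (Bonferroni):

```python
# Sketch: why the number of variations matters statistically.
# With k variants each tested at significance level alpha, the probability of
# at least one false positive (the family-wise error rate) grows with k.

def family_wise_error_rate(alpha: float, k: int) -> float:
    """Probability of >= 1 false positive across k independent comparisons."""
    return 1 - (1 - alpha) ** k

def bonferroni_alpha(alpha: float, k: int) -> float:
    """Per-comparison significance level that keeps the family-wise rate <= alpha."""
    return alpha / k

for k in (1, 3, 5, 10):
    print(f"{k} variants: FWER = {family_wise_error_rate(0.05, k):.1%}, "
          f"Bonferroni per-test alpha = {bonferroni_alpha(0.05, k):.4f}")
```

At alpha = 0.05, ten variants push the family-wise false-positive rate past 40%, which is one reason the "how many variations" debate is not just a matter of taste.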
A/B testing is common practice and it can be a powerful optimization strategy when it’s used properly. We’ve written on it extensively. Plus, the Internet is full of “How We Increased Conversions by 1,000% with 1 Simple Change” style articles.
Unfortunately, there are experimentation flaws associated with A/B testing as well. Understanding those flaws and their implications is key to designing better, smarter A/B test variations.
I have been part of some of the best conversion optimization teams in the world, and they seem to have an intuitive sense of how to run the best experiments. The people on these teams share a similar mindset.
I wanted to turn that mindset into an explicit process, one that could teach any organization how to run better experiments, in a way that is fun to use.
There’s no rocket science that follows, but this framework may well help your team drive a more efficient optimization culture.
Do a quick Google search for “A/B testing mistakes,” and you’ll find a good number of articles.
Common across those lists is the oft-repeated advice that you should “not make more than one change per test.”
As it turns out, like much of the advice in the conversion optimization world, it’s not so simple.
William A. Foster once said, “Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction, and skillful execution; it represents the wise choice of many alternatives.”
Yet, we continue to see businesses pushing leads through doors, pushing customers through funnels… just hoping that they’ll create a high quality, engaged audience by accident.
Unfortunately, it doesn’t work that way. A high quality, engaged audience is anything but accidental. It requires that optimizers put in the effort to create user state models, dig into cohort analysis and correlative metrics, run experiments for different user states, etc.
It’s not the easy choice, but if you’re looking for long-term revenue growth, it’s the only choice.
Keep reading »