“It is difficult to solve a problem you don’t understand.”
I think most of us agree with the logic behind that statement.
Well, the exact same logic applies to conversion optimization.
And that’s why research is so important.
It helps you understand your conversion problems – where they are and what’s causing them.
Having this insight makes everything in your CRO process fall into place. It provides you with a foundation for making informed decisions and prioritizing your optimization opportunities according to effort and potential return.
A/B Testing Is NOT an Excuse to Skip Your Homework
If you want to establish an effective A/B testing program that actually moves the needle, you’ll have to do your homework – there’s simply no way around it. Here’s why:
Testing unqualified ideas is pure guesswork. You’re basically just pitting two guesses against each other in the hopes that one will have an impact. At some point, you’ll probably chance upon something that works, but you’ll also have wasted a lot of time and money along the way.
Simply trying stuff to see what happens is fun, but it’s not a good optimization strategy. Effective optimization is much more deliberate than that.
Doing research upfront to qualify your ideas before you start testing will help you build stronger hypotheses, and vastly increase the quality of your tests as well as the impact they have on your business. Moreover, it’ll save you a lot of time and frustration in the long run.
Start With Where and What, Then Move on to Why
I start my research process by going through quantitative data from Google Analytics with the goal of finding out what’s going wrong and where it’s going wrong.
Initially, I dig into general data to get a high-level idea of how the website is performing, as well as how much traffic and how many conversions it gets.
These insights help me get a general idea of what we’re dealing with and what optimization strategy we need to adopt. When you have actual numbers on users/sessions and conversions, you can quickly do a few basic calculations to figure out what your testing capacity is.
- You might not have enough traffic to do any meaningful testing, in which case you need to adjust your CRO strategy.
- You might have just enough traffic to detect only massive lifts, in which case you need to focus on radical, pervasive changes.
- You might have lots of traffic and conversions, in which case you can experiment with everything – from small tweaks to radical redesigns – and get meaningful data within a reasonable timeframe (2-4 weeks).
Evan Miller has a great sample size calculator that you can play around with to get an impression of how big a sample you need in order to detect a given lift.
Online Dialogue has a similar calculator that lets you calculate how big of a lift you need to be able to conclude a test within a given timeframe based on your traffic and current conversion rate.
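The arithmetic behind such calculators is the standard two-proportion power calculation. Here’s a minimal Python sketch of it, so you can see what drives the numbers – the function name and the traffic figures in the comments are illustrative assumptions, not part of either calculator:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative `lift` over a
    `baseline` conversion rate with a two-sided z-test at the given
    significance level and statistical power."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for power=0.8
    pooled = (p1 + p2) / 2
    n = ((z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1  # round up

# A 5% baseline and a hoped-for 20% relative lift need roughly 8,000
# visitors per variant -- with 5,000 sessions/week split across two
# variants, that's over three weeks of testing.
n = sample_size_per_variant(0.05, 0.20)
```

Notice how sensitive the result is to the size of the lift: halving the detectable lift roughly quadruples the required sample, which is exactly why low-traffic sites have to test radical changes.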
Once I have this initial data in place, I go through general stuff like device mix, browsers, bounce rates, top landing pages, top exit pages, etc. This gives me an overall idea of how users are interacting with the website. It also helps me identify glaring issues like browser incompatibilities.
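As a rough illustration of that kind of segmentation, here’s a sketch in Python using pandas on a hypothetical analytics export – the column names and numbers are invented for the example, so match them to whatever your own export actually contains:

```python
import pandas as pd

# Hypothetical Google Analytics export: one row per browser segment.
data = pd.DataFrame({
    "browser": ["Chrome", "Safari", "Firefox", "Edge"],
    "sessions": [52_000, 18_000, 6_500, 4_100],
    "transactions": [1_560, 520, 190, 41],
})

data["conversion_rate"] = data["transactions"] / data["sessions"]
site_avg = data["transactions"].sum() / data["sessions"].sum()

# Flag segments converting at less than half the site average --
# a quick way to surface possible browser-compatibility problems
# worth investigating manually.
suspects = data[data["conversion_rate"] < site_avg / 2]
print(suspects[["browser", "conversion_rate"]])
```

A segment that converts far below the site average isn’t proof of a bug, but it tells you exactly which browser or device to open the site in next.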
With the where and what questions in place, I start collecting qualitative data with the purpose of answering the why questions (e.g. why are users bouncing like crazy on the main product landing page?).
I usually kick this process off by interviewing the folks in customer support and sales. It’s amazing how much insight they can provide you with.
These guys spend all day talking to customers and have in-depth knowledge of the problems and issues that they are dealing with – both in relation to the website and the product itself. Moreover, they are familiar with the decision-making process of the target audience and can help you build better optimization hypotheses.
Here are the 5 questions I ask during these interviews:
- What are the top 3 questions you get from potential customers?
- What do you answer when you get these questions?
- Are there any particular aspects of the product/offer that people have a hard time understanding?
- What aspects of the product/offer do people like the most/least?
- Did I miss anything important? Got something to add?
After that, I home in on specific sections/pages on the website to get more information. Session recordings are great for funnel analysis. Scroll and heat maps are really good for getting an idea of how users interact with individual pages. Form analytics are fantastic for understanding which parts of a form cause friction or even abandonment.
Feedback polls are a really amazing way to get answers to specific questions. I try to keep these polls as simple and non-intrusive as possible.
So, I normally stick to one question and 3-4 answers. If you go with open-ended questions, a little trick is to hide the comment field behind a radio button. In my experience, this boosts completion rates as it seems that presenting an open comment field off the bat scares people off.
When digging into value prop-related stuff, I like to ask, “What’s most important to you?” and present 3 different aspects of the product or service (e.g. “1. Saving money. 2. Saving time. 3. Getting high quality leads.”) Then you can add a fourth option, “Other”, and have that fold out as an open comment field in case there are other aspects you missed.
P.S. ConversionXL has a great framework for collecting and prioritizing all your conversion research.
- Gather your quantitative (what and where) and qualitative (why) data using the strategies you learned from Yehoshua Coren (quantitative) and Jen Havice (qualitative).
- Determine whether A/B testing is right for you. If you don’t have enough traffic, don’t A/B test. If you have just enough, focus on testing radical changes. If you have lots, you can test small changes and still get meaningful data.
- Using your research, home in on the identified points of friction (what and where) and come up with a hypothesis based on the identified “why”. Run your test! Repeat.
For more from Michael, check out Unbounce.
Copywriting, A/B testing, analytics, psychology... we'll cover it all. Here's the lineup:

- First, Brian Massey, the Conversion Scientist, reminds you of the basics.
- Joanna Wiebe of Copy Hackers and Airstory on how to write copy that converts like crazy.
- David Kadavy, author of Design for Hackers, on designing for conversions. He debunks today's biggest design myths and tells you what actually matters.
- Bart Schutz of Online Dialogue and The Wheel of Persuasion on using psychology to increase conversions.
- Talia Wolf of Conversioner talking emotional persuasion. Building on what we learned from Bart, she explains how to appeal to your visitors' emotions.
- Justine Jordan from Litmus walks you through collecting emails, improving your open rate, designing for all browsers / devices / email clients, A/B testing emails and more.
- Chris Mercer from SeriouslySimpleMarketing.com on how to set up your analytics in a meaningful way that ensures you're gathering useful data.
- Yehoshua Coren, the Analytics Ninja, walks you through extracting insights from your analytics using segmentation.
- Jen Havice of Make Mention on how to use qualitative research to answer one of the most important questions in conversion optimization: Why?
- Michael Aagaard, senior conversion optimizer at Unbounce, on how to strategically decide what to test using conversion research.
- Peep Laja teaches you everything you need to know to run valuable, statistically valid tests that will actually lead to applicable insights.
- Peep on how to combine everything you've learned into a systematic, repeatable CRO process.