
6 Ways You’re Messing Up Customer Surveys (and How to Fix Them)


Qualitative research or quantitative research? Doesn’t matter if you’re doing it wrong.

It used to be that qualitative research was swept under the rug in CRO circles, but most people now know it’s important for uncovering the ‘why’ behind the ‘what’ that quantitative analytics shows you.

Figuring out your primary target audience is important for boosting conversions. You’ll need to figure out what they want, what matters to them, and what sources of friction keep them from completing the desired actions.

Qualitative research is a good way to do this. It’s mostly about learning who the customers are, what they want, how badly they want it, and the specific language your customers use. Customer surveys are a key piece in this puzzle. There are 1,000 reasons to do customer surveys, but one of the best uses is for conversion research – to uncover insight to generate better hypotheses. But if you’re doing them wrong, you’re missing out on that insight.

Here are all the ways you’re messing up your surveys (and how to fix that):

1. You’re Not Thinking About Your Business Objectives

You’ve probably seen competitors’ sites that use exit surveys. That means you should too, right? The same questions will probably work…

No. Don’t copy your competitors; they don’t know what they’re doing either. Jumping on a bandwagon does nothing to accomplish your specific goals. Instead, ask a few questions like:

  • Why are you collecting qualitative data?
  • What’s the purpose?
  • What are you going to do with the answers?
What are your business objectives in doing a survey? (Image Source)

If you’re running on-site or exit surveys, there are a few other questions you’ll have to ask yourself:

  • Will the widget distract from the highest priority goal on the page?
  • Is your question relevant to the content on the page?
  • Does the exit survey add value to the page, or take away from it?

Again, just because you’ve seen other sites with exit surveys doesn’t mean you need to blindly implement one. You’ve got to be clear about your business goals up front. Jen Havice, conversion copywriter and founder of Make Mention Media, says lacking objectives can waste your time and your customers’ time:


Jen Havice:

“I’d say a common problem centers around businesses or organizations not having goals they want to achieve or at least insights they’re looking to gain. This goes to relevancy. Don’t waste your time or your respondent’s patience with questions you’re simply interested in knowing the answers to versus ones that you need the answers to. You run the risk of having to survey them again unnecessarily and being ignored.”

There are many reasons to do customer surveys. Our purpose is for CRO, so our main goal is to learn about our customers. Here are some typical things we’re looking to learn with customer surveys:

  • Who are these people? What are their common characteristics? Can we form a hypothesis on some different customer personas?
  • What kind of problem are they solving for themselves? We can use this information in our value proposition.
  • What are the exact words they use? You can steal exact phrases from this for your copy, essentially having the customer write copy for you.
  • What are the biggest sources of friction? Doubts? Hesitations? Any unanswered questions? Knowing this allows us to take steps to reduce friction.
  • How would they prefer to buy?
  • Do they comparison shop? How much? If they shop around a lot, it’s important to stress our unique benefits more. We need to be visibly different or better than the competitor.
  • Can you uncover any insights about their emotional state?

2. You’re Surveying The Wrong People

Noah Kagan wrote a blog post about an email campaign they sent to 30,000 subscribers to promote their Monthly1K product. Even though the campaign was highly targeted, only 30 people purchased the product. They had already validated that the product was great based on previous customers’ experiences. So what was the problem?

To find out, they sent out a customer survey to the right people. Noah says that “it was WHO I surveyed that proved to be the most helpful.” He sent it to people who:

  1. Opened the email.
  2. Clicked the email.
  3. But did not buy.

Of course, he asked great questions, too (more on that later). But he started with asking the right people.
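If you want to reproduce that kind of segment, it’s usually a short script away. Here’s a minimal sketch, assuming you can export lists of who opened, clicked, and purchased from your email tool and store (the file and column names are placeholders):

```python
# Minimal sketch: rebuilding Noah's segment from hypothetical CSV
# exports. Adjust file and column names to whatever your email tool
# and store actually export.
import pandas as pd

opened = set(pd.read_csv("opened.csv")["email"])
clicked = set(pd.read_csv("clicked.csv")["email"])
purchased = set(pd.read_csv("purchased.csv")["email"])

# Opened AND clicked, but did NOT buy.
survey_targets = (opened & clicked) - purchased
print(f"{len(survey_targets)} people to survey")
```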

Who should you survey?

There are a few ways you could go about this, and they all depend on your specific goals.

As Noah did, you can survey people who were interested but did NOT buy – and that could lead to quality insights.

Here’s another way to look at it. Survey people who still freshly remember their purchase and the friction they experienced in the buying process. Only talk to your recent first-time customers (who have no previous relationship or experience with you that might affect their responses).

In this frame, you’ll want to filter out repeat buyers and people who purchased a long time ago. If someone purchased 6 months ago or longer, they’ll probably have forgotten what they were thinking and will give you unreliable answers.
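The same goes for this segment. Here’s a minimal sketch of isolating recent first-time buyers from an order export; the customer_id and order_date column names are assumptions, so adapt them to your platform:

```python
# Minimal sketch: recent first-time customers only. Filters out repeat
# buyers and anyone whose purchase is older than 6 months.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
cutoff = pd.Timestamp.now() - pd.DateOffset(months=6)

per_customer = orders.groupby("customer_id").agg(
    first_order=("order_date", "min"),
    order_count=("order_date", "count"),
)

recipients = per_customer[
    (per_customer["order_count"] == 1)
    & (per_customer["first_order"] >= cutoff)
]
print(f"{len(recipients)} customers to survey")
```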

There are other ways to choose who to survey. Here are some scenarios:

  • If you want to improve your buying process, survey brand new buyers (and the people who didn’t buy).
  • If you want to start a loyalty program to improve customer retention, survey frequent buyers.
  • If you want to start a VIP program for top spenders, survey customers who spend a lot of money with you.

3. You’re Not Surveying The Right Number of People

Since you’re running this customer survey for qualitative reasons, you don’t need to make numerical comparisons between two data sets. That means you can feel comfortable with fewer responses. Keep in mind, though, that the fewer responses you gather, the wider your margin of error becomes. Here’s a chart that shows confidence levels for different sample sizes:

Confidence levels for different sample sizes. (Image Source)

A ±6% margin of error is totally okay for qualitative surveys. You’re mostly trying to spot patterns and do voice of customer research.
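If you want to sanity-check a sample size yourself, the standard margin-of-error formula for a proportion does the job. A quick sketch at 95% confidence, using the worst case p = 0.5:

```python
# Margin of error for a sample of n responses: z * sqrt(p(1-p)/n).
# At 95% confidence, z = 1.96; roughly 267 responses gives ±6%.
import math

def margin_of_error(n, z=1.96, p=0.5):
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 267):
    print(f"n={n}: ±{margin_of_error(n):.1%}")
# n=100: ±9.8%, n=200: ±6.9%, n=267: ±6.0%
```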

Clearly, though, you don’t want to survey too few people. Ask only 10, and a few loud voices can skew your data; you’re at high risk of seeing patterns that aren’t really there.

We have found that somewhere between 100 and 200 responses is the ideal quantity. After 200, the answers tend to get repetitive and stop adding value (and they take longer to analyze, using up more resources). With fewer than 100, there might not be enough data to identify trends or draw conclusions.

UX Matters put it well:

“Following data collection, rather than performing a statistical analysis, researchers look for trends in the data. When it comes to identifying trends, researchers look for statements that are identical across different research participants. The rule of thumb is that hearing a statement from just one participant is an anecdote; from two, a coincidence; and hearing it from three makes it a trend. The trends that you identify can then guide product development, business decisions, and marketing strategies.”

If you’re asking specific, quantifiable questions, such as Net Promoter Score, by all means collect data from a large sample. Otherwise, balance resources against the insight you’d like to gain. If fewer than 100 people have recently bought from you, make do with what you can get: 10 responses are better than zero.
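For reference, Net Promoter Score itself is simple arithmetic: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch:

```python
# NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0
```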

4. You’re Asking The Wrong Questions

It doesn’t matter how many people you survey, or which ones: if you ask them the wrong questions, you’ll get irrelevant answers.

Back to Noah Kagan’s blog post. Not only did they ask the right people, but they asked the right questions. Their survey had four questions:

  1. Were you at least interested in buying? Yes or No
  2. Be specific about your answer
  3. What’s holding you back from starting your business?
  4. Should we make our support sumo do a dance video?

With this, they found the top four reasons that held people back from buying. They also used the customers’ language on the landing page, reordered the page so that the top questions were answered, and reduced friction by addressing questions, fears, and doubts.

(Read the full story here.)

Noah knew what he wanted to find out and asked the right questions to the right people. How do you ask good questions?

To better understand your target audience, we’ve recommended the following questions (adjust the wording as you see fit):

  • Who are you? (Get the demographic data and see if there are any trends.)
  • What are you using [your product] for? What problem does it solve for you? (Understand the problem, and uncover unintended uses.)
  • How is your life better thanks to it? Which tangible improvements in your life or business have you seen?
  • What do you like about our product the most?
  • Did you consider any alternatives to our product (prior to signing up)? If so, which ones?
  • What made you sign up for our product? What convinced you that it’s a good decision? Why did you choose us over others?
  • Which doubts and hesitations did you have before joining?
  • Which questions did you have, but couldn’t find answers to?
  • Anything else you would like to tell us?

Dr. Karl Blanks from Conversion Rate Experts mentioned in a previous article that the golden questions for them are:

  • What’s the one thing that nearly stopped you buying from us?
  • What was your biggest fear or concern about using us?

Whatever the case, make sure your questions are clear and understandable. Jen Havice has a great example of the trouble that can occur if the questions are confusing:

jen havice

Jen Havice:

“Another one falls into the camp of “don’t make people think” or have to interpret what you’re asking. For instance, I asked a question in a survey asking “What types of online copywriting frustrate you the most?” The problem was that some people thought I meant copy to read instead of to write. Needless to say, this gave me some unhelpful answers.”

Open-ended vs specific questions

Different people have different opinions on this, but in most cases I think it’s smart to avoid yes/no and multiple-choice questions, at least if you’re doing conversion research and uncovering insights for hypotheses. Especially if you’re interested in voice of customer, or digging up insights on fears, doubts, and uncertainties, limited-range questions produce limited-range answers.

That said, there are certain use cases for multiple-choice questions or those with a limited range or scale. Paul Dunstone recommends that you, “ask questions which produce data you can measure (i.e. Data that can be easily broken down into statistics – such as multiple choice or Net Promoter Score data).”

Of course, these questions lead you to different results, so as always, it’s important to factor in your own specific goals with the survey.

To sum it up, the quality of your questions determines the quality of the insight you get. So ask the right questions.

5. You’re Analyzing Your Data Wrong

“You have your way. I have my way. As for the right way, the correct way, and the only way, it does not exist.” -Friedrich Nietzsche

First off, to be clear, there is no general consensus among qualitative researchers about the process of qualitative data analysis. There might not be a single best way to do it.

Here’s our process. It’s all manual labor and it takes time:

  1. Be clear about your goals. What are you looking for?
  2. Conduct an initial review of all the information to gain an initial sense of the data.
  3. Code the data. This is often described as ‘reducing the data’, and usually involves developing codes or categories (while still keeping the raw data).
  4. Interpret the data.
  5. Write a summary report of the findings.

This isn’t a linear process. It’s active and continuous. It’s normal to jump between these steps and go back and forth during analysis. Be prepared to spend a few hours, at least 4-6, on this.

Initial Review of Data

For this part, go in and look at the responses individually. Some questions can be grouped together, like doubts/hesitations/unanswered questions, customer persona data, etc.

The goal in the initial review is to identify broad trends, and then create a code for each trend. The code is usually a word or short phrase that suggests how the associated data helps us reach the goals we set in the previous steps. Make sure you write the codes down!

Coding enables you to organize large amounts of text and to discover patterns that would be difficult to detect by reading alone. Codes answer the questions, “What do I see going on here?” or “How do I categorize the information?”

Codification

Now that you have a list of codes, go back and attach codes to as many responses as you can. Not all the answers can be labeled, and it’s totally fine to tweak, add, and eliminate codes as you get a better understanding of the data. The goal here is to link elements of the data that seem to share some perceived commonality.

As an example, we had a client whose product was “vegan healthy meal plans”: a weekly grocery shopping list with recipes for each breakfast, lunch, and dinner for 7 days.

Reading through the responses for the first time, we noticed three typical use cases:

  1. Busy mom: someone too busy to think about what to shop for and what to cook.
  2. Overweight or sick people who want to get healthy by following the meal plans.
  3. Vegans/people with celiac disease: people who bought it because it’s gluten-free and vegan.

So these were the three codes. During the second reading, we went in and added comments that said “busy,” “overweight,” or “vegan,” and counted the number of responses per code to get an idea of the distribution.
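With a few hundred responses, a rough keyword-matching pass can speed up that counting. Here’s a sketch; the keywords are illustrative guesses, and you’d still hand-label whatever they miss or mislabel:

```python
# First-pass codification: match responses against keywords for each
# code from the initial review, then count responses per code.
from collections import Counter

CODE_KEYWORDS = {
    "busy": ["busy", "no time", "planning"],
    "overweight": ["weight", "health", "doctor"],
    "vegan": ["vegan", "gluten", "celiac"],
}

def code_response(text):
    text = text.lower()
    codes = [code for code, kws in CODE_KEYWORDS.items()
             if any(kw in text for kw in kws)]
    return codes or ["uncoded"]  # flag responses for manual review

responses = [
    "I'm a busy mom with no time to plan meals",
    "My doctor said I need to lose weight",
    "Finally a gluten-free plan that works for celiacs",
]

counts = Counter(code for r in responses for code in code_response(r))
print(counts)  # Counter({'busy': 1, 'overweight': 1, 'vegan': 1})
```

A response can match more than one code, which is fine; you’re counting mentions to see the distribution, not forcing each person into a single bucket.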

Interpret the Data

Now that you’ve gone through the data so many times that you’re sick of it, what patterns do you detect? Write down what you can about hypothetical personas (as many as you can spot), and count the responses per code to prioritize issues.

Put Together a Summary

Here’s where you summarize your findings. Make sure you write down key learnings (your memory isn’t as great as you think). Keep them at hand, and combine them with other forms of research to formulate hypotheses.

Another thing that can help with visualization is word clouds. Though they can’t stand alone, they can complement your reading of the responses.

Can you guess the title of this book? (Image Source)
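Word clouds are quick to generate yourself. Here’s a minimal sketch using the open-source wordcloud Python package (pip install wordcloud):

```python
# Build a word-cloud image from the raw survey responses.
from wordcloud import WordCloud

responses = [
    "too busy to plan meals every week",
    "wanted gluten-free vegan recipes",
    "needed help eating healthier",
]

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(responses))
cloud.to_file("responses_cloud.png")
```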

6. Beware of Biases

We all have a bias blind spot, or a tendency to believe that we are less affected by biases than others. That’s crap, though, because we’re all subject to a variety of cognitive biases as well as errors and biases specific to qualitative research.

One of the most common biases to beware of is confirmation bias: “Your opinions are the result of years of paying attention to information which confirmed what you believed while ignoring information which challenged your preconceived notions.”

Related, but slightly different, is the backfire effect. That’s when, in light of new information that opposes your existing view, you double down and strengthen the inaccurate belief. You might recognize this in a testing context: results say one thing; the HiPPO wants to do another. It’s disconcerting.

Bias also occurs in every part of the research process: planning, data collection, analysis, and publication. According to a paper titled Identifying and Avoiding Bias in Research:

“Bias is not a dichotomous variable. Interpretation of bias cannot be limited to a simple inquisition: is bias present or not? Instead, reviewers of the literature must consider the degree to which bias was prevented by proper study design and implementation. As some degree of bias is nearly always present in a published study, readers must also consider how bias might influence a study’s conclusions.”

Some of the most common ones here have to do with selection bias, particularly self-selection bias. As a WiderFunnel article mentioned:

“Self-selection bias is a significant problem when users volunteer to be in a study…You’ll also never be able study people who don’t want to participate in studies. Look out for real customer motivations for giving feedback.”

There are plenty of other biases to be aware of as well. I won’t go too in-depth here (that would be an article of its own), but it’s worth reading up on the most common cognitive and research biases before your next survey.

Tips And Tricks

Here are a few ways to ensure you get not only good responses, but any responses at all.

First, when sending invitations for the survey, keep the email short and simple. Make it clear. Mention how long the survey will take (“2-minute survey”) or how many questions it has (“5 short questions”). Use incentives, such as the chance to win an Amazon gift card.

Second, trigger urgency by using fairly short deadlines (<1 week). After that, follow up with reminder emails. In our experience, about 80% of responses will come within the first 24 hours of sending the email.


Conclusion

Customer surveys are important. They are a way of opening up a dialogue with prospects, existing customers, and former customers in order to figure out pain points and create a better user experience (and boost conversions). But they’re often done wrong, or at least not as well as they could be.

Most of it comes down to intimately understanding your business goals and how customer surveys will help you accomplish them. When you’re implementing customer surveys, keep the following in mind:

  1. Think about your business objectives.
  2. Survey the right people.
  3. Survey the right number of people.
  4. Ask the right questions.
  5. Analyze your data correctly.
  6. Keep biases in check.

Finally, use the data correctly – or at least use it at all. Some people will collect hundreds of responses and let them sit dormant. This is likely related to not having clear business objectives with your survey. The important thing is putting the insights to use.


Comments

  1. Awesome post, Alex. I’ve experienced a lot of the backfire effect with current and past clients doubling down on what they know is inaccurate in light of unveiling new contradictory information.

    Maybe it would be good to write a follow-up beginner post about stat testing survey test results to help eliminate different biases from survey participants.

    What tools do you suggest for stat testing survey results (for small businesses and marketers who can’t afford Stata or SAS)?

    1. Not a bad idea for an article. I’ll see what we can do!

      For stat testing survey results, I’m not too sure. For the type of customer surveys I describe in the article, it’s more about spotting trends, so it’s all manual labor. Reading through results again and again.

  2. Hi Alex, thanks for mentioning SurveyGizmo. If I had enough magical powers to make people read this before creating surveys I absolutely would!

  3. This is one question I’d like to see every ecommerce site I shop on ask me: “What’s the one thing that nearly stopped you buying from us?” Some of them are truly horrendous, but because they have very little current competition I buy from them anyway. I have two I would nominate for the absolute worst ecommerce sites ever – and one of them is part of the S&P 500 so surely they could afford to hire someone to fix their problems.

    Do conversion optimization experts ever work with PhD organizational psychologists who specialize in survey design and analysis? I know that is what NBRII.com does, but until this post it never occurred to me that it would make sense for them and ecommerce conversion optimizers to be working together.

  4. Awesome article, Alex. Found so many useful tips and resources. Thanks, Niraj (Founder at hiverhq.com)

