
Improving Survey Analysis: 3 Steps to Get More from Customer Feedback


Even if you collect customer feedback, it won’t have much value if your survey analysis falls short.

From not preparing data correctly to jumping to conclusions based on statistically insignificant data, a lot can go wrong. Thankfully, there are some easy wins.

This post shows you how to improve survey analysis in three steps:

  1. Choose the right questions to get useful data;
  2. Prepare your data for analysis;
  3. Conduct a driver analysis on your data.

Before any of that, however, you need to clarify your goals.

Know what you want to get out of your customer survey

What do you want to find out? Before considering a specific research method or survey question, you must define the research question. Which issues do you need to tackle to make more money or keep your customers happy? (Importantly, your research question is not the same question that you ask users.)

For customer surveys, the biggest divide is between quantitative and qualitative research. The common refrain is that quantitative data tells you the “what” while qualitative data tells you the “why.” But even that framing has its limitations.

No single article can cover survey analysis for every type of research. The lessons in this post apply broadly to many types of surveys that seek to understand how people feel about your company, product, or service and why they feel that way.

Still, for simplicity, this article uses Net Promoter Score as an example. For one, it’s widely used. But it also showcases why a single-question, quantitative survey limits your analysis.

As such, it’s an opportunity to highlight how powerful your analysis can become if you add (and code and analyze) qualitative follow-up questions.

Once you’ve figured out the question your company needs to answer, you can move on to finding the right questions to pose to your customers.

1. Choose the right questions

Good survey analysis starts with good survey design. Asking the right questions sets you up for success with your analysis. For customer feedback, everyone knows the classics on the menu: customer satisfaction, Net Promoter Score (NPS), and customer effort score.

But, for decades, market researchers have faced a quandary: they want to maximize the response rate while also learning as much as possible. And most customers and users (80%, to be specific) faced with a laundry list of survey questions will bail.

If you have to reduce your laundry list down to a single item or question, what do you ask? For better or worse, the answer for many has been NPS. The limitations of NPS are a useful way to understand how survey design impacts your analysis.


The limitations of Net Promoter Score

When introducing NPS to the world, Frederick Reichheld rightly pointed out that existing customer satisfaction survey approaches weren’t effective. They were overly complex, outdated by the time they reached front-line managers, and frequently yielded flawed results.

Yet the idea that NPS is the “one number you need to grow” has set many on the wrong path. (Learn more about its limitations here.)

NPS’s simplicity is the primary reason why so many swear by it, but that simplicity is also its flaw. NPS should always be asked—and even more importantly, analyzed—in tandem with other questions.

While NPS seems clear cut, it can be quite vague:

  • Why did your customer pick a score of 6 rather than 7 (making them a detractor rather than a mere passive)?
  • What is the statistical difference between a score of 9 and a score of 10? How should it influence decision-making?

Questions that involve likelihood to revisit, repeat purchases, or product reuse can be just as good (if not better) predictors of customer loyalty, depending on your product or industry.

So what’s the solution?

Ladder questions to gain better insights

Thus, we get to the crux of choosing the right questions: you need more than one. You should ladder your questions to find out more. One-question surveys are tempting: they deliver numbers quickly without lengthy analysis. But they’re insufficient.

(The good news is that it doesn’t require as much effort as you think to analyze more complex data—more on that later.)

Laddering questions lets the respondent elaborate on an answer they gave. This can be as simple and effective as an open-ended question after a closed question. The purpose is to determine the “why,” which will allow you to perform the driver analysis in Step 3.

If a customer gives an NPS of 6, you may ask as your follow-up, “What prompted you to give us a 6?” or “What is the most important reason for your score?”

Another option for customer satisfaction surveys is to ladder questions with a second closed question. For example, consider the following questions for ACME, a fictional bank:

  1. How would you rate your satisfaction with ACME bank?
  2. How would you rate the following from ACME bank:
  • Fees;
  • Interest rates;
  • Phone service;
  • Branch service;
  • Online service;
  • ATM availability.

This would give you the data to help determine which aspects of the bank’s service influence overall satisfaction.

Once you’ve supplemented your initial score with follow-up answers, you can clean and code the data to make your survey analysis far more powerful.

2. Prepare your data for analysis

One of the most important components of data processing is quality assurance. Only clean data will give you valid results. If your survey uses logic, a first step is to ensure that no respondent answered questions they shouldn’t have answered.

Beyond that simple check, the process has two main components: data cleaning and data coding.

Cleaning data

Cleaning messy data involves:

  • Identifying outliers;
  • Deleting duplicate records;
  • Identifying contradictory, invalid, or dodgy responses.

Two types of respondents often muck up your data: speedsters and flatliners. They can be especially problematic when rewards are associated with the completion of your survey.

Speedsters. Speedsters are respondents who complete the survey in a fraction of the time it should have taken them. Therefore, they could not have read and answered all the questions properly.

You can identify speedsters by setting an expected length of time to complete the entirety or a section of the survey. Then, remove any respondents who fall significantly outside of that time.

An industry standard is to remove respondents who complete the survey in less than one-third of the median time.
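
As a sketch of that rule in Python with pandas (the file and column names are assumptions):

    import pandas as pd

    df = pd.read_csv("survey_responses.csv")  # hypothetical export with a duration column

    # Drop anyone who finished in less than one-third of the median completion time
    cutoff = df["duration_seconds"].median() / 3
    speedsters = df["duration_seconds"] < cutoff
    clean = df[~speedsters]

    print(f"Removed {speedsters.sum()} speedsters out of {len(df)} respondents")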

Flatliners. Flatlining, sometimes called straight-lining, happens when a respondent picks the exact same answer for each item in a series of ratings or grid questions.

For example, they may pick the answer 1 for a series of rating questions like “On a scale of 1–5, where 1 means ‘not satisfied’ and 5 means ‘extremely satisfied,’ how would you rate each of the following options?”
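
A minimal pandas check for this pattern, assuming hypothetical grid-column names:

    # `clean` is the speedster-free DataFrame from the sketch above;
    # grid_cols are the rating-grid columns from one question battery (names are assumptions)
    grid_cols = ["rate_fees", "rate_rates", "rate_phone", "rate_branch"]

    # A respondent who gives exactly one distinct answer across the grid is a flatliner
    is_flatliner = clean[grid_cols].nunique(axis=1) == 1
    flagged = clean[is_flatliner]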


Flatliners pollute survey data by marking the same response for every answer. Occasionally, you may want to design your survey to try to catch flatliners. You can do so by asking contradictory questions or including a form of front/back validation, which asks the same question twice.

For example, you may ask respondents to select their age bracket twice but place the age brackets in a different order for consecutive questions. Doing so, however, will make your survey longer.

Questions like the one above can filter out inattentive survey takers, but they may lower completion rates.

You must balance maximizing completion rates against collecting clean data. It’s a judgment call. How important was the series of questions they flatlined on? Did it determine which other questions they were asked? If they flatline for three or four consecutive responses, are their other responses still valid?

You can use practical considerations to help determine the balance. For example, if you want to survey 1,000 people and end up 50 over your quota, it may be easier to take a more aggressive approach with potentially dodgy respondents.

There are other quality-control measures you can implement in your survey design or check during data cleaning.

  • Make open-ended questions mandatory. If a respondent provides gibberish answers (random letters or numbers, etc.), review their other answers to decide whether to remove them.
  • Put in a red-herring question to catch speedsters, flatliners, and other inattentive respondents. Include fake brands or obviously fake products in an answer list. (One note of caution: Make sure that these fake items don’t have similar spellings to existing brands.) Flag respondents who select two or more fake items, as in the sketch below.
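
A hedged sketch of that flag, assuming each respondent’s selected brands are stored as a list in the DataFrame (all names hypothetical):

    # Decoy brands planted in the answer list
    FAKE_ITEMS = {"Brandex", "Qorvix"}

    clean["fake_picks"] = clean["brands_selected"].apply(
        lambda selected: len(FAKE_ITEMS & set(selected))
    )
    suspect = clean[clean["fake_picks"] >= 2]  # flag for review or removal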

Once you have clean data, you can start coding—manually for smaller datasets, programmatically for large ones.

Manual coding for small datasets

The traditional method of dealing with open-ended feedback is to code it manually. This involves reading through some of the responses (e.g. 200 randomly selected responses) and using your judgment to identify categories.

For example, if a question asks about attitudes toward a brand’s image, some of the categories may be:

  1. Fun;
  2. Worth what you pay;
  3. Innovative, etc.

This list of categories and the numerical codes assigned to them is known as a code frame. After you have a code frame, you read all the data entries and manually match each response to an assigned value.

For example, if someone said, “I think the brand is really fun,” that response would be assigned a code of 1 (“Fun”). The responses can be assigned one value (single response) or multiple values (multiple response).

You end up with a dataset that looks like the following:
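
For illustration, with hypothetical responses and codes:

    Response                                  Code(s)
    “I think the brand is really fun”         1
    “Fun, and worth what you pay”             1, 2
    “They’re always trying new things”        3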

If your dataset is too big to code manually, there’s another option: sentiment analysis.

Sentiment analysis to code big datasets

There is another method of text analytics called sentiment analysis or, sometimes, opinion mining. Text analytics uses an algorithm to convert text to numbers to perform a quantitative analysis.

In the case of sentiment analysis, an algorithm counts the positive and negative words that appear in a response, then subtracts the negative count from the positive count to generate a sentiment score. A response containing two positive words and no negative words scores +2; one containing two negative words and no positive words scores -2.
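
Here’s a toy version of that counting logic in Python. The word lists are deliberately tiny; a real project would use an established sentiment lexicon or library:

    POSITIVE = {"friendly", "helpful", "great", "love", "fast"}
    NEGATIVE = {"slow", "rude", "broken", "bad", "hate"}

    def sentiment_score(text: str) -> int:
        """Positive word count minus negative word count."""
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    print(sentiment_score("The staff were friendly and helpful"))  # +2
    print(sentiment_score("Slow and rude service"))                # -2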

While sentiment analysis seems simple and has some advantages, it also has limitations. There will always be a degree of noise and error. Sentiment analysis algorithms struggle with sarcasm and often are poor interpreters of meaning (for now, at least).

For example, it’s difficult to train a computer to correctly interpret a response like “I love George Clooney. NOT!” as negative. But if the alternative is trawling through thousands of responses, the trade-off is obvious.

Not only is sentiment analysis much faster than manual coding, it’s cheaper, too. It also means that you can quickly identify and filter for responses with extreme sentiment scores (e.g. -5) to better understand why someone had a strong reaction.

There are a few keys to make sentiment analysis coding work:

  • Ask shorter, direct questions to solicit an emotion or opinion without leading respondents. For example, replace “What do you think about George Clooney?” with “How do you feel about George Clooney?” or “When you think about George Clooney, what words come up?” The latter questions are less likely to generate ambivalent, tough-to-interpret responses.
  • Avoid using sentiment analysis on follow-up questions. For example, responses to “Why did you give us a low score?” are not suitable for sentiment analysis because the question is asked of people already leaning toward a negative sentiment.
  • Be wary of multiple opinions in a single response. Answers that cover several aspects at once will skew your results toward the middle ground, which is why shorter, more direct questions about a single aspect are better.

Before proceeding to statistical analysis, you need to summarize your cleaned and coded data.

Summarize your survey data for analysis

NPS. Calculating your NPS is simple. Divide your respondents into three groups based on their score: 9–10 are promoters, 7–8 are passives, 0–6 are detractors. Then, use the following formula:

% Promoters - % Detractors = NPS

One additional note: NPS reduces an 11-point scale to a 3-point scale (detractors, passives, and promoters), which can limit your ability to run statistical tests in most software.

The solution is to recode your raw data. Turn your detractor values (0–6) into -100, your passives (7–8) into 0, and promoters (9–10) into 100.

You can manually recode and calculate your NPS in Excel using an IF formula:
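
A minimal version, assuming raw 0–10 scores in column A (data starting in row 2) and the recoded values going into column B:

    =IF(A2>=9, 100, IF(A2>=7, 0, -100))

Averaging the recoded column (e.g. =AVERAGE(B2:B1001)) then gives your NPS directly, because the -100/0/100 recoding makes the mean equal to the percentage of promoters minus the percentage of detractors.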

Customer satisfaction score. For a customer satisfaction score, you generally ask customers a question like: “How would you rate your overall satisfaction with the service you received?” Respondents rate their satisfaction on a scale from 1 to 5, with 5 being “very satisfied.”

To calculate the percentage of satisfied customers, use this formula:

(customers who rated 4–5 / responses) x 100 = % satisfied customers 
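
The same calculation as a quick pandas sketch (the ratings are hypothetical):

    import pandas as pd

    satisfaction = pd.Series([5, 4, 3, 5, 2, 4, 4, 1])  # 1–5 ratings
    pct_satisfied = satisfaction.ge(4).mean() * 100      # share who answered 4 or 5
    print(f"{pct_satisfied:.1f}% satisfied")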

Now you can move on to driver analysis of your survey data.

3. Conduct a driver analysis

Alternatively known as key driver analysis, importance analysis, and relative importance analysis, driver analysis quantifies the importance of predictor variables in predicting an outcome variable.

Driver analysis is helpful for answering questions like:

  • “Should we focus on reducing prices or improving the quality of our products?”
  • “Should we focus our positioning as being innovative or reliable?”

This component of survey analysis consists of five steps.

Step 1: Stack your data

To conduct a driver analysis, first stack your data.


In a stacked format, one row represents one respondent: the first column holds your quantitative metric (e.g. NPS), while the remaining columns hold the coded responses to your open-ended follow-up questions.
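
A minimal illustration in pandas, with entirely hypothetical values:

    import pandas as pd

    # One row per respondent: the NPS score plus 0/1 codes from the follow-up question
    stacked = pd.DataFrame({
        "nps":        [9, 3, 7, 10, 6],
        "fun":        [1, 0, 1, 1, 0],
        "worth":      [1, 0, 0, 1, 0],
        "innovative": [0, 1, 1, 0, 1],
    })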

Step 2: Choose your regression model

There are several types of regression models. Software can, and should, do the heavy lifting. The key decision for a researcher is which model to use. The most common models are linear regression and logistic regression. Most studies use a linear regression model, but the decision depends on your data.

Explaining exactly which model to use is beyond the scope of this article, as the answer is specific to your data. Briefly, linear regression is appropriate when your outcome variable is continuous or numeric (e.g. NPS).

Logistic regression should be used when your outcome variable is binary (e.g. has two categories like “Do you prefer coffee or tea?”). You can find a more detailed explanation in this ebook.

In the steps below, I run through an example to show how easy it is to do a regression analysis once you’ve chosen a model. If you prefer to see the process as part of an interactive tutorial, you can do so here.

Step 3: Run your data through a statistics tool

We completed a survey that asked respondents the following questions:

  1. How likely are you to recommend Apple? (standard NPS question)
  2. What is the most important reason for your score? (open-ended response)

We wanted to see the key brand elements that caused respondents to recommend Apple.

We coded the free-text answers to question two.

We loaded this into our analysis software (Displayr), and we ran a logistic regression to see which of the brand attributes were most important in determining the NPS score.

(Most standard software packages like Stata, SAS, or SPSS offer logistic regression. In Excel, you can use an add-in like XLSTAT. There are also some handy YouTube videos, like this one by Data Analysis Videos.)
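
If you’d rather script the regression, a minimal sketch with Python’s statsmodels might look like this; it reuses the hypothetical stacked data from Step 1 and codes promoters as 1, everyone else as 0:

    import statsmodels.api as sm

    X = sm.add_constant(stacked[["fun", "worth", "innovative"]])
    y = (stacked["nps"] >= 9).astype(int)  # 1 = promoter, 0 = passive or detractor

    model = sm.Logit(y, X).fit()
    print(model.summary())  # coefficients with z-statistics and p-values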

The further the t value is from 0, the stronger the predictor variable (e.g. “Fun”) is for the outcome variable (NPS). In this example, “Fun” and “Worth what you pay for” have the greatest influence on NPS.

Since our estimates for “Fun” and “Worth what you pay for” are fairly close together, we ran a relative importance analysis to confirm the results.


This second analysis is a hedge against a potential shortcoming of logistic regression, collinearity. Collinearity “tends to inflate the variance of at least one estimated regression coefficient. This can cause at least some regression coefficients to have the wrong sign.”
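
If you want to check for collinearity yourself before trusting the coefficients, one common diagnostic (an addition here, not part of the original analysis) is the variance inflation factor from statsmodels; values above roughly 5–10 are usually treated as a warning sign:

    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Reusing the hypothetical `stacked` DataFrame from Step 1
    X = sm.add_constant(stacked[["fun", "worth", "innovative"]])
    for i, col in enumerate(X.columns):
        print(col, variance_inflation_factor(X.values, i))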

Step 4: Test for significance

How do you know when (quantitative) changes in your customer satisfaction feedback are significant? You need to separate the signal from the noise.

Statistical significance testing is automated and built into most statistical packages. If you need to work out your statistical significance manually, read this guide.
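
As one simple approach among many: if you recoded NPS responses to -100/0/100 as described earlier, you can compare two survey waves with a two-sample t-test in SciPy (the scores below are hypothetical):

    from scipy import stats

    wave_1 = [100, 0, -100, 100, 0, 100, -100, 0]   # recoded NPS, wave 1
    wave_2 = [100, 100, 0, 100, -100, 100, 0, 100]  # recoded NPS, wave 2

    t_stat, p_value = stats.ttest_ind(wave_1, wave_2)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # small p suggests a real shift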

(You should also be aware of the limitations and common pitfalls of statistical significance, which you can read about here, here, or here.)

Step 5: Visualize your data

Finally, we can visualize the results to share with colleagues. The goal is to enable them to understand the results without the distraction of number-heavy tables and statistics.


One way to visualize customer feedback data, especially NPS, is to plot the frequency or percentage of the ratings. For example, if you want to show the distribution of ratings within each promoter group, you can use a bar pictograph visualization.

Color coding the bars by promoter group makes it easier to distinguish between the groups, and it surfaces an important observation: among detractors, there is a far bigger concentration of scores of 5–6 than of 0–4.
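
A plain matplotlib bar chart makes a workable stand-in for a pictograph; this sketch assumes a list of raw 0–10 scores (values hypothetical):

    import matplotlib.pyplot as plt

    scores = [6, 9, 10, 5, 7, 3, 6, 8, 10, 6, 5, 9]  # raw NPS responses
    counts = [scores.count(i) for i in range(11)]

    # Red for detractors (0–6), grey for passives (7–8), green for promoters (9–10)
    colors = ["tomato"] * 7 + ["grey"] * 2 + ["seagreen"] * 2
    plt.bar(range(11), counts, color=colors)
    plt.xlabel("Rating (0–10)")
    plt.ylabel("Respondents")
    plt.show()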


These 5–6 detractors are much more likely to be swayed than those who gave a score closer to 0. With that in mind, you could focus your analysis of qualitative responses on that group.

Comparison visualizations. You can also compare the NPS of different brands to benchmark against competitors, for example with a stacked bar chart of responses by promoter group for different tech companies.

While you lose some detail by aggregating ratings by promoter group, you gain the ability to compare different brands efficiently.

At a glance, it’s easy to spot that Google has the most promoters and fewest detractors. IBM, Intel, HP, Dell, and Yahoo all struggle with a high number of detractors and few promoters.

This visualization also shows the size of the passive group relative to the other two. (A common mistake is to concentrate solely on promoters or detractors.)

If the size of your passive group is large relative to your promoters and detractors, you can dramatically improve your NPS by nudging them over the line. (There’s also the risk, of course, that they could swing the other way.)

Visualizations over time. You can track your NPS over time with a column or line chart (if you have enough responses over a long period of time). These charts can help show the impact of marketing campaigns or product changes.

Be careful, though. If your NPS fluctuates (perhaps due to seasonal sales or campaigns), it can be difficult to spot a trend. A trendline can indicate the overall trajectory.

For example, when scores bounce around from month to month, it can be difficult to spot which way NPS is trending without the help of a trend line.


Conclusion

When it comes to improving survey analysis, a lot comes down to how well you design your survey and prepare your data.

No single metric, NPS included, is perfect. Open-ended follow-up questions after closed questions can help provide context.

There are a few other keys to successful survey analysis:

  • Ladder your questions to get context for ratings and scores.
  • Design your survey well to avoid headaches when prepping and analyzing data.
  • Clean your data before analysis.
  • Use driver analysis to get that “Aha!” moment.
  • Manually code open-ended questions for small surveys; use sentiment analysis if you have thousands of responses.
  • Test for significance.
  • Visualize your data to help it resonate within your organization.
