by David Ham | July 25, 2018
It is becoming conventional wisdom that respondents get annoyed with longer surveys. However, it is critical that your surveys are properly designed to capture the information needed to drive intelligent investments in customer and employee relationships.
If you are in the survey research field, you have heard two questions from internal or external clients many times:
- How soon can we go live?
- How long will the survey be?
The first one is usually easier for me to answer. I give them a time frame that is optimistic but realistic, and then I tell them why it almost always takes longer. My clients typically have surveys reviewed by their team’s subject matter experts (SMEs), and the result is pretty much always the same. Each SME provides feedback that the survey is way too long, and each has two or three questions about their own area of expertise that need to be added. It’s a classic example of goal conflict.
That segues nicely into the second question: how long will the survey be? I tell clients to expect thirty questions, and they often look stunned. We all know that is way too long, right? There has been plenty of talk in recent years about needing only one question, or very few questions, to get the needed information. It is striking that so many executives think such limited information can adequately guide business investment decisions.
Imagine going to your physician for an annual health maintenance exam, having her measure your height, and hearing that is all she needs. You would immediately look for a new physician. Yet people have been trained in recent years to expect survey research to work that same way: a single quick measurement that somehow provides sufficient information on how to invest in customer or employee relationships.
While there are a lot of opinions out there about the importance of short surveys to ensure higher response rates, how much focus is there on data quality?
At CFI Group, we have looked at the relationships between survey structure and the quality of the data it provides. Based on what I have heard from other survey practitioners, it is becoming conventional wisdom that long surveys drive scores down because respondents get annoyed.
For a major logistics company, we tested asking an overall satisfaction question twice in the same survey, once at the start and again at the end. Asking satisfaction at the start was a good proxy for the quick, top-of-mind data that many organizations now seek. What we found was that the early satisfaction question scored lower on average than the later one. Further investigation showed that the intervening questions about specific types of experiences reminded customers of all the things that go well on a day-to-day basis with this company, as opposed to the few bad top-of-mind experiences that biased the scores of the initial satisfaction question.
We conducted a similar test for a multinational telecommunications company and saw the same result. The conclusion was that the additional questions preceding the satisfaction question produced a more realistic, well-considered, and accurate assessment of the customer experience.
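The start-versus-end comparison above is easy to reproduce with your own survey data. The sketch below uses made-up respondent scores (not the client datasets from these studies) to show the basic calculation: the average of the top-of-mind question asked first, the average of the same question asked last, and the mean per-respondent shift between them.

```python
from statistics import mean

# Hypothetical respondent data (illustrative only, not the actual client dataset):
# each tuple is (satisfaction asked at survey start, same question asked at end),
# on a 1-10 scale.
responses = [
    (6, 8), (7, 7), (5, 8), (8, 9), (4, 7),
    (9, 9), (6, 7), (7, 9), (5, 6), (8, 8),
]

early = [e for e, _ in responses]  # top-of-mind scores
late = [l for _, l in responses]   # scores after the diagnostic questions

print(f"early mean: {mean(early):.2f}")  # prints "early mean: 6.50"
print(f"late mean:  {mean(late):.2f}")   # prints "late mean:  7.80"
print(f"mean shift: {mean(l - e for e, l in responses):+.2f}")  # prints "mean shift: +1.30"
```

A positive mean shift in your own data would be consistent with the pattern described here: the intervening questions prompt a fuller recollection of the experience before the final rating.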
A client in the government sector performed a specific test of survey length by using three different questionnaire variations. The “short” version used only the three index questions from the American Customer Satisfaction Index (ACSI). As expected, it received a slightly higher response rate than the “medium” and “long” versions. Those two had 15 and 30 questions, respectively, with both versions asking the three key ACSI questions at the end. The medium and long surveys both had the same response rate, meaning that someone starting the survey was no more likely to drop out because of the 15 additional questions in the long version.
The difference was in the satisfaction scores. Respondents to the short survey reported being the least satisfied. They were evaluating technical service and support quality, and their top-of-mind responses focused on problems. As survey length increased, satisfaction scores increased as well. The longer surveys reminded respondents of all the things that went right on a day-to-day basis and, importantly, provided the most diagnostic data about what needed to improve further. It is that data which ultimately delivers the return on investment for the survey, by identifying the aspects of customer and employee experiences that most need to be fixed.
We know that survey volume can be a problem. We are all getting hit with invitations everywhere we turn. Easy access to survey software has made everyone an expert, with the expectation that shorter surveys will generate more feedback. It is critical, however, to ensure that your surveys are properly designed to capture the information needed to drive intelligent investments in customer and employee relationships.
CFI Group offers expertise in helping businesses measure and manage the customer and employee experience. Contact us for more information on how you can design a survey to appropriately capture the information you need to drive change.