by David Ham | June 13, 2017
As hard-core sports fans know, huge changes are taking place in how player performance is analyzed. Traditional baseball statistics, like batting average for hitters and ERA (earned run average) for pitchers, are no longer as important to front-office analysts. They are now looking at metrics like wOBA (weighted on-base average) and xFIP (expected fielding independent pitching). With the amount of money involved in professional sports contracts, the deeper analysis makes good business sense.
Yet it seems many businesses do the opposite with their customer experience metrics. They use dubious measures taken at face value with little or no understanding of what lies behind them. More importantly, they have little understanding of how these metrics relate to business outcomes.
I remember a life insurance company running TV ads touting something like a “97% customer satisfaction rating.” OK, so what does that mean? My experience as a life insurance customer is this: they send me a bill every year and I pay it. Unless the bill is wrong or the payment mishandled, there’s nothing to be satisfied or dissatisfied about. The ad made me fear that much of the dissatisfied 3% consisted of unfortunate people who had filed a claim after losing a loved one. The 97% statistic sounded impressive but meant nothing without more context.
Automotive dealerships have a bad reputation in this regard. Sales and service customers are routinely told how they should respond to the survey when it arrives. Questions are phrased like this: “Is there any reason you can’t give me a perfect score?” Dealership personnel are often compensated based on their scores, so they feel the need to pressure customers.
When I worked in automotive marketing and research (way back in the 20th century), automakers sometimes made vehicle allocation decisions based on dealership customer satisfaction scores. The better a dealership scored, the more of the highest-demand vehicles it received. I even knew of dealerships that would offer a free oil change to any customer who brought in a blank survey, which dealership staff would then complete and submit themselves.
Of course, automotive companies also feel the pressure to get high scores to earn awards from other companies, which they can then use in their advertising. But what is the point of measuring performance when the process is deeply and systematically corrupted?
The reality is that many companies spend a great deal of money on customer experience metrics but don’t trust those metrics enough to base meaningful business decisions on them. The purpose of these metrics should be to produce action-oriented data that ties to business outcomes, allowing managers to make better-informed decisions that drive business success.
CFI Group’s white paper on the top 10 customer satisfaction survey best practices addresses examples like those discussed above. One common misconception is that companies should strive for a “benchmark” relative to competition, or possibly something that will sound impressive in an ad or an annual report. Instead, organizations should look for the “rightmark” at the optimal performance level that maximizes profitability. (Hint: It’s probably not 97%.)
Another distortion of customer metrics results from rewarding or punishing individuals based on survey scores, as in the automotive dealership example above. In reality, many organizations lack sufficient data to evaluate individual performance. And even with sufficient data, employees might still resort to bad behavior, such as badgering customers for high scores or holding back survey invitations when transactions go poorly.
One retail client I worked with used a different incentive to maximize survey responses. The client held quarterly sweepstakes, awarding gift cards to a handful of survey respondents each quarter. Each store with a winning customer would be rewarded, not because of the satisfaction score but instead to encourage stores to make customers aware of the survey.
The bottom line was that this client wasn’t just looking for happy numbers to make everyone feel warm and fuzzy. Instead, it wanted the truth from its customers so it would know what needed to improve. Ultimately, this company is proving to be a winner today, in a very difficult era for large retail chains.