We’ve all heard it or even said it: “You get what you pay for.” Increasingly in the new home business, that means paying employees — salespeople, construction managers, warranty service representatives, etc. — for delivering high customer satisfaction scores on home buyer surveys. Seems to make sense, doesn’t it? The truth: Linking compensation to customer satisfaction scores can be risky. Consider this real-world example:
There once was a builder who claimed 95% customer satisfaction, with more than 80% of his customers responding to his survey. Based on these results, the builder paid hefty bonuses by community and to individual employees who scored high.
So after all the great results and generous incentives, why did this builder have only par referral sales and mediocre results on independent surveys published in his market?
Simply, he didn’t have accurate data. He missed the target in measuring customer satisfaction. This builder used a phone survey to ask 50 questions immediately after move-in. He applied an aggressive strategy to achieve high levels of response. By repeatedly contacting his home buyers until they took the survey, he got more than 80% of them to respond. This sounds good, so what went wrong?
A common mistake companies make when surveying is to use invasive “pressure surveys” rather than tactful voluntary surveys. We see vast differences between survey results based on which method is used. Badgering customers for an opinion, with the good intention of getting a high response rate, introduces too much error. This happens through repeatedly calling customers, having employees pressure customers when they see them, or sending excessive mailings to customers to partake in a survey. This irritates customers and disrupts a builder’s ability to do what experts say is the most important thing in home building: provide customer service. If customer service is the most important factor in making home buyers happy, then why use a measurement system that irritates them? How can you schedule service work when you have a research company calling them five times a day for a survey?
By pressuring customers to respond, your customer satisfaction measurement (CSM) is impeding your customer relationship management (CRM). To make matters worse, many companies tie a significant portion of their employee incentive programs to CSM, increasing the pressure to get surveys in. Sound familiar? For many in the industry, this is a frustrating truth.
Customer satisfaction is based on emotion, and emotions are delicate states to measure. Honeymoon effects and social desirability are just two of the well-known sources of error in survey research that affect home buyers’ responses. If you’re not careful with your measurement system, these might creep into your results.
The Honeymoon Effect
One reason the aforementioned builder showed high customer satisfaction that was not supported in third-party surveys was that he surveyed too soon after move-in. This is what researchers call the honeymoon effect: Survey results are skewed because of an unusually charged emotional experience. In this example, there was so much positive emotion surrounding the purchase of a new home that it exaggerated satisfaction levels. Had this builder surveyed customers 30 days after move-in, the results would have been different. The honeymoon effect associated with closing surveys has been documented in numerous studies on home buyer satisfaction. On average, home buyer satisfaction drops 10% from closing to 30 days after move-in.
Survey Length and Delivery Mode
In our example, customers were badgered for responses to a phone survey, and they typically felt irritated or threatened. To make matters worse, this builder used a 50-question survey to gauge customer responses. Asking 50 questions over the phone takes an excessive amount of time, which strongly affects the data. Research studies have found statistically significant differences between results gathered via the telephone and results gathered through the mail, using an identical survey.
Which method is right? We’ll never be able to offer a 100% verifiable answer to this question, yet common sense and examination of qualitative data give good clues. Phone surveys of home buyer satisfaction that use more than 15 questions and involve repeated attempts create discomfort, anger, fear, social pressure, exhaustion and discontent. These are unrelated to satisfaction with you as a builder, and they muddy the results both positively and negatively. Customers in phone-based pressure surveys tend to do one of four things:
- Become irritated and provide overly negative results
- Feel guilt/pressure to avoid conflict and respond by being “nice” to the builder (social desirability)
- Become confused with the questions and just want to “get it over with”
- Provide an accurate, balanced assessment of their experience
Errors occur in all surveys, but when customers are pressured with a long phone survey during dinner, the likelihood of error in the results increases dramatically.
The example I like to give that demonstrates the power of social pressure on survey results is this: You are at a restaurant where the meal and service aren’t good, but you decide to stick it out anyway. After you’ve force-fed yourself and gained control of your anger at the waiter, the owner approaches and asks, “How was your meal?” A few of us would answer truthfully, but most people would respond with “fine” or a similar “nice” response.
The same often happens when surveying home buyers with intrusive methods. That’s not good when the data are used to guide your company and trigger bonuses to your employees and trades.
A big reason companies often get into pressure-survey mode is a concern over response rates. I often hear builders ask, “How can I provide bonuses to my employees without having at least a 75% response rate on my surveys?” The answer depends on how you bonus your employees.
In studies by NRS Corp., we produced reports at a 50% response rate and later at 75% using noninvasive methods. The data changed only 0.75 of a point on overall home buyer satisfaction. This means happy and unhappy customers respond in equal balance to noninvasive surveys, even at 50% response levels.
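The intuition behind that finding can be sketched with a toy simulation. This is not NRS Corp.'s methodology; the community size, score scale, and satisfaction numbers below are invented for illustration. The point is simply that when responders are an unbiased (voluntary, unpressured) cross-section of buyers, raising the response rate from 50% to 75% barely moves the estimate:

```python
import random

random.seed(7)

# Hypothetical community of 200 home buyers with "true" satisfaction
# scores on a 100-point scale (mean and spread are invented numbers).
scores = [random.gauss(85, 10) for _ in range(200)]

def estimate(scores, response_rate):
    """Mean satisfaction among a random (i.e., unbiased) subset of responders."""
    k = int(len(scores) * response_rate)
    responders = random.sample(scores, k)
    return sum(responders) / k

at_50 = estimate(scores, 0.50)
at_75 = estimate(scores, 0.75)
print(f"50% response: {at_50:.1f}")
print(f"75% response: {at_75:.1f}")
print(f"difference:   {abs(at_50 - at_75):.2f} points")
```

The gap between the two estimates stays small because the error comes from sample size, not response rate; the real danger is a *biased* responder pool, which is exactly what pressure surveying creates.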
Customer satisfaction results commonly are tied to a specific project, team and/or employee. This is what researchers call a “business unit.” Despite the technical label, business units define your operations and what you want to track and measure. If you survey too many units independent of each other, you are less likely to end up with a sufficient response rate per unit. For example, a builder once told me that when customer satisfaction results were tied to individual employees, there often were only one or two surveys to use in calculating bonuses. “It’s just not acceptable,” he said. “One month the scores are high and we bonus and pat them on the back, and the next they’re in the toilet.”
He’s right. You can’t bonus employees based on the customer satisfaction results of only a few surveys. The amount of error reported in those results is huge.
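Just how huge that error is can be seen with the standard margin-of-error formula for a proportion. The 90% satisfaction score below is a hypothetical example, but the math is standard: with only a handful of surveys, the 95% confidence interval is so wide that the score is essentially meaningless for bonus decisions.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated from n surveys."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: an employee shows 90% satisfaction.
# How precise is that score at different survey counts?
for n in (2, 5, 30, 100):
    moe = margin_of_error(0.90, n) * 100
    print(f"{n:>3} surveys: 90 +/- {moe:.1f} points")
```

At two surveys the margin is roughly plus or minus 40 points; even at 30 surveys it is still around 10 points, which is why scores that swing from "bonus-worthy" one month to "in the toilet" the next say more about sample size than about the employee.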
Paul Cardis, M.S., M.A., is the president and founder of NRS Corp., a leading research and consulting firm serving home builders since 1993. He has conducted numerous research studies, provided consulting, and written a variety of articles for academic and trade journals on customer satisfaction and market research. He can be e-mailed at email@example.com.