Who Are Those Guys?
Contact Scott Sedam via e-mail at scott@TRUEN.com
I saw a telling look on a builder's face recently in New Orleans at PB's annual Benchmark conference. Pulte Homes' Phoenix and Minnesota divisions had just received NRS Awards for finishing one-two in homeowner satisfaction among production builders nationwide. These awards were based on a national survey of more than 55,000 homeowners. Pulte's performance astounded NRS Corp. president Paul Cardis, who never expected large production builders to outscore the smaller builders in the survey.
When Les Woody from the Phoenix division walked up to receive the award, the builder I was watching sighed, closed his eyes, lowered his head for a moment, looked up, met the eyes of another associate and then slumped in his chair, shaking his head. I'd describe his reaction as a strange combination of admiration, frustration and resignation.
Earlier, this same builder and I had discussed the 2003 J.D. Power results and how Pulte (including Del Webb and DiVosta) had taken first place in 12 of 21 cities measured this year (including Phoenix and Minneapolis) and finished among the top three in 17 of 21 cities. The builder repeated several times, "I just don't see how they do it!"
We should note that a fair number of local builders and a couple of regionals rank right with Pulte. I'd recommend these builders to any of my friends and be assured that they would be treated like kings, own a fantastic home and know that their builder would stand behind the home for a long, long time. But reaching the pinnacle in one, two or even three markets is one thing. Reaching this level in five cities is truly impressive. Ten is incredible. Fifteen is astounding. Seventeen of 21 is almost beyond belief.
The J.D. Power statistics contain interesting details to ponder, some of which are on www.jdpower.com and others that my TrueNorth colleagues calculated from the published data. If you look, you'll notice that the winning scores vary considerably by market. We took the next step and calculated that the average winning score for all 21 markets is 128.2. Market-leading scores ranged from Pulte's 144 in Las Vegas to 115 for J.S. Hovnanian in Philadelphia and G.L. Homes in Fort Lauderdale, Fla. Incredibly, the winning scores in Philadelphia and Fort Lauderdale would have been below average in Las Vegas and Phoenix. Is the East Coast just tougher? Are Westerners "easy graders"? If you search the data, you'll find exceptions that mess with our convenient theories.
If my company were the top scorer in a market with a low average score, I wouldn't do much celebrating because it could be an also-ran next year. In five markets, the average customer satisfaction score increased more than 10 points from 2002 to 2003. In Denver/Colorado Springs, the average score increased 17 points. No market had a decrease in average score. The bar is going up at an extraordinary rate. Can you keep up?
So which are the top builders nationwide? It's natural to wonder about that. J.D. Power cited Pulte as No. 1 among nationals in its news release but didn't go beyond that - and for good reason. Those J.D. Power folks are smart! It's a very tricky question, and there are many ways to measure it. None of those ways is really "clean," and any way you do it, someone will squawk. It would be simple but wrong just to add all the raw scores by company, average them and then rank each company nationally. The huge relative differences in the city averages and the range of distribution nix that approach. You just can't get comfortable with it.
Instead, you must come up with a reasonable scoring system to "normalize" the differences across the markets and apply it consistently to all. People like 0-10 scales, so we at TrueNorth devised one - our "0-10 Index." We tried many approaches and finally settled on this method. Following are our "reasonable scoring parameters." (Note: If you don't like our system, devise your own. All the data are there for you, but prepare for long nights.)
First, in this 0-10 Index, a company gets recognition from prospective customers only if it scored in the top 10 in any one market. Below that, customers aren't impressed. So we assigned points as follows in each city: 10 points for first place, 9 for second, etc., down to 1 point for 10th place. Eleventh and below get 0.
Second, a company must be above the local mean to get any points - period. It's a reasonable assumption that customers look askance at any builder scoring "below average." Why credit that?
We also limited this analysis to builders appearing in the J.D. Power survey in a minimum of five cities. A company might build in six or seven but show up in only three in the survey. Some builders make it in and others do not simply because of which cities J.D. Power measures. The Midwest, for example, has a big hole, with only Chicago and Minneapolis represented. The 13 biggest builders and 16 of the top 25 were measured in at least five cities, and those appear in the 0-10 Index.
Culling the numbers and making calculations created a nightmare because of all the recent mergers and acquisitions, but we did our best to combine all of the right companies. And for those low scorers that want to "shoot the messenger," know that the method used was quite kind. We considered assigning negative scores to below-average finishes but simply didn't know where to stop.
It starts to get real squirrelly (technical term). So we stopped at 0, meaning a builder that finished 15 positions below the mean got the same 0 as the builder that was just one place under it. That was the fairest way to go. (Any builder not listed above can create its own 0-10 Index score using the parameters described above and see how it compares with this group.)
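For readers who want to run their own numbers, the scoring rules above can be sketched in a few lines of Python. This is a minimal illustration, not TrueNorth's actual calculation: the column doesn't say how per-city points are combined into the final figure, so this sketch assumes they are averaged across a builder's surveyed cities (the natural reading of a 0-10 scale rounded to the nearest tenth), and all function names and sample data are hypothetical.

```python
def city_points(rank, score, city_mean):
    """Points earned in one city under the 0-10 Index rules:
    1st place earns 10, 2nd earns 9, down to 1 for 10th;
    11th or below earns 0; and any score at or below the
    local mean earns 0 regardless of finishing position."""
    if score <= city_mean:   # must be above the local mean - period
        return 0
    if rank > 10:            # customers aren't impressed below the top 10
        return 0
    return 11 - rank         # 10 for 1st, 9 for 2nd, ..., 1 for 10th

def index_score(results):
    """results: one (rank, score, city_mean) tuple per surveyed city.
    Returns a 0-10 Index rounded to the nearest tenth, assuming the
    per-city points are averaged (an interpretation, not confirmed).
    A builder must appear in at least five surveyed cities to qualify."""
    if len(results) < 5:
        raise ValueError("builder must appear in at least five surveyed cities")
    points = [city_points(rank, score, mean) for rank, score, mean in results]
    return round(sum(points) / len(points), 1)
```

With made-up results for five cities, such as a first, a second, a fifth, a 12th and a fourth-place finish, the function averages the five per-city point totals and rounds to one decimal, mirroring the "rounded to the nearest 10th" figures discussed below.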
So here they are on an overall 0-10 Index scale, rounded to the nearest 10th. Careful - this index does not reveal where a builder finished nationally against 260 other builders (and a couple of hundred more counting divisions of regionals and nationals). It does show where a builder finished among this peer group. And it does enable a builder to say of the broad national survey, "On the 0-10 Index scale, we rated about an X.X" (fill in the number).
Remember, these data already are public - I'm merely helping you interpret them. It's simple human nature to predict that most companies finishing below fifth or so will be unhappy about this. Sorry about that. Ripping up my picture won't help. An absolute hallmark of a top-performing organization, though, is the ability to face brutal facts openly and honestly. That requires willingness and real leadership.
Next month, we'll conclude our look at the J.D. Power results by asking if anyone should care about the J.D. Power survey. How about you, banker? How about Wall Street? We'll consider what the top-scoring companies do differently. Then we'll discuss the scariest thing about the survey and how it can send you on a wild-goose chase, throwing good money after bad, making things worse, not better.
Meanwhile, keep asking yourself, "Who are those guys?"