One of my pet peeves is practice “benchmarking,” and a personal story illustrates my problem.
Years ago, I dated a guy on the Kansas State golf team. One of his pet peeves was a popular technique for reading how a putt would break on a green, called “the plumb-bob.” To do it, a player would stand on the green with his ball between himself and the hole, hold a putter by the very end of the grip with his arm extended in front, and line the shaft up with the ball. If that placed the top of the grip to the right of the hole, the putt would break left; if the grip was to the left, it would break the other way.
In general, plumb-bobbing is a simple but effective technique that uses a weight on a string to let gravity confirm the straightness of walls, beams, and the like. The problem is that most golf putters these days have a “goose neck,” so the head (the weight) doesn’t line up with the shaft; the putter hangs at an angle, giving a false reading of what’s straight. When my boyfriend pointed this out to one of his teammates, the teammate replied: “That’s okay, I only use it as an indication of the break.” Right.
I’m often reminded of this story when I see so-called “benchmarking” studies of advisory firms, in which firm owners enter financial and operational data, usually online, and then receive an assessment of how their firm compares to other firms in various “key” areas. Not to put too fine a point on it, but I find these “studies” to be useless, and often dangerous if taken seriously. The problem with benchmarking studies is that they depend entirely on the accuracy of the data entered, which is usually neither checked nor confirmed. Having seen the data that some of my client firm owners have submitted to these studies over the years, I can tell you that all too often it bears very little resemblance to reality.