We all know that statistical analysis can be used to prove all sorts of things.
This article [subscription required] in the April 2005 issue of Harvard Business Review discusses the perils of benchmarking.
chriscurnow.com has long disliked the notion of benchmarking. We rather think that every organisation should strive to be unique and the absolute best it can be. The HBR article takes on benchmarking from a different point of view. The authors argue that benchmarking only tells us what worked for someone else, while ignoring the statistical biases, such as selection bias, built into the studies. There is, therefore, no guarantee it will work for us.
Here is a sample from a sidebar:
During World War II, the statistician Abraham Wald was assessing the vulnerability of airplanes to enemy fire. All the available data showed that some parts of planes were hit disproportionately more often than other parts. Military personnel concluded, naturally enough, that these parts should be reinforced. Wald, however, came to the opposite conclusion: The parts hit least often should be protected. His recommendation reflected his insight into the selection bias inherent in the data, which represented only those planes that returned. Wald reasoned that a plane would be less likely to return if it were hit in a critical area and, conversely, that a plane that did return even when hit had probably not been hit in a critical location. Thus, he argued, reinforcing those parts of the returned planes that sustained many hits would be unlikely to pay off.
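Wald's reasoning is a classic case of selection (survivorship) bias, and it is easy to reproduce with a quick simulation. In the sketch below, the plane sections, number of hits per sortie, and per-section survival probabilities are all invented for illustration; the point is only that hits are dealt uniformly, yet the planes that return show far fewer hits in the critical sections.

```python
import random

random.seed(42)

SECTIONS = ["engine", "cockpit", "fuselage", "wings", "tail"]
# Hypothetical probability of surviving a single hit in each section.
# "Critical" sections (engine, cockpit) are the deadliest.
SURVIVAL = {"engine": 0.3, "cockpit": 0.4, "fuselage": 0.9,
            "wings": 0.85, "tail": 0.8}

def fly_sortie():
    """A plane takes three hits in random sections and returns
    only if it survives every one of them."""
    hits = [random.choice(SECTIONS) for _ in range(3)]
    survived = all(random.random() < SURVIVAL[s] for s in hits)
    return hits, survived

all_hits = {s: 0 for s in SECTIONS}       # every hit dealt
returned_hits = {s: 0 for s in SECTIONS}  # hits visible on returned planes

for _ in range(100_000):
    hits, survived = fly_sortie()
    for s in hits:
        all_hits[s] += 1
        if survived:
            returned_hits[s] += 1

# Hits were uniform across sections, but among returned planes the
# critical sections show the fewest hits -- Wald's selection bias.
for s in SECTIONS:
    print(f"{s:10s}  dealt: {all_hits[s]:6d}  seen on returners: {returned_hits[s]:6d}")
```

An analyst who looked only at the returned planes would conclude the fuselage and wings are the vulnerable spots, which is precisely the mistake Wald avoided: the engine hits are scarce in the data because those planes never came back.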