After compiling 30 years of data and issuing 20 reports finding that individual investors underperform by virtually any measure, this year’s Quantitative Analysis of Investor Behavior (QAIB) report from Dalbar, issued in April, offered a new and daunting conclusion.
“Attempts to correct irrational investor behavior through education have proved to be futile. The belief that investors will make prudent decisions after education and disclosure has been totally discredited. Instead of teaching, financial professionals should look to implement practices that influence the investor’s focus and expectations in ways that lead to [more] prudent investment decisions.”
For the 30 years ended Dec. 31, 2013, this year’s QAIB disclosed that equity fund investors earned an average annual return of 3.69%, compared with the S&P 500’s 11.11%. Concluding that the “enormous efforts by thousands of industry experts to educate millions of investors” have been “ineffective,” Dalbar recommends four best practices for financial professionals: set expectations below market indexes, control exposure to risk, monitor risk tolerance and present forecasts in terms of probabilities.
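The size of that gap compounds dramatically over 30 years. A rough sketch using the report’s two annualized figures (the $10,000 starting amount is my own illustrative assumption, not from the report):

```python
# Compound the two annualized returns Dalbar reports over the same 30 years.
initial = 10_000  # hypothetical starting investment

investor = initial * (1 + 0.0369) ** 30   # average equity fund investor
index = initial * (1 + 0.1111) ** 30      # S&P 500

print(f"Investor: ${investor:,.0f}")  # roughly $29,700
print(f"Index:    ${index:,.0f}")     # roughly $235,900
```

At those rates, the index investment ends up nearly eight times the size of the average fund investor’s.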
That’s good advice for advisors, of course, but it doesn’t offer suggestions for how those advisors can make better decisions themselves. That’s a crucial point because there is little reason to expect them to be significantly better decision-makers than consumers. For example, professional investors are seriously overconfident and trade too often. They exhibit “herding” behavior. Moreover, when professional analysts are 80% certain that a stock is going to go up, they are right about 40% of the time—which is worse than pure chance.
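That kind of calibration failure is straightforward to check once forecasts are recorded alongside their stated probabilities. A minimal sketch, using invented records constructed only to mirror the 80%/40% figures above:

```python
# Hypothetical illustration (not real analyst data): checking whether stated
# confidence matches realized accuracy, i.e. "calibration."
# Each record is (stated_probability_of_gain, stock_actually_went_up).
forecasts = [
    (0.8, True), (0.8, False), (0.8, False), (0.8, True), (0.8, False),
    (0.8, False), (0.8, True), (0.8, False), (0.8, True), (0.8, False),
]

# Pull out the high-confidence calls and compute their realized hit rate.
high_conf = [outcome for prob, outcome in forecasts if prob >= 0.8]
hit_rate = sum(high_conf) / len(high_conf)

print(f"Stated confidence: 80%; realized hit rate: {hit_rate:.0%}")
# prints "Stated confidence: 80%; realized hit rate: 40%"
```

Keeping a simple ledger like this is one concrete way for an advisor to audit his or her own judgment rather than assume it.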
A variety of behavioral and cognitive biases constantly conspire to limit the abilities of laypeople and professionals alike to make good investment decisions. Accordingly, we’re right to be skeptical about our decision-making abilities in general because our beliefs, judgments and choices are so frequently wrong.
Recent evidence even suggests that being smarter, more aware or more educated doesn’t help us deal with these cognitive difficulties more effectively. Indeed, those advantages may actually make things worse. For example, one study suggests that, in many instances, smarter people are more vulnerable to thinking errors, even basic ones. Moreover, “people who were aware of their own biases were not better able to overcome them.”
Because our intuition isn’t trustworthy, we need to be sure that our investment process is data-driven at every point (a point I have made before in these pages). We need to be able to check our work regularly. Generally speaking, the key is to use a carefully developed, consistent process that limits the number of decisions to be made and avoids “gut-level” decisions not based upon good evidence, while remaining flexible enough to adjust when and as necessary.
The very best performers are great teams of people who create careful, data-driven statistical models based upon excellent analysis of the best evidence available in order to establish a rules-driven investment process. Yet, even at this point, the models are not of the be-all/end-all variety. Judgment still matters because all models are approximations at best. These models only work until they (inevitably) don’t anymore—consider the implosion of Long-Term Capital Management in the late 1990s, for example.
No good process is static. Approaches work for a while, sometimes even a long while, and then don’t. Markets change. People change. Trends change. Stuff happens. As Nobel laureate Robert Shiller recently told Institutional Investor magazine, big mistakes come from being “too formulaic and bureaucratic. People who belong to a group that makes decisions have a tendency to self-censor and not express ideas that don’t conform to the perceived professional standard. They’re too professional. They are not creative and imaginative in their approach.”
Poor decision-makers tend to see things through one analytical lens. That’s why pundits, who typically see the world through a specific ideological prism, have such demonstrably lousy track records. Good decision-makers use a wide assortment of analytical tools, seek out information from diverse sources (using “outside” sources is especially important), are comfortable with complexity and uncertainty, and are decidedly less sure of themselves.
The most powerful corrective is that outside view: checking our own judgment against the experience of others who have faced similar situations. “Given the impressive power of this simple technique, we should expect people to go out of their way to use it. But they don’t,” says Harvard psychologist Daniel Gilbert. The reason why is clear, according to Michael Mauboussin: Most of us think of ourselves as different from, and better than, those around us, especially in our supposed areas of expertise. Moreover, we are prone to see our situation as unique and special. But in almost all cases, it isn’t.
Going through the effort consistently and comprehensively to check our work and improve our processes requires extraordinary organizational patience, but the stakes are high enough to merit such a long-term investment.
As in every field, those who make poor decisions propose all sorts of justifications and offer all kinds of excuses. They insist that they were right but early, right but gob-smacked by the highly improbable or unforeseeable, almost right, mostly right or wrong for the right reasons. As always, such nonsense should be interpreted unequivocally as just-plain-wrong.
A quick summary of some of the (often overlapping) ways research has shown that we can improve our judgment follows.
Make sure every decision-maker has positive and negative skin in the game.
Focus more on what goes wrong and why than on what works (what Harvard Medical School’s Atul Gawande calls “the power of negative thinking”).
Make sure your investment process is data-driven at every point.
Keep the investment process as decentralized as possible.
Encourage a proliferation of small-scale experiments; whenever possible, test the way forward, gingerly, one cautious step at a time.
Move and read outside your own circles and interests.
Focus on process more than results.
Collaborate—especially with people who have very different ideas (what Kahneman calls “adversarial collaboration”).
Build in robust accountability mechanisms for yourself and your overall process.
Slow down and go through every aspect of the decision again (and again).
Establish a talented and empowered team charged with systematically showing you where and how you are wrong. In essence, we all need an empowered devil’s advocate (as I wrote in December).
Before making a big decision, conduct a “pre-mortem” in order to legitimize and empower the doubters. Gather a group of people knowledgeable about the decision and provide a brief assignment: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome has been a disaster. Take 10 minutes to write a brief history of that disaster. Discuss.”
Dalbar’s latest conclusion suggests that individual investors shouldn’t be expected to help themselves. That reality provides advisors with a tremendous and important opportunity. But unless and until advisors improve their own decision-making skills, the opportunity will necessarily be squandered. There is simply no substitute for good judgment. But it remains in remarkably short supply.