
How to fine-tune your powers of prediction

I have a confession — I’m a sucker for predictions. If I come across (as I did some months ago) a headline like “Stock Market Crash of 2016: The countdown begins,” there’s no way that I’m not going to read it.

I know the statistics. I’ve read the books. I’ve written columns about it. Predictions are almost always wrong, and in the rare case they’re correct, luck is more likely the reason than skill. While I’ve trained myself to be more sanguine about market volatility, scary headlines, and risk in general, I’m as human as the next person. There is a part of me that will always crave the certainty that predictions promise.

Even the blatantly self-promotional “sky is falling” or apocalyptic narratives that flood my inbox are impossible to ignore. Based loosely on verifiable facts, they work because they play on our legitimate fears of what would happen if the financial world as we know it stopped working, and the wealth that took us so much time to accumulate simply disappeared.

In the back of our minds is the inkling that perhaps the writer has some insight or information we don’t possess. Fortunately, we don’t have to dig too far before it becomes clear that the real driving force behind the “research” is income for the author.

That still leaves a small percentage of predictions that are generated from sources of real substance. These are written by authors who are serious, experienced and respected, and whose intent is the open dissemination of important information and a reasoned discussion of its implications. Even then there is no guarantee.

Delayed crises

Some of you might remember Paul Ehrlich’s 1968 book “The Population Bomb.” Its prologue opened with these words: “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.”

Ehrlich was and remains a respected biologist and faculty member of Stanford University. His arguments were built on logical, well-reasoned science. No one could accuse him of drumming up fear just to sell his book. But Ehrlich’s prediction never materialized. Nearly 50 years later Ehrlich continues to warn about a pending problem — and he may yet be right. But his original prediction wasn’t that “at some point” we would run out of the ability to feed ourselves — it was that the crisis was imminent.

Jeremy Grantham is one of those rare people on Wall Street: an incredibly successful investor, creative thinker and writer with the courage to speak his mind in an industry that has turned saying nothing of substance into an art form. If you’re looking to hang your hat on a prediction, you’d be hard pressed to find one that is as clearly reasoned as his.

When Grantham writes that bubble territory for the S&P 500 starts at 2250, you know he’s not just throwing out a number. In the spring of 2015 he laid out his case for the potential catalysts that could drive the current bull market into bubble territory, but left some wiggle room for a “normal” decline of 10–20 percent, postponing the ultimate blowout until the 2016 election. The market fell 12 percent in late August — so events appeared to be tracking his prediction perfectly. Grantham has been notably prescient at least twice before (prior to the Internet bubble, and prior to the financial crisis), but like every human being who has ever walked the planet, he’s not infallible.

Let’s say we are convinced that Grantham is onto something and the market is going to hell in a handbasket. We need to know how to represent that expectation in a rational investment plan. More importantly, if things don’t play out the way Grantham suggests, we need to have a built-in mechanism that regularly audits our perception of the world so we don’t end up chained to what may turn out to be an increasingly inaccurate prediction.

But we can’t use that mechanism until we overcome considerable confusion about the very nature of prediction: what it is and what it’s for. For as long as I can remember, prediction has tended to be a wild and perverse guessing game whose odds of success were not much better than the lottery’s; it persisted because of our instinctive discomfort with uncertainty. Left in the shadows was a much more constructive way to understand and use prediction — one that has been around for over 200 years.

When someone makes a prediction, we don’t expect them to call a press conference every few days, revising it little by little until it coincides with the events that ultimately play out. Prediction is all about confidently taking a bold stance and holding on to it until you’re proven right or wrong. Who would be interested in predictions if there weren’t any drama? As it turns out, just about anyone who is serious about anything.

Textbook case

In his book “Thinking, Fast and Slow,” Daniel Kahneman tells a story that he regards as “one of the most instructive experiences of my professional life,” and it bears directly on the challenge of prediction. During his time at Hebrew University he assembled a team of professors, graduate students and the dean (an expert in curriculum design) to design a course and write a textbook.

After a year of work, Kahneman asked each member of the team to write down how long they thought it would take to submit a finished draft of the textbook. The estimates ranged from 18 months to two and a half years.

Then he asked the dean how long he thought it would take. The dean said that, based on his experience, there was a 40 percent likelihood that they would never finish at all — and that if they did finish, it would take approximately seven years. Kahneman asked him how this group compared, in skills and resources, to the other teams he knew about. The dean replied, “We’re below average, but not by much.” The team ultimately completed the project — in eight years. But by then the demand for the coursework had diminished and the textbook was never published.

Kahneman highlighted this experience to show how critical it is to have a base rate when making any kind of prediction. In his team’s case, the base rate was a 40 percent failure rate and a seven-year timeframe for completion. Not knowing that, the group produced wildly unrealistic estimates of both the effort and the time the project required.
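
To make the gap concrete, here is a minimal sketch in Python comparing the team’s “inside view” with the dean’s base rate. The numbers come straight from Kahneman’s story; the framing of the comparison is mine, not his.

```python
# The team's own estimates (the "inside view") versus the dean's
# reference class (the "outside view"), using figures from the story.

inside_estimates_years = [1.5, 2.0, 2.5]   # individual guesses: 18 months to 2.5 years
base_rate_failure = 0.40                   # dean: 40% of similar teams never finished
base_rate_years_if_done = 7.0              # dean: roughly 7 years for those that did

inside_view = sum(inside_estimates_years) / len(inside_estimates_years)
print(f"Inside view:  about {inside_view:.1f} years, with no thought of failure")
print(f"Outside view: about {base_rate_years_if_done:.0f} years, "
      f"with a {base_rate_failure:.0%} chance of never finishing")
# The actual outcome -- eight years -- sat right next to the base rate,
# nowhere near any individual estimate.
```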

New information

In “The Signal and the Noise,” Nate Silver shows us how the work of an 18th-century English minister, of whose life we know little to nothing, revealed the true power of prediction. Silver writes: “In accordance with Bayes’s Theorem, prediction is fundamentally a type of information processing activity — a matter of using new data to test our hypothesis about the objective world, with the goal of coming to a truer and more accurate conception of it.”

Bayes’s Theorem is simple, but its counterintuitive underpinnings can be challenging. First, its vision of the future is built within the framework of conditional probability. As many of us know the hard way, probability is not the easiest concept to grasp — which is one reason casinos make so much money. And second, the data it relies upon are primarily subjective estimates or, as those of us who have liberal arts degrees call them, “guesses.”

The three guesses that make the theorem so useful are: (1) the probability that we would observe evidence “a” if hypothesis “b” is true; (2) the probability that we would observe evidence “a” if hypothesis “b” is false; and (3) the probability that hypothesis “b” is true before evidence “a” ever appeared.

The magic that makes this such a useful tool is the third guess — what Kahneman referred to as the “base rate.” In plain English we call this plausibility. In other words, within the context of the real world do our assumptions make any sense? In a world awash with data, complicated algorithms and unlimited processing power, it is easy to lose sight of something as simple as common sense.
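
For the mathematically curious, here is a minimal sketch in Python of how the three guesses combine. The function name and the numbers are my own illustration; the formula inside it is simply Bayes’s Theorem.

```python
def bayes_update(prior_b, p_a_given_b, p_a_given_not_b):
    """Posterior probability that hypothesis "b" is true after seeing evidence "a".

    prior_b          -- guess 3: the base rate, P(b) before any evidence
    p_a_given_b      -- guess 1: P(a | b), chance of the evidence if b is true
    p_a_given_not_b  -- guess 2: P(a | not b), chance of the evidence if b is false
    """
    numerator = prior_b * p_a_given_b
    return numerator / (numerator + (1 - prior_b) * p_a_given_not_b)

# Illustrative, made-up numbers: "b" is "the market is in a bubble,"
# "a" is a sharp 12 percent decline.
print(bayes_update(prior_b=0.05, p_a_given_b=0.60, p_a_given_not_b=0.20))
# ~0.14: the evidence raises the odds, but the low base rate keeps the
# conclusion well short of certainty.
```

Notice how the base rate does the heavy lifting: evidence that looks dramatic on its own moves a 5 percent prior to only about 14 percent.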

Pervasive problem

To demonstrate how research without proper context can turn into junk, Silver points to the work of medical researcher John Ioannidis, whose 2005 paper “Why Most Published Research Findings Are False” revealed a recurring flaw in almost all scientific research. A 2012 article in the scientific journal Nature strongly reaffirmed Ioannidis’s claim. During a 10-year period, scientists at the biotechnology company Amgen were able to reproduce the results of only six out of 53 landmark scientific papers. That is nearly a 90 percent failure rate!

Ioannidis concluded that the primary reason for these increasingly embarrassing results was that, before running their studies, the researchers never thought to ask themselves about the probability of their hypothesis being true. As a result they had no base rate, no context with which to judge their own bias once they began collecting data.

In contrast, a Bayesian prediction is all about “context.” The initial prediction is just the starting point of the process — a point of view we adjust regularly as new data arrives. And in a world where we are more than aware of our behavioral, social and evolutionary biases, it is that “third guess” (i.e. the base rate) which helps us know when our biases are affecting how we interpret new evidence.

If you believe that there is a 100 percent probability that the market is going to collapse, Bayes’s Theorem accurately predicts that no amount of new evidence will change your mind. Of course, that’s just common sense — which is the wonderful point of it all.
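
That degenerate case falls straight out of the arithmetic. Reusing the hypothetical bayes_update sketch from above:

```python
# With a prior of 1.0, the "(1 - prior_b)" term zeroes out the alternative,
# so the posterior is 1.0 no matter what the evidence looks like.
for strength in (0.9, 0.5, 0.01):
    print(bayes_update(prior_b=1.0,
                       p_a_given_b=strength,
                       p_a_given_not_b=1 - strength))   # always 1.0
```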

By employing subjective inputs (guesses) in an objective way, Bayes’s Theorem will never be perfectly right. But its mechanism and its logic make it almost impossible for you to end up perfectly wrong. In the world of risk management, you could spend a lot of money and employ a lot of Ph.D.s who could not deliver better odds than that.


