Gary Smith

The peril isn’t that computers will take over the world. The real danger is that humans trust computers to make important decisions, warns economics professor Gary Smith, in an interview with ThinkAdvisor.

Computers have no common sense and no wisdom; they don’t know what words mean in a sentence, and they lack critical thinking and judgment. Therefore, fear that they’ll gain control of civilization is misplaced, Smith stresses.

Indeed, don’t trust computers to pick stocks, cautions the Pomona College professor, a specialist in statistical pitfalls and an ace stock-picker who became a multimillionaire by investing on his own. Computers lack the competence to pick stocks: they don’t even know what a stock is or what one should be worth, he contends.

In his new book, “The AI Delusion” (Oxford University Press, October 2018), Smith argues that while computer algorithms are great at finding patterns and performing narrow tasks, they’re bad at assessing the reliability of the data they unearth or whether their statistical analyses are plausible. Consequently, human reasoning is needed more than ever, he says.

The professor spotted the dot-com bubble early on and now sees the possibility of an artificial intelligence bubble building because of what he calls the excessive hype of companies boasting that they use AI for their products and processes.

About a third of all stock trades are made by computer algorithms without human intervention: sophisticated trading systems decide, in place of human judgment, which stocks to buy and sell. Their “mysterious, inscrutable” processes are concealed inside “black box” algorithms, Smith says.

These are so mathematically complex that it’s impossible to determine if the patterns they find are useful or useless, contends the professor, who explored the stock-picking models of John Bogle and Robert Shiller, among others, in his last book, “Money Machine: The Surprisingly Simple Power of Value Investing” (Amacom, 2017).

Artificial intelligence expert Roger Schank, Northwestern University professor emeritus, blurbs that Smith’s new book “goes a long way towards dispelling the BS about AI.”

ThinkAdvisor recently interviewed Smith, speaking by phone from Claremont, California. For seven years, he was an assistant professor at Yale, where he’d received a Ph.D. in economics. In our conversation, he stressed that, smart as computers are, they still hardly measure up to the human brain.

Here are excerpts:

THINKADVISOR: Why did you title your new book “The AI Delusion”?

GARY SMITH: We trust computers too much. We let them make important decisions for us, like which stocks to buy, what to pay for insurance and who gets a job.

But aren’t computers helping the financial services industry with investing?  

Big funds boast that all their trading is done by computers with no human intervention. But computers don’t know what stocks are. They don’t know what stock prices are or what a stock should be worth. They don’t know what determines stock prices. All they can do is find patterns and correlations.

Many financial advisors are using computer algorithms to help invest clients’ assets. Any advice to advisors?

Don’t trust investment decisions to a computer. Computers mine data and find patterns, but they have no way of [knowing] whether a pattern makes sense because they don’t understand it. Yes, they can get lots of data on stocks, but you need humans to step in and ask, “Is this a good reason to buy or not?” You can’t trust computers to do things they’re incapable of, such as picking stocks, approving loans or screening job applications.

Data mining is perhaps the most dangerous form of artificial intelligence, you write. Why is that?

Data mining is ransacking data looking for patterns. That doesn’t necessarily prove anything. Purely coincidental relationships can lurk in any set of data: Computers will data-mine like crazy and come up with chance, temporary correlations that are useless, or even worse. Computers are good at finding patterns because they can analyze so much data quickly, but the patterns they come up with can be absolutely meaningless.
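Smith’s point about chance correlations can be demonstrated with a short simulation (my own illustration, not an example from the book): scan enough unrelated data series and a “strong” correlation with anything will turn up by luck alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 unrelated "indicator" series, 50 observations each --
# pure noise with no relationship to anything.
data = rng.standard_normal((1000, 50))
target = rng.standard_normal(50)  # e.g., a stretch of market "returns"

# Data-mine: scan every series for the one most correlated with the target.
correlations = [np.corrcoef(series, target)[0, 1] for series in data]
best = max(correlations, key=abs)

# With enough series, an impressive-looking correlation appears by chance.
print(f"best correlation found: {best:.2f}")
```

The “best” series looks predictive in this sample, yet it was generated as noise; out of sample, its correlation with the target would collapse toward zero, which is exactly the trap Smith describes.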

Why are people so awestruck by computers?

Part of the reason is the reinforcement they get from movies like “Star Wars,” with its characters R2-D2 and C-3PO. It’s the idea that computers seem kind of cute and cuddly and really, really smart. That’s anthropomorphizing, attributing human qualities to computers (and to animals and gingerbread cookies): the notion that computers are just humans with a different kind of skin.

But computers do have a wow factor.

Computers can tell you the capital of any country and can beat humans at chess. And because of the way computer code is written, matching patterns, they’re lightning fast at calculating. So it’s natural for humans to think that if computers are good at very difficult things, they must be really good at everything.

But they’re not.

Right. It’s the simple stuff that computers have trouble with. They have absolutely no idea about anything in the world we live in. They have no common sense or wisdom. They don’t have critical thinking.

What’s the proof?

If you ask a computer, “Is it safe to walk downstairs backwards if I close my eyes?” it has no idea what you’re talking about. It doesn’t know what words mean in a sentence. So how can computers take over the world?

What is “black box” data mining, and why is it artificial but “not intelligent,” as you write?

The math behind the algorithms is so complicated that even the people writing the code don’t understand what the computer is doing. That is, nobody can look “inside” [a black box algorithm] and see what’s going on because the math that’s [executing] what’s going on isn’t straightforward.

What danger do black box trading systems pose?

If, for example, the computer says, “Buy Apple stock,” we don’t know why. It might turn out OK; it might not turn out OK. But, in either case, we don’t know why.

You spotted the dot-com bubble. Now you see a parallel to artificial intelligence. What’s the similarity?

In the dot-com bubble, there was so much hype. If you put dot-com at the end of a company’s name, the stock price doubled, on average. People were in awe of dot-coms. Now that’s happening with AI. The marketing word of 2017 was “AI.” If you wanted to sell a product, you told customers, “We used AI to develop this.” People said, “Wow, that must be really good!”

So there might be an AI bubble building?

AI has the same kind of bubbly feel because it’s so hyped. But [the reality is that right now] AI can’t come close to delivering what people think it can deliver. AI can’t make informed decisions. We think computers are so smart. A lot of that is nonsense.

Do you think the hype could cause an AI bust, like the dot-com bust?

Companies that advertise themselves as based on AI can definitely get overhyped, and their prices will go up. There’s [the case of technology company] Theranos. It promised blood tests you could administer yourself that would screen for 100 different diseases with only a drop of blood. Theranos was the darling of Wall Street. Then The Wall Street Journal did a big exposé on it. It turned out that the product wasn’t good. Then came the [company’s] downfall. [Last year, it closed.]

Is anyone making headway into using AI for trading stocks with a logical, thought-out approach?

We still know very little about how the brain works: how it puts things together, makes logical plans and connections. [But] Renaissance Technologies [has software that figures out] why it might make sense to make a trade, and then their computers investigate. For example, a market in one part of the world is closed today, but a market in a different part of the world isn’t. Prices in those two markets can therefore get a little out of whack, and so they’ll trade on that.
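The cross-market idea Smith describes can be sketched in a few lines. This is my own toy illustration of the general logic, not Renaissance’s actual (proprietary) system; the 2% threshold is an arbitrary assumption for the example.

```python
# When one venue is closed, its last quoted price goes stale and can
# drift away from the price on a venue that is still trading.
STALE_THRESHOLD = 0.02  # act only when prices diverge by more than 2%

def cross_market_signal(open_price: float, stale_price: float) -> str:
    """Compare a live quote with a closed market's last price."""
    gap = (open_price - stale_price) / stale_price
    if gap > STALE_THRESHOLD:
        return "sell on open venue, buy when closed venue reopens"
    if gap < -STALE_THRESHOLD:
        return "buy on open venue, sell when closed venue reopens"
    return "no trade"

print(cross_market_signal(103.0, 100.0))  # 3% gap triggers a trade
```

The point of the example is the one Smith makes: here a human supplied the hypothesis (stale prices drift), and the computer merely checks and executes it.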

What do you think of high-frequency trading?

Computers are in a big race to see who can read an order fastest as it comes in and then execute the trade. We’re talking about nanoseconds. From an economic standpoint, that’s a huge waste of resources and also a bit of a rip-off. The other thing is that the algorithms don’t want to hold positions for more than a few nanoseconds. So these high-speed computers start trading furiously among themselves. Millions of shares are traded in just a few seconds.

What was a disturbing result of unsupervised computers moving in unison in the big Flash Crash of May 2010? The crash was a matter of “computers blindly following rules,” you said in a June 2017 interview with me.

One stock sold for $100,000, and another stock sold for a penny. Computers don’t know what a stock is worth. They just know: “My algorithm says ‘Sell,’ so I sold.”

Are you still using the same stock-picking strategy we previously discussed, including screening companies for profitability, earnings, P/E ratios, management and products?

Yes, and [my] stocks are still doing great. I started buying Apple about a year and a half ago. When I wrote “Money Machine,” it was at $90. When that book came out, it was at about $110 or $120. Now it’s $220.

What were your top reasons for investing in Apple?

Dividends and earnings compared to its price [made the stock] so darn cheap. It was selling for, like, 10 times earnings, and earnings were almost certainly going to grow. It was a company with a great brand name. Apple has what you look for in a great stock: a relatively low price compared to earnings, a [strong] brand and loyal customers.
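The arithmetic behind “10 times earnings” is simple enough to sketch. The figures below are illustrative, loosely implied by the interview ($90 share price at roughly 10 times earnings), not actual Apple financials.

```python
def earnings_yield(price: float, eps: float) -> float:
    """Earnings per share as a fraction of price (the inverse of P/E)."""
    return eps / price

price = 90.0  # share price when "Money Machine" was written
eps = 9.0     # implied earnings per share at ~10x earnings

pe_ratio = price / eps
yield_pct = earnings_yield(price, eps) * 100
print(f"P/E: {pe_ratio:.0f}, earnings yield: {yield_pct:.0f}%")
```

A P/E of 10 means each dollar invested is backed by ten cents of annual earnings, a 10% earnings yield; that is the “low price compared to earnings” test Smith applies, before asking whether those earnings will grow.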

Observers were surprised when Warren Buffett began investing in Apple because he’s famous for steering clear of technology.

Everybody said, “What? Why is he buying a computer stock?” He said: “Apple isn’t a computer stock — it’s a consumer brand, and one with amazing loyalty and a great ecosystem.” It has great products — watches, phones, [iTunes] — and they’re all linked together.

Given your viewpoint about AI, I assume you don’t think much of robo-advisors or index funds?

They would not be on my Christmas wish list.
