The peril isn’t that computers will take over the world. The real danger is that humans trust computers to make important decisions, warns economics professor Gary Smith, in an interview with ThinkAdvisor.
Computers have no common sense, no wisdom and no capacity for critical thinking or judgment; they don't even know what the words in a sentence mean. Therefore, fear that they'll gain control of civilization is misplaced, Smith stresses.
Indeed, don’t trust computers to pick stocks, cautions the Pomona College professor, a specialist in statistical pitfalls who became a multimillionaire through his own stock picking. Computers lack the competence to pick stocks; they don’t even know what a stock is or what one should be worth, he contends.
In his new book, “The AI Delusion” (Oxford University Press, October 2018), Smith argues that while computer algorithms are great at finding patterns and performing narrow tasks, they’re bad at assessing whether the data they mine are reliable and whether their statistical analyses are plausible. Consequently, human reasoning is needed more than ever, he says.
The professor spotted the dot-com bubble early on and now sees the possibility of an artificial intelligence bubble building because of what he calls the excessive hype of companies boasting that they use AI for their products and processes.
About a third of all stock trades are made by computer algorithms without human intervention. Increasingly, sophisticated trading systems, rather than human judgment, decide when to buy and sell stocks. Their “mysterious, inscrutable” processes are concealed inside “black box” algorithms, Smith says.
These are so mathematically complex that it’s impossible to determine if the patterns they find are useful or useless, contends the professor, who explored the stock-picking models of John Bogle and Robert Shiller, among others, in his last book, “Money Machine: The Surprisingly Simple Power of Value Investing” (AMACOM, 2017).
Artificial intelligence expert Roger Schank, Northwestern University professor emeritus, blurbs that Smith’s new book “goes a long way towards dispelling the BS about AI.”
ThinkAdvisor recently interviewed Smith, who spoke by phone from Claremont, California. For seven years, he was an assistant professor at Yale, where he’d received a Ph.D. in economics. In our conversation, he stressed that though computers are smart, they still hardly measure up to the human brain.
Here are excerpts:
THINKADVISOR: Why did you title your new book “The AI Delusion”?
GARY SMITH: We trust computers too much. We let them make important decisions for us: which stocks to buy, what to pay for insurance, who gets a job.
But aren’t computers helping the financial services industry with investing?
Big funds boast that all their trading is done by computers with no human intervention. But computers don’t know what stocks are. They don’t know what stock prices are or what a stock should be worth. They don’t know what determines stock prices. All they can do is find patterns and correlations.
Many financial advisors are using computer algorithms to help invest clients’ assets. Any advice to advisors?
Don’t trust investment decisions to a computer. Computers mine data and find patterns, but they have no way of knowing whether a pattern makes sense because they don’t understand the data. Yes, they can get lots of data on stocks, but you need humans to step in and ask, “Is this a good reason to buy or not?” You can’t trust computers to do things they’re incapable of, such as picking stocks, approving loans or screening job applications.
You write that data mining is perhaps the most dangerous form of artificial intelligence. Why is that?
Data mining is ransacking data looking for patterns, and a pattern doesn’t necessarily prove anything. Coincidental correlations can turn up in any set of data: Computers will data mine like crazy and come up with coincidental, temporary correlations that are useless, or worse. Computers are good at finding patterns because they can analyze so much data quickly, but the patterns they come up with can be absolutely meaningless.
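The hazard Smith describes can be sketched in a few lines of Python (an illustration of the general statistical point, not code from the book): generate a purely random “stock price,” mine a thousand equally random series for the best match, and a strong correlation appears by chance alone.

```python
import random
import statistics

random.seed(0)
n_days, n_series = 100, 1000

def random_walk(n):
    """A pure random walk: cumulative sum of independent normal steps."""
    total, walk = 0.0, []
    for _ in range(n):
        total += random.gauss(0, 1)
        walk.append(total)
    return walk

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A hypothetical "stock price" that is pure noise -- no real signal at all.
stock = random_walk(n_days)

# Mine 1,000 unrelated random walks for the strongest "pattern".
best = max(abs(corr(stock, random_walk(n_days))) for _ in range(n_series))

print(f"strongest purely coincidental correlation: {best:.2f}")
```

Every series here is noise by construction, yet the miner reliably surfaces a candidate that tracks the “stock” closely — exactly the kind of useless, temporary correlation Smith warns about.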
Why are people so awestruck by computers?
Part of the reason is the reinforcement they get from seeing movies like “Star Wars,” with its characters R2-D2 and C-3PO. It’s the idea that computers seem kind of cute and cuddly and really, really smart. That’s anthropomorphizing: attributing human qualities to computers, animals and gingerbread cookies, the notion that computers are just humans with a different kind of skin.
But computers do have a wow factor.
Computers can name the capital of any country and can beat humans at chess. And because of the way computer code is written, matching patterns, they’re lightning fast at calculating. So it’s natural for humans to think that if computers are good at very difficult things, they must be really good at everything.
But they’re not.
Right. It’s the simple stuff that computers have trouble with. They have absolutely no idea about anything in the world we live in. They have no common sense or wisdom. They don’t have critical thinking.
What’s the proof?
If you ask a computer, “Is it safe to walk downstairs backwards if I close my eyes?” it has no idea what you’re talking about. It doesn’t know what words mean in a sentence. So how can computers take over the world?