ThinkAdvisor


Are Killer Robots the Next Black Swan?


Computing and robotics advances in recent years have kindled worries that artificial intelligence (AI) will someday slip free to wreak destruction upon the world. Warnings about such scenarios can be found in works that don’t carry a science-fiction label, such as documentary filmmaker James Barrat’s book Our Final Invention: Artificial Intelligence and the End of the Human Era.

No less a scientific name than Stephen Hawking has been raising alarms about out-of-control AI. In a recent opinion piece, Hawking and several scientist co-authors warned: “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

In an era of grandmaster-beating chess programs and driverless-car prototypes, it is no surprise that anxieties about where such technology is heading have gained traction. Nor is it a surprise that critics target the tech sector and the Pentagon (especially the Defense Advanced Research Projects Agency, DARPA) for working on increasingly powerful systems that may one day outsmart, outrank or outlive us all. (A UN meeting in Geneva in May convened experts to discuss emerging “lethal autonomous weapons.”)

What might be surprising is the idea that Wall Street could give rise to dangerous AI—not in the obvious sense that financial institutions raise capital for the tech sector, but in that financial technology itself might be the matrix for the rise of the machines. That is the gist of one emerging line of thinking about the dangers of smart computers.

Funding Consciousness

In Our Final Invention, Barrat presents a dark vision of AI outstripping and endangering humanity—as soon as the next few decades. DARPA and Google figure prominently in this jeremiad, but Barrat also includes a scenario sketched out by Alexander Wissner-Gross, a scientist-engineer with affiliations at Harvard and MIT, in which a powerful AI could emerge from agent-based financial models, which simulate the behavior of multiple players in a market or economy.

After noting that a huge amount of money and brainpower now goes into developing ever-better models used by hedge funds and in high-frequency trading, Barrat writes: “Wouldn’t the next logical step be to make your hedge fund reflective? That is, perhaps your algorithm shouldn’t automatically trigger sell orders based on another fund’s massive sell-off (which is what happened in the flash crash of May 2010).”

He continues: “Instead, it would perceive the sell-off and see how it was impacting other funds, and the market as a whole, before making its move. It might make a different, better move. Or maybe it could do one better, and simultaneously run a very large number of hypothetical markets, and be prepared to execute one of many strategies in response to the right conditions.”
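The contrast Barrat draws, between a fund that reflexively sells into another fund’s sell-off and one that first simulates the market’s likely reaction, can be sketched as a toy agent-based model of the kind Wissner-Gross describes. This is a minimal illustration, not a trading system: the stop-loss agents, the `impact` parameter, and the reflective agent’s decision rule are all invented for the example.

```python
def simulate_cascade(price, thresholds, impact=1.0):
    """Stop-loss cascade in a toy agent-based market: each agent sells
    once the price falls below its threshold, and each sale pushes the
    price down by `impact`, possibly triggering further agents."""
    active = sorted(thresholds, reverse=True)
    sales = 0
    triggered = True
    while triggered:
        triggered = False
        remaining = []
        for t in active:
            if price < t:
                price -= impact   # the sale itself depresses the price
                sales += 1
                triggered = True
            else:
                remaining.append(t)
        active = remaining
    return price, sales

def reflective_decision(price, other_thresholds, my_floor, impact=1.0):
    """A 'reflective' agent: instead of selling on the first dip, it
    first simulates the cascade the other agents would produce, then
    compares the projected price against its own floor (a hypothetical
    decision rule, purely for illustration)."""
    projected, _ = simulate_cascade(price, other_thresholds, impact)
    return "sell" if projected < my_floor else "hold"

# A small dip below the highest stop-loss triggers the whole chain:
final_price, n_sales = simulate_cascade(98.5, [99, 98, 97, 96, 95])
print(final_price, n_sales)  # 93.5 5

# The reflective agent anticipates that cascade before acting:
print(reflective_decision(98.5, [99, 98, 97, 96], my_floor=95))  # sell
print(reflective_decision(98.5, [99, 98, 97, 96], my_floor=90))  # hold
```

The naive agents in the first run reproduce, in miniature, the chain-reaction selling of the May 2010 flash crash: one dip trips the highest stop-loss, whose sale trips the next, and so on down the book.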

Barrat explains: “In other words, there are huge financial incentives for your algorithm to be self-aware—to know exactly what it is and model the world around it.”

How do you program self-awareness into a computer? Nobody knows, and that may be a big obstacle to this scenario coming true. Wissner-Gross suggests it might happen by accident, as the interplay of multiple algorithms gives rise to an “artificial general intelligence” (AGI), a system with a human-level grasp of the world. He tells Barrat: “If you follow the money, finance has a decent shot at being the primordial ooze out of which AGI emerges.”

An AGI, Barrat emphasizes, might reprogram itself to be an “artificial superintelligence” (ASI), and systems that are as smart as or smarter than humans will have little interest in being our tools. He writes: “I think our Waterloo lies in the foreseeable future, in the AI of tomorrow and the nascent AGI due out in the next decade or two.”

Significant Leap

The kind of concern expressed by Barrat requires a great deal of extrapolation from today’s situation. Currently, there are computer programs that can beat grandmasters at chess, but none that can think “I’d rather just kill my opponent,” let alone act on such thoughts. If a conscious computer is “due out in the next decade or two,” the product timetable may well slip, as it did for earlier visions of flying cars and underwater cities.

There is also reason to be skeptical about whether Wall Street in particular has huge incentives to develop self-aware algorithms. Would such programs necessarily trade better, or might consciousness actually undermine the high speed and lack of hesitation that characterize trading algorithms today? If a hedge fund manager wants an entity “to be self-aware—to know exactly what it is and model the world around it”—wouldn’t it be cheaper to hire a human?

An interesting development in the chess world of recent years is that human-computer teams, in which a grandmaster is aided by a program, have tended to be stronger than either humans or computers playing alone. Perhaps a similar complementarity has advantages in finance, undercutting any incentives to get humans out of the loop.

Technology writer Edward Tenner, in an essay at the online magazine The American early this year, wrote: “I would take warnings about the dangers of superintelligent machines more seriously if today’s computers were able to make themselves more resistant to human hackers and to detect and repair their own faults. Organizations with access to some of the most advanced supercomputers and gifted programmers have been hacked again and again by individuals and groups with modest resources, compromising everything from credit card numbers to espionage secrets.”

It may be that among future black swans—threats that are hard to predict and can have devastating consequences—powerful computers that plot against humanity are less worrisome than powerful computers that are used by humans for some malevolent end. Still, warnings such as Barrat’s may have some resonance. Public opinion about Wall Street has been decidedly sour since the financial crisis, and public opinion about the tech sector has been slipping too, amid concerns ranging from deteriorating privacy of personal data to job losses resulting from automation.

Therefore, as improbable as the threat may be, don’t be surprised to hear more going forward about Wall Street risking a robot rebellion—and about regulatory countermeasures. Futurist author David Brin, also drawing on Wissner-Gross’s ideas, has argued for a financial transaction tax or fee, not only to dampen high-frequency trading but to “discourage the very worst kind of artificial intelligence from leaping upon our necks out of the dark.”

If there is one thing that could make public sentiment toward the financial sector more negative than it has been, a hedge-fund-generated Terminator may be it.



© 2024 ALM Global, LLC, All Rights Reserved. Request academic re-use from www.copyright.com. All other uses, submit a request to [email protected]. For more information visit Asset & Logo Licensing.