
How Humans Can Manage Their Algorithmic Overlords


(Bloomberg View) — Humans are gradually coming to recognize the vast influence that artificial intelligence will have on society. What we need to think about more, though, is how to hold it accountable to the people whose lives it will change.

Google tells us what to believe. Facebook tells us what’s news. Countless other algorithms are standing in for human control and judgment, in ways that are not always evident or benign. As Larry Summers recently noted, the impact on jobs alone (in large part from self-driving cars) could be greater than that of trade agreements.

So who will monitor the algorithms, to be sure they’re acting in people’s best interests? Last week’s congressional hearing on the FBI’s use of facial recognition technology for criminal investigations demonstrated just how badly this question needs to be answered. As the Guardian reported, the FBI is gathering and analyzing people’s images without their knowledge, and with little understanding of how reliable the technology really is. The raw results seem to indicate that it’s especially flawed for blacks, whom the system also disproportionately targets.

(Related on ThinkAdvisor: State regulators eye life accelerated underwriting programs)

In short, people are being kept in the dark about how widely artificial intelligence is used, the extent to which it actually affects them and the ways in which it may be flawed. That’s unacceptable. At the very least, some basic information should be made publicly available for any algorithm deemed sufficiently powerful. Here are some ideas on what a minimum standard might require:

Scale. Whose data is collected, how, and why? How reliable are those data? What are the known flaws and omissions?

Impact. How does the algorithm process the data? How are the results of its decisions used?

Accuracy. How often does the algorithm make mistakes — say, by wrongly identifying people as criminals or failing to identify them as criminals? What is the breakdown of errors by race and gender?
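To make the accuracy requirement concrete: here is a minimal sketch, in Python, of how an auditor might tabulate false-positive and false-negative rates per demographic group from an algorithm's decision log. The record format (group label, predicted flag, actual outcome) and the group names are illustrative assumptions, not anything prescribed by a real audit standard.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Tabulate per-group error rates from (group, predicted, actual) records.

    `predicted` is the algorithm's decision (e.g. "flagged as a match");
    `actual` is the ground truth. Both are booleans.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # a true match the system missed
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # an innocent person wrongly flagged
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for group, c in counts.items()
    }
```

Comparing these per-group rates side by side is what reveals the kind of disparity the hearing raised: a system can look accurate in aggregate while being markedly less reliable for one group than another.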

Such accountability is particularly important for government entities that have the power to restrict our liberty. If their processes are opaque and unaccountable, we risk handing our rights to a flawed machine. Consider, for example, the growing use of “crime risk scores” in decisions about bail, sentencing, and parole. Depending on the data and algorithm used, such scores can be as prejudiced as any human judge.

Transparency, though, is just a starting point. It won’t tell us what to do if blacks are systematically getting harsher sentences, or if poor people are automatically being branded as suspected criminals. At best, it will start a political and moral conversation with the designers of the algorithms, or among members of Congress — one that will force us to revisit our most fundamental concepts of justice and liberty.

Cathy O’Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”



© 2024 ALM Global, LLC. All Rights Reserved. For academic re-use and all other uses, submit a request to [email protected]. For more information visit Asset & Logo Licensing.