
Look Who's Fighting Our Algorithmic Overlords

Computer algorithms play an increasingly important role in running the world — filtering news, assessing prospective employees, even deciding when to set prisoners free. All too often, though, their creators don’t make them adequately accountable to the people whose lives they affect.

It’s thus good to see some researchers and politicians starting to do something about it.

(Related: Robo-Advisors: The Coming Wave in the Financial Industry)

Objective as they may seem, artificial intelligence and big-data algorithms can be as biased as any human. Examples pop up all the time. A Google AI designed to police online comments rated “I am a gay black woman” 87% toxic but “I am a man” only 20%. A machine-learning algorithm developed by Microsoft came to perceive people in kitchens as women. Left unchecked, the list will only grow.

Help may be on the way. Consider Themis, a new, open-source bias detection tool developed by computer scientists at the University of Massachusetts Amherst. It tests “black box” algorithms by feeding them inputs with slight differences and seeing what comes out — much as sociologists have tested companies’ hiring practices by sending them resumes with white-sounding and black-sounding names. This can be valuable in understanding whether an algorithm is fundamentally flawed.
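
To make the approach concrete, here is a rough Python sketch of the perturbation test Themis performs. Everything in it is invented for illustration (the `score` function, the applicant record, the numbers), so treat it as the idea rather than the tool itself:

```python
import copy

def causal_discrimination(score, applicant, attribute, values):
    """Flip one protected attribute, hold everything else fixed,
    and measure how far the black-box output moves."""
    outputs = {}
    for value in values:
        probe = copy.deepcopy(applicant)
        probe[attribute] = value
        outputs[value] = score(probe)
    return max(outputs.values()) - min(outputs.values())

# A toy scorer that improperly keys on gender; purely illustrative.
toy_score = lambda a: 0.9 if a["gender"] == "male" else 0.6

applicant = {"gender": "female", "years_experience": 7, "degree": "BS"}
gap = causal_discrimination(toy_score, applicant, "gender", ["male", "female"])
print(f"Output gap across gender flip: {gap:.2f}")  # 0.30, evidence of bias
```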

The software, however, has a key limitation: It changes just one attribute at a time. To quantify the difference between white and black candidates, it must assume they are identical in every other way. But in real life, whites and blacks, or men and women, tend to differ systematically in many ways — data points that algorithms can lock onto even with no information on race or gender. How many white engineers graduated from Howard University? What are the chances that a woman attended a high school math camp?
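
A tiny simulation shows why that assumption matters. The group labels, the proxy feature, and the rates below are all made up, but the mechanism is the point: a scorer that never sees the protected attribute can still produce sharply different outcomes by group, and flipping the label alone will never reveal it:

```python
import random

random.seed(0)

population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # A proxy feature that skews sharply by group, e.g. math-camp attendance.
    attended_camp = random.random() < (0.6 if group == "A" else 0.1)
    population.append((group, attended_camp))

# A "blind" scorer: it never sees `group`, only the proxy.
score = lambda attended: 1.0 if attended else 0.0

for g in ("A", "B"):
    scores = [score(att) for grp, att in population if grp == g]
    print(g, round(sum(scores) / len(scores), 2))
# Prints roughly: A 0.6, B 0.1. Flipping only the group label leaves each
# individual's score unchanged, so a one-attribute test sees no bias, yet
# average outcomes diverge sharply by group.
```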

Untangling cultural bias from actual differences in qualifications isn’t easy. Still, some are trying. Notably, Julius Adebayo at Fast Forward Labs — using a method to “decorrelate” historical data — found that race was the second biggest contributor to a person’s score on COMPAS, a crime-prediction algorithm that authorities use in making decisions on bail, sentencing and parole. His work was possible thanks to Florida sentencing data unearthed by Julia Angwin at ProPublica for her own COMPAS audit — an effort that sparked a battle with COMPAS maker Northpointe, in large part because there’s no shared definition of what makes an algorithm racist.
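
For readers curious what "decorrelating" can look like in practice, one common approach (not necessarily the exact method Adebayo used) is residualization: regress each feature on the protected attribute and keep only the leftover variation. The sketch below uses invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
race = rng.integers(0, 2, size=1000).astype(float)  # 0/1 encoding, invented
# A historical feature correlated with race; the numbers are made up.
feature = 2.0 * race + rng.normal(size=1000)

# Fit feature ~ race by least squares, then keep only the residual.
X = np.column_stack([np.ones_like(race), race])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
residual = feature - X @ beta

print(round(np.corrcoef(race, feature)[0, 1], 2))   # strong, roughly 0.7
print(round(np.corrcoef(race, residual)[0, 1], 2))  # near zero
```

Scoring on the residualized features, instead of the raw ones, is what makes it possible to ask how much of a score was driven by race in the first place.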

Many researchers and practitioners are working on how to assess algorithms and how to define fairness. This is great, but it inevitably runs into a bigger problem: secrecy. Algorithms are considered the legally protected “secret sauce” of the companies that build them, and hence largely immune to scrutiny. We almost never have sufficient information about them. How can we test them if we have no access in the first place?

There’s a bit of good news on this front, too. Last week, James Vacca, a Democratic New York City council member, introduced legislation that would require the city to make public the inner workings of the algorithms it uses to do such things as rate teachers and decide which schools children will attend.

It’s a great idea, and I hope it’s just the first step toward making these fallible mechanisms more transparent and accountable — and toward a larger, more inclusive discussion about what fairness should mean.

Read State Regulators Eye Life Accelerated Underwriting Programs on ThinkAdvisor.

We have a Facebook news feed. Visit https://www.facebook.com/ThinkAdvisorLifeHealth

