Computer algorithms play an increasingly important role in running the world — filtering news, assessing prospective employees, even deciding when to set prisoners free. All too often, though, their creators don’t make them adequately accountable to the people whose lives they affect.
It’s thus good to see some researchers and politicians starting to do something about it.
Objective as they may seem, artificial intelligence and big-data algorithms can be as biased as any human. Examples pop up all the time. A Google AI designed to police online comments rated “I am a gay black woman” 87% toxic but “I am a man” only 20%. A machine-learning algorithm developed by Microsoft came to perceive people in kitchens as women. Left unchecked, the list will only grow.
Help may be on the way. Consider Themis, a new, open-source bias detection tool developed by computer scientists at the University of Massachusetts Amherst. It tests “black box” algorithms by feeding them inputs with slight differences and seeing what comes out — much as sociologists have tested companies’ hiring practices by sending them resumes with white-sounding and black-sounding names. This can be valuable in understanding whether an algorithm is fundamentally flawed.
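The idea behind this kind of audit can be sketched in a few lines of code. The model, attribute names and decisions below are hypothetical illustrations, not Themis's actual interface:

```python
# A minimal sketch of differential testing in the spirit of Themis:
# flip one protected attribute, hold everything else fixed, and count
# how often the black-box decision changes. (Hypothetical model and
# field names -- not the real Themis API.)

def causal_discrimination(model, inputs, attribute, values):
    """Fraction of inputs whose output changes when only `attribute` varies."""
    flips = 0
    for x in inputs:
        outputs = set()
        for v in values:
            probe = dict(x)
            probe[attribute] = v  # vary only the protected attribute
            outputs.add(model(probe))
        if len(outputs) > 1:  # same "person," different decision
            flips += 1
    return flips / len(inputs)

# A toy black box that unfairly keys directly on race:
biased_model = lambda x: "hire" if x["race"] == "white" else "reject"

applicants = [{"race": "white", "experience": e} for e in range(10)]
rate = causal_discrimination(biased_model, applicants, "race", ["white", "black"])
# rate == 1.0: every applicant's outcome flips with race, flagging the bias
```

The same harness works on any scoring function you can call but not inspect, which is what makes the resume-audit analogy apt.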
The software, however, has a key limitation: It changes just one attribute at a time. To quantify the difference between white and black candidates, it must assume that they are identical in every other way. But in real life, whites and blacks, or men and women, tend to differ systematically in many ways — data points that algorithms can lock onto even with no information on race or gender. How many white engineers graduated from Howard University? What are the chances that a woman attended high-school math camp?
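The loophole is easy to demonstrate. In this toy sketch (the zip codes and decisions are invented for illustration), a model that never looks at race sails through the one-attribute flip test, even though the proxy it relies on would split outcomes along racial lines in realistic data:

```python
# Sketch of the limitation: a model that ignores race but keys on a
# correlated proxy (here, a hypothetical zip code) passes a
# single-attribute flip test.

proxy_model = lambda x: "approve" if x["zip"] == "02138" else "deny"

def flip_test(model, person, attribute, values):
    """True if flipping only `attribute` ever changes the output."""
    return len({model({**person, attribute: v}) for v in values}) > 1

person = {"race": "black", "zip": "60620"}
changed = flip_test(proxy_model, person, "race", ["white", "black"])
# changed == False: the audit sees no discrimination here, yet if zip
# code tracks race in the population, the model still discriminates.
```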
Untangling cultural bias from actual differences in qualifications isn’t easy. Still, some are trying. Notably, Julius Adebayo at Fast Forward Labs — using a method to “decorrelate” historical data — found that race was the second-biggest contributor to a person’s score on COMPAS, a crime-prediction algorithm that authorities use in making decisions on bail, sentencing and parole. His work was possible thanks to Florida sentencing data unearthed by Julia Angwin at ProPublica for her own COMPAS audit — an effort that sparked a battle with COMPAS maker Northpointe, in large part because there’s no shared definition of what makes an algorithm racist.
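To give a flavor of what "decorrelating" can mean — this is a deliberately simple stand-in, not Adebayo's actual method — one common move is to strip from each feature the part that a protected attribute explains. For a binary attribute, that amounts to subtracting per-group averages, leaving a residual that carries no linear information about group membership:

```python
# A rough sketch of one decorrelation idea: residualize a feature
# against a binary protected attribute by subtracting group means.
# (Illustrative only; not the specific technique used in the COMPAS audit.)

def decorrelate(feature, group):
    """Return feature values with each group's mean removed."""
    means = {}
    for g in set(group):
        vals = [f for f, gi in zip(feature, group) if gi == g]
        means[g] = sum(vals) / len(vals)
    return [f - means[g] for f, g in zip(feature, group)]

prior_arrests = [4, 5, 6, 1, 2, 3]           # toy data
race          = ["a", "a", "a", "b", "b", "b"]
adjusted = decorrelate(prior_arrests, race)
# Each group now averages zero, so the adjusted feature no longer
# predicts race: [-1.0, 0.0, 1.0, -1.0, 0.0, 1.0]
```

Scoring on the adjusted features, then comparing against scores on the raw ones, is one way to estimate how much a protected attribute — directly or through proxies — contributes to the result.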