Where AI Falls Short

This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel.

My colleague Abigail Wen is interviewing some of the greatest minds in artificial intelligence (AI) on her new podcast, Intel on AI. From Ivy League professors to Fortune 50 executives, I’m pleasantly surprised to hear how many experts in the field are concerned with the potential for AI to impact society in negative ways.

Episode 7 addresses this issue directly with guest Yeshimabeit (Yeshi) Milner, founder of Data for Black Lives. In the episode, Yeshi and Abigail discuss why Data for Black Lives, a community of over 4,000 professionals, is working to break down the silos between scientists and activists.

“Models aren’t neutral; no data is neutral.”

-Yeshi Milner

A History of Inequality

In a previous episode of the podcast, MIT’s Bernhardt Trout talked about some of the ethical quandaries AI is leading us into, noting that while many of these questions are not new to humanity, advances in areas such as autonomous driving are forcing moral dilemmas to the forefront. While Bernhardt recognizes that the technologies being developed are important to consider in themselves, he says, “But, perhaps a more important issue is how well we face that technology.”

Yeshi’s work with Data for Black Lives takes a similar approach, pushing to ensure that the people who develop and implement algorithms with a major influence on lives reflect the communities that will be most impacted. For example, Yeshi talks about the history of homeownership in America and how seemingly well-intentioned technology widened disparity gaps.

During the New Deal of the 1930s, the Home Owners’ Loan Corporation was established to refinance nearly one in five private mortgages in urban areas to offset the devastation of the Great Depression. Part of this work included the creation of “Residential Security” maps to assess the mortgage lending risks in certain areas. Neighborhoods considered high risk were often denied the loans from financial institutions that would have improved housing and other economic opportunities, a process that came to be known as “redlining.” Soon after, in the 1950s, engineer Bill Fair and mathematician Earl Isaac created the data analytics company FICO while working at the Stanford Research Institute, pioneering the practice of credit scoring that has become the standard for American private lending. But by using data steeped in exclusionary practices, such credit scores can further economic inequality.

Outcome vs. Intention

On its face, a risk assessment model benefits both lender and borrower: an accurate assessment of the ability to repay avoids both harmful over-lending, as happened in the run-up to the 2008 financial crisis, and under-lending, which reduces access to the credit so important to home purchases and business expansion. Reducing the discretion of the branch manager who used to make these decisions should prevent bias against borrowers, but as Yeshi notes, when these models are built on deeper historical inequalities, they can make matters worse by locking entire generations out of economic opportunities offered to others, something Mehrsa Baradaran of the University of California, Irvine School of Law has studied in the research paper “Jim Crow Credit.”

Credit scores have a complicated interaction with racial bias in the US, where they are commonly used as a screening tool by hiring managers, despite weak evidence of effectiveness for this purpose. There’s obvious potential for a vicious cycle—unemployment leading to debt, leading to further unemployment—and so several states have banned the practice. However, a paper from researchers at Harvard and the Boston Federal Reserve showed that while this well-intentioned measure decreased unemployment overall, it increased Black unemployment. One hypothesis for the mechanism is that without the “objective” signal of the credit score, some hiring managers leaned more heavily on their own intuitions and biases.

Exacerbating Underlying Problems

Today, a host of financial technology (fintech) companies are launching to find new ways to provide credit to people all over the planet, promising to change outdated models of risk. Yet research from Harvard suggests borrowers using these newer services could be more likely to default. As organizations like the World Bank attempt to end extreme poverty with the assistance of AI projects, the tools being used must be carefully considered, because AI can act as an amplifier for human bias.

Intel’s Sandra Rivera and DataRobot’s Ben Taylor talked about the issues of bias in hiring and employment during a previous podcast episode, and Yeshi echoes their observations by highlighting how bias can unfortunately be built into too many technologies, such as facial recognition, as studied by MIT researcher Joy Buolamwini, or race-adjusted medical algorithms, as Dr. Darshali Vyas wrote in The New England Journal of Medicine.

The Dangers of Misuse and Mistrust

How AI is used in health care is a critical question, because it is one of the most promising areas of application yet also among the ripest for exploitation, misuse, or mistrust. Yeshi noted that rising COVID-19 rates in Miami led the city to launch a random screening program with the University of Miami Miller School of Medicine earlier this year, but the program was quickly met with backlash over fears that outcome bias could lead to more aggressive restrictions on certain communities. Similar fears were stoked in Minnesota when a state official used the phrase “contact tracing” in reference to investigating protesters arrested during demonstrations against police brutality.

As a foreigner living an entire ocean away from the US, I listened to this episode as many of you will: as an outsider from a society with very different fault lines. However, the theme of public trust as key to the adoption of new technology is universal. For example, mobile network towers have been attacked across Europe due to unsubstantiated 5G fears, and diseases like measles have returned partly due to unfounded fears of vaccination. Trust is a key foundation of public policy, too: during my time at Intel I’ve been seconded to NCMEC, where I worked on automated systems that detect the sharing of child sexual abuse imagery and report it to national authorities, systems that, absent a last-minute derogation, will be turned off in the EU as new laws come into effect to strengthen privacy protections for users of instant messaging systems.

Done right, analytics and AI can give us tools that are more effective, more transparent, and fairer than those they replace. But this podcast shows the degree of care that is necessary not only to get it right, but to be seen to get it right, too. Without the trust of the public, technical solutions, however worthy, are doomed.

The entire podcast episode with Yeshi and Abigail is worth listening to, as are the other episodes in the series. Listen to more episodes at: https://www.intel.com/content/www/us/en/artificial-intelligence/podcast.html

To learn more about Intel’s work in AI, visit: https://intel.com/ai

Edward Dixon
Data Scientist, Intel
