Building in Our Biases: Ethics in Artificial Intelligence

This blog post was originally published by Bitfury. It is reprinted here with Bitfury's permission.

Artificial intelligence is having its watershed moment. The technology has advanced beyond simple application and experimentation and is now being used across many fields to supplement and enhance human intelligence. The ethical implications of this rapid advancement are serious, and after a close review of the AI ecosystem over the past twelve months, I have concluded that pressing technical and ethical errors are being made. These issues do not lie in the nature of artificial intelligence itself, but in the design choices made by researchers and companies. First, there are unquestionable biases in the design and selection of the data sets used to train AI. Second, beyond the problematic data selection, I am concerned that the growing complexity of these networks will soon outstrip our understanding (and therefore our oversight).

In short: Humans are building AI systems that are nearly impenetrable to human analysis, coding in biases and fallacious thinking while failing to include the best aspects of human intelligence, including our capacity for complex thinking. And despite several possible applications of “AI for good,” we are instead using it to disintermediate ourselves and solidify discriminatory practices.


Ethical/Technical Issue #1: Data Sets

The human brain is incredible: it is the home of our intelligence, our personality, our biases and our ignorance. We are still learning how the mind works, but the researchers Kahneman and Tversky proposed that we think of the brain as two subsystems, a conscious, logical, rational system operating under the auspices of a second, unconscious, emotional one. Our unconscious brain works on instinct and decides many of our actions (often before we even realize we have made a decision). It does this by searching its store of examples, opinions and memories and acting on that information, regardless of ethical ramifications or even factual accuracy. Your brain has been capturing this data since day one. We train ourselves through this data capture, and this information forms the basis of our different personalities, cultures and beliefs. For example, Europeans believed there was only one type of swan, the white swan, until Australia and its black swans were discovered. This is a classic case of information bias: if your brain only ever sees white swans, you assume there are only white swans (and any suggestion that black swans exist sounds like lunacy). In other words, we unknowingly introduce biases into our minds all the time. And we are now doing the same with artificial intelligence.

Machine learning systems learn, like humans, through example, but they need a staggering number of examples to do so (far more than humans, which is encouraging for our own future). It makes sense, then, that we should give these systems unbiased examples to learn from (a data set with equal numbers of white and black swans, for example), but that is not what is happening. Instead, we are feeding them data that comes pre-loaded with our own implicit biases, and the results are explicit examples of racism, discrimination and more. In 2016, ProPublica reported that an AI system used in courtrooms across the United States to estimate a defendant's risk of committing future crimes (with the scores usually given to judges for sentencing) was biased. The formula “was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.”
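
To make the kind of disparity ProPublica measured concrete, here is a minimal sketch in Python (using pandas) of an error-rate audit: it compares false positive and false negative rates across two groups. The data frame, column names and groups are entirely made up for illustration; this shows the general shape of such an audit, not ProPublica's actual analysis.

    import pandas as pd

    # Hypothetical audit data: one row per defendant, with a group label,
    # the model's "high risk" prediction, and whether the person actually reoffended.
    df = pd.DataFrame({
        "group":               ["A", "A", "A", "B", "B", "B"],
        "predicted_high_risk": [1,   0,   1,   0,   0,   1],
        "reoffended":          [0,   0,   1,   0,   1,   1],
    })

    def error_rates(sub):
        # False positives: flagged as high risk but did not reoffend.
        fp = ((sub.predicted_high_risk == 1) & (sub.reoffended == 0)).sum()
        tn = ((sub.predicted_high_risk == 0) & (sub.reoffended == 0)).sum()
        # False negatives: labeled low risk but did reoffend.
        fn = ((sub.predicted_high_risk == 0) & (sub.reoffended == 1)).sum()
        tp = ((sub.predicted_high_risk == 1) & (sub.reoffended == 1)).sum()
        return pd.Series({"false_positive_rate": fp / (fp + tn),
                          "false_negative_rate": fn / (fn + tp)})

    # A large gap between the groups' rates is the kind of bias ProPublica reported.
    print(df.groupby("group").apply(error_rates))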

This is just one of dozens of examples where AI is simply getting it wrong, sometimes on a large scale. In 2016, Microsoft released an intelligent chatbot that learned from its interactions with real users on social media. Although it was not a very sophisticated chatbot, within a day it started using racist expressions and offensive language (thanks to its data set: public Twitter interactions). Microsoft shut the bot down immediately.

We need thorough, representative data sets to train these AI applications, and researchers need to constantly take inventory of their own biases when compiling them.
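
That inventory can start with something as simple as measuring how each group is represented in the training data before any model is trained. The sketch below, in plain Python with a made-up swan "colour" attribute, flags values that fall below a chosen share of the data; a real audit would go much further, but even this surfaces obvious gaps.

    from collections import Counter

    def representation_report(records, attribute, min_share=0.2):
        """Print how often each value of `attribute` appears and flag under-represented ones."""
        counts = Counter(record[attribute] for record in records)
        total = sum(counts.values())
        for value, count in counts.most_common():
            share = count / total
            flag = "  <-- under-represented" if share < min_share else ""
            print(f"{attribute}={value}: {count}/{total} ({share:.0%}){flag}")

    # Toy training set: overwhelmingly white swans and a handful of black ones.
    training_set = [{"colour": "white"}] * 97 + [{"colour": "black"}] * 3
    representation_report(training_set, "colour")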


Ethical/Technical Issue #2: Human Oversight

If our goal is to train artificial intelligence to do the jobs we do not want to do, we should, as professionals but also as ethical humans, ensure that AI is doing those jobs well and within our guidelines. This can be done through goal setting: What will the AI be doing? What is the intended result? Where are the boundaries? These goals can come from the original human programming, but if the system is self-sustaining (meaning it learns from its results and its environment), they will guide its actions well beyond the developer's original intentions. Unfortunately, there is a growing professional abandonment of responsibility here in the pursuit of technical experimentation and efficiency.

For example: In 2017, Facebook created AI applications that could talk to each other and complete negotiations. The applications soon invented their own language in which to communicate, which was not a problem until the developers realized they could no longer understand that language as it progressed. Where was the goal setting in this scenario? Why wasn't human oversight required before decisions were made? Perhaps because humans have only an implicit and limited understanding of our own decision-making processes. How can you expect AI applications to communicate and act like your team if you have not mapped and built in the rules of engagement (for example, designating the preferred language of business)?

Another possible reason is efficiency: adding humans back into a decision-making process run by AI applications slows it down. It is far easier to have humans merely oversee decisions and step in only when issues arise. While this sounds ideal, the reality is that we still do not fully understand how intelligent machines make decisions, including the factors behind their choices and the basis for their judgments. If we do not understand a decision, can we rely on it?
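
One pragmatic middle ground is to let the model act only when it is confident and route every other case to a person. The sketch below illustrates that pattern with a toy model and an arbitrary confidence threshold; the class, method names and threshold are all hypothetical, and the point is the routing logic, not the model.

    class ToyRiskModel:
        """Stand-in for a real classifier; returns a label and a confidence score."""
        def predict_with_confidence(self, case):
            score = min(1.0, case.get("signal", 0) / 10)
            label = "high_risk" if score > 0.5 else "low_risk"
            return label, max(score, 1 - score)

    def decide_with_oversight(model, case, threshold=0.9):
        """Act on the model's decision only when it is confident; otherwise defer to a human."""
        label, confidence = model.predict_with_confidence(case)
        if confidence >= threshold:
            return {"decision": label, "decided_by": "model", "confidence": confidence}
        # Low confidence: route the case to a human reviewer instead of acting automatically.
        return {"decision": "pending", "decided_by": "human_review", "confidence": confidence}

    model = ToyRiskModel()
    print(decide_with_oversight(model, {"signal": 10}))  # confident, so the model decides
    print(decide_with_oversight(model, {"signal": 6}))   # uncertain, so a human reviews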

According to MIT Press' Machine Learning, different machine learning models offer varying levels of interpretability of their decision making, and deep learning models are known for being the least interpretable by humans. A deep learning model is composed of simple processing units connected together into a network. The size of these networks, along with their distributed nature and the way they transform data as it passes through them, makes their decisions incredibly difficult to interpret, understand and therefore explain. It is essential that we understand how these models use data, and that we have a failsafe mechanism for detecting biases, before we give them autonomy.
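
There are techniques for probing a black-box model from the outside even when its internals are opaque. One of the simplest is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses a toy "black box" function standing in for a trained network; it illustrates the probing idea only and does not make a deep model genuinely interpretable.

    import random

    def black_box(income, postcode):
        """Stand-in for an opaque model that secretly relies almost entirely on income."""
        return 1 if income > 50 else 0

    # Toy evaluation set: (income, postcode, true label), with labels driven by income.
    data = [(random.uniform(0, 100), random.randint(1, 5)) for _ in range(200)]
    data = [(income, postcode, 1 if income > 50 else 0) for income, postcode in data]

    def accuracy(rows):
        return sum(black_box(inc, pc) == label for inc, pc, label in rows) / len(rows)

    baseline = accuracy(data)
    for index, name in [(0, "income"), (1, "postcode")]:
        column = [row[index] for row in data]
        random.shuffle(column)
        permuted = [
            (value if index == 0 else income, value if index == 1 else postcode, label)
            for (income, postcode, label), value in zip(data, column)
        ]
        # A large accuracy drop means the model leans heavily on that feature.
        print(f"Permuting {name}: accuracy drops by {baseline - accuracy(permuted):.2f}")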


Recommendations

Artificial intelligence has the capacity to make our world more trusted, transparent, and secure — but only if we can fight our own inherent biases as we design it and remain its responsible advocates and overseers. We are certainly not beyond needing to set guidelines and goals for ourselves — here are a few I believe we should start with.
1. Regulation. At the time of publication, there is no regulation even proposed for auditing AI systems (much less the datasets being used to train them). This should be remedied, especially when AI is applied in sectors with existing institutional discrimination and inequalities. Organizations such as the World Economic Forum’s AI council are making incredible strides in this regard.
2. External auditing. Any company or organization working on AI applications should build coalitions of advisors and partners from diverse sectors, countries, genders, ages and races. Otherwise, it is nearly guaranteed that there will be biases built into these projects.
3. Human/AI cooperation. Until we have a much stronger handle on how artificial intelligence makes decisions and analyzes information, humans should be an integral part of any project that uses AI. This means doctors equipped with AI diagnostics instead of AI-only diagnosticians; recruiters using AI to search for applicants who may not fit the traditional “mold” of employee, instead of recruitment done entirely by AI, etc.

For further reading about the ethics of artificial intelligence, I recommend the articles linked above as well as these excellent resources.

Books:

Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O’Neil (2016)

News Articles:

Reuters: Amazon scraps secret AI recruiting tool that showed bias against women

Quartz: Amid employee uproar, Microsoft is investigating sexual harassment claims overlooked by HR

MIT Technology Review: The US just released 10 principles that it hopes will make AI safer

Wired: The Best Algorithms Struggle to Recognize Black Faces Equally

GBP News: How Artificial Intelligence Reflects Human Biases — And How It Can Improve

Research, Studies and Presentations:

“50 Years of Test (Un)fairness: Lessons for Machine Learning,” by Ben Hutchinson and Margaret Mitchell.

“To predict and serve?” by Kristian Lum and William Isaac.

The Perpetual Line-Up: Unregulated Police Face Recognition in America, by Clare Garvie, Alvaro Bedoya and Jonathan Frankle (Georgetown Law, Center on Privacy & Technology).

“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” by Joy Buolamwini and Timnit Gebru.

“Gender Recognition or Gender Reductionism? The Social Implications of Embedded Gender Recognition Systems,” by Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham.

EU Guidelines on Ethics in Artificial Intelligence: Context and Implementation, by various authors.

Fabrizio Del Maffeo
Head of Artificial Intelligence, Bitfury
