Biometric Mirror Highlights How AI Can Be Biased and Why That's a Problem

Humans tend to be naturally judgmental, almost always making internal snap judgments about someone when we first meet them. After looking at a person for only a few seconds, we might note their gender, race, and age, or decide whether or not we think they're attractive, trustworthy, or nice.

Of course, after we actually get to know the person, we might realize our initial perception of them was wrong. Between humans, that's usually not a big deal, but when it comes to AI it is a very big deal indeed. Why? Because our assumptions could shape how the artificial intelligence (AI) of the future makes increasingly important decisions.

One of those important decisions could pertain to space travel and human life in orbit, a subject of great importance to Asgardia, the first-ever space nation, whose long-term goal is to set up habitable platforms in space.

In an attempt to illustrate this problem to the public, researchers from the University of Melbourne created Biometric Mirror, an AI that analyzes a person's face and then displays 14 characteristics about them, including their age, race, and perceived level of attractiveness.

In order to train the system to do this, the Melbourne researchers began by asking human volunteers to judge thousands of photos for the same characteristics. This became the dataset Biometric Mirror references when analyzing new faces. Since the ratings the volunteers gave were subjective, Biometric Mirror's output is subjective too. If a majority of the human respondents thought people with beards seemed less trustworthy, that would influence how Biometric Mirror judges people with beards.
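To make that feedback loop concrete, here is a minimal Python sketch, entirely hypothetical: the attribute, ratings, and function names are invented for illustration, and this is not the researchers' actual code. It shows that even the simplest possible "model", averaging crowd ratings grouped by a visible trait, is enough to turn rater prejudice into a prediction.

```python
from statistics import mean

# Hypothetical crowd-sourced "trustworthiness" ratings for photos,
# keyed by a visible attribute. If raters systematically score bearded
# faces lower, anything trained on these labels inherits that bias.
ratings = [
    {"has_beard": True,  "trustworthiness": 4.1},
    {"has_beard": True,  "trustworthiness": 3.8},
    {"has_beard": False, "trustworthiness": 7.2},
    {"has_beard": False, "trustworthiness": 6.9},
]

def predict_trustworthiness(has_beard: bool) -> float:
    """Predict by averaging the crowd's ratings of similar faces."""
    similar = [r["trustworthiness"] for r in ratings if r["has_beard"] == has_beard]
    return mean(similar)

print(predict_trustworthiness(True))   # 3.95: bearded faces scored "less trustworthy"
print(predict_trustworthiness(False))  # 7.05: the raters' bias is now a "prediction"
```

A more sophisticated model trained on the same labels would inherit the same skew; the complexity of the model does not launder the subjectivity of its data.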

To use Biometric Mirror, the user just has to stand in front of the system for a few seconds. It quickly scans their face and then lists their perceived characteristics on a screen. Next, the AI asks the person to think about how they’d feel if it shared that information with others. For example, how would they feel if they were refused a job because the AI ranked them as having a low level of trustworthiness? Or what if law enforcement officials decided to target them because they ranked highly for aggression?

In a press release, lead researcher Niels Wouters explained that the study's objective is to provoke challenging questions about the boundaries of AI. It shows users how easy it is to implement AI that discriminates in unethical or problematic ways, which could have real societal consequences. And by encouraging debate on privacy and mass surveillance, the researchers hope to contribute to a better understanding of the ethics behind AI.

A system as biased as Biometric Mirror could have enormous consequences as AI becomes more widely used and makes ever more important decisions. Indeed, we're already seeing examples of biased AI in today's systems. While researchers work out ways to ensure that future systems don't contain the same flaws, it's important that the public consider the potential impact of biased AI on society, and that is exactly where Biometric Mirror could be a great help.

Would you be interested in pioneering research into space, science, and technology? Then join Asgardia now! Become an official citizen and let your voice be heard.

References: https://bit.ly/2Ov5Fg2

Image Credit: Chombosan / Shutterstock
