How Good — And How Secure — Is Facial Recognition Technology?

AILSA CHANG, HOST:

It's time now for All Tech Considered.

(SOUNDBITE OF ULRICH SCHNAUSS' "NOTHING HAPPENS IN JUNE")

CHANG: This month, we're looking at our bodies the way technology sees them. Here in the U.S., we're gradually getting used to facial recognition technology. It's in our phones, in some airport security systems, on social media. But how good and how secure is this technology? To answer some of those basic questions, we're joined by Alice O'Toole. She's a facial recognition expert at the University of Texas at Dallas. Welcome.

ALICE O'TOOLE: Thank you.

CHANG: Now, I feel like I'm seeing facial recognition pop up a lot more recently - not just on smartphones but in airport security systems, for example. Law enforcement has also experimented with it. Was there some sort of technological breakthrough that allowed wider use of this software?

O'TOOLE: Yes, there has been a breakthrough. In the last five years or so, there's been a new algorithm. And it's modeled after the human visual system whereas older technology would do really quite well with well-controlled facial images - so when the illumination is good, and the viewpoint is frontal. This new kind of algorithm is much better at being able to generalize identity across variable images. So that crazy facial expression or that, you know, change in age and so on, these algorithms really have the potential to be much better at the task.
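The "generalize identity across variable images" idea O'Toole describes is usually implemented by mapping each photo to an embedding vector and declaring a match when two embeddings are close enough. A minimal sketch, with hypothetical hand-written embeddings standing in for the output of a real face model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    """Declare a match if the embeddings are similar enough."""
    return cosine_similarity(emb1, emb2) >= threshold

# Hypothetical embeddings: two photos of one person (different lighting,
# different expression) land close together; a different person lands far away.
anna_frontal = [0.9, 0.1, 0.3]
anna_smiling = [0.85, 0.15, 0.35]
someone_else = [0.1, 0.9, -0.2]

print(same_person(anna_frontal, anna_smiling))  # True
print(same_person(anna_frontal, someone_else))  # False
```

The threshold and the three-dimensional vectors are illustrative only; production systems use embeddings with hundreds of dimensions produced by a trained network, but the comparison step works the same way.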

CHANG: One recurring issue though we keep seeing is facial recognition technology that misidentifies people of color much more frequently than it misidentifies white people. There was that one pretty embarrassing incident where Google tagged faces of black people as gorillas. I'm just wondering, from a technological standpoint, why do mistakes like that happen?

O'TOOLE: Yeah, this is a really excellent point. The current algorithms learn by example. And examples are basically many, many images of a particular person and an accurate label. And so you get what's out there. And what's out there may not be equally representative of different races.

CHANG: So you're saying there are just a lot more photos of white people in the databases these pieces of software read?

O'TOOLE: That's exactly right.

CHANG: So how do you solve this problem technologically? How do you make the web, I guess, less racially biased? Do you just flood it with more photos of people of color?

O'TOOLE: Yes.

CHANG: Oh.

O'TOOLE: The answer to the question technologically is quite simple. If you had equal representation of faces of different races, we would expect it would be equally accurate. How do you make the web un-race biased? That's a harder question.
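The bias the hosts discuss is typically audited by measuring error rates separately per demographic group rather than in aggregate. A minimal sketch with hypothetical similarity scores: each record is a score the system assigned to a pair of photos of two *different* people, so any score above the match threshold is a false match.

```python
# Hypothetical audit data: (similarity score for a non-matching pair, group).
nonmatch_scores = [
    (0.35, "group_a"), (0.41, "group_a"), (0.55, "group_a"), (0.30, "group_a"),
    (0.62, "group_b"), (0.71, "group_b"), (0.58, "group_b"), (0.44, "group_b"),
]

THRESHOLD = 0.6  # pairs scoring at or above this are (wrongly) declared a match

def false_match_rate(scores, group):
    """Fraction of non-matching pairs in a group that the system matches."""
    group_scores = [s for s, g in scores if g == group]
    errors = sum(1 for s in group_scores if s >= THRESHOLD)
    return errors / len(group_scores)

print(false_match_rate(nonmatch_scores, "group_a"))  # 0.0
print(false_match_rate(nonmatch_scores, "group_b"))  # 0.5
```

With these made-up numbers the system misidentifies people in group_b far more often than in group_a at the same threshold, which is exactly the kind of disparity that unbalanced training data produces.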

CHANG: So even when facial recognition software does work - when it is accurate - there are some troubling implications when it's linked to a database of images. For example, civil liberties groups worry that law enforcement could use this technology to target undocumented immigrants or to track protesters. As someone who really understands where this science is today, how do you think about this?

O'TOOLE: Could face recognition technology be abused? That's certainly a possibility. Any technology has the potential for abuse. Whether or not the potential for abuse of this technology outweighs its utility is something that societies have to decide by looking carefully at how it works and deciding what the right ways to use the technology are.

CHANG: Alice O'Toole is head of the Face Perception Research Lab at the University of Texas at Dallas. Thank you very much.

O'TOOLE: Thank you.

Transcript provided by NPR, Copyright NPR.