Troll Watch: AI Ethics
MICHEL MARTIN, HOST:
We're going to spend the next few minutes talking about developments in artificial intelligence, or AI. This week, the Trump administration outlined its AI policy in a draft memo which encouraged federal agencies to, quote, "avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth," unquote. And at the Consumer Electronics Show, the annual technology showcase, U.S. Chief Technology Officer Michael Kratsios elaborated on the administration's approach, warning that overregulation could stifle industries. But this stance comes as companies are announcing some boundary-pushing uses for AI, including creating composite images of fake people and conducting background checks. And those uses are raising ethical issues.
So to hear more about this, we've called Drew Harwell. He covers artificial intelligence for The Washington Post. He's with us now. Drew, welcome. Thanks so much for joining us.
DREW HARWELL: Thanks for having me.
MARTIN: So let's start with these recent developments in AI that you wrote about. You wrote about a new technology that generates photos of fake people. What is this, and how does it work?
HARWELL: So, pretty much, you can put in a lot of images of real people into the machine, and out it spits lots of computer-generated portraits that look really deceptively like the real thing. What this company is doing is selling those fake images to advertisers. The sales pitch they're making to companies is, hey, you want to check the diversity box. So come to us. Click a little checkbox, and you can say, I want all faces of older Asian people or young white people or middle-aged black people, and out it spits whatever you want.
But that's not real diversity, right? And so maybe they have good intentions, but this is sort of checkbox inclusion. It's not real. And what we found, too, is, you know, these creations are modeled after a certain sort of limited number of faces. It's just kind of mimicking what it's seen before.
MARTIN: The other concern that some critics have raised is that this makes it even easier for somebody to create false scenarios which people can't distinguish from real ones. People have been very concerned that this is the kind of technology that's going to be used to create false scenarios that real people will then be grafted into, and people won't be able to tell the difference. When you talk to the company about this, what do they say?
HARWELL: The companies kind of recognize that that's an issue. They understand that that's potentially corrosive to truth online and how we understand reality. But from their standpoint, it's not their problem. They feel like the real regulation from this should come from the government, somebody bigger than them to actually impose some law on the whole industry, not just one company at a time. But maybe there is something they could do. I mean, I've talked to a lot of AI researchers who say, what's to stop a company from this - putting like a watermark or a timestamp, some sort of signature so that people could understand that what they're looking at is not totally real? But right now, it's still the Wild West. And these companies can, without any real regulation, do whatever they want.
MARTIN: Speaking of Wild West scenarios, Airbnb has a patent for a new tool which uses AI to assess potential guests based on social media posts, criminal records and other information that is available, or may be available, online. Can you tell us a little bit more about that, and what are the ethical concerns that this is raising?
HARWELL: Yeah. So it's important for Airbnb to know that the customers who are renting homes from people are not a danger. And so their big idea was, we can create an AI that processes everything we can find about a person that's online. And we can design a system that can figure out, in the patent's words, how Machiavellian is this person? Is this person a potential psychopath? You know, the dream being that they could create this psychographic profile of a person that could then sort of separate the good people from the bad.
Think about what's online about any one of us right now. It's a really incomplete portrait. And to think that a computer would start making these judgments about the complicated inner lives that we all lead, it's just kind of a scary proposition.
MARTIN: So let's talk a little bit about this whole question of regulation. As we mentioned, the Trump administration released a draft memo on this issue. In it, they wrote that, quote, "where AI entails risk, agencies should consider the potential benefits and cost of employing AI when compared to the systems that AI has been designed to complement or replace," unquote. So that doesn't sound as though they're trying to impose any sort of external rules on this. I mean, given that you cover this field, what do you say about that?
HARWELL: There's a few competing dynamics. One is that the U.S. doesn't want to be, quote-unquote, "beat" in the AI arms race by China. China is an extremely sophisticated force in developing artificial intelligence around the world. And there's a fear among American industry that they could get left behind. But, you know, from others in civil society, the fear is that if you don't actually impose laws on these things before they become more widespread, that's a real danger.
And these are not fantastical sci-fi systems; they're in people's lives right now. I mean, AI is in the facial recognition that the police and the FBI are using right now to arrest people. It's creating fake videos. Whenever we search or get a recommendation on Spotify or Netflix, all of that is AI. So to think about, you know, a government saying, we're going to sort of stand back and let the industry figure this one out, it's a little alarming.
MARTIN: Before we let you go, is there any discussion within the industry about surfacing these ethical issues beyond the company itself?
HARWELL: Yeah. You're starting to hear that. The big giants like Amazon and Microsoft are starting to say, around facial recognition specifically, this is a powerful technology, and we don't want to be the ones to decide the rules of the road. It's just too important. So you are seeing a bigger push to establish a framework for AI ethics and AI laws. But every company is different, right? And every company has its own incentives to go for the laws that would benefit it and potentially undermine its competition. So without a general consensus, it's just sort of, you know, every man for himself.
MARTIN: That was Drew Harwell. He's a reporter covering AI for The Washington Post. Drew Harwell, thanks so much for talking to us.
HARWELL: Thank you. Transcript provided by NPR, Copyright NPR.