Professor Hao Li used to think it could take two to three years for deepfake videos to become indistinguishable from reality.
But now the associate professor of computer science at the University of Southern California says the technology could be perfected in as little as six to 12 months.
Deepfakes are realistic manipulated videos that can, for example, make it look like a person said or did something they didn’t.
“The best possible algorithm will not be able to distinguish,” he says of the difference between a perfect deepfake and real videos.
Li says he’s changed his mind because developments in computer graphics and artificial intelligence are accelerating the development of deepfake applications.
A Chinese app called Zao, which lets users convincingly swap their faces with film or TV characters right on their smartphones, impressed Li. When Zao launched on Friday, Aug. 30, it became the most downloaded app in China’s iOS App Store over the weekend, Forbes reports.
“You can generate very, very convincing deepfakes out of a single picture and also blend them inside videos and they have high-resolution results,” he says. “It’s highly accessible to anyone.”
On the problems with deepfakes
“There are two specific problems. One of them is privacy. And the other one is potential disinformation. But since they are curating the type of videos that you put your face into, in that case, disinformation isn’t really the biggest concern.”
On the threat of fake news
“You don’t really need deepfake videos to spread disinformation. I don’t even think that deepfakes are the real threat. In some ways, by raising this awareness, by showing the capabilities, deepfakes are helping us to think about whether things are real or not.”
On whether deepfakes are harmful
“Maybe we shouldn’t really focus on detecting if they are fake or not, but we should maybe try to analyze what are the intentions of the videos.
“First of all, not all deepfakes are harmful. Nonharmful content is obviously for entertainment, for comedy or satire, if it’s clear. And I think one thing that would help is … something that is based on AI or something that’s data-driven that is capable of discerning if the purpose is not to harm people. That’s a mechanism that has to be put in place in domains where the spread of fake news could be the most damaging, and I believe that some of those are mostly on social media platforms.”
On the importance of people understanding this technology
“This is the same as when Photoshop was invented. It was never designed to deceive the public. It was designed for creative purposes. And if you have the ability to manipulate videos now, specifically targeting the identity of a person, it’s important to create awareness, and that’s sort of like the first step. The second step would be that we have to be able to flag certain content. Flagging the content would be something that social media platforms have to be involved in.
“Government agencies like DARPA, the Defense Advanced Research Projects Agency — their purpose is basically to prepare America against potential threats at a technological level. And now, in a digital age, one of the things that they’re heavily investing in is how to address concerns around disinformation. In 2015, they started a program called MediFor, for media forensics, and the idea is that now that we have all the tools that allow us to manipulate images, videos and multimedia, what can we do to detect those? And at the same time, AI has advanced so much, specifically in the area of deep learning, where people can generate photorealistic content. Now they are taking this to another level and starting a new program called SemaFor, which is semantic forensics.”
On why the idea behind a deepfake is valuable
“Deepfake is a very scary word, but I would say that the underlying technology is actually important. It has a lot of positive use cases, especially in the area of communication. If we ever wanted to have immersive communication, we need to have the ability to generate photorealistic appearances of ourselves in order to enable that. For example, in the fashion space, that’s something that we’re working on. Imagine if you ever wanted to create a digital twin of yourself and you wanted to see yourself in different clothing and do online shopping, really virtualizing the entire experience. And deepfakes specifically focus on video manipulations, which is not necessarily the primary goal that we have in mind.”
On whether he feels an obligation to play a role in combating the use of deepfakes for spreading disinformation
“First of all, we develop these technologies for creating, for example, avatars, which is one thing that we’re demonstrating with our startup company called Pinscreen. And the deepfakes are sort of like a derivative. We were all caught by surprise.
“Now you have these capabilities, and we really have an additional responsibility in terms of what the applications are. And this goes beyond our field. This is an overall concern in the area of artificial intelligence.”
This article was originally published on WBUR.org.