Despite people's fears, sophisticated, deceptive videos known as "deepfakes" haven't arrived this political season. But it's not because they aren't a threat, sources tell NPR. It's because simple deceptions like selective editing or outright lies have worked just fine.
"You can think of the deepfake as the bazooka and the video splicing as a slingshot. And it turns out the slingshot works," said Hany Farid, a professor at the University of California, Berkeley specializing in visual misinformation.
The potential for deepfakes has been demonstrated: In 2018 a deepfake video created with artificial intelligence technology showed Barack Obama saying words he never uttered. And with such a potent tool of disinformation in the public realm, experts still believe it's just a matter of time before we see them appear more widely.
"This sort of technology is going to continue to advance, so these sorts of capabilities are going to become easier to use," said Matt Turek, who heads an office at the Pentagon's Defense Advanced Research Projects Agency focused on detecting manipulated media.
But amid protests over race relations in American cities, conspiracy theories about the coronavirus crisis, tension over President Trump's Supreme Court pick, and a contentious presidential race, few deepfakes have been used this political season. One notable exception was a faked video showing former Vice President Joe Biden sticking his tongue out, which was tweeted out by the president himself.
"That actually was manipulated using deep learning-based technology," said Lindsay Gorman, the Emerging Technologies Fellow at the Alliance for Securing Democracy, about the Biden video. "And I would classify that as a deepfake."
Because of the way that deepfake audio and videos could alter the very sense of reality, the technology has been a source of alarm for years, especially following the foreign misinformation campaign employed by Russia during the 2016 election. And scientists in the field have been closely watching America's adversaries for signs these technologies might be employed.
"If you look at who the United States considered foreign entities [with] an interest in interfering with U.S. elections or just causing trouble in general... China, Russia, Iran and even North Korea falls into that category," said Brian Pierce, a visiting research scientist with the Applied Research Laboratory for Intelligence and Security at the University of Maryland.
Effective deepfakes could feed false intelligence into decisions at the Pentagon, distort the public's view of law enforcement, and affect commercial sectors such as insurance, which relies on video evidence after accidents, Turek told NPR, listing a few of the technology's potential effects.
These fears have not come to fruition in the political sphere, scientists working in this space say, largely because more primitive misinformation techniques remain so effective. Misleading or false text, photoshopped or improperly contextualized images and selectively edited videos don't require sophisticated technology and aren't considered deepfakes.
Plus, deepfake technology can be resource-intensive and technically difficult.
"It's not like you can just download one app and... provide something like a script and create a deepfake format and make [an individual] say this," Turek said. "Perhaps we will [get to that point] over the next few years, then it's going to be very easy potentially to create compelling manipulations."
With all the attention paid to misinformation in politics, the most substantial ongoing damage caused by deepfakes is happening in the personal sphere.
"We are seeing it being weaponized, just not necessarily in [politics], but in the form of nonconsensual pornography," said Farid. "So this is a huge problem on the internet right now, where women's likenesses are being inserted into sexually explicit material and that material is being distributed."
That, experts say, is where America's vulnerability lies, because fakes of national candidates will be quickly debunked. David Doermann, director of the Artificial Intelligence Institute at the University at Buffalo, thinks that if deepfakes were used on a local level, targeting local election races, there could be substantial damage.
"The place that we saw these fakes hurt people, initially, was at a very grassroots level. They were using it for revenge on a spouse or a partner. And you know at that level, it can do a lot of damage," Doermann told NPR.
DAVID GREENE, HOST:
Sophisticated, computer-generated audio or video is known as a deepfake. It's been a concern ever since this video went viral.
(SOUNDBITE OF VIDEO)
JORDAN PEELE: (As Barack Obama) We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.
GREENE: That's from a deepfake featuring the voice of actor Jordan Peele. His face was transposed onto a video of President Obama, making it look and sound like the president delivered those words. Experts are worried our adversaries could use deepfake technology to meddle in an election. Tim Mak and Dina Temple-Raston from NPR's investigations team explain why this hasn't happened yet.
TIM MAK, BYLINE: The first deepfake of the 2020 election season was tweeted out by someone you might not expect - President Donald Trump. Back in April, the president retweeted a crudely manipulated video of former Vice President Joe Biden.
LINDSAY GORMAN: Biden appeared to have his tongue out and in kind of a ridiculous pose.
MAK: That's Lindsay Gorman. She's an expert in technology and disinformation at the Alliance for Securing Democracy.
GORMAN: And it turns out that that actually was manipulated using deep learning-based technology. And I would classify that as a deepfake.
DINA TEMPLE-RASTON, BYLINE: Deep learning-based technology is sophisticated. It's more than just Photoshopping something. It uses a kind of artificial intelligence, or AI, and it works a bit like the brain does and takes lots of little bits of information and brings them together. And it's that deep learning that computers can now do that makes deepfakes so believable. An example of just how far we've come popped up in China recently. A news anchor for Xinhua, China's state-run news agency, was completely generated with AI.
(SOUNDBITE OF VIDEO)
COMPUTER-GENERATED VOICE: (Non-English language spoken).
TEMPLE-RASTON: This is from part of a newscast read by a computer-generated person. And she seems pretty real.
MAK: Which is a problem because when experts look at adversaries who might use deepfakes against the United States, China is at the top of the list, says Brian Pierce, a scientist at the Applied Research Laboratory for Intelligence and Security at the University of Maryland.
BRIAN PIERCE: Foreign entities who have an interest in interfering with U.S. elections or just causing trouble in general, yes, we certainly put China, Russia, Iran and even North Korea falls into that category.
MAK: Pierce says deepfakes haven't played the role people feared in this election season because at this point, to make one is still time-consuming and expensive. You can't just cook one up in rapid response to something that just happened. So that's the good news. The bad news, he says, is that selective editing, textual misinformation and lies are all rather effective. And you don't need AI for that.
PIERCE: I think things are getting better now. There's - a lot of times, there are strategies probably you could employ, but I hesitate to focus too much on deepfakes because I think there's a lot of other ways they could achieve their goal.
MAK: We saw that in 2016 with Twitter accounts and Facebook pages that can sow division without all that software and computing power. David Doermann, director of the AI Institute at the University at Buffalo, says that deepfakes right now are highly personal, and they may enter the political scene locally.
DAVID DOERMANN: And the place that we saw these deepfakes hurt people initially was a very grassroots level. They were using it for revenge on a spouse or a partner. And at that level, it can do a lot of damage.
TEMPLE-RASTON: And that makes sense. When America's adversaries first started testing their cybercapabilities, it wasn't at the national level. They also went local. They cracked into local election databases just to look around and to see if they could. In the same way, Doermann says, people trying to meddle in our national politics might test their deepfakes on local election races first. And that's what we should watch for. For NPR News, I'm Dina Temple-Raston in New York.
MAK: And I'm Tim Mak in Washington.
(SOUNDBITE OF SHIGETO'S "WHAT WE HELD ON TO") Transcript provided by NPR, Copyright NPR.