
Facebook Takes a Stand on Political Deepfakes, a Problem That Doesn’t Exist


Last year, an artist made a deepfake of Facebook founder Mark Zuckerberg. It was meant to be a jab at the company’s inability—or refusal—to take down manipulated imagery, by nudging the man in charge himself.

Facebook scrambled to draft a policy in response. The stunt started a long-overdue conversation about what types of manipulated media should be allowed to stay on Facebook, and what to do with satire, parody, and non-malicious edits.


Nearly a year later, Facebook seems to think it’s figured it out. In a new policy announced on Tuesday, Facebook says it will remove deepfakes, or algorithmically generated face-swap videos, from its platform.

It will remove media that’s edited using machine learning with the intent to mislead viewers. Everything else—satire, parody, art, silly stuff like Nicolas Cage’s face on a woman’s body, non-algorithmic edits—stays up, but may receive a flag warning viewers that it’s manipulated.

The company seems to have listened to experts’ advice. “Too often, platforms and users assume the only option is to take content down or not,” Corynne McSherry, legal director for the Electronic Frontier Foundation, told Motherboard in May. “But there are other options—like providing additional information.”

According to a press release, Facebook will remove media that fits the following criteria:

  • It has been edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say, and,
  • It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.

“While these videos are still rare on the internet, they present a significant challenge for our industry and society as their use increases,” the press release states.

But these videos aren’t just rare. They’re functionally non-existent. Anxieties over political manipulation using deepfakes run high: Zuckerberg called the technology an “emerging threat” while testifying before Congress in October, and lawmakers have introduced multiple bills in recent years to combat their spread. Yet there have been no serious uses of deepfakes to manipulate politics. They’ve almost exclusively been used to manipulate women’s bodies.

Sam Gregory, program director at the human rights nonprofit WITNESS (which is also a partner on Facebook’s recently launched “Deepfake Detection Challenge”), told Motherboard that he’s less skeptical about this policy, though he acknowledges its limits.

“Given how poorly the platforms did globally on dealing with other forms of misinformation and disinformation as they rapidly grew in scope and scale, there’s value in investing in detection capacity and trying to craft a deepfakes policy in advance of the threat expanding even further,” Gregory said. “I also think that deepfakes in disinfo/misinformation present a particular problem because we’re not cognitively well-equipped to discern them, haven’t had experience with them like edited videos, and journalists don’t generally have the tools to detect them.”

Alongside the new policy and challenge, Facebook announced a partnership with Reuters to train journalists on how to identify and address manipulated media.

Taken together, this amounts to a thorough policy and prevention plan for a problem that doesn’t yet exist in real life. While we’ve seen “shallowfakes,” or videos crudely edited to misrepresent the subject matter, like the slowed-down video of Nancy Pelosi made to look incoherent, there’s been no convincing use of deepfakes for manipulation in U.S. politics. This policy doesn’t address these types of media, even though they fool viewers as easily as a true, algorithmically generated deepfake, if not more easily.

Shallowfakes and non-algorithmic manipulated media could be a bigger problem abroad than in U.S. politics. “In the expert meetings we’ve held in Brazil and South Africa, the consistent feedback is that Facebook and other platforms need to provide tools and support on problems that have not yet been adequately dealt with like shallowfakes,” Gregory said.

What we have seen globally, however, are hundreds of thousands of examples of deepfakes and manipulated imagery being used to target, harass, and defame women, using their images without their consent. A 2019 report by research firm Deeptrace showed that 96 percent of all deepfake videos online portray women in porn. But we don’t need researchers to tell us that: Women have been saying it since the technology’s inception.