The future of the deepfake — and what it means for fact-checkers - Poynter


By: Tim Hwang

December 17, 2018

The work of the fact-checker is perpetually evolving. As tactics of spreading disinformation are exposed and countered, perpetrators continuously innovate new ways of distributing falsehoods and distorted narratives. Fact-checkers must contend with finding efficient ways of verifying information in the present, while actively preparing for the information environment of the future.

In this vein, “deepfakes” — the use of recent breakthroughs in artificial intelligence to create believable fakes in images, audio, and video — have raised concerns throughout the past year. This has been driven in part by a number of striking demonstrations that illustrate just how far the technology has come, from unsettling reproductions of presidential voices and the substitution of faces to create fake pornography to the seamless deletion of objects in video. Policymakers and researchers, in turn, have worried that this technology will be applied to manipulate political discourse and for other harmful purposes.

The editing of images and video for deceptive effect is nothing new, of course. Doctored images and video have a long history of being shared and believed, and deepfakes only offer a new route by which to engage in an old method of deceit.

On that count, the potential threat posed by deepfakes is less around introducing a new kind of disinformation, and more around influencing quality and cost. Deepfakes seem to offer would-be creators of disinformation access to Hollywood-level movie magic without needing the massive resources or staff of a professional special effects team. So, the relevant questions for the fact-checking community are: How will deepfake techniques be used? By whom and when?

Predicting the future is always challenging, but we have a few hints of where things may be going based on how the research around these technologies is progressing.

For one, it is worth noting that the prerequisites for creating a highly believable deepfake remain relatively high. Machine learning, the subfield of artificial intelligence that has driven much of the latest advances in the technology, relies on large amounts of data with which to “train” the system. Imitating Obama’s facial movements requires lots of existing video of Obama’s face. Simulating Donald Trump’s voice requires lots of audio of Donald Trump speaking. The more data similar to what is being faked, the better.

RELATED ARTICLE: We tried to create a deepfake of Mark Zuckerberg and Alex Jones — and failed. Here’s what happened.

This means that deepfakes are likely to make an appearance in circumstances where a significant amount of data of the person or thing to be faked is available. Public figures may be more “fakeable” through this method than private ones. Visually routine situations, like a press conference, are more likely to be faked than entirely novel ones.

Beyond data, there are additional requirements. Machine learning is a computationally intensive process — you need lots of computers in order to pull it off in a reasonable timeframe. Creating a customized, high-fidelity deepfake also requires specialized machine learning expertise. At the time of writing, it is still far from being a technology that anyone can easily pick up and use.

The upshot of all this is that we are not likely to be awash in deepfakes anytime soon. This technology will remain, for the near-term, a narrow technique likely to be leveraged by states and other well-resourced actors. That’s particularly true in a world where there are significantly cheaper and equally effective means of spreading disinformation. Simply taking an existing image and asserting that it is something that it is not, for instance, might achieve the same impact as a deepfake with none of the hurdles of data, computing power and expertise. Crude, rough-and-ready deception will remain the norm.

Finally, it is also worth noting that while machine learning might generate strikingly realistic video and audio, it still relies on fallible humans to create believable context. Machine learning cannot yet write a believable script for a fake Donald Trump, nor magically stage the video in a likely time and place.

This means that while deepfakes might render certain fact-checking techniques that look for the doctoring of media less effective, the resulting disinformation remains vulnerable to investigative work that looks at context. Finding eyewitnesses, looking for inconsistencies, and assessing corroborating facts have been core to the work of fact-checking, and will remain key tools even in a world of deepfakes.

Read the rest of our predictions


Tags: Fact Checking, The Future of Facts


