A report published Tuesday by New York University warns that fake videos and other misleading or false information could be deployed by domestic and foreign sources in efforts to influence the U.S. 2020 presidential election campaign, and details strategies to combat such disinformation.
“We urge the companies to prioritize false content related to democratic institutions, starting with elections.”
—Paul M. Barrett, report author
The report—entitled Disinformation and the 2020 Election: How Social Media Should Prepare—predicts that for next year’s election, so-called “deepfake” videos will be unleashed across the media landscape “to portray candidates saying and doing things they never said or did” and, as a result, “unwitting Americans could be manipulated into participating in real-world rallies and protests.”
Deepfakes, as NPR reported Monday, are “computer-created artificial videos or other digital material in which images are combined to create new footage that depicts events that never actually happened.” Manipulated videos like those of Democratic House Speaker Nancy Pelosi (Calif.) that spread virally online earlier this year—often called shallowfakes or cheapfakes—also pose a threat to democratic elections, the report says.
In terms of delivery of disinformation, the NYU report spotlights the messaging service WhatsApp and the video-sharing social media network Instagram—which are both owned by Facebook. A report commissioned by the Senate Intelligence Committee in the wake of the 2016 election accused Russia of “taking a page out of the U.S. voter suppression playbook” by using social media platforms including Facebook and Instagram to target African-American audiences to try to influence their opinions on the candidates in that race.
The NYU report predicts that governments such as Russia, China, and Iran may work to disseminate lies in attempts to sway public opinions regarding the next race for the White House, but “domestic disinformation will prove more prevalent than false content from foreign sources.” Digital voter suppression, it warns, could “again be one of the main goals of partisan disinformation.”
To combat disinformation from all sources, the NYU report offers nine recommendations for major social media companies:
- Realistic but fraudulent videos have the potential to undermine political candidates and exacerbate voter cynicism.
- The platforms already remove hate speech, voter suppression, and other categories of content; we recommend that they add one more.
- Each company needs an executive with clout to supervise the process of guarding against disinformation.
- The photo-posting platform needs the concerted attention of its parent, Facebook.
- Users should be restricted to forwarding content to one chat group at a time.
- The companies must prepare for false content generated by hired-gun firms.
- Narrowly tailored bills pending in Congress could help curb some forms of disruption.
- For example, when one platform takes down abusive accounts, others should do the same with affiliated accounts.
- Users have to take responsibility for recognizing false content, but they need more help to do it.
Paul M. Barrett, the report’s author and deputy director of the NYU Stern Center for Business and Human Rights, told The Washington Post that social media companies “have to take responsibility for the way their sites are misused.”
“We urge the companies to prioritize false content related to democratic institutions, starting with elections,” he said. “And we suggest that they retain clearly marked copies of removed material in a publicly accessible, searchable archive, where false content can be studied by scholars and others, but not shared or retweeted.”
While the removal of disinformation by social media giants is touted as a positive strategy by Barrett and others, such calls have sparked censorship concerns, especially as online platforms such as Facebook and YouTube have recently blocked content or shut down accounts that spread accurate information.