Facebook has announced that it will be taking steps to crack down on videos manipulated using artificial intelligence.
The announcement was published in a blog post on Monday ahead of a House Energy and Commerce hearing on manipulated media.
“Today we want to describe how we are addressing both deepfakes and all types of manipulated media,” said Facebook’s vice-president of global policy management, Monika Bickert. “Our approach has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts.”
Going forward, Facebook will remove “misleading manipulated media” if it has been edited or synthesized in ways that aren’t apparent to an average person and would likely mislead someone into believing that a subject of the video said words they didn’t actually say.
In addition, Facebook will remove a video if “it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
However, the policy does not cover videos that are parody or satire, or that have been edited “solely to omit or change the order of words.” Of course, this means users could argue that a flagged video is a parody or is intended as satire.
Ahead of the 2020 US presidential election, deepfakes pose a significant challenge, with large volumes of fake news and disinformation expected to circulate and potentially mislead voters.
Many have questioned why Facebook has focused narrowly on deepfakes rather than on the broader issue of misleading videos.
“Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created,” said Bill Russo, a spokesman for Joe Biden’s presidential campaign, in a statement. “Banning deepfakes should be an incredibly low floor in combating disinformation.”
Boston University Law School professor Danielle Citron praised Facebook for becoming more proactive, but pointed out that many deepfakes don’t involve words at all, but rather actions, as in deepfake sex videos.
Bickert added that in September, Facebook launched the Deepfake Detection Challenge, which encourages people around the world to produce more research and open-source tools for detecting deepfakes.
Facebook will also partner with Reuters to help newsrooms worldwide identify deepfakes through a free online training course.