$10 million just to track down deepfakes. But what is a deepfake anyway? Why is Facebook, along with a consortium of academics and companies, willing to spend so much money to detect montages made with Artificial Intelligence? Forecasting AI tells you more about a subject that has generated, and will continue to generate, buzz in the coming years.
Facebook is not the only giant in the United States to tackle deepfakes, but its experience during the last American elections and the victory of Donald Trump in 2016 have now forced it to take the lead and demonstrate good will, starting with their detection.
Facebook's Deepfake Detection Challenge thus recently appeared on the screens of our connected devices, following the publication of a post soberly titled Creating a data set and a challenge for deepfakes on the American company's blog. But what does this new term, deepfake, actually mean? Where does it come from, and what does it hide? Why is Facebook taking such a close interest in it, and what steps has it taken to implement its new policy?
But what is a deepfake anyway?
The term deepfake is a contraction of two words: "deep", from "deep learning", and "fake". The principle of a deepfake is to alter videos using Artificial Intelligence and deep learning in order to spread false information virally.
Until now, fake news mainly involved text or images, and no one could yet imagine video being afflicted by the same evil. But we have to get used to it: tampering with a video to produce false information and disseminate it has become possible and, indeed, easy.
What are the known techniques of deepfake?
The techniques are numerous and produce astonishing results. Face swapping replaces one person's face with another's, frame by frame, to make them say words or wear expressions that are not their own.
Lip syncing simply modifies the lip movements to match a speech the person never actually gave.
Finally, face reenactment (the "face-to-face" technique) superimposes the face of a public figure onto that of an actor who has recorded a fake version.
These different techniques make it possible to put any words into anyone's mouth and, above all, to make the general public believe them or to spread false information. Some stars and celebrities have already been trapped by deepfakes without having asked for anything, finding themselves in the middle of a pornographic scene that they of course never shot.
With Artificial Intelligence learning ever faster and video rendering improving at ever higher resolutions, deepfakes are becoming more and more difficult to distinguish and detect. Hence the concerns of Facebook, other companies and universities ahead of, among other things, the 2020 US presidential election.
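To make the detection problem concrete: systems of this kind typically score a video frame by frame with a trained classifier, then aggregate the per-frame "fake" probabilities into a single video-level score. The sketch below is purely illustrative and makes no claim about Facebook's actual method; the per-frame classifier is a stub (real pipelines use a trained deep network on pixel data).

```python
# Illustrative sketch of video-level deepfake scoring: score each frame,
# then aggregate. The per-frame classifier is a stub for illustration only.

def frame_fake_probability(frame):
    """Stub standing in for a trained per-frame classifier.
    Here a 'frame' is just a dict carrying a precomputed score."""
    return frame["score"]

def video_fake_score(frames):
    """Average per-frame probabilities into one video-level score.
    Real pipelines may use the mean, the max, or a learned aggregator."""
    probs = [frame_fake_probability(f) for f in frames]
    return sum(probs) / len(probs)

# Toy example: three frames, two of which look manipulated.
frames = [{"score": 0.9}, {"score": 0.8}, {"score": 0.2}]
print(round(video_fake_score(frames), 4))  # → 0.6333
```

Averaging is the simplest choice; it smooths out isolated misclassified frames, whereas taking the maximum would flag a video on a single suspicious frame.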
Why does Facebook want to detect deepfake?
Once bitten, twice shy. After the Cambridge Analytica scandal, with its massive leak of user data during the 2016 American campaign, the countless fake news stories broadcast on social networks such as Twitter and Facebook, and Mark Zuckerberg's summons before the American Congress to answer for these failings, the American firm has no intention of reliving such scenarios.
Sentenced by the Federal Trade Commission to a record $5 billion fine (representing 9% of its 2018 revenue) for violating its commitments to protect its users' privacy in the Cambridge Analytica scandal, the social network now wants to be transparent and to ensure that such a misadventure never happens again. Especially since it was harshly lectured by the American Congress, which reminded it on that occasion of the many apologies it had already had to make in previous affairs.
What resources to detect deepfakes?
Consequently, Facebook is giving itself the means to identify deepfakes, and to anticipate and avoid a scandal that could taint future elections. Because after fake news in text and images, deepfakes are the future of disinformation. The objective is simple: identify fake videos made with the help of Artificial Intelligence and ban them before too many of the platform's users can view them.
The American giant therefore launched the Deepfake Detection Challenge on its own initiative and rallied many universities and digital companies behind the project to achieve its goals and find effective solutions and tools for deepfake detection. On the academic side, MIT, Oxford University and the University of California at Berkeley have joined forces with a wide range of companies, including Microsoft, Apple, Google, Amazon and Intel, through the Partnership on AI alliance.
$10 million invested in data sets and a competition
Facebook therefore had the idea of creating data sets and making them available to contributors in order to launch a competition on deepfake detection. "In order to move from the information age to the knowledge age, we must do better in distinguishing the real from the fake, reward trusted content over untrusted content, and educate the next generation to be better digital citizens," says Hany Farid, Professor of Electrical Engineering and Computer Science at the University of California at Berkeley, in a Facebook press release. "This will require investments across the board, including in industry/university/NGO research efforts to develop and operationalize technology that can quickly and accurately determine which content is authentic."
Deepfakes made to order to test new solutions
To motivate anyone tempted by such a competition, and to ensure fairness, Facebook has released a common database of fake videos, created for the occasion with actors rather than extracted from its users' data. A leaderboard will also be published to identify the best solutions and challenge new competitors. And of course, prizes and other rewards will go to the most successful participants.
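Competitions of this kind are commonly ranked with binary log loss on the probabilities each entrant assigns to videos being fake; the article does not specify the challenge's exact metric, so the following is just an illustrative sketch of how such a leaderboard score could be computed.

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary log loss: heavily penalizes confident wrong predictions.
    y_true: 1 for fake videos, 0 for real ones; y_pred: predicted P(fake)."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# A perfect entry scores near 0; always answering 0.5 scores log(2) ≈ 0.693.
print(round(log_loss([1, 0, 1], [0.9, 0.1, 0.8]), 4))
```

Lower is better, which is why this metric rewards detectors that are both accurate and well calibrated: a submission that is certain and wrong on even one video takes a large penalty.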
This may spare it another run-in with American justice, or spare its boss another summons before the American Congress to apologize, once again, for abuses. In any case, Facebook is taking the lead, but that should not stop deepfakes from flourishing until a suitable solution is found. Like the one in which Mark Zuckerberg claims to have stolen all of his social network's user data and handed it to Spectre, the secret organization well known to James Bond fans. A video that went viral in no time on Facebook and on Instagram, owned by... Facebook.