Twitter’s Strategy to Counter Deep Fakes

Deep fake is a linguistic blend of "deep learning" and "fake". It uses deep learning, a branch of machine learning that applies neural networks to massive data sets, to create fake or synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Civilisations have faced the problem of fake news since time immemorial, but deep fakes take the potential for deception to another level.
The methods used to generate deep fakes involve training generative neural network architectures, such as auto-encoders or generative adversarial networks (GANs). A GAN pits two artificial-intelligence algorithms against each other: one creates the fakes while the other grades its efforts, teaching the synthesis engine to make better forgeries. The number of training iterations determines the quality of the synthetic content.
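To make the adversarial idea concrete, here is a minimal toy sketch of that two-player loop, written with numpy. It is an illustration only: real deep-fake systems use deep convolutional networks on images, whereas this example uses a one-parameter linear "generator" and a logistic "discriminator" on 1-D numbers, with the target distribution N(4, 1) standing in for "real media". All names and numbers here are assumptions chosen for clarity, not part of any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generator: g(z) = a*z + b maps standard-normal noise to "fake" samples.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c) estimates the probability x is real.
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.01
for step in range(5000):
    real = rng.normal(4.0, 1.0, 64)          # samples of "real media"
    fake = a * rng.normal(0.0, 1.0, 64) + b  # generator's forgeries
    z = (fake - b) / a if a != 0 else fake   # noise that produced the fakes

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1) + np.mean(df))

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    df = sigmoid(w * fake + c)
    upstream = (df - 1) * w  # chain rule through d(g(z))
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

# After training, the generator's output distribution should have drifted
# from mean 0 toward the real mean of 4.
print(f"generated mean ~ {np.mean(a * rng.normal(size=1000) + b):.2f} (target 4.0)")
```

Each pass through the loop is one "iteration" in the sense of the paragraph above: the discriminator gets slightly better at spotting forgeries, which in turn forces the generator to produce slightly better ones.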
Though deep fakes have uses in the entertainment industry (for example, to create humour), they have also garnered negative attention for their applications in celebrity pornographic videos, electoral campaigns, hoaxes, fake news, and financial fraud. Given the sensitivity of the issue, both industry and government are seeking solutions to check the spread and virality of deep fakes.

Deep Fake in media
Very recently, a manipulated video was used to stir up the US political scene. A video of Nancy Pelosi, the Speaker of the US House of Representatives, was slowed down by 25 percent and its pitch altered to make it seem like she was slurring her words. The video was posted by a Facebook page called Politics Watchdog and soon went viral. It was shared widely, including by former New York City mayor Rudy Giuliani, who tweeted: “What is wrong with Nancy Pelosi? Her speech pattern is bizarre.” Such is the depth to which manipulated media has penetrated our lives. Facebook initially refused to remove the clip, saying only that it had reduced its distribution after the video was fact-checked as false. The post was later deleted, but the damage had already been done. The incident brought the issue of deep fakes into the limelight, and the social-media giants felt the need to take concrete steps to combat them. Most notably, Twitter announced steps it might implement to reduce the creation and circulation of deep fakes.

Twitter’s Strategy

Twitter may implement the following tools to combat the issue:
  • place a notice next to Tweets that share synthetic or manipulated media;
  • warn people before they share or like Tweets with synthetic or manipulated media; 
  • add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
Content would be categorised according to the following questions:
  • Is the media significantly and deceptively altered or fabricated?
  • Is the media shared in a deceptive manner?
  • Is the content likely to impact public safety or cause serious harm?


Under the new features, potential misinformation will be labelled orange or red with the tag “Harmfully Misleading”; the tweet’s visibility will also be reduced so that it shows up on fewer timelines. The message beneath the label will read: “Twitter Community Reports have identified this tweet as violating the Community Policy on harmfully misleading information”. Twitter is looking to encourage community members to write “Notes” that provide “critical context”, relying on this community-based feedback system to remove misinformation. The labelling of tweets containing deep fakes will start on March 5.

The concern regarding bias
The problem with labels is that the same content can be judged appropriate by one individual and inappropriate by another. Community-based feedback could also be hampered by ideological differences and bias, and the feature could be undermined by both false positives and false negatives.

Conclusion
Ironically, the best weapon against AI is AI itself. Tech firms are now working on better detection systems that aim to flag fakes wherever they appear. Digital watermarks are not fool-proof, but a blockchain-style online ledger could hold a tamper-proof record of videos, pictures and audio, so that their origins and any manipulations can always be checked. Whatever the case may be, we recommend that viewers always cross-verify content before sharing or forwarding it. Maybe start with this article.
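The tamper-proof ledger idea above can be sketched with a simple hash chain, the core primitive behind blockchain-style record keeping. This is a minimal illustration, not any real provenance system: the class, field names, and example media bytes are all assumptions. Each entry stores a content hash of the media plus the hash of the previous entry, so altering any recorded entry invalidates every entry after it.

```python
import hashlib
import json

def media_fingerprint(data: bytes) -> str:
    """Content hash of the raw media bytes."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    """A toy hash-chained ledger of media records."""

    def __init__(self):
        self.chain = []  # each entry links to the hash of the previous one

    def record(self, media_bytes: bytes, note: str) -> dict:
        prev_hash = self.chain[-1]["entry_hash"] if self.chain else "0" * 64
        entry = {
            "media_hash": media_fingerprint(media_bytes),
            "note": note,
            "prev_hash": prev_hash,
        }
        # The entry's own hash seals it; editing any earlier entry breaks
        # every later prev_hash link.
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.chain:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            sealed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != sealed:
                return False
            prev = e["entry_hash"]
        return True

ledger = ProvenanceLedger()
ledger.record(b"original-video-bytes", "original upload")
ledger.record(b"original-video-bytes", "re-shared, unmodified")
print(ledger.verify())  # True: chain intact
ledger.chain[0]["note"] = "tampered"
print(ledger.verify())  # False: tampering breaks the chain
```

A real deployment would distribute copies of the chain across many parties, which is what makes the record hard to rewrite; the hashing shown here is only the integrity-checking half of that design.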

Written By – Prateek Bansal
Edited By – Purav Nayak
