
Insights into Editorial: Countering deepfakes, the most serious AI threat

Context:

Disinformation and hoaxes have evolved from a mere annoyance into high-stakes warfare that creates social discord, increases polarisation, and, in some cases, influences election outcomes.

Recently, cybercrime officials in India have been tracking certain apps and websites that produce nude photographs of innocent people using Artificial Intelligence (AI) algorithms.

Deepfakes are a new tool to spread computational propaganda and disinformation at scale and with speed.

Access to commodity cloud computing, algorithms, and abundant data has created a perfect storm to democratise media creation and manipulation.

Deepfakes are digital media (video, audio, and images) manipulated or generated using Artificial Intelligence; such synthetic media content is referred to as a deepfake.

About Deepfakes:

Deepfakes, or deep nudes, are computer-generated images and videos. Cybercriminals use AI software to superimpose a digital composite (an assembly of multiple media files) onto an existing video, photo, or audio clip.

Using AI algorithms, a person’s words, head movements, and expressions are transferred onto another person so seamlessly that it becomes difficult to tell the media is a deepfake unless one observes the file closely.
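The transfer described above is commonly implemented with a shared encoder and two person-specific decoders. The toy sketch below, with random stand-in weights (all names and dimensions are illustrative, not from any real system), shows only the swap mechanism: a frame of person A is encoded into a pose/expression code and then decoded with person B's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened 8x8 grayscale face patch and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder compresses any face into a latent code of pose and expression.
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM))

# Two person-specific decoders reconstruct a face from that shared code.
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM))  # would be trained on person A
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM))  # would be trained on person B

def encode(face):
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    return np.tanh(W_dec @ code)

# The swap: encode person A's frame, but decode it with person B's decoder,
# so A's head movement and expression are rendered with B's appearance.
frame_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(frame_a), W_dec_b)
```

In a real system, the weights are learned from many images of each person; with random weights, the sketch only demonstrates the data flow that makes the substitution seamless.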

Deepfakes first came to public notice in 2017, when a Reddit user posted explicit videos of celebrities. Several instances have been reported since.

Undermining democracy:

  1. Deepfakes can alter the democratic discourse, undermine trust in institutions, and impair diplomacy.
  2. False information about institutions, public policy, and politicians powered by a deepfake can be exploited to spin the story and manipulate belief.
  3. A deep fake of a political candidate can sabotage their image and reputation.
  4. Leaders can also use them to increase populism and consolidate power. Deepfakes can become a very effective tool to sow the seeds of polarisation, amplifying division in society, and suppressing dissent.
  5. Another concern is the “liar’s dividend”, in which an undesirable truth is dismissed as a deepfake or fake news.

Damage to Personal Reputation:

  1. Deepfakes can depict a person indulging in antisocial behaviour or saying vile things.
  2. These can have severe implications on their reputation, sabotaging their professional and personal life.
  3. Even if the victim could debunk the deep fake, it may come too late to remedy the initial harm.
  4. Further, deepfakes can be deployed to extract money or confidential information, or to exact favours from individuals.
  5. A deepfake could be a powerful tool for a nation-state to undermine public safety and create uncertainty and chaos in a target country.
  6. Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate media narratives using deepfakes.
  7. It can be used by insurgent groups and terrorist organisations, to represent their adversaries as making inflammatory speeches or engaging in provocative actions to stir up anti-state sentiments among people.

Concerns regarding deep fake images:

  1. Deepfake images, audio, and videos are very realistic, and cybercriminals can use them to spread misinformation, intimidate or blackmail people, seek revenge, or commit fraud on social networking and dating sites.
  2. It has become one of the modern frauds of cyberspace, along with fake news, spam/phishing attacks, social engineering fraud, catfishing and academic fraud.
  3. It can be used to create fake pornographic videos and to make politicians appear to say things they did not, so the potential for damage to individuals, organisations and societies is vast.
  4. With the improvement in technology, deep fakes are also getting better.
    1. Initially, only an individual with advanced knowledge of machine learning and access to the victim’s publicly available social media profile could make deepfakes.
  5. Since then, apps and websites capable of such editing have become easily accessible to the average user.

Way Forward Solutions:

  1. To defend the truth and secure freedom of expression, we need a multi-stakeholder and multi-modal approach.
  2. Media literacy for consumers and journalists is the most effective tool to combat disinformation and deepfakes.
    1. Media literacy efforts must be enhanced to cultivate a discerning public.
  3. As consumers of media, we must have the ability to decipher, understand, translate, and use the information we encounter.
  4. Even a short intervention with media understanding, learning the motivations and context, can lessen the damage.
  5. Improving media literacy is a precursor to addressing the challenges presented by deepfakes.
  6. Meaningful regulations, shaped through collaborative discussion among the technology industry, civil society, and policymakers, can help disincentivise the creation and distribution of malicious deepfakes.
  7. We also need easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources.
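One building block of the media-authentication tooling mentioned above is content fingerprinting: an authoritative publisher shares a cryptographic digest of the original file, and any later copy can be checked against it. The sketch below uses Python's standard `hashlib`; the byte strings are placeholders standing in for real media files.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact sequence of bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

# Placeholder for the raw bytes of a video released by an authoritative source.
original = b"raw bytes of the published video"
published_digest = fingerprint(original)

# Even a one-byte manipulation (e.g. a single altered frame) changes the digest,
# so a consumer can detect that a circulating copy differs from the original.
tampered = original.replace(b"video", b"vide0")
assert fingerprint(original) == published_digest
assert fingerprint(tampered) != published_digest
```

Hashing only proves a file is unmodified relative to a trusted reference; detecting a deepfake that was never published by a trusted source still requires the forensic detection tools the point above calls for.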

Conclusion:

To counter the menace of deepfakes, we must all take responsibility to be critical consumers of media on the Internet, pause and think before we share on social media, and be part of the solution to this infodemic.

To defend the truth and secure freedom of expression, there is a need for a multi-stakeholder and multi-modal approach.

Collaborative actions and collective techniques across legislative regulation, platform policies, technology intervention, and media literacy can provide effective and ethical countermeasures to mitigate the threat of malicious deepfakes.