How to spot deepfakes? Explained by Digimagg

Discover techniques to identify deepfake content and safeguard yourself against misinformation.


The rise of deepfakes is starting to challenge our understanding of reality, blurring the line between truth and falsehood. Politicians and celebrities, as prime targets for manipulated content, are feeling the impact most acutely.

From deepfake audio that allegedly featured London Mayor Sadiq Khan making inflammatory statements just before the nation's Armistice Day commemorations, to AI-generated explicit images targeting Taylor Swift, the tools to manipulate content are increasingly accessible to virtually anyone, anywhere.

With the unveiling of Sora by OpenAI — undoubtedly an impressive feat in producing sophisticated video simulations of real-world scenarios — the challenge of distinguishing between authentic and fabricated media is anticipated to become increasingly complex in 2024.

Given how rapidly deepfakes are proliferating, vendors, regulators, and users all clearly need to get better at detecting and controlling the spread of deepfake content. Consumers, moreover, need real help in judging whether content is authentic. What is the artificial intelligence (AI) industry doing to address the issue, and what can individuals do to protect themselves? Read on to find out.

How AI vendors are addressing the deepfake crisis

Following the controversy surrounding the Taylor Swift deepfakes, Microsoft swiftly banned the creation of images of celebrities. The company already had restrictions in place against generating images of public figures and images containing nudity.

Numerous other vendors, such as OpenAI and Google, uphold comparable content moderation guidelines, prohibiting the creation of content featuring public figures using tools like DALL-E 3 and ImageFX.

However, through inventive prompts and circumventions, users can manipulate AI image and voice generation tools into producing content that violates these moderation policies.

For instance, in Telegram channels where Swift deepfakes circulated, some users deliberately misspelled celebrity names or utilized suggestive language to prompt image generators to produce fake images.

The fact remains that whenever AI vendors introduce new moderation policies, malicious actors will attempt to find loopholes. While vendors like OpenAI and Google are endeavoring to enhance these policies by digitally watermarking AI-generated images, there is still considerable progress to be made.
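Where watermarking and provenance standards such as C2PA's Content Credentials are adopted, part of this check can be automated. The Python sketch below simply scans an image file's raw bytes for common provenance marker strings; the marker list and file name are assumptions for illustration, and the absence of a marker proves nothing, since real verification requires a full C2PA validator.

```python
# Heuristic sketch: look for embedded provenance markers (e.g., a C2PA
# manifest) in an image file's raw bytes. Presence only suggests the file
# carries Content Credentials; absence proves nothing. The marker strings
# and file name are illustrative assumptions, not a standard API.
from pathlib import Path

PROVENANCE_MARKERS = (b"c2pa", b"jumbf", b"contentcredentials")

def has_provenance_marker(image_path: str) -> bool:
    """Return True if any known provenance marker appears in the file."""
    data = Path(image_path).read_bytes().lower()
    return any(marker in data for marker in PROVENANCE_MARKERS)

if __name__ == "__main__":
    print(has_provenance_marker("suspect_image.jpg"))  # hypothetical file
```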

Examining the present regulatory environment

Currently, the legal and regulatory framework concerning the production of deepfakes is in its early stages, with authorities in the United States and across the European Union seeking to mitigate the proliferation of deepfake pornography and election-related content.

In the United States, at least 14 states have proposed legislation aimed at the threat of deepfakes spreading misinformation during elections. These efforts range from disclosure bills, such as those introduced in Alaska, Florida, and Colorado, which would require AI-generated media distributed to influence an election to be labeled as such, to outright bans, such as Nebraska's proposal to prohibit the dissemination of deepfakes before elections.

Within the European Union, the European Council and Parliament have reached a consensus on a proposal to criminalize the unauthorized sharing of intimate images, encompassing AI-generated deepfakes. Similarly, the United Kingdom intends to criminalize the dissemination of deepfake images under forthcoming online safety legislation.

While these measures represent initial steps, they underscore regulators' increasing scrutiny of the risks associated with AI-generated content. However, for the time being, it remains primarily the responsibility of users to identify deepfakes when encountered.

How to detect deepfakes?

Users can safeguard themselves against deepfakes by familiarizing themselves with the distinctive characteristics found in deepfake images and videos; a simple forensic code sketch follows the lists below.

Some indicators of a deepfake image include:

  • Anomalies in hand portrayal
  • Rough contours around the face
  • Inconsistent skin texture
  • Blurriness in certain areas
  • Unusual lighting or distortions

Certain indicators of a deepfake video consist of:

  • Artificial eye and hand/body motions
  • Mismatch between lip movements and audio
  • Abnormal lighting or shadows
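
One classic, lightweight forensic heuristic for the image cues above is error level analysis (ELA): recompress a JPEG at a known quality and inspect the difference image, since regions that were edited or spliced in (a pasted face, for example) often respond to recompression differently from the rest of the picture. The Pillow-based sketch below is illustrative only, and the file names are assumptions; ELA flags many benign edits and misses many deepfakes, so treat it as one weak signal rather than a detector.

```python
# Error level analysis (ELA) sketch using Pillow: recompress the image
# and measure per-pixel differences. Regions that respond unusually to
# recompression (bright areas in the output) may have been edited or
# spliced. A weak heuristic, not a deepfake detector.
import io

from PIL import Image, ImageChops, ImageOps

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # controlled recompression
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, recompressed)
    return ImageOps.autocontrast(diff)  # stretch contrast so errors stand out

if __name__ == "__main__":
    # "suspect_image.jpg" is a hypothetical input file.
    error_level_analysis("suspect_image.jpg").save("suspect_image_ela.png")
```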

That said, identifying deepfakes is often easier said than done. Ammanath stated, "Although humans can sometimes spot deepfakes, the task is becoming increasingly challenging as the technologies used to create counterfeit content become more sophisticated."

"Advanced AI/ML algorithms, especially neural networks, can be trained to detect deepfakes and other falsified content in real-time, thus mitigating their dissemination. Neural networks trained for deepfake detection can recognize distinctive patterns and subtle irregularities within manipulated media files."

Illustrating AI's potential, Ammanath explained that AI-driven detection algorithms can spot subtle alterations, such as fading or grayscale pixels around a person's face in manipulated photographs.

Deepfake detectors built on machine learning models and neural networks trained on large datasets of authentic and manipulated videos and images therefore offer a more dependable way to identify fabricated content.
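
As a rough illustration of what such a detector looks like under the hood, here is a minimal PyTorch sketch of a binary real-versus-fake image classifier. The architecture, input size, and labeling scheme are assumptions for illustration; production detectors are much larger models trained on curated corpora of authentic and manipulated media.

```python
# Minimal sketch of a binary real-vs-fake image classifier in PyTorch.
# Untrained and illustrative only: a real detector would be a much larger
# network trained on a labeled corpus of authentic and manipulated faces.
import torch
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global average pool to one vector
        )
        self.head = nn.Linear(32, 1)  # single logit: fake vs. real

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = DeepfakeClassifier()
    logit = model(torch.randn(1, 3, 224, 224))  # dummy 224x224 face crop
    print(torch.sigmoid(logit))  # probability the input is fake (untrained)
```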

Why this matters even more during an election year

As the U.S. election unfolds throughout 2024, both consumers and businesses must anticipate an increase in AI-generated content as unscrupulous actors seek to exploit this technology for their political agendas or interests.

One common tactic involves the creation of deepfakes featuring political figures seemingly endorsing or making political statements, aimed at influencing voter behavior.

The Biden robocall is a case in point: a New Orleans magician alleged that a consultant for Dean Phillips' campaign paid him to fabricate audio of President Biden intended to dissuade voters from participating in the New Hampshire primary on January 23, 2024.

To address these threats effectively, it's crucial to familiarize oneself with the telltale signs of deepfakes and, whenever feasible, utilize deepfake detection tools to identify them and verify the source. Taking these measures will help mitigate the potential consequences of AI misuse during the election season.

In conclusion, deepfakes are here to stay, and until AI vendors and regulators find effective ways to curb their spread, it falls to users to learn how to identify them. There is no longer a simple way to determine whether digital content is authentic.

While this may not alleviate the distress or harm experienced by those targeted by deepfakes, it can help mitigate the dissemination of misinformation.
