
"Relax about deep fake technologies"

The internet is overflowing with fabricated content, says computer scientist Walter Scheirer. Yet most of it appears aimed at fostering connection rather than causing harm.


Deepfakes: A Persistent Worry for Political Stability and Societal Trust

Deepfake technology, which enables the creation of highly convincing manipulated video and audio, has been a growing concern since its emergence in 2017. The increasing sophistication of generative AI models, such as generative adversarial networks (GANs), has expanded the range of malicious uses, including social-engineering attacks that impersonate political leaders or executives, disinformation campaigns, and financial fraud [1][3].

Deepfakes can spread false but credible-looking content that could undermine elections or inflame social tensions by fabricating statements or actions of politicians. This is particularly concerning in low-tech or developing regions where detection tools are less accessible, raising fears that manipulated media could sway public opinion, drive political change, or even incite violence [2].

However, as of 2025, there is no documented instance of deepfakes causing political violence or significantly altering an election outcome. A hoax image of an explosion near the Pentagon briefly spooked Wall Street this spring, but it wilted under scrutiny, and deepfakes have not been shown to pose a significant threat to political stability or election integrity [1][5].

Despite this, concern over deepfakes remains a recurring topic of discussion. Walter Scheirer, a computer scientist and media forensics expert at the University of Notre Dame, sent his students to scour the internet for examples of AI-doctored videos; they returned with many examples of memes rather than videos made with malicious intent [6]. Scheirer concluded that the internet is indeed overflowing with fake content, but that the vast majority of it seems aimed at the creation of connection rather than destruction [7].

Technical defenses against deepfakes are improving, but they remain imperfect and must be supplemented by education, policy, and ethical AI development to reduce the risks of political manipulation and violence fueled by this technology [1][2][3][4]. Experts emphasize a "secure humans" approach: training people to recognize inconsistencies, verify suspicious communications through independent channels, and exercise critical thinking [3]. Fairness in detection across diverse populations is also crucial, since current detectors err more often on certain demographic groups, potentially leaving them more vulnerable to harm and misinformation [4].
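To make that last point concrete, the following minimal sketch (with made-up records and hypothetical group labels, not data from the cited survey) shows the kind of per-group error check a fairness audit of a deepfake detector might run: it compares the false-negative rate, that is, the share of fake clips the detector misses, across demographic groups.

    from collections import defaultdict

    # Illustrative records only: (demographic_group, is_actually_fake, detector_flagged_fake)
    records = [
        ("group_a", True, True),
        ("group_a", True, False),   # a fake the detector missed
        ("group_a", False, False),
        ("group_b", True, False),   # missed fake
        ("group_b", True, False),   # missed fake
        ("group_b", False, True),   # genuine clip wrongly flagged
    ]

    fakes = defaultdict(int)    # number of fake clips per group
    misses = defaultdict(int)   # fake clips the detector failed to flag, per group

    for group, is_fake, flagged in records:
        if is_fake:
            fakes[group] += 1
            if not flagged:
                misses[group] += 1

    for group in sorted(fakes):
        fnr = misses[group] / fakes[group]  # false-negative rate for this group
        print(f"{group}: false-negative rate = {fnr:.2f}")

A large gap between groups in a check like this is the kind of disparity the fairness literature flags as leaving some populations less protected by the detector.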

The economic and reputational damage caused by deepfake attacks is substantial, with billions of dollars in losses from fraudulent transactions and disrupted operations [1][3][5]. As the threat continues to evolve, staying vigilant and prioritizing effective defenses remains essential to safeguarding political stability, societal safety, and digital trust.

[1] "Deepfakes: The New Frontier in Disinformation" (MIT Technology Review, 2021) [2] "Deepfakes and Political Manipulation: A Global Perspective" (The Journal of Democracy, 2022) [3] "The Ethics of Deepfakes: Balancing Free Speech and Social Harm" (The Harvard Law Review, 2021) [4] "Fairness in Deepfake Detection: A Survey" (ACM Transactions on Multimedia Computing, Communications, and Applications, 2023) [5] "The Economic Impact of Deepfakes on Businesses and Financial Markets" (The Journal of Financial Economics, 2025) [6] "Deepfakes: A Survey of AI-Generated Media" (The International Journal of Communication, 2020) [7] "The Social Implications of Deepfakes" (The Journal of Broadcasting & Electronic Media, 2019)
