AI as a Modern Deceiver: Revealing the Schemes of Social Engineering 2.0
In the digital age, the battle against cyber threats has taken a new turn with the advent of artificial intelligence (AI). South Africa recently witnessed a troubling example of this when a deepfake video circulated on WhatsApp, featuring a convincingly faked endorsement by FSCA Commissioner Unathi Kamlana for a fraudulent trading platform [1].
As technology advances, AI is increasingly being used to automate social engineering, eliminating traditional phishing markers like spelling errors and bad grammar. This automation makes it harder for individuals and organisations to distinguish between genuine and fraudulent communications [2].
AI is not just automating these attacks but also making them more personalised. AI-generated phishing emails, for instance, use advanced language models to produce messages with perfect grammar, personalised content, and a human-like tone, bypassing the typical red flags [3]. Such AI-generated phishing has contributed to a reported 1,265% increase in phishing attacks by late 2024, with AI-written lures achieving a 54% click-through rate versus 12% for human-written ones [3].
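With flawless prose now the norm, screening has to lean on structural signals that survive perfect grammar: sender-identity mismatches, pressure language, and deceptive links. The sketch below is a minimal, hypothetical illustration of that shift; the `screen_message` helper, cue list, and example addresses are assumptions for demonstration, not a production detector.

```python
import re
from email.utils import parseaddr

# Illustrative urgency cues; a real filter would use a maintained list
# or a trained model (this short list is an assumption for the sketch).
URGENCY_CUES = ["urgent", "immediately", "act now", "verify your account", "wire transfer"]

def screen_message(display_name: str, from_address: str, body: str) -> list[str]:
    """Collect structural red flags for an inbound message.

    Spelling and grammar are deliberately ignored: AI-written lures are
    fluent, so the checks focus on sender identity and pressure tactics.
    """
    flags = []
    _, address = parseaddr(f"{display_name} <{from_address}>")
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # Flag a brand-sounding display name whose sending domain disagrees.
    brand = display_name.split()[0].lower() if display_name else ""
    if brand and domain and brand not in domain:
        flags.append(f"display name '{display_name}' vs sending domain '{domain}'")

    # Flag pressure language typical of social-engineering lures.
    hits = [cue for cue in URGENCY_CUES if cue in body.lower()]
    if hits:
        flags.append(f"urgency cues present: {hits}")

    # Flag links whose visible text differs from the real target.
    for href, text in re.findall(r'href="([^"]+)"[^>]*>([^<]+)<', body):
        if text.strip().startswith("http") and not href.startswith(text.strip()):
            flags.append(f"link text '{text.strip()}' hides target '{href}'")
    return flags

print(screen_message(
    "PayPal Support",
    "support@paypa1-billing.example",
    'Urgent: verify your account via <a href="https://evil.example/login">https://paypal.com/login</a>',
))
```

The point of each check is that it works even when the writing is impeccable: the signals live in the headers and the links, not in the prose.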
Deepfake technology is another worrying development. Attackers use synthesised voice or video to impersonate CEOs and other executives, lending credibility to fraudulent requests for fund transfers or sensitive information over Zoom calls or the phone; such impersonations have been demonstrated to be highly convincing [1][2][4].
AI is also being used to mimic writing styles, generate emotionally resonant messages, and even recreate voices or faces. This allows attackers to convincingly impersonate trusted individuals and create sophisticated deception at scale [2]. One emerging tactic involves simulating a child's voice to convince a parent that their child has been kidnapped, complete with background noise, sobs, and a fake abductor demanding money [1].
LinkedIn is another platform where AI-powered social engineering targets executives at major firms. AI bots impersonate these executives in InMail messages or follow-up emails, exploiting recipients' professional interests and embedding phishing links that lead to credential compromise and further network infiltration [4].
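One modest defence against such link-based lures is to vet every embedded URL against an allowlist before anyone clicks. A minimal sketch, assuming a hypothetical `TRUSTED_DOMAINS` set; a real deployment would also expand URL shorteners and consult reputation feeds:

```python
from urllib.parse import urlparse

# Assumption: a small allowlist of domains the organisation expects to
# see in recruiter or executive outreach; everything else is escalated.
TRUSTED_DOMAINS = {"linkedin.com", "example-corp.com"}  # illustrative

def is_link_trusted(url: str) -> bool:
    """Return True only if the link's host is a trusted domain or one
    of its subdomains; lookalike hosts that merely start with a trusted
    name fail the check."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_link_trusted("https://www.linkedin.com/in/someone"))        # True
print(is_link_trusted("https://linkedin.com.login.evil.example/x"))  # False: lookalike
```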
These advancements in AI-powered social engineering attacks pose a severe and evolving threat to organisational cybersecurity. Social engineering, now amplified by AI, accounts for an estimated 30-40% of recent cyber breaches [1]. Traditional detection tools and training methods struggle against AI-enhanced, perfectly crafted messages, forcing security teams to develop more sophisticated AI-aware defences [2][3].
Moreover, with ongoing changes in AI regulatory frameworks, especially at state levels, compliance requirements concerning AI-driven cyber threats are becoming more complex and challenging to navigate [1].
In the face of these challenges, education, critical thinking, and digital mindfulness are key to being prepared for AI-generated lures. Families and teams should consider pre-agreed code words or secrets for emergency communications, in case AI-generated voices impersonate loved ones [2]. Awareness has to go beyond "don't click the link"; building "digital mindfulness" and "cognitive resilience" is necessary to resist emotional triggers [2].
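The code-word idea can be taken a step further so the secret itself is never spoken on a call that might be recorded or voice-cloned. The sketch below shows one possible challenge-response scheme, not a method from the article: the shared phrase, helper names, and six-character code length are all illustrative assumptions.

```python
import hashlib
import hmac
import secrets

# Assumption: the secret phrase is agreed face to face and never spoken
# on a call, so a recorded or cloned voice cannot replay it.
SHARED_SECRET = b"phrase-agreed-in-person"  # illustrative placeholder

def make_challenge() -> str:
    """One-time challenge the callee reads out to the caller."""
    return secrets.token_hex(4)

def response_code(challenge: str) -> str:
    """Short code both parties derive from the shared secret, proving
    knowledge of it without ever revealing it aloud."""
    digest = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:6]

# Usage: on an urgent "emergency" call, the callee reads the challenge;
# a genuine caller computes the code on their own device and reads it back.
challenge = make_challenge()
print("Challenge:", challenge)
print("Expected code:", response_code(challenge))
```

A plain pre-agreed word works too; the challenge-response variant simply ensures the secret cannot be captured on one call and replayed on the next.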
Unconventional tactics, such as asking interviewees to place a hand in front of their face during video calls, can help HR spot deepfakes in hiring scams [2]. Rehearsing calm, deliberate decision-making under pressure counters panic-based manipulation [2]. Training staff to recognise emotional manipulation, not just suspicious URLs, is crucial [2].
The question is no longer whether you'll be targeted but whether you'll be prepared. Organisations that combine human insight with machine precision are likely to be the ones that win the battle against AI-powered social engineering attacks [2].
Key takeaways:
- A deepfake video circulated on WhatsApp in South Africa, impersonating FSCA Commissioner Unathi Kamlana, shows how AI has sharpened cyber threats [1].
- AI is automating social engineering, making it harder for individuals and organisations to distinguish genuine from fraudulent communications [2].
- Attacks are also becoming more personalised: AI-generated phishing emails combine perfect grammar, tailored content, and a human-like tone [3].
- Deepfake voice and video let attackers impersonate CEOs and executives, enabling scams such as fraudulent fund-transfer or data requests over Zoom or phone calls [1][2][4].
- Education, critical thinking, and digital mindfulness are essential against AI-generated lures, as traditional detection tools and training struggle with flawless AI-crafted messages [2].

References:
[1] The Verge
[2] Forbes
[3] Cybersecurity Ventures
[4] TechCrunch