AI security breaches are no longer mere tales of caution - they've become a troubling reality.
================================================================
In the ever-evolving world of technology, the latest concern for organizations worldwide is the impact of generative AI on data breaches. A new report by IBM reveals that AI security and governance are lagging behind AI adoption, creating a fertile ground for cybercriminals.
According to the report, AI was involved in 16% of data breaches last year, most often for AI-generated phishing and deepfake impersonation attacks. This trend is expected to continue, with generative AI increasing risks of private data leakage, insider threats, and sophisticated new attack vectors.
One of the primary concerns is data leakage from training data. Generative AI models, trained on large datasets, can inadvertently reproduce sensitive or private information contained in their training data. Attackers can use techniques like prompt inversion to extract such private data snippets from a model, resulting in data breaches.
Insider threats via uncontrolled AI use is another significant issue. Employees or insiders may unwittingly expose sensitive company data by inputting confidential information into consumer-grade generative AI tools that lack enterprise-level data protections. This can lead to irreversible leaks with minimal traceability.
Cybercriminals are also using generative AI to create deepfakes and synthetic identities. These can be used for disinformation campaigns, fraud, or social engineering attacks, posing a serious threat to individuals and organizations alike.
Attackers are also manipulating generative AI behavior by injecting malicious prompts to cause the AI to disclose confidential information or carry out undesirable actions, potentially subverting security controls.
Moreover, cybercriminals use AI to speed up and improve attacks, evade detection by adapting in real time, craft sophisticated phishing or ransomware campaigns, and conduct data poisoning attacks that corrupt AI model datasets to cause errors or backdoors.
Lack of governance and oversight is a significant factor in these AI-related breaches. Only 3% of affected organizations had proper AI access controls in place, leaving the door wide open for cyberattacks.
Interestingly, the report also finds that 13% of organizations reported breaches involving their AI models or applications, most commonly through compromised apps, APIs, or plug-ins.
Hackers are experiencing "ransomware fatigue", with fewer organizations paying ransoms in 2025 compared to 2024. A slight majority (63%) of organizations hit by ransomware attacks in 2025 did not pay the ransom, compared to 41% that did in 2024.
The cost of a data breach varies across regions. In the US, the cost increased from $9.36 million to $10.22 million in 2025. Canada also experienced a rise in data breach costs, going from $4.66 million to $4.84 million. Organizations in the Middle East faced an average cost of $7.29 million for a data breach, down from $8.57 million in 2024. Brazil had an average cost of $1.22 million for a data breach, a fall of $140,000 from $1.36 million in 2024. Benelux experienced a rise in data breach costs, going from $5.90 million to $6.24 million.
In conclusion, the use of generative AI both increases the attack surface for data breaches by exposing sensitive training data and creating new social engineering tools, and it empowers attackers with faster, more adaptive and harder-to-detect cyberattack methods. The risks are compounded by organizations’ often insufficient AI governance and unregulated employee use of generative AI platforms, leading to unintentional data exposures and compliance violations. It is crucial for organizations to prioritize AI security and governance to mitigate these risks and protect their sensitive data.
Cybersecurity is crucial for organizations in the face of increasing risks posed by the use of generative AI, as seen in the 16% of data breaches last year involving AI. The business sector should prioritize data-and-cloud-computing security, focusing on AI governance and access controls to prevent sensitive data leakage and insider threats. Moreover, the finance industry will likely see an enhanced need for cybersecurity solutions, given the potential costs of data breaches, which have increased in various regions, such as the US, Canada, and Benelux. Thus, technology advancements in AI must be matched with equivalent progress in cybersecurity to safeguard businesses and individuals alike.