Are AI agents a game-changer?

Issue 3 2025 Information Security

AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025)1 highlights how AI has democratised cyberthreats, enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware.

Similarly, the Orange Cyberdefense Security Navigator 20252 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. The 2025 State of Malware Report by Malwarebytes3 notes that while GenAI has enhanced cybercrime efficiency, it has not yet introduced entirely new attack methods; attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI.


Anna Collard.

However, this is set to change with the rise of AI agents, autonomous AI systems capable of planning, acting, and executing complex tasks, posing major implications for the future of cybercrime. A list of common (ab)use cases of AI by cybercriminals follows.

AI-generated phishing and social engineering

Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages – without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity. AI-powered Business Email Compromise (BEC) scams are increasing, as attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams, which are sold as AI-powered ‘crimeware as a service’ offerings, further lowering the barrier to entry for cybercrime.

Deepfake-enhanced fraud and impersonation

Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK-based engineering firm Arup that lost $25 million after one of their Hong Kong-based employees was tricked by deepfake executives in a video call4. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.

Cognitive attacks

Online manipulation, as defined by Susser et al. (2018) 5, is “at its core, hidden influence – the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation, leveraging digital platforms and state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content, subtly shaping public perception, while evading detection.

These tactics are deployed to influence elections, spread disinformation, and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks do not just compromise systems, they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.

The security risks of LLM adoption

Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces their own significant security risks, especially when untested AI interfaces connect the open internet to critical back-end systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries and enable new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs.

Additionally, bias within LLMs poses another challenge, as these models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgments, potentially exacerbating vulnerabilities rather than mitigating them.

As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making.

When AI goes rogue

With AI systems now capable of self-replication, as demonstrated in a recent study6, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously, particularly when autonomous AI agents are granted access to data, APIs, and external integrations.

The broader an AI’s reach through integrations and automation, the greater the potential threat of it going rogue, making robust oversight, security measures, and ethical AI governance essential in mitigating these risks.

The future of AI agents for automation in cybercrime

A more disruptive shift in cybercrime can and will come from AI Agents, which transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use, but in the hands of cybercriminals, its implications are alarming.

AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms and automatically compose and send fake executive requests to employees or analyse divorce records across multiple countries to identify individuals for AI-driven romance scams, orchestrated by an AI agent.

These AI-driven fraud tactics do not just scale attacks, they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk.

How defenders can use AI and AI Agents

Organisations cannot afford to remain passive in the face of AI-driven threats and security professionals need to remain abreast of the latest development. Here are some of the opportunities in using AI to defend against AI.

AI-powered threat detection and response

Security teams can deploy AI and AI-agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns that might otherwise go unnoticed, create dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection.

For example, as outlined by researchers of Orange Cyberdefense, AI-assisted threat detection is crucial as attackers increasingly use “Living off the Land” (LOL) techniques that mimic normal user behaviour, making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses.

However, despite the potential of AI-agents, human analysts still remain critical, as their intuition and adaptability are essential for recognising nuanced attack patterns and leverage real incident and organisational insights to prioritise resources effectively.

Automated phishing and fraud prevention

AI-powered email security solutions can analyse linguistic patterns, and metadata to identify AI-generated phishing attempts before they reach employees, by analysing writing patterns and behavioural anomalies. AI can also flag unusual sender behaviour and improve detection of BEC attacks. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge, as technology continues to evolve.

User education and AI-powered security awareness training

AI-powered platforms (e.g., KnowBe4’s AIDA) deliver personalised security awareness training, simulating AI-generated attacks to educate users on evolving threats, helping train employees to recognise deceptive AI-generated content and strengthen their individual susceptibility factors and vulnerabilities.

Adversarial AI countermeasures:

Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques, for example deploying deception technologies, such as AI-generated honeypots, to mislead and track attackers, as well as continuously training defensive AI models to recognise and counteract evolving attack patterns.

Using AI to fight AI-driven misinformation and scams

AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts. Counter-attacks, like shown by research project Countercloud, or O2 Telecoms AI agent Daisy, show how AI-based bots and deepfake real-time voice chatbots can be used to counter disinformation campaigns as well as scammers by engaging them in endless conversations to waste their time and reduce their ability to target real victims.

In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates and how AI can be used to defend against their attacks. In this fast-paced environment, organisations need to guard against their greatest enemy: their own complacency, while at the same time considering AI-driven security solutions thoughtfully and deliberately.

Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.

To stay ahead in this AI-powered digital arms race, organisations should:

• Monitor both the threat and AI landscape to stay abreast of latest developments on both sides.

• Train employees frequently on the latest AI-driven threats, including deepfakes and AI-generated phishing.

• Deploy AI for proactive cyber defence, including threat intelligence and incident response.

• Continuously test your own AI models against adversarial attacks to ensure resilience.

[1] https://tinyurl.com/ap9pdat8

[2] https://tinyurl.com/3vu7a27f

[3] https://tinyurl.com/ysnkz42v

[4] https://tinyurl.com/y949r4mz

[5] https://tinyurl.com/yzcfdkzw

[6] https://tinyurl.com/f2y3e7pa




Share this article:
Share via emailShare via LinkedInPrint this page



Further reading:

Highest increase in global cyberattacks in two years
Information Security News & Events
Check Point Global Research released new data on Q2 2024 cyber-attack trends, noting a 30% global increase in Q2 2024, with Africa experiencing the highest average weekly per organisation.

Read more...
Continuous security optimisation.
News & Events Information Security
Cymulate has announced its partnership with SentinelOne, a threat exposure validation and AI-powered cybersecurity platform. The collaboration delivers self-healing endpoint security that empowers businesses to increase protection for every endpoint on their network.

Read more...
Protect your smart home devices
Kaspersky IoT & Automation Information Security Smart Home Automation
Voice assistants, kitchen robots, smart lights and many other intelligent devices have become part of our everyday life. However, with the rise of smart technology comes the need for robust protection against potential vulnerabilities.

Read more...
ISPA’s take-down process protects from local scams
News & Events Information Security
During the recent school holidays, parents could rest a little easier knowing that ISPA, SA’s official internet industry representative body, is removing an average of three to four problematic websites from the local internet every week.

Read more...
NEC XON disrupts sophisticated cyberattack
Information Security
NEC XON recently showcased its advanced cyberthreat detection and response capabilities by successfully thwarting a human-operated ransomware attack targeting a major service provider.

Read more...
What’s your cyber game plan?
Information Security
“Medium-sized businesses are often the easiest target for cyber criminals, because they are just digital enough to be vulnerable, but not mature enough to be fully protected," says Warren Bonheim, MD of Zinia.

Read more...
Upgrade your PCs to improve security
Information Security Infrastructure
Truly secure technology today must be designed to detect and address unusual activity as it happens, wherever it happens, right down to the BIOS and silicon levels.

Read more...
Open source code can also be open risk
Information Security Infrastructure
Software development has changed significantly over the years, and today, open-source code increasingly forms the foundation of modern applications, with surveys indicating that 60 – 90% of the average application's code base consists of open-source components.

Read more...
DeepSneak deception
Information Security News & Events
Kaspersky Global Research & Analysis researchers have discovered a new malicious campaign which is distributing a Trojan through a fake DeepSeek-R1 Large Language Model (LLM) app for PCs.

Read more...
SA’s strained, loadshedding-prone grid faces cyberthreats
Power Management Information Security
South Africa’s energy sector, already battered by decades of underinvestment and loadshedding, faces another escalating crisis; a wave of cyberthreats that could turn disruptions into catastrophic failures. Attacks are already happening internationally.

Read more...










While every effort has been made to ensure the accuracy of the information contained herein, the publisher and its agents cannot be held responsible for any errors contained, or any loss incurred as a result. Articles published do not necessarily reflect the views of the publishers. The editor reserves the right to alter or cut copy. Articles submitted are deemed to have been cleared for publication. Advertisements and company contact details are published as provided by the advertiser. Technews Publishing (Pty) Ltd cannot be held responsible for the accuracy or veracity of supplied material.




© Technews Publishing (Pty) Ltd. | All Rights Reserved.