Cybercriminals embracing AI

May 2024 Information Security, Security Services & Risk Management

Across the globe, organisations of all sizes are exploring how artificial intelligence (AI) and generative AI, in particular, can benefit their businesses. While they are still trying to figure out how best to use AI, cybercriminals have fully embraced it.

Whether they are creating AI-enhanced malware that steals sensitive data more efficiently while evading antivirus software, or using generative AI tools to roll out more sophisticated phishing campaigns at scale, the technology looks set to have a massive impact on cybercrime. As an indication of how significant that impact already is, SlashNext’s State of Phishing Report 2023 attributes the 1 265% increase in phishing emails largely to targeted business email compromise attacks built with AI tools.

For businesses, this increase in cybercrime activity comes with significant risks, and those risks extend well beyond compromised customer data. Cyberattacks also carry reputational and financial consequences, and even legal liability. Organisations must therefore do everything in their power to ready themselves for the onslaught of cybercriminals using AI tools. That includes ensuring that their own AI use is safe and responsible.

Massively enhanced innovation, automation, and scalability

Before examining how organisations can do so, it is worth considering what cybercriminals get from AI tools. For the most part, it is the same things legitimate businesses and other entities are after: significantly enhanced innovation, automation, and scalability.

When it comes to innovation and automation, cybercriminals have built several kinds of AI-enhanced automated hacking tools. These tools allow them to, amongst other things, scan for system vulnerabilities, launch automated attacks, and exploit weaknesses without human intervention. Automation can, however, also be applied to social engineering attacks. Whilst a human-written phishing email may be more likely to be opened, an AI-written version takes a fraction of the time to put together.

All of that adds up to a situation where cybercriminals can launch many more attacks more frequently. That means more successful breaches, with more chances to sell stolen data or extort businesses for money in exchange for the return of that data.

Those increased breaches come at a cost. According to IBM’s 2023 Cost of a Data Breach Report, the average breach cost in South Africa is now ZAR 49,45 million. That figure does not account for reputational damage and lost consumer trust, nor for the legal trouble an organisation can find itself in if it has failed to properly safeguard its customers’ data and has thereby violated data protection legislation or regulations such as the Protection of Personal Information Act or the GDPR.

Education, upskilling, and up-to-date policies

It is clear then that cybercriminals' widespread adoption of AI tools has significant implications for entities of all sizes. What should organisations do in the face of this mounting threat?

A good start is for businesses to ensure that they are using cybersecurity tools capable of defending against AI-enhanced attacks. As any good cybersecurity expert will tell you, however, these tools can only take you so far.

For organisations to give themselves the best possible chance of defending themselves against cyberattacks, they must also invest heavily in education. That does not just mean ensuring that employees know about the latest threats but also inculcating good organisational digital safety habits. This would include enabling multi-factor authentication on devices and encouraging people to change passwords regularly.

It is also essential that businesses keep their policies up to date. This is especially important in the AI arena. There is a very good chance, for example, that employees in many organisations are logging in to tools like ChatGPT using their personal email addresses and using such tools for work purposes. If their email is then compromised in an attack, sensitive organisational data could find itself in dangerous hands.

Make changes now

Ultimately, organisations must recognise that AI is not a looming cybersecurity threat, but an active one. As such, they must start putting everything they can in place to defend against it. That means putting the right tools, education, and policies in place. Failure to do so comes with risks that no business should ever consider acceptable.
