ChatGPT will amplify today's cybercrime

Issue 1 2023 Information Security, AI & Data Analytics

ChatGPT is a new artificial intelligence (AI) system that understands natural human language and provides comprehensive, concise responses. It can answer questions in a way that sounds convincingly human, write essays that read as if a person wrote them (much to the concern of teachers and professional writers), and generate computer code, sparking worry that ChatGPT could be used as a cybercrime tool. That may happen, but the real risk lies in how this software and its peers could amplify impersonation and other existing cybercrime attacks that already work very well.

Gerhard Swart, CTO at cybersecurity company Performanta, says, "I can see how ChatGPT will make it easier to access cybercrime tools and learn how to use them, but that is a side concern, at least for now. The bigger problem is how it will be used for scams. ChatGPT and similar AIs will not create new cybercrime threats; they will make current threats worse."

The generative AI revolution

ChatGPT is part of a new trend called generative AI. While ChatGPT conjures written paragraphs, image generators such as DALL-E and Stable Diffusion create spectacular art in minutes. Several companies, including Google, have AI systems that generate realistic videos. Last year, a startup showcased an AI-generated fake voice interview between podcast star Joe Rogan and the late Apple CEO, Steve Jobs.

OpenAI, the company behind ChatGPT, also created an AI called Codex that writes computer code. It was not long before criminals and security experts tested the combination of Codex and ChatGPT to create hacker scripts. Dark web forums, where online criminals meet, soon began featuring examples of AI-generated attack code. This trend is a concern.

“ChatGPT will not make a newcomer good at cybercrime coding. They still need a lot of experience to combine different codes, but an AI could generate code at a pace and scale that would help experienced criminals do more, faster. It could help inexperienced people get better access to the many crime tools available online and learn how to use them. I do not think the concerns about cybercrime are overhyped. They are just not that simple, for now,” says Swart.

The real cybersecurity threat

Yet, generative AI still poses a very real cybersecurity risk. Cybercrime often uses social engineering, a set of proven techniques that scam people into sharing access details or valuable information.

"Social engineering is the oldest trick in the book," says Swart. "It is when someone pretends to be somebody or something else. The Trojans thought they got a big wooden horse as a gift, not an invading army. That has never changed. Cybercriminals do this all the time, using methods like phishing and man-in-the-middle attacks."

‘Phishing’ is when someone fakes correspondence to fool a user, such as pretending to be a bank and getting the victim to log in on a fake banking page. ‘Man-in-the-middle’ attacks intercept and alter correspondence in transit, for example replacing the banking details on a legitimate invoice. Social engineering can also use phone calls, instant messaging and other communication channels to fool someone into thinking they are dealing with a trustworthy party.
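To make the idea concrete, here is a minimal sketch, in Python, of the kind of simple checks a mail filter might run for two classic phishing tells described above: a display name that claims a trusted brand while the sender address belongs to another domain, and links that point outside that trusted domain. The domain names and the phishing_indicators helper are hypothetical illustrations, not taken from the article or any specific product; real email security relies on far richer signals.

import re
from urllib.parse import urlparse

# Hypothetical example domain; a real filter would use verified sender lists,
# SPF/DKIM/DMARC results and threat intelligence feeds.
TRUSTED_DOMAINS = {"mybank.co.za"}

def phishing_indicators(sender: str, body: str) -> list:
    """Return simple red flags found in an email's sender field and body."""
    flags = []

    # 1. Display name claims a bank, but the address domain is not trusted.
    match = re.match(r"(?P<name>.*)<(?P<addr>[^>]+)>", sender)
    if match:
        name = match.group("name").lower()
        domain = match.group("addr").lower().split("@")[-1]
        if "bank" in name and domain not in TRUSTED_DOMAINS:
            flags.append(f"sender domain {domain} does not match the claimed brand")

    # 2. Links in the body point outside the trusted domain set.
    for url in re.findall(r"https?://\S+", body):
        link_domain = urlparse(url).netloc.lower()
        if link_domain not in TRUSTED_DOMAINS:
            flags.append(f"link leads to untrusted domain {link_domain}")

    return flags

# A message that impersonates a bank but links to a look-alike domain.
print(phishing_indicators(
    "My Bank Security <alerts@secure-mybank-login.com>",
    "Please verify your account at http://secure-mybank-login.com/verify",
))

Running the example prints both red flags, which illustrates the point that follows: fluent, AI-written text does not by itself hide where a message actually comes from or where its links lead.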

From that perspective, generative AI could become a significant cybercrime enabler. Criminals can generate emails that mimic the language and style of executives. They can create correspondence in different languages, and they might even start to clone people's voices and faces. There is no evidence that these latter activities have happened, but it is no longer science fiction.

Fortunately, the cybersecurity world knows these tricks. Modern security can deal with phishing and impersonation attacks, and it can detect and prevent the kind of tricks that generative AI produces. But to create that advantage, people and companies need to take security more seriously.

“Most attacks happen not because we cannot secure systems properly, but because we do not bother to do so," says Swart. "Companies leave security as an afterthought, or just throw money at the problem. They do not collaborate with staff to create security awareness and they do not involve their security people in business conversations. They do not create what I call a cyber-safe environment."

This shift means that any organisation that has not yet sorted out its cybersecurity has an even bigger target on its back. Generative AI may radically change cybercrime in the future, but it may already be amplifying what online criminals can do today.



