ChatGPT will amplify today's cybercrime

Issue 1 2023 Information Security, AI & Data Analytics

ChatGPT is a new artificial intelligence (AI) system that understands natural human language and provides comprehensive yet concise responses. It can give answers that sound like they came from a person, write essays that read as though a person were the author (much to the concern of teachers and professional writers), and even create computer code, sparking worry that ChatGPT could be used as a cybercrime tool. That may happen, but the real risk lies in how this software and its peers could amplify impersonation and other existing cybercrime attacks that already work very well.

Gerhard Swart, CTO at cybersecurity company Performanta, says, "I can see how ChatGPT will make it easier to access cybercrime tools and learn how to use them, but that is a side concern, at least for now. The bigger problem is how it will be used for scams. ChatGPT and similar AIs will not create new cybercrime threats; they will make current threats worse."

The generative AI revolution

ChatGPT is part of a new trend called generative AI. While ChatGPT conjures written paragraphs, image generators such as DALL-E and Stable Diffusion create spectacular art in minutes. Several companies, including Google, have AI systems that generate realistic videos. Last year, a startup showcased an AI-created fake voice interview between podcast star Joe Rogan and the late Apple CEO, Steve Jobs.

OpenAI, the company behind ChatGPT, also created an AI called Codex that writes computer code. It was not long before criminals and security experts tested the combination of Codex and ChatGPT to create hacker scripts. Darkweb forums, where online criminals meet, started posting examples of AI-generated attack code. This trend is a concern.

“ChatGPT will not make a newcomer good at cybercrime coding. They still need a lot of experience to combine different pieces of code, but an AI could generate code at a pace and scale that would help experienced criminals do more, faster. It could help inexperienced people get better access to the many crime tools available online and learn how to use them. I do not think the concerns about cybercrime are overhyped. They are just not that simple, for now,” says Swart.

The real cybersecurity threat

Yet, generative AI still poses a very real cybersecurity risk. Cybercrime often uses social engineering, a set of proven techniques that scam people into sharing access details or valuable information.

"Social engineering is the oldest trick in the book," says Swart. "It is when someone pretends to be somebody or something else. The Trojans thought they got a big wooden horse as a gift, not an invading army. That has never changed. Cybercriminals do this all the time, using methods like phishing and man-in-the-middle attacks."

‘Phishing’ is when someone fakes correspondence to fool a user, such as pretending to be a bank and getting the victim to log in on a fake banking page. ‘Man-in-the-middle’ attacks intercept and replace correspondence, for example, an invoice with altered banking details. Social engineering can use phone calls, instant messaging and other communication channels designed to fool someone into thinking they are dealing with a trustworthy party.
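As an illustration (not a technique the article itself describes), one common anti-phishing heuristic is to flag sender domains that are a near-miss of a trusted domain, the way "examp1e-bank.com" impersonates "example-bank.com". A minimal sketch in Python, with the domain names purely hypothetical:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming: the number of
    single-character insertions, deletions, or substitutions needed
    to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def looks_like_phish(sender: str, trusted_domain: str, threshold: int = 2) -> bool:
    """Flag senders whose domain is a close lookalike of a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain == trusted_domain:
        return False  # exact match: the real domain, not a lookalike
    return edit_distance(domain, trusted_domain) <= threshold

print(looks_like_phish("alerts@examp1e-bank.com", "example-bank.com"))  # True
print(looks_like_phish("alerts@example-bank.com", "example-bank.com"))  # False
```

Real mail-filtering products combine many such signals (sender reputation, link analysis, authentication records); this sketch shows only the lookalike-domain idea in isolation.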

From that perspective, generative AI could become a significant cybercrime enabler. Criminals can generate emails that mimic the language and style of executives. They can create correspondence in different languages, and they might even start to clone people's voices and faces. There is no evidence that these latter activities have happened, but it is no longer science fiction.

Fortunately, the cybersecurity world knows these tricks. Modern security can deal with phishing and impersonation attacks, and can detect and prevent the kind of tricks generative AI produces. But to create that advantage, people and companies need to take security more seriously.

“Most attacks happen not because we cannot secure systems properly, but because we do not bother to do so," says Swart. "Companies leave security as an afterthought, or just throw money at the problem. They do not collaborate with staff to create security awareness and they do not involve their security people in business conversations. They do not create what I call a cyber-safe environment."

This change means that any organisation that has not yet sorted its cybersecurity has an even bigger target on its back. In the future, generative AI may radically change cybercrime, but it may also already be amplifying what online criminals can do.












© Technews Publishing (Pty) Ltd. | All Rights Reserved.