The deepfake crisis is here and now

Issue 2 2025 Information Security, Training & Education


Caesar Tonkin.

Deepfakes are a growing cybersecurity threat that blurs the line between reality and fiction. According to the US Department of Homeland Security, deepfakes carry serious implications for public and private sector institutions. Detecting these threats is increasingly complex, something that the CSIRO, Australia’s national science agency, has identified in recent research with Sungkyunkwan University. The researchers found significant flaws in existing deepfake detection tools and a growing demand for solutions that are more adaptable and resilient.

These AI-generated synthetic media have evolved from technological curiosities into sophisticated weapons of digital deception, costing affected companies upwards of $603 000 each. The 2024 Identity Fraud Report revealed another disturbing trend: in 2024, a deepfake attack occurred every five minutes, while digital document forgeries increased by 244% year-on-year. The Regula survey highlighted that financial services firms are firmly in the crosshairs.

“Deepfakes use AI to create realistic but entirely fabricated videos, images and audio recordings,” explains Dr Bright Gameli Mawudor, a Kenya-based cybersecurity specialist. “While the technology has legitimate uses, it is being weaponised for fraud, disinformation and cybercrime.”

A relatively simple task

What makes deepfakes particularly dangerous is their increasing accessibility. “Previously, it was confined to AI researchers, but now freely available tools allow anyone to create highly convincing fakes,” says Caesar Tonkin, director at Armata Cyber Security. “A recent iProov report found that 47% of companies have encountered deepfake attacks, while 62% are not adequately prepared to counter them.”

The financial stakes are alarmingly high and know no borders. In the United States, these sophisticated fakes were used to spread election misinformation and commit financial fraud. They have also been used to create videos of well-known celebrities—a case in point being Taylor Swift—to promote fraudulent cryptocurrency schemes. In Australia, voice-based deepfake attacks target corporations, while Kenya has experienced deepfake-driven misinformation campaigns that have also been aimed at influencing public opinion during elections.

The scale of the problem is staggering. “According to Bitget, a cryptocurrency exchange and Web3 company, there has been a sharp increase in the use of deepfakes for criminal purposes that has led to total losses of more than $79,1 billion since 2022,” says Craig du Plooy, director at Cysec.

Technology to counter technology

As deepfake technology grows more agile and intelligent, detecting it has become increasingly complex, and traditional security measures are proving inadequate. “Digital forensics has become a critical part of deepfake detection,” says Tonkin. “We need AI-driven forensic analysis to identify manipulated content. These techniques include reverse image searches, frame-by-frame analysis, and metadata examination.”
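Metadata examination is the most approachable of the techniques Tonkin lists. As an illustrative sketch only (the chunk parser and the keyword list below are assumptions for demonstration, not a production detector), the following standard-library Python reads a PNG file’s text chunks, where several popular image generators embed their prompt or tool name:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

# Keywords some AI image generators embed in PNG text chunks.
# Illustrative assumption only -- not an exhaustive or authoritative list.
SUSPECT_KEYS = {"parameters", "prompt", "workflow", "Software"}

def png_text_chunks(path):
    """Yield (keyword, value) pairs from a PNG file's tEXt/zTXt chunks."""
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIGNATURE:
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, ctype = struct.unpack(">I4s", header)
            data = f.read(length)
            f.read(4)  # skip the CRC field
            if ctype == b"tEXt":
                key, _, value = data.partition(b"\x00")
                yield key.decode("latin-1"), value.decode("latin-1")
            elif ctype == b"zTXt":
                key, _, rest = data.partition(b"\x00")
                # rest[0] is the compression-method byte (0 = zlib)
                yield key.decode("latin-1"), zlib.decompress(rest[1:]).decode("latin-1")
            if ctype == b"IEND":
                break

def flag_suspicious(path):
    """Return text-chunk keywords that hint at AI-generation tooling."""
    return [key for key, _ in png_text_chunks(path) if key in SUSPECT_KEYS]
```

The absence of such metadata proves nothing, since it is trivially stripped; in practice this check is only one signal alongside pixel-level and provenance analysis.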

There have been promising developments in deepfake detection. Forensic AI has been designed to analyse pixel-level inconsistencies, and audio forensics is catching AI-generated voices. “These voices often struggle with breath control and emotional nuance,” says Dr Mawudor. “Forensic specialists can use spectrogram analysis to detect these unnatural sound patterns.”
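A spectrogram is built from the magnitudes of a short-time Fourier transform over overlapping audio frames; analysts then inspect those magnitudes for the unnatural energy patterns Dr Mawudor describes. A minimal sketch of that computation follows (the naive DFT and the function names are illustrative assumptions; real tooling would use an optimised FFT library):

```python
import cmath
import math

def stft_magnitudes(samples, frame_size=256, hop=128):
    """Return, per overlapping frame, the magnitude spectrum used to build
    a spectrogram. Uses a direct DFT for clarity -- illustration only."""
    frames = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        # Hann window reduces spectral leakage between neighbouring bins
        windowed = [s * 0.5 * (1 - math.cos(2 * math.pi * n / (frame_size - 1)))
                    for n, s in enumerate(frame)]
        mags = []
        for k in range(frame_size // 2):
            acc = sum(s * cmath.exp(-2j * math.pi * k * n / frame_size)
                      for n, s in enumerate(windowed))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

def dominant_bin(mags):
    """Index of the strongest frequency bin in one frame."""
    return max(range(len(mags)), key=mags.__getitem__)
```

For a pure tone, every frame’s dominant bin sits at the tone’s frequency; in forensic use, it is irregularities across frames, such as missing breath noise or overly uniform energy, that draw a specialist’s attention.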

While corporations and governments face significant risks, individuals are not immune. AI-generated scams are using the voices of family members to ask for money, and these threats are increasing. Even voice calls can be faked, and it is easy to believe a family member is in trouble.

There is no clear-cut answer to the deepfake problem. Countering these threats requires a multi-faceted approach that combines real-time detection tools, strengthened authentication processes, and ongoing employee training.

Awareness is critical

“Employees should be trained to verify unusual requests through secondary channels,” says Tonkin. “While the deepfake technology threat detection and prevention industry is rapidly evolving and maturing to rein in these threats, every other avenue must be prioritised to ensure companies and individuals are protected.”

On a regulatory level, governments must enforce AI content labelling and require social media platforms to flag AI-generated videos. Dr Mawudor says this is critical, alongside strengthening the legal consequences for deepfake abuse, especially fraud and digital harassment.

As the line between authentic and artificial content continues to blur, the watchwords for companies going forward are vigilance, education and technology. Robust countermeasures are crucial to protecting systems and to maintaining trust and integrity in a digital-first world.

For more information, contact Richard Frost, Armata, [email protected], www.armata.co.za



