The deepfake crisis is here and now

Issue 2 2025 Information Security, Training & Education


Caesar Tonkin.

Deepfakes are a growing cybersecurity threat that blurs the line between reality and fiction. According to the US Department of Homeland Security, deepfakes carry serious implications for public and private sector institutions. Detecting these threats is increasingly complex, a problem that the CSIRO, Australia’s national science agency, has identified in recent research with Sungkyunkwan University. The researchers found significant flaws in existing deepfake detection tools and a growing demand for solutions that are more adaptable and resilient.

These AI-generated synthetic media have evolved from technological curiosities into sophisticated weapons of digital deception, costing affected companies upwards of $603,000 each. The 2024 Identity Fraud Report revealed another disturbing trend: in 2024, a deepfake attack occurred every five minutes, while digital document forgeries increased by 244% year-on-year. The Regula survey highlighted that financial services firms are firmly in the crosshairs.

“Deepfakes use AI to create realistic but entirely fabricated videos, images and audio recordings,” explains Dr Bright Gameli Mawudor, cybersecurity specialist, Kenya. “While the technology has legitimate uses, it is being weaponised for fraud, disinformation and cybercrime.”

A relatively simple task

What makes deepfakes particularly dangerous is their increasing accessibility. “Previously, it was confined to AI researchers, but now freely available tools allow anyone to create highly convincing fakes,” says Caesar Tonkin, director at Armata Cyber Security. “A recent iProov report found that 47% of companies have encountered deepfake attacks, while 62% are not adequately prepared to counter them.”

The financial stakes are alarmingly high and know no borders. In the United States, these sophisticated fakes have been used to spread election misinformation and commit financial fraud. They have also been used to create videos of well-known celebrities, Taylor Swift being a case in point, to promote fraudulent cryptocurrency schemes. In Australia, voice-based deepfake attacks target corporations, while Kenya has experienced deepfake-driven misinformation campaigns aimed at influencing public opinion during elections.

The scale of the problem is staggering. “According to Bitget, a cryptocurrency exchange and Web3 company, there has been a sharp increase in the use of deepfakes for criminal purposes that has led to total losses of more than $79.1 billion since 2022,” says Craig du Plooy, director at Cysec.

Technology to counter technology

As deepfake technology grows more agile and intelligent, detecting it has become increasingly complex, and traditional security measures are proving inadequate. “Digital forensics has become a critical part of deepfake detection,” says Tonkin. “We need AI-driven forensic analysis to identify manipulated content. These techniques include reverse image searches, frame-by-frame analysis, and metadata examination.”
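The metadata examination Tonkin mentions can start with something as simple as checking whether an image still carries the camera metadata a genuine photograph would. The sketch below is a minimal illustration, not a production forensic tool: it scans a JPEG byte stream for an EXIF APP1 segment. AI-generated or heavily re-encoded images often lack this segment, which makes its absence a weak but useful signal when combined with other checks.

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment.

    A JPEG starts with the SOI marker FF D8; segments then follow as
    FF <marker> <2-byte big-endian length> <payload>. The EXIF block
    lives in an APP1 segment (FF E1) whose payload begins "Exif\\0\\0".
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed marker stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS marker: compressed image data begins
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus the segment payload
    return False
```

A real forensic pipeline would go further and inspect the EXIF fields themselves (camera model, timestamps, GPS), since fraudsters can also inject fake metadata; this check only establishes whether the segment exists at all.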

There have been promising developments when it comes to detecting deepfakes. Forensic AI has been designed to analyse pixel-level inconsistencies, and audio forensics is catching AI-generated deepfake voices. “These voices often struggle with breath control and emotional nuance,” says Dr Mawudor. “Forensic specialists can use spectrogram analysis to detect these unnatural sound patterns.”
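The spectrogram analysis Dr Mawudor describes rests on decomposing audio into its frequency content and looking for patterns a human voice would not produce. As a toy illustration of that first step, the sketch below uses a naive discrete Fourier transform to recover the dominant frequency of a synthetic tone; real forensic tools apply windowed FFTs over many short frames to build a full time-frequency spectrogram.

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the strongest frequency (in Hz) in a signal via a naive DFT.

    For each frequency bin k we correlate the signal against a cosine
    and sine at that frequency and keep the bin with the largest
    magnitude. O(n^2), so suitable only for short illustrative signals.
    """
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC; upper half mirrors the lower
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

# Usage: a synthetic 440 Hz tone sampled at 8 kHz.
rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(256)]
```

With 256 samples the frequency resolution is 8000/256 = 31.25 Hz, so the estimate lands on the bin nearest 440 Hz rather than exactly on it; a forensic tool would repeat this over sliding windows and examine how the spectrum evolves, which is where synthetic voices tend to betray themselves.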

While corporations and governments face significant risks, individuals are not immune. AI-generated scams are using the voices of family members to ask for money, and these threats are increasing. Even voice calls can be faked, and it is easy to believe a family member is in trouble.

There is no clear-cut answer to the deepfake problem. These threats require a multi-faceted approach that leverages real-time detection tools, strengthened authentication processes, and ongoing employee training.

Awareness is critical

“Employees should be trained to verify unusual requests through secondary channels,” says Tonkin. “While the deepfake technology threat detection and prevention industry is rapidly evolving and maturing to rein in these threats, every other avenue must be prioritised to ensure companies and individuals are protected.”

On a regulatory level, governments must enforce AI content labelling and require social media platforms to flag AI-generated videos. Dr Mawudor says this is critical, alongside strengthening the legal consequences for deepfake abuse, especially fraud and digital harassment.

As the line between authentic and artificial content continues to blur, perhaps the keywords for companies going forward are vigilance, education, and technology. Companies' countermeasures are crucial to ensuring their systems are protected and to maintaining trust and integrity in a digital-first world.

For more information, contact Richard Frost, Armata, [email protected], www.armata.co.za



