The deepfake crisis is here and now

Issue 2 2025 Information Security, Training & Education


Caesar Tonkin.

Deepfakes are a growing cybersecurity threat that blurs the line between reality and fiction. According to the US Department of Homeland Security, deepfakes introduce serious implications for public and private sector institutions. Detecting these threats is increasingly complex, something that the CSIRO, Australia’s national science agency, has identified in recent research with Sungkyunkwan University. There are, said researchers, significant flaws in existing deepfake detection tools and a growing demand for solutions that are more adaptable and resilient.

These AI-generated synthetic media have evolved from technological curiosities into sophisticated weapons of digital deception, costing companies upwards of $603 000 each. The 2024 Identity Fraud Report revealed another disturbing trend: in 2024, a deepfake attack occurred every five minutes, while digital document forgeries increased by 244% year-on-year. A survey by Regula highlighted that financial services firms are firmly in the crosshairs.

“Deepfakes use AI to create realistic but entirely fabricated videos, images and audio recordings,” explains Dr Bright Gameli Mawudor, a Kenya-based cybersecurity specialist. “While the technology has legitimate uses, it is being weaponised for fraud, disinformation and cybercrime.”

A relatively simple task

What makes deepfakes particularly dangerous is their increasing accessibility. “Previously, the technology was confined to AI researchers, but now freely available tools allow anyone to create highly convincing fakes,” says Caesar Tonkin, director at Armata Cyber Security. “A recent iProov report found that 47% of companies have encountered deepfake attacks, while 62% are not adequately prepared to counter them.”

The financial stakes are alarmingly high and know no borders. In the United States, these sophisticated fakes were used to spread election misinformation and commit financial fraud. They have also been used to create videos of well-known celebrities—a case in point being Taylor Swift—to promote fraudulent cryptocurrency schemes. In Australia, voice-based deepfake attacks target corporations, while Kenya has experienced deepfake-driven misinformation campaigns that have also been aimed at influencing public opinion during elections.

The scale of the problem is staggering. “According to Bitget, a cryptocurrency exchange and Web3 company, there has been a sharp increase in the use of deepfakes for criminal purposes that has led to total losses of more than $79,1 billion since 2022,” says Craig du Plooy, director at Cysec.

Technology to counter technology

As deepfake technology grows more agile and intelligent, detecting it has become increasingly complex, and traditional security measures are proving inadequate. “Digital forensics has become a critical part of deepfake detection,” says Tonkin. “We need AI-driven forensic analysis to identify manipulated content. These techniques include reverse image searches, frame-by-frame analysis, and metadata examination.”
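To illustrate one of the techniques Tonkin mentions: reverse image searches typically rest on perceptual hashing, where two images that come from the same source produce near-identical fingerprints even after light manipulation. The sketch below is a minimal, illustrative "average hash" in pure Python; a real forensic pipeline would first decode actual image files (for example with a library such as Pillow) rather than use hand-made pixel grids.

```python
# Minimal sketch of perceptual "average hash" matching, the idea behind
# reverse image searches used in deepfake forensics. Pure Python, for
# illustration only; the pixel grids below are hypothetical toy data.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string:
    1 where a pixel is brighter than the image's mean, else 0."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical 4x4 "images": one original, one lightly manipulated,
# plus an unrelated image for comparison.
original  = [[10, 10, 200, 200]] * 4
tampered  = [[12, 10, 198, 201]] * 4
unrelated = [[200, 10, 200, 10]] * 4

# A small Hamming distance suggests the same source image.
print(hamming(average_hash(original), average_hash(tampered)))   # 0
print(hamming(average_hash(original), average_hash(unrelated)))  # 8
```

Because the hash captures coarse structure rather than exact bytes, the lightly tampered copy still matches its source, which is what lets investigators trace a manipulated image back to the original it was built from.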

There have been promising developments in detecting deepfakes. Forensic AI has been designed to analyse pixel-level inconsistencies, and audio forensics is catching AI-generated deepfake voices. “These voices often struggle with breath control and emotional nuance,” says Dr Mawudor. “Forensic specialists can use spectrogram analysis to detect these unnatural sound patterns.”
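A crude proxy for the breath-control cue Dr Mawudor describes is the short-time energy envelope of a recording: natural speech contains regular low-energy frames (pauses for breath), while a naively synthesised voice can be uniformly loud. The sketch below is illustrative only, with made-up signals standing in for real audio; production forensics would work on full spectrograms of decoded waveforms.

```python
import math

# Illustrative sketch (not a production detector): flag audio that lacks
# the low-energy "breath pause" frames typical of natural speech.
# The two signals below are synthetic toy waveforms, not real recordings.

def energy_envelope(samples, frame=100):
    """Mean squared amplitude per frame of `frame` samples."""
    return [sum(s * s for s in samples[i:i + frame]) / frame
            for i in range(0, len(samples), frame)]

def pause_ratio(samples, threshold=0.01):
    """Fraction of frames quiet enough to count as a pause."""
    env = energy_envelope(samples)
    return sum(1 for e in env if e < threshold) / len(env)

# "Natural" speech stand-in: tone bursts separated by silence (breaths).
natural = []
for burst in range(5):
    natural += [math.sin(2 * math.pi * 220 * t / 8000) for t in range(800)]
    natural += [0.0] * 400  # pause between phrases
# "Synthetic" stand-in: one continuous tone with no pauses at all.
synthetic = [math.sin(2 * math.pi * 220 * t / 8000) for t in range(6000)]

print(pause_ratio(natural) > pause_ratio(synthetic))  # True
```

A real detector would combine many such cues; the point is simply that statistical regularities of natural speech leave measurable traces that current synthesis often fails to reproduce.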

While corporations and governments face significant risks, individuals are not immune. AI-generated scams are using the voices of family members to ask for money, and these threats are increasing. Even voice calls can be faked, and it is easy to believe a family member is in trouble.

There is no clear-cut answer to the deepfake problem. These threats require a multi-faceted approach that leverages real-time detection tools, strengthened authentication processes, and ongoing employee training.

Awareness is critical

“Employees should be trained to verify unusual requests through secondary channels,” says Tonkin. “While the deepfake technology threat detection and prevention industry is rapidly evolving and maturing to rein in these threats, every other avenue must be prioritised to ensure companies and individuals are protected.”

On a regulatory level, governments must enforce AI content labelling and require social media platforms to flag AI-generated videos. Dr Mawudor says this is critical, alongside strengthening the legal consequences for deepfake abuse, especially fraud and digital harassment.

As the line between authentic and artificial content continues to blur, perhaps the keywords for companies going forward are vigilance, education, and technology. Companies' countermeasures are crucial to ensuring their systems are protected and to maintaining trust and integrity in a digital-first world.

For more information, contact Richard Frost, Armata, richard.frost@armata.co.za, www.armata.co.za





















© Technews Publishing (Pty) Ltd. | All Rights Reserved.