74% of Africans tricked into believing deepfakes are real

Issue 2/3 2023 AI & Data Analytics, Security Services & Risk Management

Fake messages, emails, photos and videos are getting better at tricking people every day, making it riskier than ever to operate online and harder to identify misinformation.

The Top Risks Report 2023 by the Eurasia Group described advances in deepfakes and the rapid rise of misinformation as ‘weapons of mass disruption’, and it is not far wrong. Advances in artificial intelligence (AI) and powerful facial recognition and voice synthesis technologies have shifted the boundaries of reality, while the recent explosion of AI-powered tools like ChatGPT and Stable Diffusion has made it harder than ever to distinguish the work of a human from that of a machine. These tools are extraordinary and have immense positive potential but, as Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa, points out, they also pose significant risks to businesses and individuals.

“Apart from the abuse of these platforms for online bullying, shaming or sexual harassment, such as fake revenge porn, these tools can be used to increase the effectiveness of phishing and business email compromise (BEC) attacks,” she adds. “These deepfake platforms are capable of creating civil and societal unrest when used to spread mis- or disinformation in political and election campaigns, and remain a dangerous element in modern digital society. This is cause for concern and calls for greater awareness and understanding among the public and policymakers.”

In a recent survey undertaken by KnowBe4 across 800 employees aged 18-54 in Mauritius, Egypt, Botswana, South Africa and Kenya, 74% said that they had believed a communication via email or direct message, or a photo or video, was true when, in fact, it was a deepfake. Considering how deepfake technology uses both machine learning and AI to manipulate data and imagery using real-world images and information, it is easy to see how they were tricked. The problem is, awareness of deepfakes and how they work is very low in Africa and this puts users at risk.

Just over 50% of respondents said they were aware of deepfakes, while 48% were unsure or had little understanding of what they were. Although a significant percentage of respondents were unclear about what a deepfake was, most (72%) said they did not believe that every photo or video they saw was genuine, a step in the right direction, even though nearly 30% believed that the camera never lies.

“It is also important to note that nearly 67% of respondents would trust a message from a friend or legitimate contact on WhatsApp or via direct message, while 43% would trust a video, 42% an email and 39% a voice note. Any of these could be a fake that the trusted contact did not recognise, or could come from an account that had been hacked,” says Collard.

Interestingly, when asked if they would believe a video showing an acquaintance in a compromising position, even if this was out of character, most were hesitant to do so and nearly half (49%) said they would speak to the acquaintance to get to the bottom of it. However, nearly 21% said that they would believe it and 17% believed a video is impossible to fake. The response was similar when they were asked the same question, but of a video with a high-profile person in the compromising situation, with 50% saying they would give them the benefit of the doubt and 36% saying they would believe it.

“Another concern, other than reputational damage, is financial loss to the company,” says Collard. “Most respondents (57%) would be cautious if they received a voice message or an email asking them to carry out a task they would not normally do, but 20% would follow the instructions without question.”

When people were asked to select clues that they thought would give away a fake, most said that the language, spelling and expressions used would not be in the person’s usual style (72%), or that the request was out of the ordinary or unexpected (63%). If it was an audio or video file, they believed they could identify a fake by the words, tone and accent sounding unlike the person being imitated (75%), while 54% said the speech would not flow naturally.

When asked ‘What clues do you think would give away a deepfake in a video?’, respondents selected ‘their mouth movements do not sync with the audio’ (73%), ‘the request or the message is out of the ordinary, alarm signals should go off’ (49%), ‘their head movements seem odd’ (49%), ‘the person doesn’t blink’ (46%), and ‘the person’s skin colour looks unnatural’ (44%).

“The problem is, deepfake technology has become so sophisticated that most people would find it challenging to spot a fake,” concludes Collard. “Training and awareness have become critical. These are the only tools that will help users to understand the risks and recognise the red flags when it comes to faked photo and video content. They should also be trained to understand that they should not believe everything they see and should not act on any unusual instructions without first confirming they are legitimate.”

© Technews Publishing (Pty) Ltd. | All Rights Reserved.