74% of Africans tricked into believing deepfakes are real

Issue 2/3 2023

Fake messages, emails, photos and videos are becoming more convincing by the day, making it riskier than ever to operate online and harder than ever to identify misinformation.

The Top Risks Report 2023 by the Eurasia Group described advances in deepfakes and the rapid rise of misinformation as ‘weapons of mass disruption’, and it is not far wrong. Advances in artificial intelligence (AI) and powerful facial recognition and voice synthesis technologies have shifted the boundaries of reality, while the recent explosion of AI-powered tools like ChatGPT and Stable Diffusion has made it harder than ever to distinguish the work of a human from that of a machine. These technologies are extraordinary and have immense positive potential but, as Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa, points out, they also carry significant risks for businesses and individuals.

“Apart from abusing these platforms for online bullying, shaming or sexual harassment, such as fake revenge porn, these tools can be used to increase the effectiveness of phishing and business email compromise (BEC) attacks,” she says. “These deepfake platforms are capable of creating civil and societal unrest when used to spread mis- or disinformation in political and election campaigns, and remain a dangerous element in modern digital society. This is cause for concern and calls for more awareness and understanding among the public and policymakers.”

In a recent survey undertaken by KnowBe4 across 800 employees aged 18-54 in Mauritius, Egypt, Botswana, South Africa and Kenya, 74% said that they had believed a communication via email or direct message, or a photo or video, was true when, in fact, it was a deepfake. Considering how deepfake technology uses machine learning and AI to manipulate real-world images and information into convincing fabrications, it is easy to see how they were tricked. The problem is that awareness of deepfakes and how they work is very low in Africa, and this puts users at risk.

Just over 50% of respondents said they were aware of deepfakes, while 48% were unsure or had little understanding of what they were. Although a significant percentage of respondents were not clear on what a deepfake was, most (72%) said they did not believe that every photo or video they saw was genuine, a step in the right direction, even though nearly 30% believed that the camera never lies.

“It is also important to note that nearly 67% of respondents would trust a message from a friend or legitimate contact on WhatsApp or via direct message, while 43% would trust a video, 42% an email and 39% a voice note. Any one of these could be a fake that the trusted contact did not recognise, or could come from an account that had been hacked,” says Collard.

Interestingly, when asked if they would believe a video showing an acquaintance in a compromising position, even if this was out of character, most were hesitant to do so, and nearly half (49%) said they would speak to the acquaintance to get to the bottom of it. However, nearly 21% said that they would believe it, and 17% believed a video was impossible to fake. The response was similar when the same question was asked about a high-profile person in a compromising situation, with 50% saying they would give them the benefit of the doubt and 36% saying they would believe it.

“Another concern, other than reputational damage, is loss to the company,” says Collard. “Most respondents (57%) would be cautious if they got a voice message or an email asking them to carry out a task they would not normally do, but 20% would follow the instructions without question.”

When people were asked to select clues that they thought would give away a fake, most said that the language, spelling and expressions used would not be in the person’s usual style (72%), or that the request was out of the ordinary or unexpected (63%). For an audio or video file, they believed they could identify a fake because the words, tone and accent would sound unlike the person being emulated (75%), while 54% said the speech would not flow naturally.

When asked ‘What clues do you think would give away a deepfake in a video?’, respondents selected ‘Their mouth movements do not sync with the audio’ (73%), ‘The request or the message is out of the ordinary, so alarm signals should go off’ (49%), ‘Their head movements seem odd’ (49%), ‘The person doesn’t blink’ (46%) and ‘The person’s skin colour looks unnatural’ (44%).
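Some of these visual clues can also be checked programmatically. The sketch below is an illustration only, not a tool referenced in the survey: a rough Python example, assuming the open-source MediaPipe Face Mesh and OpenCV libraries, that estimates how often the person in a clip blinks, one of the tell-tales respondents mentioned. The file name, landmark indices and thresholds are illustrative assumptions, and a normal blink rate does not prove a video is genuine.

# A rough sketch of one heuristic respondents mentioned: early deepfakes often
# blink too rarely. It uses MediaPipe Face Mesh to estimate eye openness per
# frame and counts blinks. Landmark indices and thresholds are illustrative.

import cv2
import mediapipe as mp

# MediaPipe Face Mesh (468-point model) indices for the left eye:
# 33/133 are the eye corners, 159/145 the upper and lower eyelid midpoints.
LEFT_EYE = {"outer": 33, "inner": 133, "upper": 159, "lower": 145}


def eye_openness(landmarks) -> float:
    """Ratio of eyelid gap to eye width; small values mean the eye is closed."""
    def point(i):
        return landmarks[i].x, landmarks[i].y

    ox, oy = point(LEFT_EYE["outer"])
    ix, iy = point(LEFT_EYE["inner"])
    ux, uy = point(LEFT_EYE["upper"])
    lx, ly = point(LEFT_EYE["lower"])
    width = ((ox - ix) ** 2 + (oy - iy) ** 2) ** 0.5
    gap = ((ux - lx) ** 2 + (uy - ly) ** 2) ** 0.5
    return gap / width if width else 0.0


def blink_rate(video_path: str, closed_threshold: float = 0.18) -> float:
    """Return estimated blinks per minute for the first face in the video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    blinks, frames, eye_was_open = 0, 0, True
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            openness = eye_openness(result.multi_face_landmarks[0].landmark)
            if eye_was_open and openness < closed_threshold:
                blinks += 1          # transition from open to closed = one blink
                eye_was_open = False
            elif openness >= closed_threshold:
                eye_was_open = True
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0


if __name__ == "__main__":
    # "suspect_clip.mp4" is a hypothetical file name used for illustration.
    rate = blink_rate("suspect_clip.mp4")
    # People typically blink roughly 15-20 times a minute; a much lower rate is
    # only a weak warning sign, since newer deepfakes often blink normally.
    print(f"Estimated blink rate: {rate:.1f} per minute")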

“The problem is, deepfake technology has become so sophisticated that most people would find it challenging to spot a fake,” concludes Collard. “Training and awareness have become critical. These are the only tools that will help users to understand the risks and recognise the red flags when it comes to faked photo and video content. They should also be trained to understand that they should not believe everything they see and should not act on any unusual instructions without first confirming they are legitimate.”




















