Deepfakes and digital trust

Issue 3 2025 Editor's Choice

The rise of generative AI is expected to revolutionise how we use security systems and the information those systems can provide.

These promises of greater efficiency, accuracy and depth of data are capturing most of the attention surrounding AI and its impact on security.

However, the influences of generative AI are not all positive; some present serious threats to the integrity of one of the industry’s core technologies – video surveillance. One of the most pressing areas of concern is the growing prevalence of manipulated or “deepfake” videos, made possible by the mainstream availability of video alteration tools based on generative AI. Creating deepfake videos or altering existing footage used to require expert skills and expensive equipment, but it can now be done with everyday apps, many of which are free to download and use.

While some deepfakes or manipulated videos are easy to spot – think of a dinosaur wandering around an office lobby – other alterations can be invisible to the viewer because they involve no obvious changes to the footage. For example, altering a clip’s timestamp to a different day or time misrepresents when an event occurred. Removing specific frames can erase the event or person of interest, so the clip no longer accurately represents what happened. More extreme examples include substituting one person’s face for another’s in a scene, or using generative AI to put a firearm in an individual’s hand.

A real-world, current risk

The threat of deepfakes has recently moved from theoretical concern to practical reality. In 2024, a multinational company in Hong Kong was tricked into wiring $25 million to fraudsters after participating in a video call with a deepfake of the company’s chief financial officer. Law enforcement agencies have also reported instances where manipulated surveillance footage was submitted as evidence in criminal cases, with timestamps and content altered to create false alibis.


Leo Levit.

This ability to alter video can ultimately pose significant challenges to organisational trust in video evidence and the industry’s ability to maintain the authenticity of surveillance footage, which can have severe consequences in many areas. Video is one of the most crucial pieces of evidence used in criminal investigations, court proceedings, and internal corporate security investigations.

In many countries, a robust chain-of-custody process is required as part of law enforcement investigations and the admission of video as evidence in court. Public distrust in video can easily raise reasonable doubt in the eyes of a jury or judge, in both court proceedings and corporate investigations.

If the current legal precedents about the admissibility of video evidence are undermined by AI manipulation, courts may be forced to establish entirely new standards for this type of evidence. This could potentially exclude video evidence in cases where authentication cannot be established.

Increasing business risks and costs

For corporate security, the stakes are equally high. Internal investigations rely heavily on surveillance footage to resolve incidents ranging from workplace safety violations to theft and harassment claims. Human resources departments and corporate legal teams often base critical decisions on video evidence. If this evidence is in doubt, organisations face increased liability risks, higher settlement costs, and greater difficulty in fairly resolving workplace disputes. Insurance companies have also begun expressing concern about their ability to verify claims in an era of manipulable video, with some policies now specifically addressing digital evidence reliability.

The impacts extend beyond the courtroom and corporate settings. Public safety organisations, transportation systems, critical infrastructure protection, and national security applications all rely on verified video for both real-time decision-making and after-action reviews.

As these threats continue to grow, traditional forensic techniques will not be enough to protect surveillance video against generative AI’s ability to covertly or overtly alter it. This growing need for new solutions highlights the importance of industry collaboration and of a standardised way to preserve the integrity of video, and with it institutional trust in footage as an accurate record of a situation.

Finding a solution with media signing

As a global standards organisation, ONVIF is working on a method of video authentication called media signing, which provides proof that the video has not been altered since it left the specific camera sensor that captured it. Securing the video at its earliest point – when the camera’s sensor captures it – is key to ensuring the authenticity and trustworthiness of footage from camera to court.

On a technical level, each camera holds a unique signing key that it uses to sign groups of video frames, with every frame accounted for. The signature is then embedded in the video. When the video is played in a media player (such as a stand-alone player or video management client) that supports media signing and holds a trusted root certificate from the camera manufacturer, the player can verify that the video data originated directly from that specific camera and has not been tampered with. If pixels in a frame have been altered, or frames have been removed or reordered, signature verification fails and the player signals that the video is not valid.
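The principle behind this can be illustrated with a short sketch. Note the assumptions: ONVIF media signing uses an asymmetric camera key validated against the manufacturer’s root certificate, whereas this simplified example uses an HMAC as a stand-in signature, and all function names here are hypothetical. What it demonstrates is the tamper-evidence property itself – how altering, removing, or reordering frames breaks verification.

```python
import hashlib
import hmac
import secrets

# Hypothetical stand-in for the camera's unique signing key. The real
# scheme uses an asymmetric key pair, verified via the manufacturer's
# trusted root certificate; an HMAC illustrates the same principle.
CAMERA_KEY = secrets.token_bytes(32)

def sign_frame_group(frames: list[bytes]) -> bytes:
    """Sign a group of frames so that every frame is accounted for."""
    digest = hashlib.sha256()
    for frame in frames:
        # Each frame's hash feeds the running digest in order, so the
        # result is sensitive to frame content, presence, and position.
        digest.update(hashlib.sha256(frame).digest())
    return hmac.new(CAMERA_KEY, digest.digest(), hashlib.sha256).digest()

def verify_frame_group(frames: list[bytes], signature: bytes) -> bool:
    """Recompute the signature over the received frames and compare.

    Verification fails if any frame was altered, removed, or reordered.
    """
    return hmac.compare_digest(sign_frame_group(frames), signature)
```

Because every frame’s hash contributes to the signature in sequence, dropping a frame, editing pixels, or swapping two frames changes the recomputed value and verification fails – exactly the condition under which a compliant media player would flag the clip as invalid.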

Simplifying authentication for law enforcement

Standardising video authentication through ONVIF gives any recipient a common way to verify the authenticity of the video they receive. This can help streamline processes for video users such as law enforcement and other criminal justice personnel, who deal with footage generated by systems from many different manufacturers that may use a variety of methods to protect video.

In addition, because the video is secured from the specific camera that captured it, there is no need to separately prove the chain of custody: its authenticity can be verified at every step, from the camera to a person viewing the exported recording. With authentication provided at the point of capture, the video can be traced back to the device that recorded it.

Open source release

ONVIF is planning to release its implementation of media signing as an open source project. Opening the specification and implementation to the community will add transparency to the ONVIF method and make it easier for a wide range of developers to adopt, helping the standard gain wider traction in the security industry while preserving trust in both the authentication process and the integrity of the video itself.

Standardising this process for the security industry, and for other sectors that rely on camera footage, will provide consistency and reliability in establishing the authenticity of video. ONVIF believes that video authentication at the source – the camera – through media signing will provide the assurance needed to preserve trust in surveillance video.

Find out more at www.onvif.org





















© Technews Publishing (Pty) Ltd. | All Rights Reserved.