Authentic identity

SMART Access & Identity 2024 Access Control & Identity Management

If we reach back into history, the notion of provable, trusted identity was limited to people who were well known or could be vouched for by another trusted individual. Over time, as our world became more connected, the notion of identity documents like passports and driver’s licenses was developed, and these documents were made more secure through physical features, standard formatting, and other factors.

However, as the world has become thoroughly global and digital, with goods and services exchanged across borders and without any in-person interaction, traditional means of confirming authentic identity, and of understanding what is real and what is fake, have become impractical.

Today, automated identity checks have become critical, and often rely on the most fundamental approach to identity: recognising someone by their face. In air travel, Automated Border Clearance has become the norm; in banking, eKYC and ID Verification are critical enablers for digital banks and cryptocurrencies; in physical security, face-as-ID is emerging as a compelling alternative to access cards. In other areas, such as social media and video conferencing, strong identity has not been fully embraced, but the need is apparent; users often accept what they see at face value, without any notion of authentication.

Meanwhile, although the need for remote, automated identity verification and the use of digital video for media and communications is skyrocketing, the tools to falsify an identity have become easier to access, and the results have become more compelling.

Seeing is no longer believing. Presented identities can no longer be expected to be authentic identities. The technology to create hyper-realistic synthetic face imagery is now widely available, and in many cases, it is impossible for people to distinguish real from fake. This creates risks for democracy, national security, business, human rights, and personal privacy.

In this paper, we will explore the specific challenges to authentic identity in automated identity verification use cases, as well as applications where we conventionally accept faces as real, and perhaps should no longer do so. We will also dive into what can be done to support authenticity and detect attempts to undermine it.

What is Authentic Identity?

With the relative ease of creating physical reproductions or digital manipulations, matching one face to another with highly accurate face recognition is not enough to prove that a presented identity is authentic. Authentic identity is a collection of technologies, systems, policies, and processes to create trust around identity in both physical and digital domains.

A focus on faces

No doubt, identity can be established, authenticated, or undermined with factors that go well beyond our faces. Here, however, we will focus on authentic identity specifically as it relates to faces. Our foundational human reliance on faces for identity, the emergence of face recognition as the dominant biometric modality in many applications, and the importance of faces in video for establishing trust in small groups or public communications all demand a special focus.


Identity in the modern world

The implicit questions of “Who are you?” and “Can I trust you?” span a number of distinct domains. These include:

1. Identification and authentication. Remote or in-person, the goal of identification and authentication is to confirm that someone is who they say they are for the sake of entering a building, accessing a bank account, logging into a web service, or travelling into a country. The use cases are very broad by nature and have historically been addressed by some combination of authentication factors (i.e., something you know, something you have, and something you are).

2. Traditional and social media. Historically speaking, identity has been implicitly authentic in media: You see a broadcaster on television, and you believe they are real; you believe that what they are showing or saying is real. However, as traditional media has been augmented or displaced by social media, the means of production and distribution have been decentralised, and misinformation or disinformation has been weaponised; identity presented in media can no longer be implicitly accepted.

3. Communications. Again, the notion of identity has historically been implicit in many aspects of communications where identification and authentication were not explicit requirements (as they are, for instance, when calling a bank). The simultaneous rise of hybrid work and video conferencing due to the COVID-19 pandemic, alongside powerful new AI technologies, argue for a new approach to identity in communications.

Work, banking, travel, news, and entertainment all rely on identity, and so a strategy for authentic identity should be considered in order to deliver trusted results.

Challenges to trust

To properly understand the challenges of establishing trust in presented identities, we must consider threats in both the physical and the digital world.

Physical world: Presentation Attacks

Broadly speaking, challenges to biometric identity in the physical world are referred to as Presentation Attacks (also known as ‘Spoofs’). These direct attacks can subvert a biometric system by using tools called presentation attack instruments (PAIs). Examples of such instruments include photographs, masks, fake silicone fingerprints, or video replays.

Presentation attacks pose serious challenges across all major real-time biometric modalities (such as face, fingerprint, hand vein, and iris). As noted above, we will focus on face recognition-based presentation attacks.

ISO 30107-3[1] defines PAIs as needing to fulfil three requirements: they must appear genuine to any Presentation Attack Detection mechanisms, appear genuine to any biometric data quality checks, and contain extractable features that match the targeted individual.

In practical applications, it is useful to establish a hierarchy in the sophistication and complexity of presentation attacks, which is beyond the scope of ISO 30107-3.

Notably, iBeta (https://www.ibeta.com/biometric-testing/) and the FIDO Alliance (https://fidoalliance.org/) have established a three-level presentation attack sophistication hierarchy.


An image of a non-existent person from https://thispersondoesnotexist.com/

Digital world: Deepfakes and beyond

The term ‘Deepfake’ has become a popular way to describe any digital face manipulation, and the exact definition of what constitutes a deepfake may be argued. Broadly speaking, as defined in the Springer Handbook of Digital Face Manipulation and Detection (2022)[2], there are six main categories of digital face manipulation relevant to this discussion:

1. Identity swap.

2. Expression swap.

3. Audio- and text-to-video.

4. Entire face synthesis.

5. Face morphing (merging two faces into a single image).

6. Attribute manipulation (synthetically adding features such as eyeglasses, headwear, hair, or otherwise to source images).

We would also add a 7th category:

7. Adversarial template encoding (invisible integration of template information from one face into the image of another face; this is related to, but separate from, face morphing).

Each of these can undermine trust in a presented identity, and we are already beginning to see them play out in public. Perhaps the most broadly known digital face manipulation, DeepTomCruise, set the standard for identity swaps, adding actor Tom Cruise’s face to videos of another person who closely resembles him, in a way that is largely indistinguishable from reality[3].

In March 2022, a faked video of Ukrainian president Volodymyr Zelenskyy appearing to tell his soldiers to lay down arms and surrender to Russia was widely distributed. It was quickly debunked, but set the stage for more sophisticated political deepfakes[4].

Social media is not immune. In March 2022, it was reported that thousands of LinkedIn profiles had been fraudulently created using synthetic faces of the type found (for instance) on https://thispersondoesnotexist.com[5]. In August 2022, the chief communications officer of Binance (the world’s largest crypto exchange) reported that hackers had used a deepfake of him to fool investors in live Zoom meetings[6]. His account has not been verified, but the case reinforces the insidious nature of misinformation: it becomes increasingly difficult to distinguish reality from fiction.

In addition to these digital face manipulations, the digital world is also prone to cyberattacks. Most specifically, the risk of injection or replay attacks is very real. In this case, data collected from an originally authentic user is replayed at the data level (as opposed to in the physical space or digital image space). Here, it is critical to ensure the provenance of data, and that the data being communicated is real, live, and non-replicable.
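The liveness and non-replicability requirement is commonly met with a challenge-response pattern: the verifier issues a fresh, single-use nonce, and the capture device must cryptographically bind that nonce to the data it submits. The sketch below illustrates the idea only; all names are hypothetical, and a production system would use hardware-backed, asymmetric device keys rather than a shared secret.

```python
import hashlib
import hmac
import secrets
import time

SHARED_KEY = b"device-provisioned-secret"  # hypothetical pre-shared device key
issued_nonces = {}                         # nonce -> issue time (single-use)

def issue_nonce() -> str:
    """Verifier side: issue a fresh, single-use challenge."""
    nonce = secrets.token_hex(16)
    issued_nonces[nonce] = time.time()
    return nonce

def sign_capture(image_bytes: bytes, nonce: str) -> str:
    """Capture side: bind the captured frame to the challenge."""
    return hmac.new(SHARED_KEY, image_bytes + nonce.encode(), hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, nonce: str, tag: str, max_age_s: float = 30.0) -> bool:
    """Verifier side: accept only fresh, correctly bound captures; burn the nonce."""
    issued_at = issued_nonces.pop(nonce, None)  # unknown or replayed nonce fails
    if issued_at is None or time.time() - issued_at > max_age_s:
        return False
    expected = hmac.new(SHARED_KEY, image_bytes + nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Because the nonce is removed on first use, a captured (frame, nonce, tag) triple cannot simply be replayed later at the data level.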

Ensuring authentic identity

At this point, the challenges posed to authentic identity may seem overwhelming in both the physical and digital space. Let us understand the opportunities for attack detection or prevention.

Presentation Attack Detection

In the physical world, there is a wide range of available technologies for Presentation Attack Detection (PAD), using a combination of advanced AI detection methods as well as multi-spectral imaging, depth sensing, and other software- and sensor-level technologies. As noted above, ISO 30107 codifies PAD, and global test labs offer technology certification. NIST FRVT is now planning a new testing track on PAD as well, which will help foster transparency and stimulate continued technology development. For more information on PAD, please also see Paravision’s white paper, An Introduction to Presentation Attack Detection (available at https://www.paravision.ai/whitepaper-an-introduction-to-presentation-attack-detection/, or via the short link: www.securitysa.com/*paravision1).

Digital Face Manipulation (‘Deepfake’) Detection

Digital face manipulation is a much newer threat to authentic identity. While PAD largely concerns identification and authentication applications, digital face manipulations such as deepfakes will take shape in a far wider range of use cases, including traditional and social media, video communications, and any place where people’s faces are presented through digital channels.

With this in mind, we make a few broad assertions about deepfake detection. AI-based detection technologies will play a critical role in helping to assert authentic identity. Deepfakes and synthetic face generators are already more advanced than most people’s ability to discern them from reality.

Automated analysis alone will not be sufficient to protect the public from the harms of fraudulent presented identities. Human-in-the-loop analysis, human dissemination of automated results, and public discussion (to stimulate awareness of generic and specific threats) will be critical complements to automated detection technology.
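One simple way to operationalise human-in-the-loop analysis is score triage: a detector's confidence score drives automated accept/reject decisions only at the extremes, while the uncertain middle band is escalated to a human analyst. The thresholds and function names below are purely illustrative.

```python
def triage(score: float, reject_below: float = 0.2, accept_above: float = 0.9) -> str:
    """Route a detector's authenticity score (0.0 = fake, 1.0 = genuine).

    Confident decisions are automated; the uncertain middle band is
    escalated for human review (thresholds are illustrative only).
    """
    if score < reject_below:
        return "reject"        # confidently fraudulent
    if score > accept_above:
        return "accept"        # confidently genuine
    return "human_review"      # uncertain: human-in-the-loop
```

Narrowing or widening the middle band trades analyst workload against the risk of automated mistakes, and the thresholds can be tuned per use case.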

Cryptographic and related approaches that help ensure the provenance of data sources will play an important role in helping to support authentic data sources. Broad industry consortia have already been formed to begin addressing this issue, such as https://c2pa.org/ and https://contentauthenticity.org/.
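The provenance approaches pursued by consortia such as C2PA bind signed claims (capture device, edit history) to the exact bytes of a media asset, so any tampering invalidates the record. The sketch below conveys the shape of such a manifest only; C2PA itself uses X.509 certificates and asymmetric signatures, for which the shared-secret HMAC here is a stand-in, and all names are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in; real systems use asymmetric keys

def make_manifest(media_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to the exact bytes of a media asset."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "claims": claims,  # e.g. capture device, edits applied
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the asset is unmodified."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest.get("signature", ""))
            and unsigned["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())
```

A single altered pixel changes the content hash, so the manifest no longer verifies against the modified asset.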

Nevertheless, there will be a constant ‘hill-climbing’ issue as is often seen in cybersecurity. New attack vectors can be expected to constantly emerge along with new detection and protection techniques.

Paravision’s approach to Authentic Identity

At Paravision, we look at authentic identity holistically: authentication of real identity and detection of fraudulent identity, in both the physical and digital spaces. We have products available that perform advanced Presentation Attack Detection, and in conjunction with trusted government partners, we are actively developing products to detect the wide range of digital face manipulations, including, but not limited to, deepfakes. There may be nuanced differences between physical and digital presentation attacks, and so our philosophy is to provide tools to detect attacks and ensure provenance across all domains.

Faces have always been the first line of determining identity, and with recent advances in AI, face recognition has emerged as a very capable tool for biometric matching. Combining best-in-class face recognition technology with Presentation Attack Detection, deepfake detection, and related technology can help to ensure authenticity in cases where automated authentication is key. Meanwhile, in applications where automated face recognition may not be necessary, these detection technologies can be used to ensure trusted communications and news sources and the protection of privacy and human rights.

Our goal is to provide a trust layer in the physical and digital worlds, to power authentic identity, and to protect against malicious actors, fundamentally supported by an understanding of truth and reality in presented identity.

Find out more at https://paravision.ai/HID

This paper has been shortened, the full version is available at https://www.hidglobal.com/sites/default/files/documentlibrary/Authentic_Identity_WhitePaper_Paravision_HID.pdf (or use the short link: www.securitysa.com/*hid7).

Resources

[1] https://www.iso.org/standard/67381.html

[2] https://link.springer.com/book/10.1007/978-3-030-87664-7

[3] https://www.youtube.com/watch?v=nwOywe7xLhs

[4] https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia

[5] https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles

[6] https://www.theverge.com/2022/8/23/23318053/binance-comms-crypto-chief-deepfake-scam-claim-patrick-hillmann

