Authentic identity

SMART Access & Identity 2024 | Access Control & Identity Management

If we reach back into history, the notion of provable, trusted identity was limited to people who were well known or could be vouched for by another trusted individual. Over time, as our world became more connected, the notion of identity documents like passports and driver’s licenses was developed, and these documents were made more secure through physical features, standard formatting, and other factors.

However, as the world has become both thoroughly global and digital, with goods and services exchanged across borders and without any in-person interaction, traditional means of confirming authentic identity, and of understanding what is real and what is fake, have become impractical.

Today, automated identity checks have become critical, and often rely on the most fundamental approach to identity: recognising someone by their face. In air travel, Automated Border Clearance has become the norm; in banking, eKYC and ID Verification are critical enablers for digital banks and cryptocurrencies; in physical security, face-as-ID is emerging as a compelling alternative to access cards. In other areas, such as social media and video conferencing, strong identity has not been fully embraced, but the need is apparent; users often accept what they see at face value, without any notion of authentication.

Meanwhile, even as the need for remote, automated identity verification and the use of digital video for media and communications are skyrocketing, the tools to falsify an identity have become easier to access, and the results have become more compelling.

Seeing is no longer believing. Presented identities can no longer be expected to be authentic identities. The technology to create hyper-realistic synthetic face imagery is now widely available, and in many cases, it is impossible for people to distinguish real from fake. This creates risks for democracy, national security, business, human rights, and personal privacy.

In this paper, we will explore the specific challenges to authentic identity in automated identity verification use cases, as well as applications where we conventionally accept faces as real, and perhaps should no longer do so. We will also dive into what can be done to support authenticity and detect attempts to undermine it.

What is Authentic Identity?

With the relative ease of creating physical reproductions or digital manipulations, matching one face to another with highly accurate face recognition is not enough to prove that a presented identity is authentic. Authentic identity is a collection of technologies, systems, policies, and processes to create trust around identity in both physical and digital domains.

A focus on faces

No doubt, identity can be established, authenticated, or undermined with factors that go well beyond our faces. Here, however, we will focus specifically on faces. Our foundational human reliance on the face for identity, the emergence of face recognition as the dominant biometric modality in many applications, and the importance of faces in video for establishing trust in small groups or public communications all demand this special focus.


Identity in the modern world

The implicit questions of “Who are you?” and “Can I trust you?” span a number of distinct domains. These include:

1. Identification and authentication. Remote or in-person, the goal of identification and authentication is to confirm that someone is who they say they are, whether for entry to a building, access to a bank account, login to a web service, or travel into a country (see the sketch after this list). The use cases are very broad by nature and have historically been addressed by some combination of authentication factors (i.e., something you know, something you have, and something you are).

2. Traditional and social media. Historically, identity has been implicitly authentic in media: you see a broadcaster on television, and you believe they are real; you believe that what they are showing or saying is real. However, as traditional media has been augmented or displaced by social media, the means of production and distribution have been decentralised and misinformation and disinformation have been weaponised; identity presented in media can no longer be implicitly accepted.

3. Communications. Again, the notion of identity has historically been implicit in many aspects of communications where identification and authentication were not explicit requirements (as they are, for instance, when calling a bank). The rise of hybrid work and video conferencing during the COVID-19 pandemic, alongside powerful new AI technologies, argues for a new approach to identity in communications.
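To make the factor combination in point 1 concrete, the sketch below (in Python) shows a simple two-of-three policy; the factor names, results, and policy are purely illustrative and do not describe any particular product.

```python
from dataclasses import dataclass

@dataclass
class FactorResult:
    """Outcome of checking a single authentication factor."""
    name: str      # e.g. "PIN", "hardware token", "face match"
    passed: bool

def authenticate(factors: list[FactorResult], minimum: int = 2) -> bool:
    """Grant access only if at least `minimum` independent factors pass."""
    return sum(1 for f in factors if f.passed) >= minimum

# Example: something you know and something you are pass; the token is absent.
checks = [
    FactorResult("PIN (something you know)", True),
    FactorResult("hardware token (something you have)", False),
    FactorResult("face match (something you are)", True),
]
print(authenticate(checks))  # True: two of the three factors passed
```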

Work, banking, travel, news, and entertainment all rely on identity, and so a strategy for authentic identity should be considered in order to deliver trusted results.

Challenges to trust

In order to properly understand the challenges to establishing trust in presented identities, we must consider threats in both the physical world and the digital world.

Physical world: Presentation Attacks

Broadly speaking, challenges to biometric identity in the physical world are referred to as Presentation Attacks (also known as ‘Spoofs’). These direct attacks can subvert a biometric system by using tools called presentation attack instruments (PAIs). Examples of such instruments include photographs, masks, fake silicone fingerprints, or video replays.

Presentation attacks pose serious challenges across all major real-time biometric modalities (such as face, fingerprint, hand vein, and iris). As noted above, we will focus on face recognition-based presentation attacks.

ISO 30107-3[1] defines three requirements that a PAI must fulfil: it must appear genuine to any Presentation Attack Detection mechanisms, appear genuine to any biometric data quality checks, and contain extractable features that match the targeted individual.
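Viewed from the defender’s side, these three requirements map to three gates that a verification pipeline can apply. The sketch below is a minimal illustration with placeholder scores and thresholds; real systems calibrate these per sensor, use case, and assurance level.

```python
def verify_presentation(pad_score: float,
                        quality_score: float,
                        match_score: float,
                        pad_threshold: float = 0.5,
                        quality_threshold: float = 0.6,
                        match_threshold: float = 0.7) -> str:
    """Mirror the three PAI requirements from the defender's side.

    A presentation is accepted only if it (1) is not flagged as an attack,
    (2) passes biometric quality checks, and (3) matches the claimed identity.
    All scores and thresholds here are placeholders for illustration.
    """
    if pad_score >= pad_threshold:
        return "reject: presentation attack suspected"
    if quality_score < quality_threshold:
        return "reject: insufficient biometric quality"
    if match_score < match_threshold:
        return "reject: no match to claimed identity"
    return "accept"

print(verify_presentation(pad_score=0.1, quality_score=0.8, match_score=0.9))
```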

In practical applications, it is useful to establish a hierarchy in the sophistication and complexity of presentation attacks, which is beyond the scope of ISO 30107-3.

Notably, iBeta (https://www.ibeta.com/biometric-testing/) and the FIDO Alliance (https://fidoalliance.org/) have established a three-level presentation attack sophistication hierarchy.


An image of a non-existent person from https://thispersondoesnotexist.com/

Digital world: Deepfakes and beyond

The term ‘deepfake’ has become a popular way to describe any digital face manipulation, although exactly what constitutes a deepfake may be argued. Broadly speaking, as defined in the Springer Handbook of Digital Face Manipulation and Detection (2022)[2], there are six main categories of digital face manipulation relevant to this discussion (a simple sketch of the resulting taxonomy follows the list):

1. Identity swap.

2. Expression swap.

3. Audio- and text-to-video.

4. Entire face synthesis.

5. Face morphing (merging two faces into a single image).

6. Attribute manipulation (synthetically adding features such as eyeglasses, headwear, hair, or otherwise to source images).

We would also add a 7th category:

7. Adversarial template encoding (invisible integration of template information from one face into the image of another face; this is related to, but separate from, face morphing).
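As a simple illustration of how this taxonomy might be handled in software, the sketch below encodes the seven categories as labels a hypothetical detector could attach to its output; the detector and its confidence value are assumptions, not part of the handbook’s definitions.

```python
from enum import Enum, auto

class FaceManipulation(Enum):
    """The seven manipulation categories discussed above, as machine-readable labels."""
    IDENTITY_SWAP = auto()
    EXPRESSION_SWAP = auto()
    AUDIO_TEXT_TO_VIDEO = auto()
    ENTIRE_FACE_SYNTHESIS = auto()
    FACE_MORPHING = auto()
    ATTRIBUTE_MANIPULATION = auto()
    ADVERSARIAL_TEMPLATE_ENCODING = auto()

def label_detection(category: FaceManipulation, confidence: float) -> str:
    """Format a hypothetical detector output for downstream review."""
    return f"{category.name.lower()} suspected (confidence {confidence:.2f})"

print(label_detection(FaceManipulation.FACE_MORPHING, 0.87))
```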

Each of these can undermine trust in a presented identity, and we are already beginning to see them play out in public. Perhaps the most broadly known digital face manipulation, DeepTomCruise, set the standard for identity swaps, adding actor Tom Cruise’s face to videos of another person closely resembling him in a way that is largely indistinguishable from reality [3].

In March 2022, a faked video of Ukrainian president Volodymyr Zelenskyy appearing to tell his soldiers to lay down arms and surrender to Russia was widely distributed. It was quickly debunked, but set the stage for more sophisticated political deepfakes[4].

Social media is not immune. In March 2022, it was reported that thousands of LinkedIn profiles had been fraudulently created using synthetic faces of the type found (for instance) on https://thispersondoesnotexist.com[5]. In August 2022, the chief communications officer of Binance (the world’s largest crypto exchange) reported that hackers had used a deepfake of him to fool investors in live Zoom meetings[6]. His account has not been verified, but the case reinforces the insidious nature of misinformation: it is becoming increasingly difficult to distinguish reality from fiction.

In addition to these digital face manipulations, the digital world is also prone to cyberattacks. Most specifically, the risk of injection or replay attacks is very real. In this case, data collected from an originally authentic user is replayed at the data level (as opposed to in the physical space or digital image space). Here, it is critical to ensure the provenance of data, and that the data being communicated is real, live, and non-replicable.
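One way to make captured data live and non-replicable is a challenge-response scheme: the server issues a one-time nonce, and a trusted capture component binds the captured frames to that nonce with a keyed MAC. The sketch below is illustrative only; the shared key, trusted capture device, and in-memory replay cache are assumptions, and production systems typically use asymmetric signatures and secure hardware instead.

```python
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned to a trusted capture device
SEEN_NONCES: set[str] = set()          # server-side replay cache

def issue_challenge() -> str:
    """Server issues a one-time nonce before capture begins."""
    return secrets.token_hex(16)

def sign_capture(frame_bytes: bytes, nonce: str) -> str:
    """Trusted capture device binds the frame data to the nonce."""
    return hmac.new(SHARED_KEY, nonce.encode() + frame_bytes,
                    hashlib.sha256).hexdigest()

def verify_capture(frame_bytes: bytes, nonce: str, tag: str) -> bool:
    """Server accepts a capture only if the tag is valid and the nonce is fresh."""
    if nonce in SEEN_NONCES:
        return False  # replayed submission
    expected = hmac.new(SHARED_KEY, nonce.encode() + frame_bytes,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # injected or tampered data
    SEEN_NONCES.add(nonce)
    return True

nonce = issue_challenge()
frame = b"...raw image bytes from the camera..."
tag = sign_capture(frame, nonce)
print(verify_capture(frame, nonce, tag))   # True
print(verify_capture(frame, nonce, tag))   # False: nonce already used
```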

Ensuring authentic identity

At this point, the challenges posed to authentic identity may seem overwhelming in both the physical and digital spaces. Let us consider the opportunities for attack detection and prevention.

Presentation Attack Detection

In the physical world, there is a wide range of available technologies for Presentation Attack Detection (PAD), using a combination of advanced AI detection methods as well as multi-spectral imaging, depth sensing, and other software- and sensor-level technologies. As noted above, ISO 30107 codifies PAD, and global test labs offer technology certification. NIST FRVT is now planning a new testing track on PAD as well, which will help foster transparency and stimulate continued technology development. For more information on PAD, please also see Paravision’s white paper, An Introduction to Presentation Attack Detection (available at https://www.paravision.ai/whitepaper-an-introduction-to-presentation-attack-detection/, or via the short link: www.securitysa.com/*paravision1).
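As a rough illustration of how signals from different detection methods and sensors might be combined, the sketch below fuses per-method spoof scores with a fixed weighted average; the method names, weights, and decision threshold are placeholders, and practical PAD systems generally learn their fusion from labelled attack data.

```python
def fused_pad_score(scores, weights=None):
    """Weighted fusion of per-method spoof scores (0 = genuine, 1 = attack).

    Method names, weights, and the 0.5 threshold below are illustrative;
    real PAD systems calibrate fusion on labelled attack data and may use
    a learned model rather than a fixed weighted average.
    """
    weights = weights or {"rgb_ai": 0.5, "near_infrared": 0.3, "depth": 0.2}
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

scores = {"rgb_ai": 0.12, "near_infrared": 0.05, "depth": 0.30}
print("attack" if fused_pad_score(scores) >= 0.5 else "genuine")  # genuine
```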

Digital Face Manipulation (‘Deepfake’) Detection

Digital face manipulation is a much newer threat to authentic identity. While PAD largely concerns identification and authentication applications, digital face manipulations such as deepfakes will appear in a far wider range of use cases, including traditional and social media, video communications, and anywhere people’s faces are presented through digital channels.

With this in mind, we make a few broad assertions about deepfake detection. AI-based detection technologies will play a critical role in helping to assert authentic identity. Deepfakes and synthetic face generators are already more advanced than most people’s ability to discern them from reality.

Automated analysis alone will not be sufficient to protect the public from the harms of fraudulently presented identities. Human-in-the-loop analysis, dissemination of automated results, and public discussion (to stimulate awareness of generic and specific threats) will be critical complements to automated detection technology.
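One simple way to combine automated and human analysis is score-based triage: confident automated decisions are applied directly, while the uncertain middle band is routed to human analysts. The thresholds in the sketch below are illustrative assumptions only.

```python
def triage(deepfake_score: float,
           reject_above: float = 0.9,
           accept_below: float = 0.1) -> str:
    """Route an automated deepfake score to a decision or to human review.

    Thresholds are illustrative; in practice they are tuned to the cost of
    false accepts versus false rejects, and the middle band is where
    human-in-the-loop analysis adds the most value.
    """
    if deepfake_score >= reject_above:
        return "auto-reject"
    if deepfake_score <= accept_below:
        return "auto-accept"
    return "send to human review"

for score in (0.03, 0.55, 0.97):
    print(score, "->", triage(score))
```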

Cryptographic and related approaches that help ensure the provenance of data will also play an important role in supporting authentic data sources. Broad industry consortia, such as the C2PA (https://c2pa.org/) and the Content Authenticity Initiative (https://contentauthenticity.org/), have already been formed to begin addressing this issue.
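The core idea behind such provenance schemes can be illustrated with a toy manifest that binds a content hash to a creator. This is not the C2PA specification or API; real manifests are additionally signed with the creator’s key and anchored in a chain of trust, so the manifest itself cannot be forged.

```python
import hashlib, json

def content_hash(media_bytes: bytes) -> str:
    """Hash the media payload so any later edit changes the digest."""
    return hashlib.sha256(media_bytes).hexdigest()

def make_manifest(media_bytes: bytes, creator: str) -> dict:
    """A toy provenance record; real manifests are signed by the creator's key."""
    return {"creator": creator, "sha256": content_hash(media_bytes)}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the media received matches the digest in the manifest."""
    return content_hash(media_bytes) == manifest["sha256"]

video = b"...original video bytes..."
manifest = make_manifest(video, creator="newsroom@example.org")
print(verify_manifest(video, manifest))                # True
print(verify_manifest(video + b"tampered", manifest))  # False
print(json.dumps(manifest, indent=2))
```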

Nevertheless, there will be a constant ‘hill-climbing’ contest, as is often seen in cybersecurity: new attack vectors can be expected to emerge continually, alongside new detection and protection techniques.

Paravision’s approach to Authentic Identity

At Paravision, we look at authentic identity holistically: authentication of real identity and detection of fraudulent identity, in both the physical and digital spaces. We have products available that perform advanced Presentation Attack Detection, and in conjunction with trusted government partners, we are actively developing products to detect the wide range of digital face manipulations, including, but not limited to, deepfakes. There may be nuanced differences between physical and digital presentation attacks, so our philosophy is to provide tools that detect attacks and ensure provenance across all domains.

Faces have always been our first means of determining identity, and with recent advances in AI, face recognition has emerged as a very capable tool for biometric matching. Combining best-in-class face recognition technology with Presentation Attack Detection, deepfake detection, and related technologies can help ensure authenticity in cases where automated authentication is key. Meanwhile, in applications where automated face recognition may not be necessary, these detection technologies can be used to ensure trusted communications and news sources and to protect privacy and human rights.

Our goal is to provide a trust layer in the physical and digital worlds, to power authentic identity, and to protect against malicious actors, fundamentally supported by an understanding of truth and reality in presented identity.

Find out more at https://paravision.ai/HID

This paper has been shortened; the full version is available at https://www.hidglobal.com/sites/default/files/documentlibrary/Authentic_Identity_WhitePaper_Paravision_HID.pdf (or use the short link: www.securitysa.com/*hid7).

Resources

[1] https://www.iso.org/standard/67381.html

[2] https://link.springer.com/book/10.1007/978-3-030-87664-7

[3] https://www.youtube.com/watch?v=nwOywe7xLhs

[4] https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia

[5] https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles

[6] https://www.theverge.com/2022/8/23/23318053/binance-comms-crypto-chief-deepfake-scam-claim-patrick-hillmann

