The Dangerous Deepfake Duo: Presentation & Injection Attacks



Explore how deepfake presentation and injection attacks challenge modern ID authentication, revealing how evolving synthetic media techniques reshape digital trust and security.

In document authentication and biometric liveness applied in the mobile, online context, the rise of deepfake technology has ushered in a host of new challenges and concerns. While we may have become desensitised to the prevalence and impact of deepfakes, it is crucial to grasp the distinction between two prominent identity threats that lurk beneath the surface: presentation attacks and injection attacks.
By way of brief introduction to each:

A presentation attack is when someone shows something fake to the camera lens - like a printed ID, a deepfake video played on a screen, a mask, or a static selfie.

The system has to decide whether the object in front of the camera is real, live, and physically present.

Under traditional definitions, the focus of deepfake presentation attacks has been on biometric traits, used to mimic a legitimate user. The surge of fraudulent ID document attack types means that presentation attacks now apply to ID documents too, whether a physical driver's licence or a synthetically generated document.

An injection attack is when a fraudster feeds synthetic, pre-recorded, or manipulated content directly into the system - skipping the real camera feed entirely.

Instead of pointing a camera at a document or face, they push a digital file into the feed, so that the system perceives the injected content as a real capture.

Here too, injection attacks have previously centred on the human liveness aspect of the IDV journey. As with presentation attacks, the explosion of ID document fraud in the onboarding flow requires broadening the injection threat to cover document analysis too.

Presentation attacks: The art of mimicry

A presentation attack in document and human liveness detection involves the deliberate attempt to deceive document and biometric systems by mimicking legitimate physical traits.

In the context of document attacks, this can be achieved by presenting physical ID documents that contain forged security features, or by using Generative AI (Gen AI) technology to create a computer-generated image of an ID document convincing enough to confuse even the most seasoned national immigration security officer. These forgeries are delicately dusted with accurate personal attributes (such as name, address, and ID number) pulled from the near-daily data breaches that most of us no longer notice.

For human liveness, a presentation attack is launched through various means, such as presenting photos, masks, videos, or even sophisticated 3D models to the IDV detection system. Here too, Gen AI can create highly realistic deepfakes, which can be used to perform presentation attacks by showing manipulated videos of legitimate users on another device's screen. These deepfakes can imitate facial expressions, voice patterns, and other biometric traits, making it challenging for standard liveness detection systems to differentiate between real and synthetic inputs.

Common to both document and human presentation attacks is the goal: bypassing security measures by creating an illusion of the genuine presence of the user and document.

Compared to an injection attack, manually spotting a human deepfake presentation attack is a bit easier as the attacker needs to hold another device to display the deepfake video. The physical act of holding and positioning the device often introduces inconsistencies, such as unnatural angles or reflections, that can be recognised by advanced liveness detection systems.

Additionally, the interaction between the device and the environment, like lighting and movement, can further reveal the deception, making it less convincing and easier to identify as a fake.
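To make one of those cues concrete, below is a deliberately simplified sketch we have added for illustration (it is not a description of any particular vendor's detector). A video replayed on a screen often produces moiré interference between the display's pixel grid and the camera sensor, which concentrates energy in the mid-to-high-frequency band of a frame's spectrum. The band limits here are assumptions chosen for readability, not tuned values.

# Toy illustration only: screen-replay presentation attacks often introduce
# moiré patterns (interference between a display's pixel grid and the camera
# sensor), which show up as unusual energy in the mid/high-frequency band of
# the image spectrum. Assumes OpenCV and NumPy are installed; the band limits
# are illustrative assumptions.
import cv2
import numpy as np

def moire_energy_ratio(image_path: str, low: float = 0.25, high: float = 0.75) -> float:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance of each frequency bin from the spectrum centre (0 to 1).
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    band = (radius >= low) & (radius <= high)
    return float(spectrum[band].sum() / spectrum.sum())

if __name__ == "__main__":
    # 'selfie_frame.png' is a placeholder path for a captured frame.
    print(f"mid/high-band energy ratio: {moire_energy_ratio('selfie_frame.png'):.3f}")

In practice a check like this would only ever be one weak signal among many; production presentation attack detection combines texture, depth, reflection and temporal cues, typically with trained models rather than hand-set thresholds.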

By comparison, if the document authentication process permits the upload of an ID document image (e.g. by file selection), manually spotting a deepfake presentation has become far more challenging, particularly with the latest AI image generation technology. Recent studies in which participants were asked to distinguish authentic from deepfake images report accuracy of only 50 - 60%. That coin-flip success rate highlights how easily humans are deceived when shown genuine and deepfake images, and the gap is widening as deepfake quality improves with continued learning and more powerful AI systems.

Injection attacks: Breaching the digital link

In contrast, injection attacks involve the introduction of manipulated data directly into the document or liveness feed, compromising the inherent integrity of the process. Instead of altering the external presentation of a document or an individual, this method infiltrates the internal data used for authentication.

An injection attack, where false data is introduced into a system to deceive it, is not a new style of attack. But with the advancements in Gen AI and deepfakes, the risk to document authentication and biometric systems has significantly increased.

Deepfakes can create highly convincing synthetic documents and biometric data that, when injected into a system, elevate the threat level, making it imperative to develop more robust detection and prevention strategies. As highlighted above, these deepfakes can be either an image of a fabricated identity document or a video that matches the portrait of a real person on an identity document.

Synthetic IDs: It has your name on it

Whilst people are generally aware of deepfake videos, they are mostly in the dark about the attacks that target the document authentication process. How accessible are these deepfake document forgeries? Very, and not just on the dark web.

There are hundreds of online sites offering simple, cheap, web-based services covering a range of ID document types. Choice is everything. Do you prefer a UK driver's licence, a US passport, or a New South Wales driver's licence? All nicely matched to the geographic locale of the personal data you have to hand, sourced from that data breach. Some face morphing of the portrait image on the ID document (which fools some IDV systems into matching the "selfie" to the morphed image) is the final touch to try to sneak through that pesky IDV challenge. Alternatively, you can add your real face to a Gen AI-generated document to ensure you pass the biometric liveness part of the process.

These template farms or document mills are not obscure dark web sites accessible only to the technically hyper-literate armed with a Tor address; they are indexed and just a search term away. And while some sites are shut down following police investigation, this is a game of Whack-a-Mole, with a new site opening shortly thereafter.

The interplay between attack types

Understanding the differences between presentation and injection attacks is crucial, but it is equally important to recognise their interplay. Sophisticated attackers employ a combination of these techniques to create a comprehensive and convincing false identity.

Applying the Standards

There are a number of international standards against which a vendor's document authentication or biometric liveness system can be independently assessed.

Presentation Attack Detection
ISO/IEC 30107-3: This is the primary global standard defining the principles and methods for performance assessment of presentation attack detection mechanisms. It categorises attacks into:
  • Level 1 (Basic): Attacks using low-cost, readily available artifacts (e.g., paper photos, videos on a smartphone screen).
  • Level 2 (Advanced): Attacks using more sophisticated, custom-made artifacts (e.g., latex masks, prosthetic makeup, video with depth).
FIDO Alliance Biometric Component Certification: Uses the ISO 30107 framework to certify biometric sub-components.

Injection Attack Detection
CEN/TS 18099 (EU): Currently the only published, dedicated standard specifically for Biometric Data Injection. It distinguishes "injection" from "presentation" and provides a methodology for testing resistance against virtual camera drivers, API tampering, and intercepted video streams.

ISO/IEC 25456 (Upcoming): The international standard currently in development that may eventually supersede or globalise CEN/TS 18099.

Of the two areas above, testing for presentation attacks is the more mature. Injection attack certification is in its nascent stage, with CEN/TS 18099 approved only in October 2024.
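To give a flavour of what injection attack testing probes, consider the virtual camera vector named in CEN/TS 18099. The following is a minimal, Linux-only sketch we have added for illustration: it lists V4L2 capture devices whose reported names resemble common virtual camera drivers. The marker strings are our own assumption, a name match is trivially evaded, and real injection attack detection goes much further (API tampering, intercepted streams, payload integrity).

# Minimal, Linux/V4L2-only sketch: one low-effort injection vector is a virtual
# camera driver (e.g. v4l2loopback or OBS Virtual Camera) exposing a pre-recorded
# or synthetic stream as if it were hardware. The marker strings below are
# illustrative assumptions, not an authoritative list.
from pathlib import Path

SUSPECT_MARKERS = ("v4l2loopback", "v4l2 loopback", "obs virtual camera", "virtual")

def suspicious_video_devices() -> list[tuple[str, str]]:
    """Return (device path, reported name) pairs whose driver name looks virtual."""
    hits = []
    for name_file in Path("/sys/class/video4linux").glob("video*/name"):
        reported = name_file.read_text().strip()
        if any(marker in reported.lower() for marker in SUSPECT_MARKERS):
            hits.append((f"/dev/{name_file.parent.name}", reported))
    return hits

if __name__ == "__main__":
    for device, reported in suspicious_video_devices():
        print(f"{device}: reported name '{reported}' looks like a virtual camera")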

Critical to the authority of a certification against any of the above standards is the accreditation of the testing laboratory - a "grading the grader" consideration. Credible laboratories include those accredited by NIST NVLAP or an equivalent government scheme. There should also be inquiry into the level of certification achieved by the vendor (for example, the ISO 30107 Level 1 and Level 2 distinctions mentioned above).

So have we lost?

Winning or losing is a team decision. It depends on some baseline choices about how your team plays the game. Do you accept image file uploads of ID documents? Do you allow end users to perform an IDV transaction from a desktop? If so, you have given the other team a head start that you are unlikely to close.

It depends too on the age of the players on your team. While a cross-section of experience is certainly a credible asset in the implementation, servicing and support of a solution, when it comes to the effectiveness of the core detection technology, yesterday's tools do not work.

For context, video is now the dominant medium. Accepting single still images into onboarding flows, and testing those one-shot frames, is about as useful as a candle for navigating a forest on a stormy night. Video-based capture and testing is the gold standard for detecting document and human liveness presentation attacks, as well as the sneaky, hidden injection attacks.
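To illustrate why video helps, here is a toy example of our own (not a description of any certified product). A live capture shows small but continuous frame-to-frame variation from head micro-motion, sensor noise and lighting, whereas a presented photo or a frozen injected feed tends to be unnaturally static. A crude triage signal can measure the mean inter-frame difference over a short clip; the file name and threshold below are placeholders, not tuned values.

# Rough sketch: live captures exhibit small, continuous frame-to-frame variation,
# while a presented photo or a frozen/looped injected clip tends to be unnaturally
# static. Assumes OpenCV and NumPy; 'capture.mp4' and the 0.5 threshold are
# placeholder assumptions.
import cv2
import numpy as np

def mean_frame_motion(video_path: str, max_frames: int = 150) -> float:
    cap = cv2.VideoCapture(video_path)
    prev, diffs = None, []
    while len(diffs) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diffs.append(float(np.mean(np.abs(gray - prev))))
        prev = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

if __name__ == "__main__":
    motion = mean_frame_motion("capture.mp4")
    print(f"mean inter-frame difference: {motion:.2f}")
    if motion < 0.5:
        print("capture is suspiciously static - possible photo or frozen feed")

A signal like this is a triage cue, not a verdict; certified systems layer challenge-response, device integrity checks and model-based analysis on top of the raw video.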

Maintaining a watchful eye

The world of identity verification is facing unprecedented challenges with the proliferation of Gen AI technology. Preparedness is more than discerning the nuances between presentation and injection attacks. It is also about understanding the changing nature of the fraud threat landscape itself and evolving to defend against it. By staying apprised of how bad actors are evolving their fraud techniques, we are better equipped to design and implement effective defences, thereby ensuring the integrity of digital identities in an increasingly deceptive online world.
