Remember the scene in Ocean’s Eleven when Danny Ocean’s heist team tricks the casino’s security system by injecting a pre-recorded video feed of an undisturbed vault into the live feed? The security team has no idea that the heist is in full swing. This moment highlights why organizations need to know how to prevent video injection attacks.
Unfortunately, Hollywood didn’t dream up video injection attacks. They are very real, and they are on the rise. Detecting injected feeds in surveillance and data systems is an essential part of prevention. Let’s delve into the methods behind video injection detection and explore how to prevent video injection attacks so that life doesn’t imitate art in our most secure spaces.
How to Prevent Video Injection Attacks: Start With Understanding
Video injection attacks are cyber assaults where unauthorized video is injected into a surveillance system or data stream. More specifically, fraudulent data streams are inserted between the capture device (the sensor) and the biometric feature extractor during identity verification. The goal is to establish a fraudulent identity by manipulating the video feed.
The threat of video injection attacks is particularly relevant in KYC (Know Your Customer) systems used in the financial services industry, where biometric data, such as video frames of a person’s face, is compared against an identity document to verify identity.
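To make that pipeline concrete, here is a minimal TypeScript sketch of the matching step. The `extractFaceEmbedding()` helper is hypothetical, standing in for a real biometric feature extractor, and the similarity threshold is illustrative only.

```typescript
// Minimal sketch of the selfie-vs-ID matching step in a KYC flow.
// extractFaceEmbedding() is a hypothetical helper standing in for a real
// biometric feature extractor; the threshold below is illustrative only.

type Embedding = number[];

declare function extractFaceEmbedding(image: ImageBitmap): Promise<Embedding>;

function cosineSimilarity(a: Embedding, b: Embedding): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A video injection attack replaces `selfieFrame` before it ever reaches the
// feature extractor, so the match below can succeed on fraudulent input.
async function verifyIdentity(
  selfieFrame: ImageBitmap,
  idPortrait: ImageBitmap
): Promise<boolean> {
  const [selfieEmbedding, idEmbedding] = await Promise.all([
    extractFaceEmbedding(selfieFrame),
    extractFaceEmbedding(idPortrait),
  ]);
  return cosineSimilarity(selfieEmbedding, idEmbedding) >= 0.8; // illustrative threshold
}
```

The point of the sketch is where the attack lands: everything downstream of the injected frame behaves exactly as designed.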
Video Injection Attacks: A Delivery Mechanism for Deepfakes
Since deepfakes are digital artifacts, video injection attacks have become one of the common methods for delivering deepfakes into KYC systems, thereby posing serious risks to identity verification processes. These attacks introduce AI-generated videos into data streams, manipulating systems to authenticate fraudulent identities.
While deepfake technology was once a topic of debate, it’s now a common tool in cybersecurity attacks. Attackers create highly realistic synthetic videos to trick systems into authenticating false identities, and deepfake volume has grown exponentially in recent years. In 2023, there were an estimated 95,820 deepfake videos, marking a 550% increase from 2019. This growth poses a significant challenge to both active and passive verification systems.
Video Injection Attacks Can Occur Across Many Different Vectors
Fraudsters use many video injection vectors to spoof facial recognition in remote onboarding and KYC systems. Here are some of the most common ones:
- Virtual cameras, using apps like ManyCam to stream pre-recorded or deepfake videos, are widely used for fraud. Fraudsters often rename these virtual cameras to mimic physical ones and may manipulate web browser functions to favor them (see the detection sketch after this list).
- Hardware video sticks, which connect via USB, capture digital video streams, such as those from another device’s screen. Operating systems recognize them as standard USB cameras.
- JavaScript injection is an issue for web-based KYC. Malicious JavaScript code injected into a browser can alter or substitute the video feed, tricking the verification process.
- Smartphone emulators, used mainly in mobile app development, can replicate a real smartphone’s functionality. Attackers use them to run apps and fake video streams, bypassing security checks meant for actual devices.
- Network traffic interception accesses and alters video data during transmission so attackers can replace legitimate video feeds with fake ones. This is particularly risky on public or insecure networks without encryption.
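As referenced above, here is a minimal browser-side TypeScript sketch of two common heuristics aimed at the virtual-camera and JavaScript-injection vectors: flagging known virtual-camera labels and checking whether `getUserMedia` has been monkey-patched. The label list is an illustrative sample, and both checks can be evaded by a determined attacker, so they should complement server-side injection detection rather than replace it.

```typescript
// Browser-side heuristics only; they raise the bar but can be evaded and
// should complement, not replace, server-side injection detection.
// The label list is an illustrative sample, not an exhaustive blocklist.

const VIRTUAL_CAMERA_HINTS = ['manycam', 'obs virtual', 'snap camera', 'virtual'];

async function findSuspiciousCameras(): Promise<string[]> {
  // Device labels are only populated after the user grants camera permission.
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices
    .filter((d) => d.kind === 'videoinput')
    .map((d) => d.label.toLowerCase())
    .filter((label) => VIRTUAL_CAMERA_HINTS.some((hint) => label.includes(hint)));
}

function getUserMediaLooksTampered(): boolean {
  // Injected JavaScript often monkey-patches getUserMedia to substitute a fake
  // stream; a patched function usually no longer stringifies to native code.
  // Legitimate in-page wrappers can also trigger this, so treat it as a signal.
  const source = Function.prototype.toString.call(navigator.mediaDevices.getUserMedia);
  return !source.includes('[native code]');
}
```

Note that a renamed virtual camera defeats the label heuristic outright, which is exactly why the vulnerabilities discussed in the next section matter.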
Vulnerabilities in Most KYC Systems Make It Difficult to Prevent Video Injection Attacks
KYC systems are proficient at detecting standard presentation attacks, guided by well-established standards and certified by organizations such as iBeta Quality Assurance Lab. Yet video injection attacks remain a weak link, for several reasons.
| Vulnerabilities | What’s at Stake for KYC |
| --- | --- |
| Standard Recognition Protocols | KYC systems commonly identify cameras using device names or identifiers from the operating system. But they often can’t differentiate between a real physical camera and a skillfully configured virtual one. |
| Lack of Physical Hardware Verification | Regular practices in KYC development focus on securing data transmission and encrypting communications. Yet these measures generally don’t verify the physical authenticity of the hardware, and systems can be misled by a virtual camera that convincingly emulates a real one. |
| Inadequate Anomaly Detection | Most KYC systems can spot irregularities in user behavior or data transmission. However, they fall short in verifying the source of the video feed, which allows virtual cameras to go unnoticed. |
| Encryption and Obfuscation Limitations | While encryption effectively secures data during transfer, it doesn’t authenticate the data’s origin. Similarly, JavaScript obfuscation protects web-based KYC systems from code tampering but doesn’t prevent manipulation of the video feed before it arrives in the browser. |
| Deepfake and Synthetic Video Challenges | The high fidelity of modern synthetic videos makes them hard to distinguish from authentic feeds, even for systems that can detect anomalies in video content. |
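The “Lack of Physical Hardware Verification” row is worth making concrete. The sketch below uses only standard WebRTC calls to gather everything a web-based KYC client can learn about its video source. All of it is self-reported by the operating system and driver, so a well-configured virtual camera yields metadata indistinguishable from a physical one’s, and transport encryption only protects the stream after this point.

```typescript
// What a web KYC client can actually see about its video source. Everything
// below is self-reported by the OS/driver, so a well-configured virtual camera
// produces the same kind of metadata as a physical one; none of it proves the
// hardware is real, and TLS only protects the stream after this point.

async function describeVideoSource(): Promise<Record<string, unknown>> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const summary = {
    label: track.label,                      // driver-supplied name, trivially spoofed
    settings: track.getSettings(),           // resolution, frame rate, deviceId, etc.
    capabilities: track.getCapabilities?.(), // not available in all browsers
  };
  track.stop();
  return summary;
}
```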
In addition, adhering to ISO 27000 standards doesn’t automatically prevent video injection attacks. For instance, ISO/IEC 27001:2013 establishes a robust information security management system but lacks guidelines for verifying camera authenticity. ISO/IEC 27002:2013 mainly addresses data protection and integrity, not validation of the video feed’s source.
The Fundamentals of How to Prevent a Video Injection Attack
The security landscape for KYC systems will evolve as standards to prevent video injection attacks are developed. The establishment of dedicated certifications and evaluation laboratories will also be an important step forward. We believe that these moves will standardize defenses, strengthening the integrity and reliability of digital verification processes as technological threats advance.
But that day hasn’t arrived, and waiting to act is a clear risk. We have conducted extensive research into how to prevent video injection attacks, and our solutions are grounded in what we have learned. Here are the fundamentals:
- Deliver Comprehensive Detection: Use technologies that combine presentation attack detection (PAD) with video injection attack detection to catch a wide range of attack content, including deepfakes and face morphs (a combined decision is sketched after this list).
- Target Delivery Channels: Shut down channels like virtual cameras in desktop browsers and hardware attacks that fraudsters use to deliver deepfakes and other fraudulent content.
- Adopt User-Friendly Security: Implement security measures that don’t require user interaction or add friction to the user experience.
- Use AI for Detection: Turn to deep learning algorithms for AI-powered detection to identify complex attack patterns that are challenging for humans to detect.
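As a rough illustration of how these fundamentals can be layered, the sketch below combines hypothetical signals: a PAD score, a deepfake-content score from a deep-learning classifier, and delivery-channel checks like the ones sketched earlier. The signal names and thresholds are invented for illustration and do not describe any specific product.

```typescript
// Illustrative sketch of layering the fundamentals above: presentation attack
// detection (PAD), injection/channel checks, and an AI content score are
// combined server-side. Signal names and thresholds are hypothetical.

interface VerificationSignals {
  padScore: number;                // 0..1, from a presentation attack detector
  deepfakeScore: number;           // 0..1, from a deep-learning content classifier
  virtualCameraSuspected: boolean; // from delivery-channel checks (see earlier sketch)
  streamTampered: boolean;         // e.g. the getUserMedia monkey-patch heuristic
}

function shouldAcceptSession(s: VerificationSignals): boolean {
  // Any hard evidence of an injection channel blocks the session outright;
  // content-based scores are thresholded and combined conservatively.
  if (s.virtualCameraSuspected || s.streamTampered) return false;
  return s.padScore >= 0.9 && s.deepfakeScore <= 0.2; // illustrative thresholds
}
```

The design choice worth noting is that channel signals act as hard blocks, while content scores are weighed together, since no single detector is reliable against every vector.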