“Know your customer” (KYC) refers to the process banks, fintech companies, and other financial organizations use to confirm the identity of their clients. A typical method of verifying identity in a KYC flow is the “ID image”: a selfie that is cross-checked against official documents. A number of services, including the cryptocurrency exchanges Gemini and LiteBit as well as the financial services provider Wise, rely on ID images for security onboarding.
However, generative AI might call these tests into question.
According to viral posts on X (formerly Twitter) and Reddit, an attacker can use open source and commercial software to download a person’s selfie, alter it with generative AI tools, and then use the doctored ID image to pass a KYC check. As of this writing, there is no proof that GenAI tools have fooled a real KYC system. But the ease with which reasonably convincing deepfaked ID images can be produced is cause for concern.
KYC tactics
To complete a KYC ID image verification, the customer typically uploads a photo of themselves holding a government-issued photo ID, such as a passport or driver’s license. A human or an algorithm then compares the photo to documents and selfies already on file in an effort to (hopefully) catch impersonation attempts.
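For context, here is a minimal sketch of what the automated comparison step can boil down to, assuming the open source face_recognition library; the file names and the 0.6 distance threshold are illustrative assumptions, not any vendor’s actual implementation.

```python
# Minimal sketch of an automated KYC face comparison, assuming the
# open source face_recognition library. File names and the distance
# threshold are illustrative, not from any real provider.
import face_recognition

def faces_match(on_file_path: str, submitted_path: str,
                threshold: float = 0.6) -> bool:
    on_file = face_recognition.load_image_file(on_file_path)
    submitted = face_recognition.load_image_file(submitted_path)

    on_file_encs = face_recognition.face_encodings(on_file)
    submitted_encs = face_recognition.face_encodings(submitted)
    if not on_file_encs or not submitted_encs:
        return False  # no detectable face in one of the images

    # Euclidean distance between 128-d face embeddings; lower = more similar
    distance = face_recognition.face_distance(
        [on_file_encs[0]], submitted_encs[0])[0]
    return distance <= threshold

if __name__ == "__main__":
    print(faces_match("selfie_on_file.jpg", "uploaded_id_image.jpg"))
```

Note what a check like this actually measures: similarity between two images. It has no inherent way of knowing whether the submitted image came from a camera or from a generator.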
The security of ID image authentication has always been questionable; scammers have been selling forged IDs and selfies for years. But GenAI opens up a range of new possibilities.
Using Stable Diffusion, a free, open source image generator, an attacker can render a synthetic image of a person against any backdrop, such as a living room; online tutorials demonstrate how. By experimenting with the right settings, an attacker can make the subject appear to be holding an ID document. Once a deepfaked image of the victim exists, any image editor can be used to place a document, genuine or not, into the person’s hands.
Getting good results with Stable Diffusion requires installing additional tools and extensions and acquiring around a dozen images of the target. A Reddit user going by the handle _harsh_, who has shared a workflow for creating deepfake ID selfies, says it takes one to two days to produce a convincing image.
However, there is little doubt that the barrier to entry is much lower than it used to be. Creating ID images with realistic lighting, shadows, and environments once required a fair amount of expertise with photo editing software. That’s no longer necessarily the case.
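To give a sense of how low that barrier now is, generating a photorealistic image takes only a few lines with Hugging Face’s diffusers library. The model ID and prompt below are illustrative assumptions; producing a convincing likeness of a specific person holding a specific document takes considerably more setup, per _harsh_’s one-to-two-day estimate above.

```python
# Illustrative sketch: basic Stable Diffusion text-to-image generation
# via the diffusers library. Model ID and prompt are examples only;
# a targeted deepfake involves far more setup (see above).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # assumed publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of a person sitting in a living room").images[0]
image.save("generated.png")
```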
Feeding deepfaked KYC images to an app is even easier than creating them. Android apps running in a desktop emulator such as BlueStacks can be tricked into accepting deepfaked images in place of a live camera feed, and web apps can be fooled the same way by software that turns any image or video source into a virtual webcam.
A growing threat
Several applications and platforms add “liveness” checks as an extra layer of security when confirming user identities. These usually involve a short video of the user turning their head, blinking, or otherwise demonstrating that they are, in fact, a real person.
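As a rough illustration of what an automated blink check can reduce to, the sketch below counts blinks in a clip using MediaPipe face landmarks and the well-known eye aspect ratio (EAR) heuristic. The landmark indices are the commonly used FaceMesh left-eye points, and the 0.2 threshold is an illustrative assumption; production liveness systems are far more elaborate.

```python
# Rough sketch of a blink-based liveness heuristic using the eye
# aspect ratio (EAR). Indices are the commonly used MediaPipe FaceMesh
# left-eye points; the 0.2 threshold is illustrative, not a production value.
import cv2
import mediapipe as mp
import numpy as np

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around the left eye

def eye_aspect_ratio(pts: np.ndarray) -> float:
    # EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|); it drops sharply during a blink
    return (np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])) / (
        2.0 * np.linalg.norm(pts[0] - pts[3])
    )

def count_blinks(video_path: str, threshold: float = 0.2) -> int:
    blinks, closed = 0, False
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=False) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            h, w = frame.shape[:2]
            lm = result.multi_face_landmarks[0].landmark
            pts = np.array([(lm[i].x * w, lm[i].y * h) for i in LEFT_EYE])
            if eye_aspect_ratio(pts) < threshold:
                closed = True
            elif closed:  # eye reopened after being closed: one blink
                blinks += 1
                closed = False
    cap.release()
    return blinks

# A clip with at least one detected blink would "pass" this naive check
print("live" if count_blinks("user_clip.mp4") >= 1 else "reject")
```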
However, GenAI can also circumvent liveness tests.
Binance chief security officer Jimmy Su told Cointelegraph earlier this year that today’s deepfake tools are sufficient to pass liveness tests, even those that require users to perform actions such as head turns in real time.
The upshot is that KYC, which was always a bit of a crapshoot, may soon stop working altogether as a security safeguard. Personally, I don’t believe deepfaked images and videos have become good enough to fool human reviewers just yet. But that could change in the near future.