
Polish Researcher Uses GPT-4o to Forge Passport, Exposes Gaping Hole in Digital ID Verification

A Polish researcher has successfully used OpenAI’s GPT-4o to generate a fake passport, which was then accepted by a platform that employs standard ID and selfie verification protocols. What might once have sounded like a dystopian warning is now a sobering reality, laying bare a critical vulnerability in digital onboarding procedures.



The financial and fintech industries have long prioritized speed and conversion over structure and resilience. In their rush to optimize user onboarding, many providers have stripped their identity verification processes down to the basics: a scanned ID document and a selfie. If the two images visually align, access is granted—sometimes within seconds, and often without deeper analysis.
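To make the weakness concrete, here is a minimal sketch, in Python, of the kind of check many such pipelines reduce to. It uses the open-source face_recognition library; the file names and the 0.6 threshold are illustrative assumptions, not any specific vendor’s implementation. Notice that nothing in this flow asks whether the uploaded ID image is a genuine photograph of a physical document.

```python
# Minimal sketch of a naive "ID photo vs. selfie" check (illustrative only).
# File paths and the threshold are hypothetical. Crucially, nothing here
# verifies that id_scan.jpg depicts a real, physical document rather than
# an AI-generated render.
import face_recognition

id_image = face_recognition.load_image_file("id_scan.jpg")     # uploaded ID scan
selfie_image = face_recognition.load_image_file("selfie.jpg")  # uploaded selfie

id_faces = face_recognition.face_encodings(id_image)
selfie_faces = face_recognition.face_encodings(selfie_image)

if not id_faces or not selfie_faces:
    print("REJECTED: no face detected")
else:
    # Euclidean distance between 128-dimensional face embeddings;
    # lower means more similar.
    distance = face_recognition.face_distance([id_faces[0]], selfie_faces[0])[0]
    MATCH_THRESHOLD = 0.6  # a common default, assumed here for illustration
    if distance < MATCH_THRESHOLD:
        print("APPROVED: faces match")  # access granted within seconds
    else:
        print("REJECTED: faces differ")
```

A convincing forged image passes this comparison exactly as easily as a real document, because the only question being answered is whether two faces look alike.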


But these systems were never designed to detect AI-generated content. What’s now emerging isn’t simply sloppy forgery—it’s entire synthetic identities crafted with tools that are growing more sophisticated and scalable by the day.


The researcher behind this discovery wrote, “You can now generate fake passports with GPT-4o. It took me 5 minutes to create a replica of my own passport that most automated KYC systems would likely accept without blinking.” The generated passport, complete with realistic features, was good enough to pass through identity verification systems used by major financial platforms—proving just how inadequate the current standards are.


The real concern isn’t that AI succeeded in slipping through—it’s that many compliance systems remain dangerously shallow. There's a widespread but false belief among decision-makers that handing KYC duties off to third-party providers absolves them of liability. In reality, this merely transfers the risk to systems that may be ill-equipped to detect fraud that now takes minutes to manufacture.



When verification relies on facial recognition and static image analysis alone, the integrity of the entire onboarding process becomes suspect. Criminals with basic tech proficiency now have the means to mass-produce accounts with minimal friction.


As the researcher’s example demonstrates, the same level of document fabrication that once demanded advanced Photoshop skills and hours of work can now be achieved using AI tools in mere minutes—and with even more convincing results. That level of access can quickly escalate into synthetic identity fraud, large-scale financial crimes, and irreversible losses.


The industry must wake up. The belief that photo ID and selfie matching are sufficient safeguards is outdated and dangerous; these methods are no longer fit for the modern threat landscape. The real solution begins with verifying identity documents at the hardware level, for example by reading the cryptographically signed NFC chip embedded in modern e-passports, so that a document’s authenticity is established beyond how it merely “looks” in an image.
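For context, modern e-passports under ICAO Doc 9303 carry an NFC chip whose data groups (the machine-readable zone, the chip-stored photo, and so on) are hashed and signed by the issuing state; “passive authentication” recomputes those hashes and verifies the signature. The sketch below, with dummy byte strings standing in for chip reads, shows only the hash-comparison step. A real implementation must also verify the Document Security Object’s signature against the issuing country’s certificate chain, and hash algorithms vary by issuer.

```python
# Sketch of the hash-comparison step of e-passport passive authentication
# (ICAO Doc 9303). All data is dummy; a real check reads data groups over
# NFC and must also verify the SOD's signature against the issuing state's
# certificate chain, which is omitted here for brevity.
import hashlib

# Raw bytes of data groups as read from the chip (dummy placeholders).
data_groups = {
    "DG1": b"...MRZ bytes read from chip...",    # machine-readable zone
    "DG2": b"...encoded facial image bytes...",  # chip-stored photo
}

# Hashes listed in the Document Security Object (SOD), signed at issuance.
sod_hashes = {
    "DG1": hashlib.sha256(b"...MRZ bytes read from chip...").hexdigest(),
    "DG2": hashlib.sha256(b"...encoded facial image bytes...").hexdigest(),
}

def passive_auth_hash_check(groups: dict, signed_hashes: dict) -> bool:
    """Recompute each data-group hash and compare it with the signed value."""
    for name, raw in groups.items():
        if hashlib.sha256(raw).hexdigest() != signed_hashes.get(name):
            return False  # tampered or fabricated data group
    return True

print("chip data intact" if passive_auth_hash_check(data_groups, sod_hashes)
      else "chip data tampered")
```

An AI image generator can mimic how a passport looks, but it cannot mint a chip whose contents carry a valid signature from the issuing state, which is why chip-level verification raises the bar so sharply.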


Moreover, the responsibility doesn’t rest solely with regulated institutions or large fintech companies. Every organization, regardless of size, bears responsibility for knowing who its clients are and verifying the legitimacy of the funds it processes. This applies equally to small businesses handling local transactions and large corporations facilitating global transfers.


With the accelerating pace of fraudster innovation, it’s clear that today’s standard onboarding protocols will soon be obsolete. Businesses can no longer afford to treat compliance, onboarding, or transaction monitoring as one-time administrative hurdles; these functions now face ongoing threats and must be actively managed.


Blind trust in automation or third-party verification vendors is no longer a viable strategy. The most effective step any organization—big or small—can take is to consult with an expert and build a verification framework that addresses today’s risks head-on. AI-generated fraud isn’t coming. It’s already here.

By Flexi Team




