You Left ChatGPT Over Surveillance. Now Claude Is Asking for Your ID

Anthropic has quietly introduced a new identity verification system for its Claude AI platform, asking some users to upload a government-issued photo ID and a real-time selfie. For a company that has marketed itself on caution, privacy, and restraint, it’s a striking move, and so far an outlier among major AI chatbot providers.

In a brief explanation, Anthropic said it is “rolling out identity verification for a few use cases,” and warned that users “might see a verification prompt when accessing certain capabilities, as part of our routine platform integrity checks, or other safety and compliance measures.”

The company stressed that this data would have a narrowly defined purpose: “We only use your verification data to confirm who you are and not for any other purposes.” In other words, the stated goal is to establish that a real, specific person is behind a given account, not to feed biometric data into model training or advertising systems.

What makes this controversial is not only the nature of what’s being requested (a passport, driver’s license, or other official ID, plus a live selfie) but who is requesting it, and when. As of now, Claude appears to be the first major consumer-facing chatbot to implement this type of government ID-based verification for ordinary users, rather than for regulated enterprise or high-risk financial products.

That’s especially jarring given how many people recently migrated to Anthropic precisely to get away from what they saw as creeping surveillance around other AI tools. Earlier this year, after OpenAI agreed to provide AI systems for use on classified Pentagon networks, a wave of users abandoned ChatGPT and moved to Claude. Anthropic had reportedly declined similar defense contracts over fears of contributing to mass surveillance and autonomous weapon systems, positioning itself as the more cautious and ethically constrained alternative.

Now, those same users are being told that, in at least some scenarios, continued access to Claude will require handing over one of the most sensitive pieces of personal data they possess: their government ID, paired with biometric facial data in the form of a selfie. For privacy-conscious users and professionals in sensitive fields, that’s a major shift in the trust equation.

Anthropic frames the move as a “platform integrity” measure: a way to maintain safety, prevent abuse, and meet compliance obligations. In practice, identity verification can help deter large-scale bot farms, reduce the spread of scams, and make it easier to enforce bans or limits on harmful behavior. Regulators in multiple jurisdictions are also tightening rules around AI, especially where it touches finance, employment, healthcare, or minors, which can pressure companies to know exactly who is accessing what.

From Anthropic’s perspective, the logic is clear: as models get more capable and are plugged into more powerful tools (code execution, document processing, integrations with third-party systems), the risk of misuse and legal liability rises. Tying certain features to verified identities allows the company to claim it has meaningful controls in place if its systems are used for fraud, cyberattacks, or other illegal activities.

But from a user’s perspective, especially those already alarmed by state and corporate surveillance, the optics are very different. Many people moved to Claude after seeing AI labs cozy up to militaries, advertisers, and data brokers. To them, the promise of Anthropic was not just technical quality but an ethical and privacy-conscious approach. Being told that access to “certain capabilities” may now require uploading a passport feels, at best, like a bait-and-switch and, at worst, like another step toward a world in which using advanced digital tools is inseparable from being constantly identified.

The lack of granular detail also fuels unease. Anthropic has not fully spelled out which “use cases” will demand verification, whether this will be restricted to high-risk or regulated features, or whether it might gradually expand to become a near-universal requirement. Nor is it fully clear how long ID and selfie data will be stored, in what form, or what specific protections and audits are in place to prevent abuse or breaches.

For security experts, the central concern is simple: any database that ties real-world identity documents and biometric images to detailed logs of online behavior and queries is a prime target. Even if Anthropic never misuses that data intentionally, the risk of hacking, insider abuse, or compelled disclosure by governments is non-trivial. Once a copy of your passport and facial scan exists in a corporate system, it is, effectively, another permanent entry in your digital footprint.

There’s also an important psychological angle. AI tools have quickly become embedded in everyday work: lawyers drafting briefs, journalists checking facts, activists scripting campaigns, researchers exploring sensitive topics. If using those tools now requires revealing your full legal identity, that may discourage legitimate exploration of controversial subjects. People may self-censor what they ask AI systems out of fear that it might one day be tied back to them in a legal or political context.

Supporters of strict identity verification argue that anonymity makes it too easy for bad actors to operate at scale: generating disinformation, automating harassment, mass phishing, and more. They see verified access as a reasonable trade-off: slightly less privacy for dramatically fewer abuses. On the other side, civil liberties advocates counter that anonymity and pseudonymity are often the only protection for whistleblowers, dissidents, vulnerable communities, and people living under repressive regimes. For them, identity demands from powerful AI companies are not just a product question but a democratic and human rights issue.

What makes this moment especially fraught is the broader context of AI governance. Policymakers are still scrambling to understand and regulate large models, while companies race ahead, shipping first and explaining later. Identity verification sits at the intersection of several hot-button issues: data protection, biometric privacy, cross-border data flows, and the future of anonymous access to digital infrastructure.

In that context, Anthropic’s move can be read in two very different ways. One narrative sees it as a responsible, pre-emptive step: a company trying to align its product with emerging regulatory expectations and genuine safety concerns. The other narrative sees it as evidence that, regardless of branding or rhetoric, all major AI players are converging on the same model: tightly controlled, fully identified users interacting with opaque systems owned by a handful of corporations, with little real recourse if things go wrong.

For users, the practical question becomes: where is the line? Many people are accustomed to showing ID for banking, travel, or high-risk financial services. Doing the same to talk to a chatbot about coding, research, or creative writing feels qualitatively different. If AI systems are going to become a default interface to knowledge and productivity, tying them to real-world identity could transform not just how we work, but how free we feel to think.

There are also unresolved questions about fairness and access. Not everyone holds a valid government ID, or feels safe sharing one. Migrants, undocumented people, political refugees, and people in conflict zones may find themselves pushed to the margins of AI-powered tools if ID becomes an informal gatekeeper. Even in stable democracies, mistrust of digital ID systems is widespread, especially in communities with a history of over-policing and surveillance.

For Anthropic, the reputational stakes are high. Its decision to turn down military contracts over surveillance and autonomous weapons concerns won it significant goodwill among users wary of OpenAI’s direction. That goodwill is now being tested. If identity verification is rolled out in a transparent, limited, and well-protected way, clearly confined to genuinely high-risk capabilities, it might be accepted as a necessary compromise. If it expands quietly, with vague justifications and minimal user control, it could undo much of the trust the company has built.

In the end, the controversy is not only about one company’s policy, but about what kind of digital future people are willing to accept. Are advanced AI tools going to be treated like airports and banks, where submitting to identity checks is simply the price of admission? Or will there remain spaces for powerful, privacy-respecting tools that don’t demand to see your passport before helping you think?

For now, one thing is clear: many of the people who fled ChatGPT over fears of surveillance and military entanglement are now confronting an uncomfortable reality. The AI service they chose as the “privacy-conscious” alternative is beginning to ask for exactly the kind of sensitive, high-stakes identification they were hoping to avoid. Whether they stay, comply, or once again go searching for a new refuge will shape the next chapter of the AI privacy debate.