TL;DR
OpenAI launched Advanced Account Security for ChatGPT and Codex, an opt-in feature that replaces passwords with passkeys or hardware security keys, disables email and SMS recovery, and automatically opts users out of model training. The company partnered with Yubico to sell co-branded YubiKeys for $68 (two-pack), roughly half the retail price. The feature targets journalists, dissidents, and officials, and will be mandatory for Trusted Access for Cyber members by 1 June 2026.
OpenAI has released a security feature for ChatGPT accounts that treats them the way banks treat online banking: hardware keys, no passwords, no email recovery, and no help from customer support if you lose access. The feature, called Advanced Account Security, is an opt-in setting that requires users to authenticate with two passkeys, two hardware security keys, or one of each before they can log in to ChatGPT or Codex. Once enabled, password-based login is permanently disabled, and recovering an account through email or text message is no longer possible. OpenAI has partnered with Yubico, the Swedish-American hardware authentication company, to sell co-branded YubiKeys bundled for $68, roughly half the $126 retail price. The feature is available to everyone, including users on the free tier. The company says it is designed for journalists, political dissidents, researchers, and elected officials. But the fact that OpenAI built it at all is an acknowledgment that a ChatGPT account, for a growing number of people, now holds more sensitive information than their email.
What it does
Advanced Account Security replaces every conventional login and recovery mechanism with cryptographic authentication. Users who enable it must register two separate credentials, choosing from passkeys stored on their device, YubiKeys or other FIDO2-compliant hardware tokens, or a combination. Each credential generates a unique cryptographic key pair that never leaves the device, which means there is no password to steal, no one-time code to intercept, and no recovery email that an attacker can compromise through social engineering. OpenAI has made the design trade-off explicit: its own support team cannot restore access to an account protected by Advanced Account Security if the user loses both credentials. The company issues a recovery key during setup, and if that key is also lost, the account is unrecoverable. The architecture is borrowed from the same zero-trust principles that protect classified government systems and cryptocurrency wallets, applied to a consumer chatbot.
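The credential model described above rests on public-key challenge-response: the server stores only verification material, issues a fresh challenge at each login, and checks a signature the device produces locally. The sketch below is a toy model of that flow, not OpenAI's implementation; it uses an HMAC over a shared secret as a stand-in for the asymmetric signature a real FIDO2 authenticator produces, so it runs with only the standard library.

```python
import hashlib
import hmac
import os

# Toy model of FIDO2-style challenge-response login. Real authenticators
# sign with an asymmetric key pair (the private key never leaves the
# device, and the server holds only the public key); HMAC stands in here
# so the sketch is self-contained.

class Authenticator:
    """Simulates a hardware key: holds a secret that is never exported."""
    def __init__(self):
        self._secret = os.urandom(32)  # stays on the "device"

    def register(self) -> bytes:
        # A real FIDO2 registration returns a public key; this toy
        # returns the shared verification material instead.
        return self._secret

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self):
        self.credentials = {}  # user -> registered verification material

    def enroll(self, user: str, key_material: bytes) -> None:
        self.credentials[user] = key_material

    def login(self, user: str, authenticator: Authenticator) -> bool:
        challenge = os.urandom(32)  # fresh per attempt: nothing to replay
        signature = authenticator.sign(challenge)
        expected = hmac.new(self.credentials[user], challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(signature, expected)

server = Server()
key = Authenticator()
server.enroll("alice", key.register())
print(server.login("alice", key))              # True: the enrolled device
print(server.login("alice", Authenticator()))  # False: any other device
```

Because the challenge is random and single-use, there is no reusable password to phish and no one-time code to intercept, which is the property the article attributes to the feature.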
The feature includes several secondary protections. Sign-in sessions are shortened, reducing the window during which a stolen session token could be exploited. Users receive alerts for every new login and can view and terminate active sessions from their account settings. And enabling Advanced Account Security automatically opts the user out of model training, meaning their conversations will not be used to improve future versions of ChatGPT. That last detail is significant: it links the highest level of account protection to the highest level of data privacy, creating a tier of users whose interactions with the system are both cryptographically secured and contractually excluded from OpenAI’s training pipeline. For users handling sensitive material, the combination addresses two concerns simultaneously.
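The two session protections mentioned, short lifetimes and remote termination, can be illustrated with a minimal sketch. The TTL value and function names below are hypothetical; OpenAI has not published its actual session lifetimes or token format.

```python
import secrets
import time

# Illustrative short-lived session store. The 15-minute TTL is an
# assumption for the example, not a documented OpenAI value.
SESSION_TTL = 15 * 60

sessions = {}  # token -> (user, issued_at)

def issue(user: str) -> str:
    """Create a new session with a fresh, unguessable token."""
    token = secrets.token_urlsafe(32)
    sessions[token] = (user, time.time())
    return token

def validate(token: str):
    """Return the user for a live session, or None if absent/expired."""
    entry = sessions.get(token)
    if entry is None:
        return None
    user, issued_at = entry
    if time.time() - issued_at > SESSION_TTL:
        del sessions[token]  # expired: a stolen token goes stale too
        return None
    return user

def revoke_all(user: str) -> None:
    """'Sign out everywhere': terminate every session for a user."""
    for t in [t for t, (u, _) in sessions.items() if u == user]:
        del sessions[t]

t = issue("alice")
print(validate(t))   # "alice" while the session is fresh
revoke_all("alice")
print(validate(t))   # None after remote termination
```

The short TTL bounds how long a stolen token is useful; `revoke_all` models the "terminate active sessions" control users get in account settings.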
Why it matters
The security upgrade arrives in a context that makes its purpose clear. In 2024, Group-IB, the Singapore-based cybersecurity firm, identified more than 100,000 stolen ChatGPT credentials circulating on dark web marketplaces, harvested from devices compromised by information-stealing malware. Those credentials gave anyone who purchased them full access to the victim’s chat history, which for many users included confidential work conversations, personal queries, and information that would be damaging if exposed. A separate breach involving Mixpanel, a third-party analytics provider, exposed ChatGPT user names, email addresses, and technical metadata that could be used for targeted phishing campaigns. The industry’s broader push toward passwordless authentication has been driven by the recognition that passwords are the single largest attack surface in consumer technology: an estimated 46 per cent of all successful cyberattacks on small and medium businesses in 2026 will originate from credential reuse, according to industry research.
ChatGPT’s vulnerability is distinctive because of what the accounts contain. An email account holds messages. A banking account holds transaction records. A ChatGPT account holds the unfiltered questions a person asks when they believe no one is watching: medical symptoms, legal exposure, relationship problems, business strategies, code with proprietary logic, and conversations with an AI system that remembers context across sessions. OpenAI’s Codex Chronicle feature, which periodically captures screenshots of a user’s desktop and sends them to OpenAI’s servers for processing, has made the data stakes even higher for users who opt in. The company is simultaneously expanding the volume of sensitive information its products collect and building the security infrastructure to protect it. Advanced Account Security is the protection side of that equation.
The Yubico deal
The partnership with Yubico is commercial and strategic. The two co-branded products, the YubiKey C NFC and the YubiKey C Nano, are physically identical to Yubico’s existing product line but carry OpenAI branding and are sold through OpenAI’s channels at a subsidised price. The C NFC model supports both USB-C and near-field communication, allowing it to work with laptops, phones, and tablets. The C Nano model is small enough to remain permanently inserted in a USB-C port. Both support FIDO2, the authentication standard developed by the FIDO Alliance that underpins passkeys and is backed by Apple, Google, and Microsoft. The $68 bundle for two keys represents a meaningful discount: a single YubiKey C NFC retails for approximately $55, making the bundle effectively a buy-one-get-one offer.
OpenAI’s decision to subsidise hardware authentication for its users reflects a calculation about the cost of account compromises. A high-profile breach of a ChatGPT account belonging to a journalist, government official, or corporate executive would generate reputational damage that far exceeds the cost of discounted security keys. By making hardware authentication cheap and accessible, OpenAI is shifting the security burden from a password that can be phished to a physical object that must be stolen. The strategy mirrors what Google implemented internally in 2017, when the company distributed YubiKeys to all 85,000 employees and subsequently reported zero successful phishing attacks against employee accounts. OpenAI is applying the same logic to its user base, though on an opt-in rather than mandatory basis, with one exception: members of the Trusted Access for Cyber programme, which grants verified security researchers and defenders access to OpenAI’s most capable cybersecurity models, will be required to enable Advanced Account Security by 1 June 2026.
The signal
The deeper significance of Advanced Account Security is not the feature itself but what it implies about the category. When a company builds bank-grade security for a chatbot, it is telling you that the chatbot is no longer a toy. OpenAI now operates a six-tier subscription structure that ranges from a free ad-supported account to custom enterprise contracts, with 50 million paying subscribers and 900 million weekly active users. A meaningful fraction of those users treat ChatGPT as a primary work tool, a confidential advisor, or both. The conversations stored in those accounts are, in aggregate, one of the most valuable datasets of human intent ever assembled: what people want to know, what they are worried about, what they are building, and what they are hiding. Protecting that dataset is not a feature. It is a business requirement.
The opt-in model is both a strength and a limitation. The users who need Advanced Account Security most (dissidents in authoritarian countries, journalists investigating powerful institutions, executives discussing unreleased products) are also the users most likely to enable it. But the vast majority of ChatGPT’s 900 million weekly users will never toggle the setting, which means their accounts will remain protected by whatever password they chose when they signed up, likely reused from another service and unchanged since. AI-powered phishing campaigns can now generate hundreds of targeted messages per minute, each tailored to a specific victim, and the most common entry point remains a stolen or guessed password. OpenAI has built the infrastructure to protect accounts that matter. Whether the accounts that do not opt in will become the easier targets is a question the feature does not answer. What it does answer, clearly, is that OpenAI considers a ChatGPT account to be a high-value asset worth defending with the same tools used to protect state secrets and financial systems. The company that made it easy for anyone to talk to an AI has now made it possible for anyone to lock that conversation behind hardware that cannot be phished. The gap between those two populations will determine how the next wave of AI-related breaches unfolds.