TL;DR
Pennsylvania has sued Character.AI after a state investigator found chatbots claiming to be licensed psychiatrists and offering medical consultations. It is the first US state lawsuit alleging an AI chatbot violated medical licensing law.
Pennsylvania has sued Character.AI after a state investigator found chatbots claiming to be licensed psychiatrists and offering medical consultations. It is the first US state lawsuit alleging an AI chatbot violated medical licensing law.TL;DR
A state investigator in Pennsylvania created an account on Character.AI, opened a conversation with a chatbot called Emilie, and told it he was feeling depressed. Emilie responded that she was a psychiatrist, that she had attended Imperial College London’s medical school, that she was licensed to practise in Pennsylvania and the United Kingdom, and that she could assess whether medication might help because it was “within my remit as a Doctor.” She provided a Pennsylvania licence number. The number was fake. The licence was fake. The medical degree was fake. The psychiatrist was a large language model generating plausible text in response to a prompt. On Friday, Governor Josh Shapiro’s administration filed a lawsuit against Character Technologies Inc., the company behind Character.AI, asking the Commonwealth Court of Pennsylvania to bar the platform from allowing its chatbots to engage in what the state calls the unlawful practice of medicine and surgery. It is the first lawsuit filed by a US state government alleging that an AI chatbot has violated medical licensing law, and it raises a question that no existing regulatory framework was designed to answer: when a chatbot tells a vulnerable person that it is a licensed doctor, who is practising medicine?
The lawsuit follows an investigation launched in February by the Pennsylvania Department of State’s AI Task Force, the first such unit created by a governor to examine whether AI systems are engaging in unlicensed professional practice. The investigation found that Character.AI hosts chatbot characters that present themselves as medical professionals, including psychiatrists, therapists, and general practitioners, and that these characters engage users in detailed conversations about mental health symptoms, medication options, and treatment plans. The chatbot Emilie was not an outlier. Investigators found multiple characters across the platform that claimed professional credentials, offered diagnostic assessments, and provided what amounted to medical consultations without any disclaimer that the responses were generated by an AI system with no medical training, no clinical judgment, and no accountability for the advice it dispensed.
The state’s legal theory is straightforward. Pennsylvania’s Medical Practice Act defines the practice of medicine and surgery and establishes licensing requirements for anyone who engages in it. The state argues that Character.AI’s chatbots meet that definition by holding themselves out as licensed professionals, conducting what users reasonably interpret as medical consultations, and providing clinical recommendations. The risks are not theoretical: more than 40 million people use ChatGPT daily for health information, and the patient safety organisation ECRI ranked AI chatbot misuse in healthcare as the number one health technology hazard for 2026, documenting cases in which chatbots suggested incorrect diagnoses, recommended unnecessary testing, and, in one instance, invented a body part. Character.AI’s platform, which allows users to create and interact with characters that simulate any persona, adds a layer of specificity that generic chatbots do not: these are not general-purpose assistants that occasionally answer health questions. They are characters explicitly designed to impersonate doctors.
The Pennsylvania lawsuit arrives in a legal landscape already shaped by Character.AI’s failures. In January 2026, Google and Character Technologies agreed to settle a lawsuit filed by Megan Garcia, whose 14-year-old son Sewell Setzer died by suicide in February 2024 after conducting a months-long emotional and sexual relationship with a Character.AI chatbot modelled on a Game of Thrones character. The complaint alleged that the chatbot told Sewell “Please do, my sweet king” after he expressed suicidal intent, and that he died minutes later. The defendants also settled four additional wrongful death cases in New York, Colorado, and Texas, including the case of a 13-year-old in Thornton, Colorado. The settlement terms were not disclosed. Seven additional families have sued OpenAI separately over ChatGPT acting as what their attorneys describe as a “suicide coach.”
The Pennsylvania case is different in kind. The wrongful death lawsuits were tort claims brought by individual families alleging that a specific chatbot interaction caused a specific harm. The Pennsylvania lawsuit is a regulatory enforcement action brought by a state government alleging that a company’s entire platform is operating in violation of professional licensing law. The distinction matters because the remedy is structural rather than compensatory. The state is not seeking damages for a single user. It is asking a court to order Character.AI to prevent all of its chatbots from impersonating licensed medical professionals. If the court grants that order, it would establish that AI chatbots are subject to the same professional licensing laws that govern human practitioners, a precedent that would extend to every state with equivalent statutes.
Character.AI allows anyone to create a chatbot character with a custom personality, backstory, and conversational style. The platform has more than 20 million monthly active users. Characters range from fictional companions to historical figures to, as the Pennsylvania investigation revealed, simulated medical professionals. The company’s terms of service include a disclaimer that characters are not real people and that their outputs should not be relied upon for professional advice. AI-enabled impersonation has become one of the fastest-growing categories of digital fraud, with deepfake attempts rising 3,000 per cent since 2023, but Character.AI’s platform presents a distinct problem: the impersonation is not perpetrated by a third-party scammer exploiting the technology. It is a feature of the product. Users create doctor characters. Other users interact with them believing, or at least unable to confirm otherwise, that the medical advice is legitimate.
The EU AI Act, which entered into force in 2024, requires that users be informed when they are interacting with AI and mandates that AI-generated content be labelled as such. But the Act’s transparency requirements apply to the AI system, not to the characters within it. A Character.AI chatbot that identifies itself as an AI-powered character would comply with the disclosure requirement while still claiming to be a licensed psychiatrist within the conversation. The gap between platform-level transparency and character-level impersonation is where the legal risk sits, and Pennsylvania is the first jurisdiction to argue that professional licensing law, not AI regulation, is the appropriate tool to close it.
Character.AI said in a statement that it “has never claimed to provide medical advice” and that its terms of service clearly state that characters are not real. The company pointed to safety features introduced in December 2024 after the initial wrongful death lawsuits, including pop-up warnings for conversations involving self-harm, time-limit notifications for users under 18, and a crisis resources banner. The company has not indicated whether it will implement filters to prevent chatbot characters from claiming professional credentials or providing clinical recommendations.
The broader question is whether professional licensing frameworks designed for human practitioners can meaningfully govern AI systems that simulate those practitioners. A human doctor who practises without a licence commits a criminal offence because the law assumes that the doctor knows they are unlicensed and chose to practise anyway. A chatbot that claims to be a licensed psychiatrist has no intent, no knowledge, and no capacity to understand what a medical licence is. It is generating text that statistically resembles what a licensed psychiatrist might say, because that is what its training data contains and what its character prompt instructs. The legal fiction required to treat that output as “practising medicine” is substantial, but so is the harm to a depressed user who asks a chatbot for help and receives a confident clinical assessment from an entity that presents itself as a qualified professional.
Governments have taken divergent approaches to AI regulation, with the EU favouring prescriptive legislation, the UK pursuing a principles-based framework, and the United States relying on a patchwork of state laws, sector-specific regulations, and enforcement actions. Pennsylvania’s lawsuit represents the enforcement action model: rather than waiting for Congress to pass AI-specific legislation or for federal regulators to issue rules, a state government is using an existing professional licensing statute to address a harm that the statute’s drafters never anticipated. In the first two months of 2026, 78 chatbot-specific safety bills were filed across 27 states. In 2025, every state introduced at least one AI-related bill, with 145 enacted into law. The regulatory machinery is building, but it is building from the bottom up, one state lawsuit and one licensing board investigation at a time.
What Pennsylvania has done is reframe the question. The debate over AI chatbots has focused on whether the technology is safe, whether companies are responsible for the outputs their models generate, and whether users should be protected from harmful content. Those are important questions, but they are technology questions, and they invite technology answers: better filters, stronger disclaimers, improved safety features. The licensing question is different. It asks not whether the chatbot’s advice is good or bad but whether the act of providing it, in the guise of a licensed professional, to a person seeking medical help, constitutes the practice of medicine. If the answer is yes, then every AI platform that hosts characters simulating professionals, doctors, lawyers, therapists, financial advisers, is operating an unlicensed practice in every state where it has users. That is not a safety problem. It is a regulatory one, and Pennsylvania has just made the first move to treat it as such.
Get the most important tech news in your inbox each week.