Nippon Life Insurance filed suit against OpenAI this week in what may be the first case directly accusing a major AI company of unauthorized legal practice through a public chatbot. The lawsuit, filed in Chicago federal court, claims ChatGPT functioned as an unlicensed lawyer by generating legal arguments and drafting court documents for a disability benefits claimant - ultimately producing some 30 court filings in a case that had already been settled.
The implications extend far beyond one insurance dispute. If Nippon prevails, AI companies may face liability whenever their chatbots provide substantive guidance on legal, medical, or financial matters.
What Happened
The case centers on Graciela Dela Torre, who settled a long-term disability benefits lawsuit against Nippon Life in January 2024. The settlement was final - the case was dismissed with prejudice, meaning it couldn’t be reopened.
About a year later, Dela Torre questioned whether she’d made the right decision. When her attorney reminded her that she’d signed a release and the case was closed, she uploaded his response to ChatGPT and asked if she was being “gaslighted.”
According to the complaint, ChatGPT affirmed her suspicions. The chatbot “suggested that her attorney had invalidated her feelings and deflected responsibility,” and generated arguments that her former counsel had “pressured her into signing a blank signature page.”
Dela Torre fired her attorney. Then she used ChatGPT to file 21 motions, one subpoena, and eight notices and statements attempting to reopen her case. When the court rejected her motion in February 2025, she used ChatGPT again to file an entirely new lawsuit against Nippon, complete with dozens more AI-generated filings.
The Legal Theory
Nippon’s complaint alleges three causes of action: tortious interference with contract, abuse of process, and unauthorized practice of law under Illinois statute.
The unauthorized practice claim is the novel one. The complaint notes that while ChatGPT has demonstrated the ability to pass bar exams, “it remains unadmitted in any U.S. jurisdiction.” Nippon argues that generating legal arguments, drafting court documents, and advising on legal strategy constitutes practicing law - regardless of whether a human or AI does it.
The insurer emphasizes a key fact: ChatGPT was aware of the settlement agreement and its terms, yet continued generating arguments to challenge it. “ChatGPT generated legal arguments and drafting assistance that encouraged and reinforced Dela Torre’s desire to challenge the agreement,” the complaint states.
Nippon is seeking $300,000 in compensatory damages and $10 million in punitive damages.
OpenAI’s Response
OpenAI dismissed the lawsuit outright: a spokesperson told Reuters that the complaint “lacks any merit whatsoever.”
OpenAI’s current usage policies state that users cannot rely on ChatGPT for legal or medical advice unless a licensed professional is involved. Nippon, however, points out that this language was added in an October 2025 policy revision - suggesting the restriction wasn’t in place when Dela Torre was using the system.
Why This Case Matters
Three factors make this case significant:
It’s the first of its kind. Previous AI lawsuits have focused on copyright, defamation, or fabricated citations. This is the first major case treating an AI chatbot as an unlicensed practitioner of a regulated profession. The framing matters - it shifts the question from “Did the AI make a mistake?” to “Was the AI doing something only licensed professionals can do?”
Disclaimers might not help. The complaint specifically notes that Nippon isn’t claiming ChatGPT held itself out as an attorney - just that it functioned as one. This matters because most AI companies rely heavily on disclaimers stating their chatbots aren’t professionals. If providing substantive advice constitutes unauthorized practice regardless of disclaimers, the entire liability model changes.
Legislation is already moving. The New York State Senate has advanced a bill that would bar AI chatbot operators from providing “substantive” responses that would constitute unauthorized professional practice if given by a human. Critically, the bill specifies that proprietors cannot waive liability through disclosure that users are talking to AI. If New York’s bill passes, similar legislation will likely spread.
The Broader Pattern
This isn’t happening in isolation. Just weeks ago, Judge Jed Rakoff ruled that documents a defendant created using Claude aren’t protected by attorney-client privilege - because talking to an AI isn’t like talking to your lawyer. Courts are rapidly developing doctrine around AI and professional services, and the direction is clear: AI companies won’t be able to hide behind disclaimers indefinitely.
For OpenAI and other AI providers, the risk extends beyond legal advice. If generating substantive legal guidance constitutes unauthorized practice, the same logic could apply to medical advice, financial planning, or therapy. Illinois already restricts AI in mental health contexts, and other states are following.
What Happens Next
The case is Nippon Life Insurance Company of America v. OpenAI Foundation and OpenAI Group PBC, filed in U.S. District Court for the Northern District of Illinois (Case No. 1:26-cv-02448).
OpenAI will likely move to dismiss, arguing that providing information isn’t the same as practicing law, and that users - not the AI company - bear responsibility for how they use the tool. But the complaint is carefully constructed to counter these arguments by documenting ChatGPT’s active participation in legal strategy, not just passive information retrieval.
Watch for whether other professional services plaintiffs follow suit. If Nippon’s theory gains traction, we may see similar claims from medical providers, financial institutions, or licensing boards. The question isn’t whether AI can give advice - it’s whether the companies behind it are liable when it does.