Sullivan & Cromwell submits corrected filing after AI-generated bogus citations exposed

6 articles · Updated · Futurism · Apr 26
  • Co-head Andrew Dietderich apologized to Judge Martin Glenn after Boies Schiller Flexner uncovered fabricated citations in a Manhattan federal bankruptcy court filing.
  • The firm admitted its AI policies had not been followed, launched an internal review, and resubmitted a corrected document. The specific AI model was not disclosed, but the firm reportedly uses OpenAI’s ChatGPT.
  • This incident highlights ongoing risks of AI hallucinations in legal work, with other major law firms facing similar embarrassments and judges increasingly imposing sanctions for erroneous AI-generated citations.

How Sullivan & Cromwell’s AI Errors in Prince Global Case Exposed Legal Industry’s Hallucination Risks

Overview

On April 9, 2026, Sullivan & Cromwell filed a critical bankruptcy motion containing fabricated legal citations, the result of bypassed AI safeguards and a failed review process. Opposing counsel identified the errors on April 21, prompting the firm to withdraw the motion and apologize publicly. The incident damaged the firm's reputation, caused procedural delays, and exposed ethical vulnerabilities in the use of AI in legal practice. It intensified scrutiny of lawyers' professional duties and contributed to broader industry awareness, spurring new regulatory rules and ethical guidelines. Sullivan & Cromwell responded with an internal review and plans to strengthen training and verification to prevent future AI-related errors.

...