AI Errors and “Hallucinations” in a Lawsuit

Another case of AI “hallucinations” reminds students to follow AI policies and check AI work. Sullivan & Cromwell, a highly regarded Wall Street law firm, submitted court filings with citation errors, which another law firm involved in the case discovered.

I asked ChatGPT to help find the actual “inaccurate citations and errors” mentioned in news sources, and it made up a couple of examples. I wasn’t surprised.

To rebuild credibility and ensure integrity, Sullivan & Cromwell apologized:

I apologize on behalf of our entire team. I also called Boies Schiller Flexner LLP on Friday to thank them for bringing this matter to our attention and to apologize directly to them as well.

In a letter to the judge, the firm admitted it didn’t follow its own policies:

We deeply regret that this has occurred. The firm maintains comprehensive policies and training requirements governing the use of AI tools in legal work. These safeguards are designed to prevent exactly this situation. The Firm’s policies on the use of AI were not followed in connection with the preparation of the Motion. In addition, the Firm has general policies and training requirements for the proper review of legal citations. Regrettably, this review process did not identify the inaccurate citations generated by AI, nor did it identify other errors that appear to have resulted in whole or in part from manual error.

The term “hallucination” for AI errors is falling out of favor. Critics say it assigns a human characteristic, a break with reality, to a technology; that it stigmatizes people who experience hallucinations as a medical condition; and that it’s too broad, covering fabrications, mistakes, and other issues. (For example, see articles in Cureus and AI and Society.)

Whatever we call it, LLM output may always need to be checked.
