ChatGPT's Legal Trouble

ChatGPT might pass the bar exam, but it wreaked havoc in a lawsuit. As we tell our business communication students, authors are responsible for their content, and that applies to lawyers who submit legal briefs.

In a court filing against Avianca Airlines, Steven Schwartz cited six previous court decisions that didn’t exist. As we know, ChatGPT is a large language model and cannot be trusted to, for example, cite legal cases accurately; it “hallucinates.”

Schwartz now faces sanctions. The American Bar Association requires competence, which includes supervising the work of other lawyers and nonlawyers (including nonhuman assistants). Another issue is confidentiality: although some legal AI tools keep client data confidential, ChatGPT does not. In a court response, Schwartz apologized, saying he didn’t realize ChatGPT could give false information (!) and that he “had no intent to deceive this Court nor the defendant.”

Despite ChatGPT’s failings in this situation, AI can benefit law firms, as the Bar Association explains. At the same time, law is among the fields expected to be most affected by AI, as this NY Times article describes:

One new study, by researchers at Princeton University, the University of Pennsylvania and New York University, concluded that the industry most exposed to the new A.I. was “legal services.” Another research report, by economists at Goldman Sachs, estimated that 44 percent of legal work could be automated. Only the work of office and administrative support jobs, at 46 percent, was higher.

This case is a good one for students to know: a lesson in accountability for their own work.

(Random: I’m surprised to see that the NY Times includes periods after “A” and “I.” This seems to be a conservative approach losing ground; “AI” is easily recognized these days. Then again, the Times was slow in dropping the hyphen in email, in my opinion.)