NY lawyers fined for using fake ChatGPT cases in legal brief

The clumsy use of ChatGPT has landed a New York City law firm with a $5,000 fine.

Having heard so much about OpenAI’s impressive AI-powered chatbot, lawyer Steven Schwartz decided to use it for research, adding ChatGPT-generated case citations to a legal brief handed to a judge earlier this year. But it soon emerged that the cases had been entirely made up by the chatbot.

U.S. District Judge P. Kevin Castel on Thursday ordered Schwartz and Peter LoDuca, the colleague who took over the case from him, as well as their law firm, Levidow, Levidow & Oberman, to pay a $5,000 fine.

The judge said the lawyers had made “acts of conscious avoidance and false and misleading statements to the court,” adding that they had “abandoned their responsibilities” by submitting the AI-written brief before standing by “the fake opinions after judicial orders called their existence into question.”

Castel continued: “Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The court’s time is taken from other important endeavors.”

The judge added that the lawyers’ action “promotes cynicism about the legal profession and the American judicial system.”

The Manhattan law firm said it “respectfully” disagreed with the court’s opinion, describing it as a “good faith mistake.”

At a related court hearing earlier this month, Schwartz said he wanted to “sincerely apologize” for what had happened, explaining that he thought he was using a search engine and had no idea that the AI tool could produce untruths. He said he “deeply regretted” his actions, adding: “I suffered both professionally and personally [because of] the widespread publicity this issue has generated. I am both embarrassed, humiliated and extremely remorseful.”

The incident was linked to a case taken up by the law firm involving a passenger who sued Colombian airline Avianca after claiming he suffered an injury on a flight to New York City.

Avianca asked the judge to throw the case out, so the passenger’s legal team compiled a brief citing six similar cases in a bid to persuade the judge to let their client’s case proceed. Schwartz found those cases by asking ChatGPT, but he failed to check the authenticity of the results. Avianca’s legal team raised the alarm when it said it couldn’t locate the cases contained in the brief.

In a separate order on Thursday, the judge granted Avianca’s motion to dismiss the suit against it, bringing the whole sorry episode to a close.

ChatGPT and other chatbots like it have gained much attention in recent months due to their ability to converse in a human-like way and skillfully perform a growing range of text-based tasks. But they’re also known to make things up and present them as if they’re real. The problem is so prevalent that there’s even a term for it: “hallucinating.”

Those working on the generative AI tools are exploring ways to reduce hallucinations, but until then users are advised to carefully check any “facts” that the chatbots spit out.

Trevor Mogg
Contributing Editor
Not so many moons ago, Trevor moved from one tea-loving island nation that drives on the left (Britain) to another (Japan)…