ChatGPT embarrasses lawyer by submitting fake cases to a judge
A US lawyer has found out the hard way that the new generation of AI is not always accurate, after he used ChatGPT for research and submitted a number of “bogus” cases to the court.
The case in question involved a man suing Colombian airline Avianca after he said he was injured by a metal serving cart striking his knee during a flight, The New York Times reported.
When the airline tried to have his case thrown out, the man’s lawyer, Steven A Schwartz, submitted a brief including a number of previous cases.
But when the opposing lawyer and judge pointed out that these cases didn’t actually exist, the lawyer was forced to admit he had used OpenAI’s ChatGPT tool to research the cases and had not checked their legitimacy, other than asking the AI platform whether they were real.
US District Judge Kevin Castel told the court that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”, and that he would be holding a hearing to consider sanctions against the lawyer in question.
The judge said they were “unprecedented circumstances” and that the brief provided was “replete with citations to non-existent cases”.
For example, the brief cited Varghese v China Southern Airlines, a case that does not exist. The fabricated decision did appear to reference a real case, but claimed it had been decided 12 years later than it actually was.
In an affidavit, Schwartz admitted to using ChatGPT to conduct research for the brief and said the incorrect cases were provided by the AI tool, which he said had “revealed itself to be unreliable”.
This was the first time he had used ChatGPT, Schwartz said, and he was “unaware of the possibility that its content could be false”.
The lawyer also provided screenshots of himself asking ChatGPT whether the sources were real, and of the AI chatbot replying that they were and could be found in legal journals.
Schwartz said he “greatly regrets having utilised generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity”.
A message at the bottom of the ChatGPT tool says that it “may produce inaccurate information about people, places or facts”, while OpenAI’s terms of use also include warnings about potentially inaccurate information.
“Given the probabilistic nature of machine learning, use of our services may in some situations result in incorrect output that does not accurately reflect real people, places or facts,” it said.
“You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output.”
Significant concerns have been raised about generative AI since ChatGPT was launched in November last year.
Recently, OpenAI CEO Sam Altman told a US Senate hearing that he is “nervous” about the future of AI and how it could manipulate people via “one-on-one … interactive disinformation”. At the hearing, Altman proposed a licensing scheme for companies that develop AI “above a certain scale of capabilities”.
“Given that we’re going to face an election next year and these models are getting better, I think this is a significant area of concern,” Altman told the hearing. “Some regulation would be quite wise on this topic.”
An Australian mayor is set to become the first person to sue OpenAI, the maker of ChatGPT, over false information, saying the chatbot had incorrectly labelled him a criminal and claimed he had been imprisoned over a bribery scandal.
This article was written by Denham Sadler and first published in Information Age (ACS), 29 May 2023.