Melbourne, Aug 15 (AP) A senior Australian lawyer has apologized to a judge after submitting false information, including fabricated quotes and fictional case judgments generated by artificial intelligence, in a murder case.
The incident in Victoria's Supreme Court is the latest in a series of complications that AI has caused in justice systems around the world.
Defense lawyer Rishi Nathwani, who holds the prestigious title of King's Counsel, took full responsibility for submitting inaccurate information in the case of a teenager accused of murder, according to court documents reviewed by The Associated Press on Friday.
"We are deeply sorry and embarrassed for what occurred," Nathwani stated to Justice James Elliott on Wednesday, on behalf of the defense team.
The AI-generated errors caused a 24-hour delay in resolving a case that Elliott had hoped to conclude on Wednesday. On Thursday, Elliott ruled that Nathwani's client, a minor whose identity is protected, was not guilty of murder because of mental impairment.
"At the risk of understatement, the manner in which these events have unfolded is unsatisfactory," Elliott commented to the lawyers on Thursday.
"The ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice," Elliott added.
The submission errors included fabricated quotes from a speech to the state legislature and nonexistent case citations purportedly from the Supreme Court.
The discrepancies were discovered by Elliott's associates when they were unable to locate the cases and subsequently requested copies from the defense lawyers.
The lawyers admitted that the citations "do not exist" and the submission contained "fictitious quotes," as stated in court documents.
The defense team explained that they initially verified certain citations but incorrectly assumed the rest were also accurate.
The submissions were also sent to prosecutor Daniel Porceddu, who did not verify their accuracy.
The judge highlighted that the Supreme Court had released guidelines last year regarding lawyers' use of AI.
"It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified," Elliott emphasized.
The court documents did not specify which generative AI system the lawyers employed.
In a similar case in the United States in 2023, a federal judge fined two lawyers and a law firm $5,000 after ChatGPT was blamed for their submission of fictitious legal research in an aviation injury lawsuit.
Judge P. Kevin Castel said the lawyers had acted in bad faith, but he credited their apologies and corrective steps in explaining why harsher sanctions were not necessary to deter them, or others, from again relying on AI tools to produce fake legal history in their arguments.
Later that year, additional fictitious court rulings generated by AI were cited in legal papers filed by lawyers for Michael Cohen, former personal attorney for US President Donald Trump.
Cohen assumed responsibility, stating he was unaware that the Google tool he was using for legal research was susceptible to AI hallucinations. (AP) SCY