By Sara Merken

(Reuters) - Two federal judges admitted, in response to an inquiry by U.S. Senate Judiciary Committee Chairman Chuck Grassley, that members of their staff used artificial intelligence to help prepare recent court orders that Grassley called "error-ridden."

In letters released by Grassley's office on Thursday, U.S. District Judge Henry Wingate in Mississippi and U.S. District Judge Julien Xavier Neals in New Jersey said the decisions in the unrelated cases did not go through their chambers' typical review processes before they were issued. Both judges said they have since adopted measures to improve how rulings are reviewed.

Neals, based in Newark, said in his letter that a draft decision in a securities lawsuit "was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers." He said a law school intern had used OpenAI's ChatGPT for research without authorization or disclosure. Neals said his chambers has since created a written AI policy and enhanced its review process.

Reuters previously reported, citing a person familiar with the circumstances, that research produced using AI was included in the decision.

Wingate said in his letter that a law clerk in his court in Jackson used Perplexity "as a foundational drafting assistant to synthesize publicly available information on the docket." He said posting the draft decision "was a lapse in human oversight." Wingate had removed and replaced the original order in the civil rights lawsuit and previously declined to give an explanation, saying only that it contained "clerical errors."

The judges did not immediately respond to requests for comment sent to their court staff.

Grassley had asked the judges to explain whether AI was used in the decisions after lawyers in the cases said they contained factual inaccuracies and other serious errors. In a statement on Thursday, Grassley said he commended the judges for acknowledging the mistakes and urged the judiciary to adopt stronger AI guidelines.

"Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants' rights or prevent fair treatment under the law," Grassley said.

Lawyers across the country have also increasingly faced scrutiny from judges for apparent misuse of AI. Judges have levied fines or other sanctions in dozens of cases over the past few years after lawyers failed to vet the output the technology generated.

(Reporting by Sara Merken; Editing by David Bario and Marguerita Choy)