Judge Brantley Starr Demands Transparency for AI-Generated Content in Court

Emma Jones

Following a recent debacle involving the use of ChatGPT-generated content in a federal legal proceeding, Texas federal judge Brantley Starr has taken decisive action to prevent similar issues from arising in his courtroom. Judge Starr now requires attorneys to confirm whether their filings have been generated, in part or in whole, by generative artificial intelligence (AI). If AI-generated content is used, it must be verified for accuracy by a human.

This new rule comes in response to attorney Steven Schwartz's decision to use ChatGPT to supplement his legal research. The AI language model supplied six cases and supporting precedents, all of which later turned out to be entirely fabricated. Schwartz has since expressed deep regret for his actions. Nevertheless, the incident prompted Judge Starr to establish a "Mandatory Certification Regarding Generative Artificial Intelligence" as a safeguard against potential errors and biases in AI-generated content within legal filings.

As part of the certification process, attorneys must officially declare if any portion of their filings has been drafted by generative AI, such as ChatGPT or similar platforms. If so, they must also certify that the AI-generated content has been cross-checked against print reporters and traditional legal databases by a human being. This rule applies to quotations, citations, paraphrased assertions, and legal analysis, ensuring that all aspects of a filing are accounted for and verified.

Judge Starr's memorandum outlines the rationale behind the new certification requirement, acknowledging that AI technologies have numerous beneficial applications while highlighting the dangers of hallucinations and bias in AI-generated content. He emphasizes that AI platforms, untethered by human oaths and convictions, hold no allegiance to clients, the rule of law, or the truth itself. Instead, AI acts according to computer code and programming, which can be riddled with inaccuracies and shaped by human biases.

In conclusion, Judge Brantley Starr's added layer of scrutiny for AI-generated content in legal filings underscores the need for caution in its application. Although AI can be useful and powerful across many industries, the legal realm demands a thorough examination of any material that influences the outcome of a case. While this rule currently applies only to one judge's courtroom, it may serve as a catalyst for broader adoption, emphasizing the importance of responsible AI usage in the legal profession.