Texas Judge’s Mandate on Generative Artificial Intelligence Provides Lawyers Job Security … For Now

June 7, 2023

A Texas federal judge affirmed the impending prevalence of artificial intelligence (AI) in the law, while emphasizing the enduring importance of human lawyers.

Judge Brantley Starr of the U.S. District Court for the Northern District of Texas recently updated his judge-specific requirements to include a section titled “Mandatory Certification Regarding Generative Artificial Intelligence.” Specifically, Starr orders all attorneys appearing before the court to file a certificate attesting either that (1) no portion of any filing will be drafted by generative artificial intelligence, or (2) any language drafted by generative artificial intelligence will be checked for accuracy by a human being. This precedent-setting Mandatory Certification is one of the first, if not the first, of its kind establishing the appropriate use of AI in legal proceedings — an issue lawyers are currently grappling with.

Indeed, in the U.S. District Court for the Southern District of New York, lawyers who used ChatGPT to draft an opposition to a motion to dismiss will be forced to show cause as to why the court should not issue sanctions against them and their firm. In the now infamous case of Roberto Mata v. Avianca, Inc., the defendant’s counsel wrote a letter to the court on April 26, 2023, questioning the authenticity of several cases cited by the plaintiff’s counsel in their opposition — namely, asserting that the cases did not exist. The court itself found that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” and thus issued its order to show cause. In response, one of the plaintiff’s attorneys admitted that “[i]t was in consultation with the generative artificial intelligence website Chat GPT, that your affiant did locate and cite” the nonexistent cases.

As the Avianca case revealed, and as Judge Starr points out, generative artificial intelligence — such as ChatGPT, Harvey.AI or Google Bard — is not without risks. Starr’s Mandatory Certification acknowledges that the platforms’ propensity for hallucinations, or tendency to “make stuff up,” presents an issue with using them for legal briefing. Additionally, Starr emphasizes the issue of reliability or bias in relying on generative artificial intelligence. Specifically, the Mandatory Certification notes that “attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients.” In contrast, “generative artificial intelligence is the product of programming devised by humans that did not have to swear such an oath.” In other words, AI holds no allegiance, is unbound by any sense of duty and bases its responses on “computer code rather than conviction” and “programming rather than principle.”

Inarguably, the use of AI and automation technology calls to mind several of the obligations lawyers owe under the American Bar Association’s Model Rules of Professional Conduct. For example, lawyers are required to ensure the conduct of any nonlawyers associated with them is compatible with the professional obligations of the lawyer. (Model Rule 5.3.) The use of AI also implicates a lawyer’s duty of confidentiality and the prohibition of the unauthorized practice of law. (Model Rules 1.6, 5.5.) However, a lawyer is also obligated to maintain tech competence and “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” (Model Rule 1.1, Comment 8.)

AI is arguably the next generation of the automated technology that has already become widely used and accepted in the practice of law. For example, lawyers readily rely on the algorithms and search functions of Lexis and Westlaw to find relevant case law, and both services are looking to further develop AI to better assist with case searching. For drafting, a variety of automated programs, from BriefCatch to Microsoft Word itself, offer suggestions for better briefing. And long before the prevalence of technology, lawyers relied on paralegals and practice assistants to help prepare court filings. Thus, given the historical development of legal aids and lawyers’ obligations to maintain tech competence, the use of generative artificial intelligence in the practice of law — while not infallible — appears inevitable.

Overall, Starr’s Mandatory Certification strikes a balance between recognizing that generative artificial intelligence is incredibly powerful and has many uses, while reiterating the important role lawyers still play in ensuring accuracy, reliability and — in essence — humanity in the practice of law. In other words, lawyers’ jobs are safe … for now. 