ChatGPT’s Role in Court: How a Judge Used AI to Define Landscaping
Judge Kevin Newsom of the Eleventh Circuit provides 3 reasons why ChatGPT could be used to inform judicial decisions.
Context
James Snell, a landscaper, argued that his insurance policy should cover the installation of an in-ground trampoline. The insurance company disagreed, claiming that trampoline installation fell outside the scope of “landscaping” as defined in the policy.
Judge Kevin Newsom ruled against Snell, deciding that trampoline installation did not qualify as “landscaping.” Newsom consulted ChatGPT — a large language model (LLM) — to help define the term “landscaping.”
Why?
Judge Newsom posed questions to ChatGPT to explore the ordinary meaning of “landscaping.” The AI generated an explanation that aligned with traditional definitions — focusing on activities like planting, maintaining lawns, or altering outdoor spaces. Based on this, the judge determined that installing a trampoline fell outside that definition.
His decision offers insight into how judges could benefit from using LLMs in their judicial decision-making, for 3 reasons.
1. GPT relies on ordinary language inputs.
Legal documents are meant to be written in plain language so that the people bound by them can understand them. ChatGPT is trained on text from the internet, written by ordinary people in ordinary language, so its output tends to reflect how an ordinary person uses and understands a word.
2. GPT understands context.
Take the word “bat.” ChatGPT can tell the difference between a wooden baseball bat and the flying bat that lives in your basement. A language model like GPT may therefore be well suited to explain how a particular word is used in a particular context.
3. GPT is relatively transparent.
Newsom argues that GPT is relatively transparent because it draws from publicly available internet text, allowing users to trace its reasoning back to common sources.
I disagree, and am frankly confused by the transparency argument. While LLMs rely on internet data, their process isn’t transparent. We don’t know how they prioritize certain sources over others, nor can we fully understand how responses are generated. This lack of clarity raises questions about their reliability and fairness in critical applications like judicial decisions.
Broad Strokes
Judge Newsom’s use of ChatGPT in this case highlights the growing role of AI in legal decision-making. For now, it is best used as a secondary source. However, if concerns around transparency and fairness are addressed, its role in the legal field could expand.