On Monday, March 27, the College of William and Mary Law School, the Student Intellectual Property Society, and the Data Privacy and Cybersecurity Legal Society (DPCLS) hosted a discussion on ChatGPT, artificial intelligence and the implications of this technology for law, specifically in legal education and practice.
Chancellor Professor of Law and Kelly Professor of Excellence in Teaching Laura A. Heymann and Professor of the Practice of Law Iria Giuffrida hosted the panel with two student speakers.
ChatGPT, or Chat Generative Pre-trained Transformer, is an AI chatbot and large language model created by OpenAI and released in November 2022. The software generates new text using a trained model with over 175 billion parameters, producing coherent language in response to a user’s prompts.
“What the generative element is, it’s a guess, a sophisticated guess, on what’s the next word, if it were to read, based on the prompt,” Giuffrida said.
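The “sophisticated guess” Giuffrida described can be illustrated with a toy model. The bigram counter below is a drastically simplified stand-in, not ChatGPT’s actual method; the sample corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the training text."""
    follows = defaultdict(Counter)
    for text in corpus:
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def guess_next(word, follows):
    """Return the most frequently observed next word."""
    return follows[word].most_common(1)[0][0]

# Tiny illustrative training corpus (an assumption for this sketch).
corpus = ["thank you for the gift", "thank you for coming"]
model = train_bigrams(corpus)
print(guess_next("thank", model))  # prints "you"
```

A real large language model replaces these raw counts with billions of learned parameters, but the underlying task is the same: predict a plausible next word given what came before.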
To demonstrate ChatGPT’s capabilities, Heymann gave an impromptu request to the chatbot for a thank you note, which it produced within seconds. She pointed out that the AI produces a formulaic response based on past data it has received, but, given additional instructions, it is able to personalize the note to convey the tone of another person.
ChatGPT’s goal is coherence, and any truth it generates is only a byproduct. As a result, strategically rewording questions allows users to circumvent its content policies, which can generate dangerous responses.
Heymann cited an example of a ChatGPT response which falsely stated that Convallaria, or lily of the valley, a highly toxic plant, is edible and even pairs well with wine. While the chatbot initially informed the user of the plant’s fatal qualities, the user restructured the question to distract the AI from the plant’s toxicity and shift the focus to other topics, such as cooking the plant.
Heymann further emphasized the importance of quality in writing done by humans, reading a quote from science fiction writer Ted Chiang.
“Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say,” Heymann read, quoting Chiang. “That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an AI.”
Chiang argues that AI could never fully replicate the complexity of human judgment in writing done by humans. The word “hallucinate” describes the software’s tendency to fill in language based on patterns to sound persuasive, though the generated product can easily be dismissed as “BS” and rapidly backfire on its intended task.
Giuffrida referenced AI-generated answers to nuanced questions, noting their inability to match the quality of answers offered by actual humans.
“It’s just pretty, it’s got no substance,” Giuffrida said, referring to a memo that ChatGPT had produced based on her input.
ChatGPT’s current inability to reliably interpret ambiguous questions or prompts has spurred the growing study of prompt engineering, which examines how to word requests to elicit more human-like responses.
Conversely, in legal education, professors may consider reworking their exam questions to be less answerable by AI software and to require more critical thinking as opposed to memorization.
“I think the reason why the current version did pretty well on the current bar exam is because the current bar exam is mostly about memorization and spitting back information,” Heymann said. “The National Conference of Bar Examiners is actually working on a new version of the bar exam that will be released in 2026 which involves much more analysis and drafting, using documents much more, and less memorization.”
To Heymann and Giuffrida, the question is not whether ChatGPT will become a part of legal practice, but instead how it will be regulated. Spell-check and Grammarly are common writing and editing software tools that often remain undisclosed for their assistance, but it’s unclear if ChatGPT will be treated the same way.
Though it seems unlikely that AI will become a permissible tool for answering exam questions, Heymann suggested that ChatGPT could potentially help mitigate common writing issues such as writer’s block.
To conclude the lecture, law students Robert Nevin J.D. ’24 and Cole Poppell J.D. ’23, who both researched the impact of the software, gave a presentation on artificial authorship under the Copyright Act of 1976.
Nevin spoke about the first two steps in unsupervised AI training: data collection and preprocessing. He explained that initial data processing takes place before ChatGPT can formulate an answer.
First, raw data is fed to the software. The input is then converted into a numerical representation that the AI can process and learn from.
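The preprocessing step Nevin described can be sketched in a few lines. The whitespace tokenizer and tiny vocabulary below are illustrative assumptions; systems like ChatGPT use subword tokenizers with vocabularies of tens of thousands of tokens.

```python
def build_vocab(corpus):
    """Assign each unique word an integer id, in order of first appearance."""
    vocab = {}
    for text in corpus:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(text, vocab):
    """Convert raw text into a list of integer ids the model can learn from."""
    return [vocab[word] for word in text.lower().split()]

# Tiny illustrative corpus (an assumption for this sketch).
corpus = ["the quick brown fox", "the lazy dog"]
vocab = build_vocab(corpus)
print(encode("the quick dog", vocab))  # prints [0, 1, 5]
```

Once text is reduced to numbers like these, the training process can operate on it mathematically, which is why this conversion must happen before the model can formulate any answer.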
However, Nevin questioned the legality of the use of this data input and whether copyright is the right vehicle to prevent unfair usage. Machine creation has never been incorporated into copyright law before and currently has no legal precedent.
When examining the copyrightability of the AI chatbot’s outputs, Poppell relayed a similar notion: the copyrightability of AI-generated works remains largely undiscussed. Though the Copyright Act of 1976 aims to encourage creative production, it offers only a loose definition of the legal requirements to be classified as a “creator.” Thus, new legislation may be required in the future to provide more substantial regulation of machine-generated works.
Poppell also discussed his perspective on the value and legality of AI authorship.
“I think that you do need some sentience or at least a really strong case that it would be useful to society to have AI as the author,” Poppell said. “But that doesn’t mean we shouldn’t make AI output not copyrightable.”
Vice President of the Data Privacy Group Jeremy Bloomstone J.D. ’24 said that while he has never used ChatGPT before, people in the legal field should ultimately become acclimated to the resource moving forward.
“Students working in well-resourced public offices are going to see this,” Bloomstone said. “And so having exposure to it and having comfort manipulating it will be really important going forward.”