John Powers ’26 is a prospective Public Policy major who hails from Brooklyn, NY. He is a proud member of the William and Mary debate society. Contact at firstname.lastname@example.org.
The views expressed in this article are the author’s own.
Last semester, I completed an intense group project in which we needed to produce a policy memo. For weeks, our group researched, wrote and edited. We even met through Zoom multiple times over Thanksgiving break. It was quite the effort.
A few weeks after we submitted the memo, one group member sent a text joking that ChatGPT, an artificial intelligence chatbot developed by OpenAI, could have done the whole project for us. Attached was a photo of its responses to the prompt. They were eerily similar to what we had worked so hard on.
I had never heard of ChatGPT before I saw that text since it only came out after we submitted the memo. So, I made an account and started playing around. I asked it questions and made it generate fictional stories. I was impressed at its conversational tone and ability to remember previous prompts.
Of course, ChatGPT has its limitations. The website’s interface informs users that ChatGPT has limited knowledge of events after 2021 and may occasionally generate inaccurate responses. Others have pointed out that ChatGPT has political bias. ChatGPT has been programmed to avoid controversial political topics and to be sensitive in how it responds to prompts concerning marginalized groups, which some claim gives it a liberal bias.
I have another group project this semester. In light of ChatGPT’s prominence, my professor introduced a policy that allows students to use ChatGPT, so long as it serves only as a starting point, the way one would use Wikipedia. If used, the raw generated responses must be attached to the assignment, and all sources must be independently verified. Those who choose not to use it must attach a statement noting this decision.
This professor is not alone in responding to ChatGPT. Some public school systems have banned it entirely while many colleges seem to avoid completely excluding it from curricula, believing doing so would be ineffective. Instead, many professors are working towards redesigning their courses to include more oral exams and in-class essays. Plagiarism detectors are on the rise as well.
These responses are all well-intentioned, but the most impactful solution to this problem won’t come from rules, but rather from a culture shift in the minds of students. We can try to outpace the growth of technology with all the policies we want, but a real conversation about AI and learning will have to take place. We can try to look at ChatGPT as a helpful tool, but reality tells us it is something completely different.
There are those, like the professors integrating ChatGPT into their classes, who would liken it to a knife: a knife doesn’t hurt you so long as you use it responsibly. This analogy, however, is the most fundamental fault of AI supporters. ChatGPT is not a tool. It is the barrier that will prevent people from writing better and learning from mistakes. It is the slab which crushes efforts to be self-responsible and have your own voice. It is the weapon which contributes to a sense of worthlessness because a machine can do in seconds the work that took you hours. It is the beginning of the road to more AI, less human creativity and more outsourcing.
In saying that it could be a helpful starting point like Wikipedia, we ignore that Wikipedia has hurt us by perpetuating a single narrative, repeated over and over again on the internet, and suppressing nuance. In downplaying fears by noting that math wasn’t destroyed by the existence of Photomath, we ignore that math scores fell in Photomath’s wake. It makes sense: copying down the machine’s answers doesn’t help a student the way that learning from mistakes and going to office hours do.
Make no mistake, though: ChatGPT poses a far greater threat than Wikipedia or Photomath. Unlike Wikipedia, ChatGPT produces unique, almost human-like responses every time. With its conversational tone, ChatGPT has the capacity to plant the seed in our minds that sentient AI is acceptable and to put us on the path to interpersonal relationships with machines.
Unlike Photomath, ChatGPT threatens the very foundation of humanity. Sorry math majors, but mathematics does not have the power to unite people in the way that literature and writing do. Much more pressing is the possibility of the replacement of writers that ChatGPT opens up. Sure, replacing mathematicians with calculators that solve equations is upsetting. But when we replace the people who write books and speeches with AI, we forfeit our individuality and originality.
In that reality, humans are no longer the controllers of our destiny, nor do we unleash the ideas that change the course of history. We cannot avoid that reality with handbooks and addendums. We cannot avoid that reality by playing into the legitimacy of ChatGPT. We can only avoid it by waking up to the dangerous and disquieting effects it could have on academia and our world.
We need more collaboration, more seeking of feedback, more learning from mistakes. Not the introduction of on-demand writing services that will demonstrably harm us.
I end this article where I began. Writing and critical thinking take quite an effort, and only we can keep them that way.