There has been recent debate over student use of ChatGPT, a chatbot built on a large language model (LLM) that uses machine-learning algorithms to process natural language and generate responses (Radford et al., 2019). The main debate has centered on plagiarism among students. A subset of this debate holds that, because it is trained on large amounts of unstructured text data, ChatGPT will generate fake or biased content, a reflection of the larger body of data from which it draws.
ChatGPT is not a search engine. It is a text predictor trained on a subset of data found on the internet. Given that large amount of data, there are assumptions of truth and bias, but the reality is that ChatGPT has no soul. It does not monitor the data, and it is not a reflection of something larger or more nefarious occurring on the internet. In essence, it is a text calculator. This is also why ChatGPT has been described as a stochastic parrot (Bender et al., 2021), meaning that, based upon predictive algorithms, it stitches text together in remarkable ways without reference to meaning. Much like Harry Potter’s experience with the “Mirror of Erised,” it will reflect your deepest desires even if they are not real.
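To make the “text calculator” idea concrete, here is a toy illustration in Python, my own sketch and nothing like ChatGPT’s actual implementation: a bigram model that counts which word tends to follow which in a tiny corpus, then generates text by sampling from those counts. Real LLMs use neural networks trained over billions of documents, but the core move is the same: predict a plausible next token, with no notion of truth.

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny training corpus.
corpus = ("the study found the effect was significant "
          "the study found no effect the effect was small "
          "the authors replicated the study").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    options = follows.get(prev)
    if not options:                      # dead end: fall back to any word
        return random.choice(corpus)
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate: start with a word and repeatedly sample a likely successor.
word, output = "the", ["the"]
for _ in range(10):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-sounding and plausible, but never fact-checked
```

The output reads like a finding, yet nothing in the process ever consulted a fact; it is prediction all the way down.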
I know this because of my own experience. Rather than banning ChatGPT, I chose to embrace its possibilities. As many faculty do over their tenure, I redeveloped a course. Among the assignments was the annotated bibliography. I assigned a variation of it in an attempt to inform my students’ clinical practice and broaden their technical writing. I painstakingly labored over creative ways to reformulate the same task, and I patted myself on the back for creating better clinicians using this model. Then, I discovered ChatGPT.
In one instance, ChatGPT obliterated my efforts, or so I thought. Rather than curse the application, I wondered if I could incorporate it into my courses and embrace the possibilities. I began to type in the prompts that I used throughout my course, and each time ChatGPT did not disappoint. It created beautiful annotated bibliographies citing experts in my field who were familiar to me, drawing on journals both reputable and lacking in quality. I quickly dismissed the lower-quality entries and kept those from the more reputable journals. I thought it was a game changer and kicked myself for my poor research skills, because the chatbot had me beat right down to its use of digital object identifiers (DOIs) for the articles it presented. Then, I cross-referenced the information. None of it was real. The DOIs resolved to different articles in different disciplines. There were documented collaborations between authors that never existed and journal publications that did not match the authors’ work, no matter how much I dug through their CVs online. The articles simply did not exist. I sat there staring at some alternative, better world reflected on my computer screen, one where my research heroes collaborated, my interests appeared in the best journals, and everything was easy to find. It was fabricated. ChatGPT had simply fulfilled my request, without soul, without thought, and that was okay with me.
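Faculty who want to catch such fabrications need surprisingly little tooling. As a minimal sketch (assuming Python with the `requests` library and Crossref’s public REST API, which returns a 404 for DOIs it has never registered), one can check whether a cited DOI exists and what it actually points to:

```python
import requests

def check_doi(doi: str) -> None:
    """Look up a DOI against Crossref's public metadata API.

    A fabricated DOI will typically return 404; a real one returns
    registered metadata whose title can be compared to the citation.
    """
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        print(f"{doi}: not found in Crossref -- possibly fabricated")
        return
    resp.raise_for_status()
    title = resp.json()["message"].get("title", ["<no title>"])[0]
    print(f"{doi}: registered as '{title}'")

# A real DOI for comparison (the Bender et al. paper cited below):
check_doi("10.1145/3442188.3445922")
```

One caveat: not every legitimate DOI is registered with Crossref (some live with other agencies, such as DataCite), so a miss is a prompt for manual verification rather than proof of fabrication.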
My experience may be a rookie error, but frankly, I believe a majority of us are rookies, especially our students. Most faculty, like myself, do not have a computer science background but need to mitigate the issues technology could present in our courses. Much like faculty, students will always be looking for an edge, just as we did when we were in their shoes. Rather than penalty, I would urge faculty to show compassion toward students caught using ChatGPT. As human beings, we are designed to find shortcuts, and that has led to great innovation. Even the title of this article appeared to offer great promise to thwart ChatGPT misuse, but as with most things, it is more complicated. We can, however, use this innovation to take a few minutes to teach the value of research, in both its limitations and strengths. As seasoned faculty, we can speak to the pitfalls of poor research. We can discuss our own errors along the way rather than simply penalizing students, and until the computational mirror does something more than reflect, we can empathize.
I remember completing my graduate thesis poring over microfiche and ordering journals. It was neither productive nor motivating. I found the long lag time a hindrance to synthesis and analysis, and I wished for a shortcut. A decade or so later, my students use various search engines to retrieve the same information that took me far longer to gather. I would be dishonest to say that I am not a little jealous, given the barriers I waded through when I completed my research.
The ubiquitous essay is still relevant despite ChatGPT. It remains a means of assessing acquisition, synthesis, and analysis of the content taught, and we still need students to demonstrate these qualities through the essay and its various iterations. For now, I recommend faculty get educated, attend their university-hosted seminars on ChatGPT, and, in turn, educate their students.
As for my graduate students, my hope is that they may one day find themselves facing their own students’ innovative shortcuts that make them reflect on the relevancy of their own archaic research practices. When that day occurs, I offer a seat at the table to commiserate, just as my professors did when I had the advantage of microfiche. In the meantime, as others have before and after me, I will reflect on the “Mirror of Erised” that ChatGPT created and on what could be.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). https://dl.acm.org/doi/10.1145/3442188.3445922
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog.