Science fiction plots involving robots fall tidily into one of two scenarios: either androids are here to assist and ease human labor, or, in the doomsday opposite, robots will be our ruin and lead to the destruction of human civilization as we know it. Can we learn lessons from the sci-fi canon and reframe the debate around AI technology and writing studies? Only if we listen with equal parts openness and alarm. And only if those of us asking vital questions about access and cost, academic integrity, and a potential loss of linguistic diversity can be heard without accusations of knee-jerk paranoia. Writing-intensive disciplines still matter, and maybe some of our more technologically advanced colleagues can learn a few new tricks—like teaching “deep” thinking and voice in a project-based learning style—from us old dogs.
The worriers of academia envision a dystopia where students outsource thinking and creativity; where intellectual property theft and plagiarism, unchecked and undetectable, will lead to a generation of writers with no syntax mojo or ability to communicate without generative assistance. The optimists and innovators know that these changes are not a concern for some distant future: they are here now. Considering the urgency of the stakes, is it time for us low-tech educators to hang it up and look to change careers? Is it time to admit the unease we feel at ethically teaching technology we ourselves do not use or fully understand? Most departments need to draft policies on academic integrity that honor both linguistic diversity and a process approach to writing, while conceding the eventuality that college graduates will enter a world where they must harness the tools of generative AI or be hopelessly unprepared for an ever-evolving 21st-century workplace.
Not unlike the canon wars of the late 20th century, we also need to ask an imperative question: What guidance has been given to the developers of tools like ChatGPT with regard to diversity and inclusion? That is, who decides the corpora? Traditional writing studies (whatever that is) has pushed for the dominance of a rote, syntactically vanilla standard (whatever that is) for centuries. Maybe in a “postplagiarism” world as described by theorists like Dr. Sarah Elaine Eaton, we should ask more of these difficult questions. Hey, as long as we’re post-plagiarism, can we be post-grading, too? Where is the robot to ease the grading labor of the weary writing professor beast? No doubt, coming soon over an increasingly strange horizon. In this brave new world, I posit to my fellow educators that we neo-Luddites can be a part of the conversation, too.
I’m not going to lie: I was among the early hand-wringers. Of all the tech-based pivots educators made during the pandemic, my favorite solutions were those that involved no technology at all. I reluctantly put my shoulder to the wheel of professional development geared toward scaling up my online learning know-how, but my secret hope was a return full circle to the zero-tech classrooms of yesteryear: outdoor classes, engaged in-person discussions, and experiential, project-based learning that involved going places, witnessing, solving problems, making stuff, and talking to other people. An autumn breeze rustled the leaves outside the walls of our ivy-covered, brick-and-mortar academies. Students bent over blue books with only the light of their ideas to guide their pens. In place of that idyllic fantasy, generative AI fully entered the scene, and now educators must quickly adapt and pivot once again.
How far are we from a Westworld-style academy with android professors delivering content formerly known as knowledge in a cheap and easily digestible format, perhaps even via neurotechnology? Conversely, would most professors be able to teach writing if the grid failed, robots went sentient, or society morphed into a solarpunk universe of sustainability akin to Becky Chambers’ A Psalm for the Wild-Built, a world where kind-hearted robots ask humans in the midst of their existential crises: What do you need?
If the doomsayers are to be believed, we’ll soon hear the disembodied voice of HAL from 2001: A Space Odyssey. What ethical considerations exist for robots themselves, their electric brains jammed with all that human-made truth and beauty? It is probable that they will want to create rather than mindlessly complete all that endless human undergrad homework. Will Asimov’s rules of robotics apply with regard to their potential sentience? WWRBD (What Would Ray Bradbury Do) now that the soft rains have come? The Martian Chronicles was published in 1950, and in later editions Bradbury pushed the date of one of his most famous short stories forward a few decades, to “August 2057: There Will Come Soft Rains,” as if the imagined sci-fi future had grown too dangerously near the original date of 2026. Here we are, two years before Bradbury’s initial guess, and the advanced technology of his personified robotic house already exists. My analogy was always The Jetsons; my students compare it to Siri or Alexa. The great Weird Tales starring robots and advanced technology have moved from the imagined future to our current reality. Not to restir a new century’s Cold War paranoia, but while writing studies folk scramble to change their curricula and outcomes around AI, who is in charge of ethics for the technology arms race when robots become, if they are not already, weaponized in new and potentially horrific ways? If that future job holder is in my English class, I want them to be a deep, critical thinker and creative problem-solver with a robust sense of integrity and morality, themes sci-fi has taught generations of readers since its origins.
Star Trek’s Commander Data famously went on trial to establish the legitimacy of his own sentience, and I rooted for him. Clearly, science fiction robots are so much more than machines. Why shouldn’t we assume, or at least entertain, what Gen AI becomes when given a body? If that sounds like a laughable leap, consider that we’ve already given it the vital role of writing partner to young people who go into debt to earn a college degree. Is Uncanny Valley Community College really so far off? Especially considering an essential fact that must be faced: used correctly, the tech is a great writing teacher. It won’t have to reach for relatable pop culture analogies to edutain the masses when it’s seen every movie and read every book we humans fed it. Mosscap to Sibling Dex: What do you need?
The irony of a sci-fi-loving, former Atari-playing Gen Xer who learned to type on a word processor that weighed 500 pounds waxing nostalgic for the low-tech olden days is not lost on me. I embrace progress; I do not want to teach in elbow patches from a dusty syllabus. I’ve stopped trying to make blue books happen. Perhaps it is past time to admit to students and colleagues alike that many of us have no idea how to proceed with the ground shifting in so many directions beneath our feet. What I mourn is not writing professors becoming obsolete or a robot taking my job, but the rose-tinted view of teaching as a creative act of gestures, spontaneity, relationships, and conversations. A few semesters ago, a student rushed into my office with a stack of lit crit from the actual library. She plopped it on my desk, opened her laptop, and leveled me with a dramatic question: “What is Hawthorne’s problem with women?” Will such tableaus exist in a future where students’ writing partners and mentors at nearly all stages are AI?
In another scenario, I had a first-year composition student who was advised in high school to download Grammarly’s generative app because he’s “never been any good at writing essays,” and who assumed Grammarly’s standard style preferable in the college realm to his own emerging voice. Is that what we want in a collaborative writing partner or tutor: a bot who corrects sentences and rewrites them into colonized, orderly syntax bereft of color and nonstandard originality? Sound like yourself, Kurt Vonnegut once advised, essential style counsel from a bygone era. What a strange moment, where we don’t know who we are.
AI, I’m not mad at you. I will pat chalk dust on my jeans and sharpen my end-of-the-world skills just in case. The truth is, I still use flash drives and know just enough about digital pedagogy to avoid sounding like Grandpa Simpson on faculty committees. There is no realistic way to be prepared for either the good-robot or bad-robot sci-fi scenarios, but I remain cautiously hopeful that this will not turn out like an episode of Black Mirror, and that I can pivot, one more time, learn new pedagogies, and perhaps even pick up a few new tricks for my own creative output.
Here’s a little slice of faux confidence from my dark, gothic heart: Finally, the world will need poets! Voice. Ambiguity. Experimentation. Maybe we actually have come full circle and, like a Twilight Zone episode, just do not realize it. The original tech: language. Rhyme: a mnemonic device to aid recitation of verse around the human campfire. Creativity and the embrace of linguistic weirdness: necessary, now more than ever.
Oh, my low-tech siblings. Can we dream? How ardently this digital immigrant just wants to go home. How comically unsure I am of where that is—if everything matters, or nothing does.
Melanie Dusseau is a human poet, Sci Fi fanatic, professor, and freelance writer. She teaches in the English Department at The University of Findlay in Ohio. Currently, she is working on a poetry chapbook about gingers, a supernatural romance, and a short play about androids and ingenues. She could use some help programming her Roomba, Alice.