Metaphors and Realities: Generative AI's Role in Writing Education
- Katherine
- Jul 8, 2024
- 4 min read
As generative AI continues to evolve, how should educators frame their understanding of tools like ChatGPT to foster effective and ethical use in the classroom?
In my previous post, I looked at Kristin L. Arola’s article “Culturing as Composition,” discussing plagiarism in digital writing. That discussion leads naturally to the role of generative AI, a topic Salena Sampson Anderson explores in her article “‘Places to Stand’: Multiple Metaphors for Framing ChatGPT’s Corpus.” Anderson examines two metaphors commonly applied to ChatGPT: the tool and the collaborator. She argues that neither fully encompasses what ChatGPT does and proposes instead using medical and surgical terminology: “Medical and surgical metaphors highlight the ways that ChatGPT acts upon both the enormous corpus, or body of human writing, on which it was trained and our social body in our academic communities and beyond” (1). She also argues that “our relationship with writing has changed in essential ways with the advent of large language models” and calls on educators to learn more about LLMs in order to teach digital literacy in the classroom. Overall, the tone of Anderson’s article is highly cautionary and negative, an important contrast to Arola’s, which, although it critiques trends such as remix culture, is overall much more positive.

Personally, as a foreign language major, I find the linguistic discussion fascinating. Yet I cannot help but feel that using medical, especially surgical, terminology to describe generative AI leans towards fear-mongering. “Surgery” is a word that, for most people, elicits fear: the patient has no control over what happens during the procedure, and limited understanding of it only deepens that fear. The surgical metaphor thus suggests a level of invasiveness, high stakes, and loss of control that parallels the broader tendency to frame AI as something that will take over the world. Historically, new technologies like the printing press and the calculator met similar resistance and fears of disrupted practice, yet those fears diminished as we adapted and integrated the new tools. This reminds me of the hot-button debate in language classrooms a number of years ago over robots replacing teachers. Language instructors genuinely feared they would be out of a job; ultimately, though, the debate faded, and robots will not be replacing language instructors any time soon. I see the same pattern unfolding with generative AI and the writing process.
By contrast, the tool and collaborator metaphors that Anderson references are far less fear-inducing. A tool is something we use all the time: my computer is a tool, the glasses I wear every day are a tool, my cell phone, my watch… Tools are all around us. Of course, as Anderson rightly points out, a tool is only as good as the person using it - both in terms of skill and in terms of bias. The collaborator is also a much friendlier metaphor and certainly implies a human aspect to ChatGPT. Yet generative AI is not human; it is not sentient. So I can agree with Anderson that this metaphor is not quite right. I lean more towards the tool metaphor, as I believe it best describes how ChatGPT is actually approached and used. My students certainly do not approach ChatGPT with the mindset of collaborating on a project; rather, they treat it as a useful tool for getting the job done.
Perhaps I have a more positive outlook on the use of generative AI because I have experimented with it as a tool to generate ideas, find weaknesses in my writing, and generally help in the drafting process - particularly when I don’t have access to a writing center, an editor, and so on. I have found that one prompting strategy in particular helps quite a bit: I ask ChatGPT to take on the role of a writing tutor and interrogate my writing. Instead of offering suggestions or writing paragraphs, ChatGPT is tasked only with asking me questions. This helps me outline my arguments and identify potential gaps in my reasoning. Of course, I am an adult with a background in writing; my students do not have the same level of skill.
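For readers who would rather script this than retype the prompt in a chat window, here is a minimal sketch of that tutor-style prompt using the OpenAI Python SDK. The system-prompt wording, the model name ("gpt-4o"), and the draft.txt file are my own illustrative assumptions, not a prescribed formula; any chat-capable model should work.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Cast the model as a questions-only writing tutor:
# no rewriting, no suggestions, no sample paragraphs.
TUTOR_PROMPT = (
    "You are my writing tutor. Interrogate the draft I give you. "
    "Do not rewrite anything and do not offer suggestions or sample text. "
    "Respond only with questions that probe my claims, evidence, "
    "structure, and any gaps in my reasoning."
)

def interrogate_draft(draft: str) -> str:
    """Send a draft to the model and return its questions."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("draft.txt") as f:  # hypothetical file holding the draft
        print(interrogate_draft(f.read()))
```

The design choice that matters is the constraint in the system prompt: by forbidding rewrites and suggestions, the model is pushed into the questioning role of a tutor rather than a ghostwriter, which keeps the drafting work - and the learning - on the writer’s side.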
There remains, though, the question of bias and the ethical use of ChatGPT. Anderson rightly points out that “the use of ChatGPT is not without risk and not without ethical dimensions” (9). She defines that risk as the spread of various forms of bias, including racism and sexism, as well as misinformation, arguing that it stems from the lack of a “comprehensive screen of [...] contributions before inclusion” in the texts on which ChatGPT is trained. The idea of bias within technologies is not new. Numerous articles, including some by the ACLU, have documented the racial bias in facial recognition software, and early AI image generators produced image after image of white subjects. Again, a tool is only as good as the person using it. If generative AI is a tool that has to be trained, it will only be as good as the material on which it is trained; and if the trainer has a bias, the tool will have that bias as well.
Therefore, just as Anderson and others point out, it is important to teach digital literacy. Just as we ask students to interrogate and enter into dialogue with texts and authors, they should learn to do the same with the technologies they choose to use and consume, including generative AI programs. By fostering digital literacy, we can empower students to use AI tools responsibly and effectively, preparing them for a future where technology and human creativity coexist.