A few thoughts about plagiarism, precarity, and pedagogy in the era of AI Panic.
You see, as we enter 2023, the academic communities I’m part of are awash in fevered conversation about the Machine Learning text generator known as ChatGPT. ChatGPT is the great-grandchild of GPT-2, a system I tried to call people’s attention to years ago. Back then my colleagues treated my interest in Machine Learning text generation with a sort of bemused concern, uncertain if I was joking or having some sort of anxiety attack. Now they come to me and ask, “Have you seen this ChatGPT thing!?!”
I am in no way bitter that my previous attempts to spark conversation on this topic went unheeded. In no way bitter.
Anyway, the sudden interest in ChatGPT seems to stem from the fact that it can produce plausible output from prompts that aren’t so different from classroom assignments, like so:
Note I said plausible, not good. ChatGPT writes prose that sounds natural, and that would fool Turnitin, but it often makes factual mistakes and odd interpretive moves. For example, Veronica Cartwright would like a word with paragraph three above. Paragraph four glosses over the male gender of the creature’s victim in a way that is unsatisfying. Still, these are also mistakes a student might plausibly make. That makes a merely half-assed assignment response difficult to distinguish from a plagiarized one generated by the machine.
Thus, ChatGPT has led to a veritable panic about the coming wave of machine-generated plagiarism in college classes. The proposed responses often trend towards the punitive. We need to build a better Turnitin that will detect GPT! We need to make students handwrite everything in class under supervision! We need a tool that will monitor the edit history in a student’s Google Doc and detect big chunks of pasted-in text! We need to assign writing assignments to students arranged in cells around a central observation tower, so we can observe them without ourselves being seen and get them to internalize the value of not plagiarizing!
Ok, not that last one, but the other ones I have actually seen proposed.
These punitive measures come from an understandable place of frustration, but they also enshrine what Freire called the banking model of education. In this model, students are passive recipients of Established Knowledge. Writing assignments are designed to ensure the Established Knowledge (whether content or writing skills) has been passed on successfully. Students’ reward for demonstrating that they have received the Established Knowledge is a grade, and ultimately a credential they can use on the labor market.
Machine Learning text generators threaten this entire learning paradigm by allowing students to fake the receipt of knowledge and thus fraudulently gain credentials they don’t deserve. To prevent this, the thinking goes, punitive measures must be put in place. GPT must be stopped.
Let me now briefly relate an ironic moment of learning from my own life that I think illustrates a different model of education, before going on to explain the social context that makes it almost impossible to get beyond the banking model in the contemporary classroom.
You see, one of my responses to the rise of ChatGPT and its cousins has been to try to understand Machine Learning better. As part of this process, I’ve been working my way through a textbook that teaches Deep Learning concepts using the Python programming language. The book provides a number of sample pieces of Python code that the student is meant to reproduce and run for themselves on their own computer.
As I went through the text, I entered the code examples into an interpreter window on my computer and executed them. I re-typed the examples myself, slowly typing out unfamiliar terms and being careful not to misspell long and confusing variable names. This practice of copying code examples by hand is typical of programming pedagogy.
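To give a flavor of the exercise, here is a sketch of the kind of short snippet such a textbook asks you to retype: the forward pass of a single dense layer, written in plain NumPy. This is my own illustrative reconstruction, not code from the book.

```python
# Illustrative sketch (my own, not from the textbook): the forward
# pass of one dense layer, the kind of snippet you retype by hand.
import numpy as np

def naive_dense_forward(inputs, weights, bias):
    """Return relu(inputs @ weights + bias) for a batch of inputs."""
    affine = np.dot(inputs, weights) + bias  # linear transformation
    return np.maximum(affine, 0.0)           # ReLU activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))  # batch of 4 examples, 3 features each
W = rng.standard_normal((3, 2))  # weights: 3 input features -> 2 units
b = np.zeros(2)                  # one bias per output unit

print(naive_dense_forward(x, W, b).shape)  # (4, 2)
```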
As a writing assignment, this sort of work seems strange. I am literally reproducing the code that’s already been written. I am not asked to “make it my own” (though I did tweak a variable here and there to see what would happen). I am not yet demonstrating knowledge I have acquired, since the code example is in front of me as I type. It’s a practice of mimesis so primitive that, in another context, it would be plagiarism.
And yet, I still did this assignment myself; I did not have it done for me by machine, though it would have been trivial to do so. I have an e-book of the text; I could have simply copied and pasted the code from the book into the interpreter, no AI writing system needed. No one would have caught me, because no one is grading me!
Indeed, I think I chose to write the code by hand in part because no one is grading me. There is nothing for me to gain by “cheating.” I wrote the code not to gain a credential, but to improve my own understanding. That’s the purpose of an exercise like this: to have the student read the code slowly and thoughtfully. I often found that I understood my own blind spots better after reproducing the code examples, and I quickly started maintaining another interpreter window where I could play around with unfamiliar functions and try to understand them better. At one point, I did matrix multiplication on a sheet of paper to make sure I understood the result I was getting from the machine.
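That paper-and-pencil check looked something like this (a toy reconstruction of my own, not the textbook’s code): work out a small matrix product by hand, then confirm the machine agrees.

```python
# A toy version of my paper-and-pencil check (my reconstruction, not
# the textbook's code): multiply two small matrices by hand, then
# confirm NumPy gets the same answer.
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# By hand: each entry is a row of A dotted with a column of B.
#   top-left    = 1*5 + 2*7 = 19    top-right    = 1*6 + 2*8 = 22
#   bottom-left = 3*5 + 4*7 = 43    bottom-right = 3*6 + 4*8 = 50
by_hand = np.array([[19, 22],
                    [43, 50]])

assert np.array_equal(A @ B, by_hand)  # the machine and I agree
```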
So my re-typing of code becomes a sort of writing assignment that doesn’t verify knowledge; it produces knowledge. This assignment isn’t driven by an exterior desire for a credential or grade, but by my own intrinsic desire to learn. In such a situation, plagiarism becomes pointless. No punitive methods are required to stop it.
Lots of people much smarter than me have long advocated for a greater focus on the kind of assignments described above in college classrooms, and less attention to credentials, grades, and the banking model of education. In the wake of ChatGPT, the call for this kind of pedagogy has been renewed. If the banking model can be cheated, all the more reason to pivot to a more engaged, more active, more productive model of learning.
I think this is a great idea, and I intend to do exactly this in my classrooms. However, I think larger social forces are likely to frustrate our attempts at solving this at the classroom level. Namely, our students’ experience of precarity threatens to undermine more engaged learning before it can even begin.
In my experience, the current cohort of college students (especially at teaching-focused Regional Public Universities like mine) are laser-focused on credentials, and often respond to attempts to pivot classrooms away from that focus with either cynical disengagement or frustration. I don’t think that’s because they are lazy or intellectually incurious. I think it’s because they are experiencing a world in which they are asked to go into substantial debt to get a college education, and have substantial anxiety about putting effort into learning that is not immediately tied to saleable skills. This is exacerbated by the high stakes of a precarious labor market and a shredded system for provisioning public goods, which together threaten all but the best and most “in-demand” professionals with lack of access to good housing, good health care, a stable retirement, and education for their children.
So, either the precarity goes, or we educators do. The punitive measures that would stop plagiarism in high-stakes classrooms will almost certainly fail. A pivot to learning as a constructive experience will only work with buy-in from students liberated from the constant anxiety of needing to secure job skills to survive.
So, as we enter the Machine Text era this spring, I call on us to engage and organize beyond the classroom and beyond pedagogy. How we build our classes will matter. How we build our society will matter more.