Gen AI can GTFO

Recently I had the chance to speak about generative AI, alongside five colleagues, on a panel discussion entitled "GenAI in the English Department," in front of a mid-sized crowd of colleagues and students. Our mandate was to comment on "how [we] are responding to GenAI in [our] teaching."

Although I worked only from notes, and barely that, the below is roughly what I bellowed. I've used a title here, but I didn't have one there, as it wasn't that kind of event.

There's so much more to talk about (agentic AI, people allowing some AI encroachment, fascism, etc etc), but this is what I had time for!

"GenAI Has No Place in My Classroom"

The previous speaker talked about Oxford's decision this fall to welcome ChatGPT Edu into their environment. I have many thoughts about this data point, which include noting that this decision was mostly taken by a rogue chancellor installed by an incomprehending government that had lost its way even before it was elected. As the same speaker remarked in closing, though, "we need as many voices as possible to inform our thinking" about this issue, and among other things, I take that to mean we should pay closer attention to the Oxford example.

However, I'm going to pay the Oxford example as little attention as possible. My position on GenAI in the classroom comes from agreeing with the idea that "we need as many voices as possible to inform our thinking":

  • GenAI has no voice, or if it does, it's a monolithic single voice;
  • GenAI cannot "inform" us of anything, because it has no sense of what "informing" or "information" means, or indeed any sense of meaning itself; and
  • when we're using GenAI, we're not thinking at all, so much as we are shopping (Guest et al., 2025, pp.13-14).

I've found this shopping metaphor especially useful:

"AI stands in contrast to a tool like a saw involved in the predominantly overt labour of woodworking (that is, to cut wood), where the person cutting also puts in labour with actual control over the output of the labour; often more than the creator of the saw. AI users, on the other hand, are customers much more like the person buying the end product of woodwork than carpenters themselves. If not in some sense more so, as they remain unaware of, and are even tricked into thinking they performed the relevant labour" (Guest et al., 2025, pp.13-14).

When we tell a GenAI interface to do something, all we're doing is commanding that someone or something other than ourselves do something. We can think of ourselves as prompt engineers if we like, but to continue the metaphor, really we're just telling the carpenter to do it again and again, until we decide to stop bossing them around and live with the results. We're not, ourselves, doing the labour or developing the expertise.

These days, I often find myself thinking of a particular scene from the wonderful, ancient TV drama Homicide: Life on the Street. Late in the series, a detective named Meldrick Lewis finds himself explaining to gangster Luther Mahoney that even though Luther has the money, the power, and the upper hand, Meldrick is structurally incapable of backing down, partly because of what he learned as a beat cop:

"My sergeant told me that sometimes you're going to have to clear the corner. No big thing, just invite everybody to move on. For most people that's enough, you know, the police tell you to move along, that's what you're gonna do, you're gonna move along. But every once in a while, he told me, there's gonna be some knucklehead FOOL, that's going to wanna stand there on your corner talking trash. And then he told me, he said, don't you ever, EVER, let some knucklehead stand there. Cause the minute you do that, the minute you let somebody shame you, you're finished as a beat cop. So what he suggested I do, was that I take out my nightstick and I pop him upside the head so hard, that everybody who hears it knows who had the last word.... You're on my corner."

And so I want GenAI off my corner.

Let me be clear that in every other respect, I'm deeply uncomfortable with stretching this reference toward having me police the community that is my classroom, but GenAI is on my corner, and it needs to move along. (During the talk, I loosely paraphrased the quotation, really keeping just the line, "You're on my corner.")

I tell my students that if GenAI were capable of wishes, it would wish it were them, because they're better than GenAI. Fundamentally, it's incapable of mimicking so much of what my students can easily do, or what they can do with a type of effort that's alien to GenAI's software and hardware. It can generate strings of word-like concatenations of letters very quickly, but who needs that?

It's important to bear in mind that the people behind GenAI's largest instances don't have our best interests at heart. These are billionaires doggedly pursuing their goal of becoming trillionaires, and their primary vehicle for doing that is making all of us (but especially students) feel bad about ourselves, making us feel worse about our abilities and our potential.

GenAI's path to profitability is to weaken our critical thinking, our sense of our own abilities, and our community-mindedness. GenAI gains nothing from our continuing to be able to do just fine without it, as we did until roughly 2022.

The technological limitations are clear, as well. At bottom, LLM-style GenAI is only a "best-guess" machine, an extruder whose product is text-like strings of symbols and tokens. Because of its architecture, GenAI is only ever going to generate plausible text that needs to be checked by someone who knows the difference between plausible and useful, plausible and correct, plausible and appropriate, plausible and beautiful.

If all you want is good enough, and if there's nothing to be gained from the process (THERE IS ALWAYS SOMETHING TO BE GAINED FROM THE PROCESS), then maybe an expert could not unreasonably use it to generate text-like strings of symbols or tokens. It's just that this expert would have to be able to (at minimum) fact-check its claims, reconsider its analysis, and recalibrate its rhetoric, and none of that's within GenAI's capability.

If nothing else, our students need to become experts in something, probably multiple things, and that's a reality that hasn't changed with the advent of GenAI. If our students use GenAI, and if we let them use GenAI, they're not becoming expert, and we're failing to prepare them.

As John Warner puts it in his book More Than Words, "The arrival of ChatGPT changes nothing about the fact that writing is thinking, writing is feeling, and together, this thinking and feeling allows us to project ourselves into the world by communicating. This is not the work of bots" (p.87).

I recognize that our students, and we ourselves, are buried under advocacy, hype, and tech that has been installed on all our devices (some of it bought and imposed on us by our institutions). Resistance to GenAI cannot end with individual refusal, and it's unreasonable to expect ideological purity at every moment from our students, our colleagues, or ourselves.

Right now, though, our job as educators is clear, and that job is to tell GenAI that it's on our corner, and that it needs to move on.

If the knucklehead fool that's GenAI wants to keep standing on our corner talking trash, then we need to take additional steps: the nightstick, to follow the reference.

I'll admit that I don't know what that would look like.

But I do know that as an educator, I don't see another path for myself other than resistance.
