On writing positively about "AI"
I get it: we all want to believe in magic.
If you're writing about something that purports to be nigh-on magical, like LLM-style "AI" tech, you need to be careful. If you're not careful enough, then just like a certain CEO and HR manager last week, you'll lose your credibility, and then you'll be done.
A friend and colleague, who knows he's in danger of losing his "friend" status if he keeps sending me articles calculated to make my head explode, this week passed along a Substack essay that he'd received from a comp.sci colleague. Reading this piece by Nick Potkalitsky, and a few others that Potkalitsky linked to in the article, did nothing to lift my sense of despair about the critical thinking ability of AI enthusiasts.
Potkalitsky's article is entitled "The Secret AI Skill You Already Have (If You Read Literature)," and it's meant to be reassuring. I mean, here's what he tells literature teachers: "You're not being displaced by AI—you're uniquely qualified to teach the dual awareness that effective human-AI collaboration requires. Your expertise in developing cognitive flexibility and managing textual complexity transfers directly to AI literacy instruction."
One problem here is that if we're teaching AI literacy, then we're not teaching literature. In such a scenario, we might be showing students how to use AI, but the students couldn't possibly come away with the "dual awareness" Potkalitsky is going on about.
That's not the only problem, but even though I'm a bear of very little brain, I do think it a significant one.
Potkalitsky goes on to remark that this dual awareness is "a practical skill that determines whether you'll be overwhelmed by AI or empowered by it."
Among other things, he seems unaware that he's working from a thoroughly individualist model: individual teachers equipping individual students who'll be confronting the Borg. The message appears to be, "Keep at it, my special flower, and 'Resistance is futile' might not apply to you the way it'll apply to everyone who isn't you."
If your plan is that LLM-style "AI" tech is going to be kept in check only by the private actions of individual citizens, you don't have a plan, and also the citizenry doesn't have a future.
As for the "productive partnership" between human intelligence and AI mentioned in Potkalitsky's final paragraph: some might have described colonization as a productive partnership, too. But they would've been wrong.
This week, Futurism published Joe Wilkins' article "Tech Industry Figures Suddenly Very Concerned That AI Use Is Leading to Psychotic Episodes," which was prompted by the apparent mental health issues of one Geoff Lewis. Among other things, Lewis is a fund manager who seems to have been broken by his intense use of ChatGPT, to the point of "discovering" a non-existent shadow government agency that has supposedly hurt or killed numerous people. This is a sad thing, and "ChatGPT psychosis" needs to be taken seriously, but that's not my point here.
My point is that in the Futurism article, Eliezer Yudkowsky contrasted Lewis with "people sufficiently low-status that AI companies should be allowed to break them."
One more time: "people sufficiently low-status that AI companies should be allowed to break them."
Some have said that Yudkowsky was implying that the tech companies feel that way, critiquing them rather than speaking for himself. I don't know about that, but let's set it aside and focus on what should be an inescapable fact here: no one holding that kind of sentiment ought to have a scintilla of authority over any other person, in any context, ever.
(Breathe, dude. This isn't good for your health.)
While trying to formulate a response to Potkalitsky's piece, I went a-clicking through its various links, and one that caught my eye was Mike Kentz' guest post on Potkalitsky's own Substack, "The Art of Conversational Authoring: How AI Interaction Mirrors the Craft of Fiction Writing." That's an intriguing collocation of ideas, so I buckled up and took a close look.
Listen, I'm trying to read these things generously, even while I'm reading critically. Potkalitsky's asking for "dual awareness" or "double reading"; that's what I'm doing with these articles, just like I'm doing virtually all the time.
More or less, I gave up on Kentz' piece after finding this short passage:
"You're absolutely right - I shifted to a much more casual, blog-style tone. Let me revise to match the academic register of the rest of the piece:"
[Screenshot from 2025-07-22, at 10.41.34 AM Pacific]
The passage has now been silently deleted: no explanatory note, no reply to my comment, nothing. [EDIT: Mike Kentz has now replied after all, which is great to see, and we've had quite a positive, productive exchange in the comments under his post. {Humanity FTW!} I'll share more context, etc., at the bottom of this post.]
[Screenshot 2025-07-23, at 1.48.46 PM Pacific]
Anyone who wants to characterize my response to someone's enthusiasm about "AI" as some version of doomerism, denial, or ignorance needs to grow up and move beyond ad hominem chicanery. I've read enough SF in which computers solved (or helped to solve) intractable problems that I've always been firmly of the view that humans can be better than we currently are. Fundamentally, we're a tool-using species, and LLM-style "AI" tech might eventually become a genuine tool rather than a mere potentiality.
At this point, though, my opposition to this tech is only intensifying. For my mental health, I'm not going to write any more about that in this post. Another day, perhaps.
(On a personal note, I'm not optimistic about my attitude during the coming academic year.)
UPDATE:
As noted above, Mike Kentz did reply to my inquiry about the Claude-ish passage, so we've had a fairly long and positive exchange about the issues it raises. Nick Potkalitsky hasn't yet replied to my comment on his own post, but of course people are free to make their own decisions, and that's fine. It's neither easy nor fun to handle pointed criticism, so I appreciate Kentz' replying much more than I regret Potkalitsky's not doing so.
Should I delete the swear-word above, the one I use to characterize my current willingness to trust those enthusing about LLM-style "AI" use? A deletion would be more community-minded, certainly, and the word does sound emotional rather than analytical, so I thought about it.
On the other hand, humans have emotions as well as thoughts. Historically, we brainy types have spent a lot of time pretending otherwise (I think of Doug Coupland's Microserfs: "I feel like my body is a station wagon in which I drive my brain around, like a suburban mother taking the kids to hockey practice"), so I'm going to leave the feelings visible. If Kentz or Potkalitsky ever wanders by here, hopefully they'll read right to this point and realize that I agonized a little over whether to curse at them.
And yeah, I need to post more often about this, given how much mental space it's taking up in my brain (and how much it's all up in my feels). There's too much to do already, though, and I resent losing my grip on all those things I value much more highly than a plagiarism-based, planet-warming, plausible-token generator.