Because humanities education is a topic I’m sentimental about, I can’t resist responding to this New York Magazine piece by James D. Walsh. It’s about how large language models (LLMs), led by ChatGPT, are upsetting the basic compact of writing assignments in the halls of academe.
The news is bad. Kids are cheating, and thereby failing to learn, and many instructors understandably want to quit. It’s unclear what might be done about any of this, which is not only sad but existentially unsettling, since it raises all kinds of questions about what it will mean in the future to be a refined, capable, educated person.
Before going too deep into the rise of the machines, I’d like to shamelessly rep my previous writing about AI. In my piece for Blood Knife two years ago, I argued that AI would not be able to supplant human artistic writers for one reason in particular: texts can never really be separated from our interest in their authors. I feel even better about that prediction now than I did at the time. A friend of mine who’s worked in AI recently told me that artists are likely safer than, say, programmers, because what artists do is not “strictly verifiable.” That’s a comforting heuristic. We’ll be OK as long as we all work on being as un-verifiable as possible.
Then again, if no one younger than my generation even learns how to be creative (or incisive, or critical), then it’s all moot and the machines really have defeated us.

Walsh’s piece quotes several students and instructors, all to arrive at a predictable conclusion: Students are using LLMs to write their assignments because it makes it easier to get a good grade, and that’s the main thing they want out of the class. To wit, we have Wendy:
Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be?” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”
I’d like to thank Wendy for keeping it real. I remember being a college student well enough to understand that it feels as if there are always too many demands on your time, because you’re there to launch a successful life in every sense, while also facing the pressures of having to live The Best Years of Your Life (usually false, but you wouldn’t know that yet). Earnest learning can thus be, at best, one of your priorities, and often not the top one. Because you’re there for the credential, and because the value of your credential will be measured in part by how good your grades are, you don’t have to be too nihilistic to start cutting corners. We’ve all done it.
Still, it’s a disheartening passage. The fear you probably share with me when reading a piece like this is that AI is going to sabotage the educations of countless kids, or maybe even all of the kids. Walsh talks to students in humanities classes at prestigious universities; if they’re not learning how to read and write in whatever we mean by a ‘deep’1 way, then who is? We’ve got Wendy here, Wendy who loves to learn and enjoys writing, admitting that it’s nice not to “have to think that much.” What can we be, other than doomed?
Doomed we may be, but, though you wouldn’t know it by reading the contemporary Internet, dooming should be earned. Most people reading this are probably a good bit older than Wendy and the other undergrads featured in the piece, which means we have to rein in our sweeping claims about what The Kids are like today. It’s always tempting, past a certain age, to lecture 20-year-olds about how little they know. But the funny thing about these 20-year-olds is that some of them inevitably become literate people who read the books we care about and, despite what we’ve heard, can even make eye contact when discussing what they’ve read. I know this because I’ve met some of them. I hope you have, too.
If anything, we should probably be amazed that there are still kids like that, given how badly we’ve warped the incentive structures under which intelligent, ambitious Zoomers/Alphas/whatevers have to live. My alma mater had hundreds of English majors each year in the Nineties; by the time I graduated, that number had been pared down to a few dozen, and surely it’s even smaller now. What happened? We all know versions of the story, which in this context I’ll distill down to a lack of reassurance: Fewer and fewer kids believe that simply being earnest and curious on a college campus is going to help them get where they want to go, because they’re told over and over again that they won’t be rewarded2 for doing so. They’ll be rewarded for something else, such as grubbing for ever-inflated grades.
The real problem here is that students have every incentive to be cynical and instrumentalist about their studies. The problem, in other words, is that today’s undergraduates don’t have strong enough incentives to focus on becoming educated, which pushes them towards prioritizing the flimsier accomplishment of becoming credentialed. Where we’re collectively failing these kids is not in giving them access to AI, but in giving them too many reasons to use it in ways that undermine their own growth.
What do we do about either the overarching incentives or the narrower issue of AI? I started to type out some ideas, but then I remembered that I’m not an academic and have no expertise in education. I’ll leave that debate to those of you who are qualified to have it. I’d love to hear from you in the comments, if you’re an instructor or a student, about how AI is shaping your experiences at various levels of education. I don’t have any money riding on the outcome, but I’m curious to learn more. Please educate me.
Calvin and Hobbes commentary is suspended until I find a good way to source and link to the comics again. As ever, if you have a good rec for that, let me know.
A poem
Molly McCully Brown is finding perfect, dead birds.
1. One of the funny and bamboozling things about talking about humanities education is that, even though so much of the talking comes from people who in theory have special training in being articulate, there rarely seems to be a good way to dispense with vague and weary terms such as ‘critical thinking.’ Maybe this is because anything more freshly coinable risks sounding like marketing copy.
2. I have a lot to say about this. Enough to derail this piece. Briefly, it’s always worth noting that it’s forever been ridiculous to pretend that your undergrad major (stripped of the context of your school, your network, your personality, your actual skills, and so on) is going to determine the arc of your life. I mean, give me a break. But because you can choose your major, it’s reassuring to think that all you need to do is pick the right one. This is one of those little tricks we play on ourselves, like telling ourselves that the discolored spot on the ceiling will go away if we just ignore it.
I'll make the case that AI is in fact *the* problem.
I think humans as a species excel at rationalizing behavior that makes life easier in the short term and is detrimental over the long term. I also think humans generally tend to underestimate how habitual they are and form habits without really meaning to. I think AI is designed to capitalize on that. I think AI is unique in its ability to get us to outsource our critical thought. I also think that if you use AI for one edge case here and one edge case there, it's very easy to find yourself dependent on the technology in a year or so. Consider this r/nyu post that I saw in a Chronicle essay "Is AI Enhancing Education or Replacing It?" (itself worth a read):
"I literally can’t even go 10 seconds without using Chat when I am doing my assignments. I hate what I have become because I know I am learning NOTHING, but I am too far behind now to get by without using it. I need help, my motivation is gone. I am a senior and I am going to graduate with no retained knowledge from my major."
It is very difficult to admit this to oneself. For that reason, I suspect this individual's situation reflects that of the median American college student, many of whom are lying to themselves about how much they're learning. I graduated recently from a liberal arts school with an English degree, and many of my peers regularly cheated their way through their degrees.

My (crank) opinion is that the incentives (get a good GPA to be successful later), real though they are, are not the primary causal factor here. Instead, if you let a person pick between something that (1) is easy in the moment but may lead to (nebulous) negative consequences later, or (2) is hard in the moment but may lead to (nebulous) positive consequences later, most people will pick the former. That's especially true for younger people. If you give a student an easy and (supposedly) risk-free way to cheat, they will usually take it. AI makes cheating exponentially easier. This feels obviously true to me but is tough to talk about in polite society, because it is judgmental and somewhat nihilistic.

And in the professional workforce, AI is everywhere, so why shouldn't students use it? AI is writing emails, summarizing articles, and replacing not only human connection but human thought. That has no historical parallel, and I don't know how it will play out as every professional organization (many school districts included) races to be "AI-driven."
What depresses me is that I feel like I'm losing a fight and can't quite articulate what I'm fighting for. You mention this as well: I can say ChatGPT is limiting "critical thinking," but that's not a very compelling argument. And I don't know how to make a better one, especially because there isn't yet a major and readily apparent downside to AI dependency.
I've written far more than I should in the comments of a Substack newsletter and I do apologize for that, especially because I don't really disagree with any arguments you make. I also don't have a unique experience to share. (My fiancée, a high school English teacher, tells me "It's bad out there.") I just think that AI comes with a unique downside that will make our broader American society less interesting to live in.
What do we do? Obviously, we scrap all grading that isn't based on in-person, proctored examinations. Or we close down the joints, slowly and painfully. Looks like academia is going for the latter. Full pay till the last day!