ChatGPT and the University's Existential Crisis

To grapple with ChatGPT in the classroom is also to grapple with the purpose of education: what are its means and what are its ends?

Note: This article first appeared in Le Grand Continent, where it was published in French. The English version is printed below, with permission.


In The End of Education, the American teacher and technology critic Neil Postman argued that schooling has two problems that need solving: “one is an engineering problem; the other, a metaphysical one.” This duality is one of means versus ends. Is schooling about the technical processes by which students might become more efficient workers, doers, and “leaders”? Or is it about the kind of life one should lead—that is, should it not be about “how to make a life, which is quite different from how to make a living”?

It is into this situation that ChatGPT has barged, arriving in classrooms across the globe. It is quickly being embraced—often without students fully understanding it—as a research and writing tool. Its great promise is that it might boost productivity and increase efficiency, two concepts of almost mystical importance in the modern neoliberal university. But its rapid adoption is causing myriad problems for professors—not least of which is how to trust the homework students are turning in. To grapple with ChatGPT and its role in the classroom is also to grapple with the above question about the purpose of education: what are its means, and what are its ends?

There are two obvious ways to deal with it: 1) try to ban it wholesale, or 2) try to incorporate it into the curriculum in responsible ways. I tried banning it first, and that did not work well. So, I improvised by creating an assignment designed to teach students how to (and how not to) use ChatGPT.

Banning It from the Classroom

My first impulse was simply to ban ChatGPT from the classroom. Over-reliance on it would harm my students’ cognitive development and acclimatize them to a technology that they did not know could be inaccurate and misleading. I believed, as I still do, that it could interfere with the purpose of education, especially the humanities—the cultivation and study of humanity. This is why we read difficult books, write essays on arcane topics, and contemplate abstract and difficult questions about ethics and society. For those of us who truly believe in the lifelong value of a humanities education, forbidding technology that hampers such critical development is a no-brainer.

The problem arises when students try to circumvent the rules and use AI to write their papers anyway. This past semester, in my religious studies courses at Elon University, a private American university in North Carolina, I caught several students doing just that. Apparently unaware that ChatGPT could be wrong, a few students turned in papers that contained confabulated information and fake sources. It was fairly easy to catch them, though it will likely get harder and harder in the future—especially since AI detectors are not always reliable.

So, how does one deal with such a situation? Professors could assign only in-class essays, written by hand. No laptops or phones allowed. Oral exams could make a return for end-of-year assessments. A more creative method would be to mandate the use of Google Docs with permission to view version history, so professors can see whether students simply copied and pasted an AI-generated text.

But there are many issues with these approaches. For one, there is the onerous amount of time required of every instructor. Checking sources is time-consuming enough; imagine also having to check each essay’s version history. Couple that with the difficulty of reading handwritten essays in a digital age in which penmanship is moribund, and the workload for even a simple assignment would balloon far out of control. Furthermore, most professors have too many students to conduct oral exams in any reasonable manner. It might work under the tutorial system at Oxford University, but it would not work at an American Big Tech State University. And that is not to mention that, with the adjunctification of higher ed, the burden of this extra monitoring will fall disproportionately on already underpaid and overworked gig employees (something I, as an adjunct myself, know all too well).

Beyond all this, though, there is the bigger question of the purpose of education. Why, after all, should we ban ChatGPT? Some have argued that it is just a tool—something like a calculator “but for writing.” Banning it, under such a view, would simply hamstring students who need to understand how this technology works in order to be competitive in the job market. If schooling is about “making a living,” then forbidding outside technology would unfairly penalize students who need to learn how to use it in order to get jobs and pay off the debts with which their degrees have nobly saddled them. At least, this is what most students—and most administrators—believe. And, to return to Postman’s point, it is possible that simply restraining the means students use to learn will not address the bigger question of ends. If students believe that education is for jobs training, banning new technology will only engender resentment and frustration. We have to think about the bigger picture, and only then can the question of ends be sufficiently addressed—allowing us to return to the question of means (and what technology should or should not do in class).

Because banning the technology proved unhelpful for me, I tried a different approach later in the semester. Ironically, in doing so, I was more effective in convincing my students of the humanities’ importance.

Teaching With, Not Against, Artificial Intelligence

When I realized that my students were using ChatGPT without understanding it, I decided to create a new assignment to teach them what it does. My hope was that, in learning more about it, they would be less likely to over-rely on it. It is easy to forget how new this all is. Most of my students had not even knowingly interacted with an AI until April, when Snapchat released its My AI chatbot (something that annoyed a lot of them—and many others).

My assignment had each student generate their own essay from a prompt I gave them. The instructions were rather long and complicated, but the gist was that each student would take their unique essay output and then “grade” it. They were required to leave five comments on the essay, chronicling its strengths and weaknesses, and then answer a questionnaire about what they had learned. The focal point, for me, was their task of finding out whether ChatGPT had confabulated sources—that is, fabricated books, articles, authors, or even quotes. To my surprise, each one of the 63 essays contained false information.

My students were shocked. Most of them, in their feedback, wrote that they had no idea an AI could be wrong. Some thought that it was “all-knowing” and could not believe it was so error-prone. Others wondered why it was being marketed so heavily if it had such obvious flaws. Many reported that they would be less likely to rely on this technology for their homework once they realized it was not an infallible oracle.

The irony here is that, by using the technology in class, my students took a step back and reflected on the meaning and purpose of education. Demonstrating that the means—using technology as an essay-writing shortcut—were not as effective as they had thought raised new questions about why they even write essays to begin with. Many students reported feeling more confident in their own abilities—after all, as some said, they could do better than that! And if one uncritically embraced this technology, many of them wrote, it could easily lead to a dumbing-down and homogenizing of thought. Not to mention that one could simply learn incorrect information by taking ChatGPT at face value. I was especially gratified to see some students begin to reflect on what it means to be human—and what makes humanity special—in an age of machines.

That said, there are downsides to this approach, too. Students could, perhaps, learn from this exercise how to cheat more effectively, were they so inclined. And while the assignment was a success in orienting my class around classic humanities questions of being, meaning, and purpose, how effective can such a lone, isolated stand be against the wave of technological dehumanization? Against the axiomatic understanding that education is meant for nothing other than “to get a job”?

Why Are We Here?

Every semester, I put this question to my classes: “Why are we here? What is college for?”

The answers are invariably the same: to get a job, to learn how to make money, or—as one student put it more ominously—“to survive.” It is only after about ten replies that one student will sheepishly nominate “learning.”

This is not new. In the lectures published as Anti-Education, the young Friedrich Nietzsche had one of his characters opine that there was an opposition between “institutions of education and institutions for the struggle to survive. Everything that exists today falls into the latter category.” Perhaps my students would agree. The general consensus, then, seems to be that employment and skills training are the ends for which universities exist. Nietzsche reached this conclusion a century and a half ago, writing, “Here we have Utility as the goal and purpose of education, or more precisely Gain: the highest possible income.” Postman concurred, writing that the current “god” of education is that of Economic Utility: “Its driving idea is that the purpose of schooling is to prepare children for competent entry into the economic life of the community.” In my experience, this idea is so pervasive that students do not even think to question it.

There are those who resist. Johann Neem, for instance, argues in What's the Point of College? that we must recover higher education’s chief purpose of stimulating intellectual inquiry and rational judgment, which would—in the end—help students on the job market anyway, because they would be capable of adapting to novel situations. William Deresiewicz has made similar arguments in his book Excellent Sheep, which, though revolving around the Ivy League, is applicable almost anywhere. But these tactics feel at best like a rearguard action. If they are not full-blown retreats in the face of the hegemon of Economic Utility, they are little more than an “advance to the rear” (to borrow Chesty Puller’s defensive quip about his brutal retreat during the Korean War).

The problem here is that if the “god” of Economic Utility remains education’s sovereign, then AI will only exacerbate the issue. Its more enthusiastic proponents will even help it do so. In AI 2041, authors Kai-Fu Lee and Chen Qiufan set out to present a vision of AI’s future that is not as dystopian or frightening as is common right now. Instead, they describe “a future that we would like to live in—and to shape.” This confessed goal makes their vision for AI and education even more nightmarish. In the short story “Twin Sparrows,” they depict a school in which students each have a personal natural-language-processing platform (perhaps built on something like GPT-4) that tailors their lessons to their individual preferences and needs. Nearly the entirety of the students’ education is filtered through an omniscient digital assistant and friend, capable of making “boring” things interesting by rehashing each topic into one the student might like, such as a history lesson delivered by their favorite cartoon character. The classroom teacher fades into the background. Sensing, perhaps, that this would eliminate a crucial human relationship, the authors observe in a commentary on the story that “there will still be plenty for human teachers to do,” such as “be human mentors and connectors for the students.” They can also focus on building “emotional intelligence, creativity, character, values, and resilience in the students.” The teacher will, in this great and glorious future, finally be relegated to life coaching and babysitting. Think, after all, of how much money this would save.

Implementing AI in this way would surely be fatal—for teachers and students. As Postman points out, education is fundamentally a communal endeavor. “You cannot have democratic…community life,” he writes, “unless people have learned how to participate in a disciplined way as part of a group. One might even say that schools have never essentially been about individualized learning.” An unthinking embrace of individualized technology that turns education into a solitary activity means the telos of education really would be Economic Utility. It is the logical endpoint of seeing each human being as a rational consumer, a mechanical being whose atomized existence is oriented around its own individual preferences. Such intellectual and emotional siloing would invert John Donne’s famous line: now, every human is an island.

This vision of education makes sense if the point of college is “to get a job,” but it makes no sense for anything with higher aspirations than that. Because of how widespread this notion is, however, I am not especially sanguine about the future. In order to deal adequately with AI in the classroom, these other, bigger questions must be addressed too. We can quibble about means all day, but unless there is agreement on what higher education is for, little actual progress can be made.

In the short term, those of us still in the trenches can try to incorporate AI into our lessons in the manner outlined above, in the hope that doing so will both educate students about AI’s uses and pitfalls and serve as a springboard to talk about the big picture. But this is not a long-term solution. In reckoning with AI, educators everywhere will be forced to grapple with the problem of means and ends and to find their own solutions to it. My hope is that AI will help stimulate philosophical reflection about the purpose of schooling beyond simply jobs and skills training. But my fear is that it could end up remaking the entirety of education into a matter of means with no end—continuing the trajectory it is already on.