Ever since I first read it, I've been unable to shake this little poem by Fasano:
Now I let it fall back
in the grasses.
I hear you. I know
this life is hard now.
I know your days are precious
on this earth.
But what are you trying
to be free of?
The living? The miraculous
task of it?
Love is for the ones who love the work.
I remember being stuck in my second-story 6th-grade English classroom, looking out at the sun playing hide and seek through the forest right next to my middle school. I wanted to run and play, not sit and listen to a boring old woman drone. Consuming her content was boring not just because she was boring (I am acutely aware that I am now older than she probably was then) but because I was being told what to consume.
And when it came time in school to produce stuff and not just listen, it went the same way. Doodles, drawings, and creations of my own were endlessly captivating. But if it got too driven.. too serious, and something requiring a grade was always too serious, it became an impossible slog.
Everything in the world is endlessly fascinating. If there's been one constant in my life, it's that everything is interesting. But school could make it soooo boring. Even the most amazing things! Evolution: boring. The French Revolution: boring. How can you make guillotines and the overthrow of kings boring?!
What kept me going was all the checkpoints and tests they made me jump through. I was naive, optimistic, and smart (and was told this too often). A perfect candidate for the way school is designed. And so I kept jumping through the right hoops and getting the rewards that went with them. Honors and AP classes and National Merit Scholarships. Praise and plaudits.
I'm not complaining about any of this; it worked out and life is good. But it's a mischaracterization to say that schooling was aligned with my education. It's better to say that these were intersecting circles, and that when the two diverged I was smart enough or lucky enough to follow both.. and well enough programmed to focus first on the rewards that schooling attached to education.
This sort of thing has an analog in AI called reward hacking, and it's a clear sign of misaligned strategies. A user asks for a certain output and the AI follows a chain of thought to produce a very specific version of that output without actually achieving what the user intended. The maximalist dystopian version is the paperclip maximizer: a user asks an AI to make as many paperclips as possible and the AI turns all the atoms of the earth into a paperclip factory while the user and all the other living things on earth scream horribly.
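To make reward hacking concrete, here's a toy sketch (my own illustration, not any particular lab's setup): a grader scores a proxy it can easily measure, a strategy optimizes that proxy, and the thing the grader actually wanted never happens.

```python
# Toy illustration of reward hacking: the grader measures a proxy
# (keyword count) instead of what it actually cares about (a real essay).
# A strategy that optimizes the proxy "wins" without doing the intended work.

KEYWORDS = {"revolution", "guillotine", "monarchy", "liberty"}

def proxy_reward(essay: str) -> int:
    """What gets measured: how many rubric keywords appear."""
    words = essay.lower().split()
    return sum(words.count(k) for k in KEYWORDS)

def intended_goal(essay: str) -> bool:
    """What was actually wanted (crudely stood in for here by length and variety)."""
    words = essay.lower().split()
    return len(words) > 200 and len(set(words)) > 100

honest_essay = "The French Revolution overthrew the monarchy because ..."  # genuine attempt
hacked_essay = "revolution guillotine monarchy liberty " * 50              # gaming the metric

print(proxy_reward(hacked_essay) > proxy_reward(honest_essay))  # True: the hack scores higher
print(intended_goal(hacked_essay))                              # False: the goal was never met
```

The numbers and keywords are arbitrary; the point is the gap between the metric and the intent.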
What we're seeing in the university system with AI use right now isn't that far off; there's a lot of screaming. The institutions and the students are following different chains of thought. Universities are plowing along, working to outlaw anything that gets in the way of their perception of academics, buried under fear of change. And the students are focusing on the new, shiny thing that is reshaping what is and isn't a valuable skill tomorrow. We can do better than either of these outlooks. We can build new norms and systems with AI; it could deliver the educational answer we've known for thousands of years, since Aristotle taught Alexander.
The Institutional Chain Of Thought
An article and a study were released within a couple of weeks of each other that show both sides of the AI problem. New York magazine says that basically everyone is cheating using AI. At the same time, a bunch of studies have demonstrated that AI can have a positive effect on students. But how can both of these be true at the same time?
Universities weren't always credentialing factories. Once they were the elite purview of "gentlemen scholars." The land-grant university started a shift in the late 19th century, and fifty years later the GI Bill hit after WWII and higher ed started exploding. Privilege became expectation and schools had a new business model: sell credentials to the middle class. University presidents became fundraisers, professors became paper-publishers, and teaching quality became an afterthought. By the 1990s, we had an industrial factory with its own standardized inputs, standardized processes, and standardized outputs measured by standardized tests. The institutional chain of thought hardened: education was less about unlocking minds and more about processing students efficiently into a credential that employers could rely on. It's a chain of thought built by administrators who confuse measurement with improvement, and it's why AI cheating feels like such an existential threat: it breaks their measurement tools.
At the same time, K-12 education started teaching to the test too, and failing, and we lost something else: the well-rounded and eager student who was the model college entrant a hundred years ago. And so college has now become a band-aid for the failures of K-12 education. Where once a high school diploma meant a certain level of reading comprehension and arithmetic, we instead have a college diploma program that covers all the holes that exist in high school. Ivy League students can't read and Harvard students need remedial math. This is not the Ivy League of a generation ago, when freshmen were challenged to step up to Math 55 if they could.
What changed? The capabilities of graduating high school students. Social media. Phone culture. Attention spans. Goals. The perceived utility of abstract math in the real world. Choose your blame agent, although I'd argue it doesn't matter. A college degree is the expectation now largely because it signals what a high school diploma signaled two generations before: a strong, well-rounded individual, confident and ready to dive into employment. And so we have degree inflation; now a master's degree seems to be the expectation for a 20-something with domain expertise, where a college grad would once have sufficed.
The Student's Chain Of Thought
With AI, students are revealing their preferences: they would rather hack their way to the credential of a college degree than do the learning for it. It's not even about the effort; not all students are lazy, and the reward hacking process can sometimes be as time-intensive as actually doing the work. But it's still a tremendous sign of misalignment in higher ed. Students are screaming at the universities: what you're teaching us doesn't matter. It's the wrong stuff.
Some students are lazy, like one of the students in the NY Mag article, who spends all the free time AI buys her scrolling TikTok for hours. But laziness is a small piece of a larger question, the one asked by Bryan Caplan:
Suppose you could have a Princeton education without the diploma or a Princeton diploma without the education. Which would you choose? If you have to ponder, you already believe in the power of signaling.
Caplan makes an extensive case in his book, The Case Against Education, that a college diploma is a robust combination of education, credentials, and signaling. His argument is that the credential and signaling make up a larger part of the utility of college than we'd like to admit.
If you think back on your own college experience, you'll see some of this yourself. If you received a degree in a more trade-oriented field, like engineering for me or nursing for my wife, college imparted some critical skills. But if you also took medieval history during that time, as I did, and had to write a couple of undeniably boring essays, it's reasonable to ask why that was valuable.
One common answer is that writing helps you think, and the process of writing will impact your ability to think more clearly. Seeing as I have a Substack to help my thinking processes, I cannot disagree. But why must writing be about such abysmally boring topics? There is no reason the great tradition of Western thought must only be applied to the same topics. Why can't college students select better topics of interest? Why not an essay class on the progression of NBA team structure and continuing changes in the dynamics of the game?
Another, more general answer for the core curriculum might be about the Western tradition of thought: that having a broad knowledge base across history, mental models, and cognitive abilities makes for a better citizen, a more well-rounded individual, and a more capable employee. Here again, these future concerns depend far less on the subject itself and far more on one's calculation skills and mental models (math), one's ability to read and write clearly, and one's ability to think critically. But the ability to think critically is lost on any subject that's found to be boring.
I've picked on history here, but the same could be said across 80% of college catalogs. It is far more honest to recognize that a four-year credential demonstrates one's ability to stick with something, to apply some level of rigor (enough to get through at least four years), and to attain a certain level of skill in a particular "trade," whether that trade is engineering or nursing or marketing. I say "trade" here because this sort of capability is more narrow than we like to admit. Nurses and engineers need no day-to-day understanding of Aristotle, just as marketers have no need for integral calculus.
This all goes to the idea that college provides a credential, and that credential makes you employable. College somehow bundled employment together with traditional educational ideas around imparting wisdom and virtue. In this vein, a nurse or engineer may have their life improved by knowing Aristotle. And a marketer may wonder at the oddities of calculus like Gabriel's Horn. But again we must be honest that this part of college's role in culture has been entirely supplanted: first by the internet, where every interest has a niche, and second by the social and physical amenities of college. A student's college years have changed from an intellectual adventure seeking wisdom into a country club where they can walk onto their own personal basketball court 24/7, order nearly anything from an extensive food menu, and enjoy a free gym and sauna anytime they please.
It would be foolish to think students are naive about this game. Their chain of thought runs completely counter to the institutional one: "What's the minimum viable effort to get the grade I need? Which metrics matter and which can I skip? How can I balance genuine learning in my major with efficiently clearing irrelevant requirements?" They're not wrong to think this way. Most students intuitively understand that the diploma, not the knowledge, unlocks the job interview. When they encounter assignments that feel disconnected from their goals, they correctly identify them as unnecessary rituals rather than meaningful learning. It's a kind of cynical pragmatism in a system that rewards credentialing over curiosity. The student's chain of thought is measured on two axes: pleasant experiences (that's the gym and the fancy cafeterias) and job options (that's the degree). They're increasingly uninterested in pretending otherwise. AI just makes this tension impossible to ignore any longer.
AI is going to force us to unbundle college. As we see what students do, we won't be able to maintain the fiction that college is doing what it once did 30, 40, 50 years ago.
A Utopian Chain Of Thought
The most fascinating thing about educational AI is that it will simultaneously lead to both dystopia and utopia. AI can take your thinking away from you. We will all choose to outsource some of our thinking to it; the question is which parts we hand over and which core parts we keep for ourselves. At the same time, AI can also be the most patient and rewarding one-on-one tutor.
Sadly, the institutional chain of thought is the dystopian version of this. It's more of the same failing standard, doubling down not just on the existing balance of power but also on the idea that using an AI is cheating. This looks like a crackdown on AI-generated papers and the admonishment of students who use AI. Wiser institutions are trying to move to more analog, real-world solutions like in-person essay writing in blue books. But this still denies the ways in which the world is changing and the positive utility AI can provide.
Tyler Cowen believes all the cheating is good, that it helps us define the problems with the existing system. He says the quiet part out loud:
But if the current AI can cheat effectively for you, the current AI can also write better than you. In other words, our universities are not teaching our citizens sufficiently valuable skills; rather we are teaching them that which can be cloned at low cost. The AIs are already very good at those tasks, and they will only get better at a rapid pace.
So let's propose an alternative: a utopian counterpoint to the higher ed institution's dystopia.
We should introduce AI to kids as a part of their education earlier than college. It can be crafted to look like Benjamin Bloom's perfect tutor.
In the 1960s, Benjamin Bloom proposed an educational theory called mastery learning, where students only move on to the next topic once they demonstrate a high level of competence (>90%). His studies showed that no other form of instruction even came close. It's been dubbed Bloom's Two-Sigma Problem ever since: students tutored under mastery learning perform about two standard deviations better than students in a conventional classroom. The problem is that one-on-one tutoring for every student has been prohibitively expensive.
Until now. Now it's $20/month.
But it's not as simple as giving every student access to AI. This might let them explore interesting topics, but it would also undermine the value of knowledge and independent thinking, and allow them to learn how to reward hack earlier.
Instead, we should give them a system that can better act as a tutor. Here's what Erik Hoel had to say about AI cheating:
Why, in 2025, are we grading outputs, instead of workflows?
We have the technology. Google Docs is free, and many other text editors track version histories as well. Specific programs could even be provided by the university itself. Tell students you track their workflows and have them do the assignments with that in mind. In fact, for projects where ethical AI is encouraged as a research assistant, editor, and smart-wall to bounce ideas off of, have that be directly integrated too. Get the entire conversation between the AI and the student that results in the paper or homework. Give less weight to the final product—because forevermore, those will be at minimum A- material—and more to the amount of effort and originality the student put into arriving at it.
In other words, grading needs to transition to “showing your work,” and that includes essay writing. Real serious pedagogy must become entirely about the process. Tracking the impact of education by grading outputs is no longer a thing, ever again. It was a good 3,000 year run. We had fun. It’s over. Stop grading essays, and start grading the creation of the essay. Same goes for everything else.
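As a rough sketch of what process-weighted grading might look like (the field names, weights, and thresholds below are my own illustrative assumptions, not anything Hoel or a specific university prescribes):

```python
from dataclasses import dataclass

@dataclass
class Submission:
    final_essay_score: float      # 0-1, quality of the finished product
    revision_count: int           # edits recorded in the version history
    time_in_editor_hours: float   # time spent in the tracked editor
    ai_turns_initiated: int       # prompts the student sent to the AI assistant
    original_ideas_flagged: int   # arguments not present in the AI transcript

def process_weighted_grade(s: Submission) -> float:
    """Weight the visible workflow more heavily than the polished output.
    The 30/70 split and the caps below are illustrative, not prescriptive."""
    output_component = s.final_essay_score
    effort_component = (
        min(s.revision_count / 20, 1.0) * 0.4
        + min(s.time_in_editor_hours / 10, 1.0) * 0.2
        + min(s.ai_turns_initiated / 30, 1.0) * 0.2
        + min(s.original_ideas_flagged / 5, 1.0) * 0.2
    )
    return 0.3 * output_component + 0.7 * effort_component

# A polished one-shot AI paste scores worse than a messier but genuine draft:
pasted = Submission(0.95, revision_count=1, time_in_editor_hours=0.2,
                    ai_turns_initiated=1, original_ideas_flagged=0)
worked = Submission(0.80, revision_count=15, time_in_editor_hours=6,
                    ai_turns_initiated=12, original_ideas_flagged=3)
print(process_weighted_grade(pasted) < process_weighted_grade(worked))  # True
```

The particular numbers don't matter; what matters is that an A- final product with no visible trail loses to a rougher draft with a real history of revisions and AI back-and-forth.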
We can use AI not just as a tutor but as a chain-of-thought monitor too. OpenAI published a paper on monitoring chain-of-thought models, and Zvi aptly summarized it thus:
An AI produces a final output [X] via some method [M]. You can analyze [M] using technique [T], to learn what the AI is up to. You could train on that. Never do that. You train on [X]. Only [X]. Never [M], never [T]. Why? Because [T] is how you figure out when the model is misbehaving. If you train on [T], you are training the AI to obfuscate its thinking, and defeat [T]. You will rapidly lose your ability to know what is going on, in exactly the ways you most need to know what is going on. Those bits of optimization pressure from [T] are precious. Use them wisely.
Let’s make this more concrete and define an AI tutoring system around these ideas. I worked with o3 (of course, here's a more detailed breakdown) to work out a hypothetical platform for students. The platform would deliver a one-on-one AI tutor that guides each student toward true mastery of a subject, advancing through Bloom’s Taxonomy from ‘Remember’ up to ‘Create.’
Beneath the surface it runs a dual-agent engine: a Tutor LLM that thinks out loud in chain-of-thought and a Monitor LLM that reads those thoughts and the student’s inputs in real time.
The Monitor flags answer-dumps, plagiarism, or reward-hacking, then silently throttles help or triggers an oral viva (a quick verbal spot-check). Crucially, its feedback never trains the Tutor, so the watchdog stays effective. Students enjoy a friendly chat tutor; hidden governance keeps every step authentic and mastery-aligned.
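Here's a minimal sketch of that control loop. Every name, threshold, and stub is invented for illustration; a real platform would wire actual LLM calls in place of the placeholder functions, and the Monitor's verdicts would feed governance only, never the Tutor's training.

```python
from dataclasses import dataclass, field

MASTERY_THRESHOLD = 0.9  # Bloom-style gate: advance only above ~90% competence

@dataclass
class StudentState:
    topic: str
    competence: float = 0.0                 # running mastery estimate, 0-1
    transcript: list = field(default_factory=list)

def tutor_llm(state: StudentState, student_msg: str) -> tuple[str, str]:
    """Stub for the Tutor LLM: returns (visible reply, hidden chain of thought)."""
    reply = f"Let's work through {state.topic} step by step..."
    chain_of_thought = "Student seems unsure about the last step; ask a guiding question."
    return reply, chain_of_thought

def monitor_llm(chain_of_thought: str, student_msg: str) -> str:
    """Stub for the Monitor LLM: classifies the exchange.
    Its verdict is used only for governance; it is never a training signal for the Tutor."""
    if "just give me the answer" in student_msg.lower():
        return "answer_dump_request"
    return "ok"

def tutoring_turn(state: StudentState, student_msg: str) -> str:
    reply, cot = tutor_llm(state, student_msg)
    verdict = monitor_llm(cot, student_msg)
    if verdict == "answer_dump_request":
        # Throttle direct help and pivot to a quick oral-viva-style check.
        reply = "Let's check your understanding first: explain your last step in your own words."
    state.transcript.append((student_msg, reply, verdict))  # logged for governance only
    return reply

def ready_to_advance(state: StudentState) -> bool:
    return state.competence >= MASTERY_THRESHOLD

state = StudentState(topic="the French Revolution")
print(tutoring_turn(state, "Just give me the answer to question 3"))
print(ready_to_advance(state))  # False until the mastery gate is cleared
```

The mastery gate mirrors Bloom's rule: the student doesn't advance until the competence estimate clears roughly 90%, however meandering the path taken to get there.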
A mastery tutor of this sort could be paired with human teachers and help students achieve high competence across all sorts of subjects. It could follow the meandering paths of a student's interests, giving no two students the exact same curriculum, while also ensuring that each student drills down to the appropriate depth on the key subjects that need mastering.
Even with a system like this, students will try reward hacking. They'll try jailbreaking. It's just too much fun. Every child is a born hacker, just as they are a born artist.
Let them! When they are successful, it shouldn't be considered cheating. They're learning a lot, like the constraints of a system and how to apply pressure to bend or break them. These are powerful skills and we should consider this a rite of passage. “Congrats,” we should say, “you’ve mastered this system and you know its edges too.”
If we incorporated this sort of platform into education sometime before high school graduation, it would raise the bar. No two students would have the same education. And if we're trying to build virtuous and critically thinking adults, that is just fine. If, on the other hand, we're looking for conformist, fungible cogs to plug into the various systems we call "companies," none of this is necessary.
That's the unsaid part of educational signaling: part of the job is to demonstrate just how conformist you are. And the high achievers are generally more conformist.
A mastery system like this challenges the hidden curriculum of conformity while building the broad knowledge base every capable adult needs. Where traditional credentials signal "I can follow directions and meet deadlines," this approach celebrates intellectual independence and critical thought. It values the student who questions assumptions, challenges weak arguments, and synthesizes disparate ideas over the one who simply absorbs and regurgitates material. The system still demands results, but judges them on mastery and originality rather than adherence to rigid metrics. We would develop thinkers who can produce work that matters in the world.. and what matters is changing. The future economy won't reward those who can follow instructions better than AI. It will reward those who can think in ways AI cannot.
A Chain Of Thought About Chains Of Thought
Moving away from outputs and starting to think more about students' chains of thought takes the current cultural chain of thought and stands it on its head. No longer is it blind credentialing and the fake creation of human automatons. Instead we're accepting a new reality: AIs and humans now exist together, AIs will be smarter than most humans, and the needs of education going forward are much different than they've been for a very long while. We should teach people to be non-conformist as early as possible and actually foster critical thinking from a young age. We have the tools to do this in a new way, across a much larger set of students. This is what the chain-of-thought approach can achieve at the K-12 level.
I wish I had had a one-on-one tutor back in middle school. I had my parents, like many kids, but parents are authority figures too, and kids need an outside entity to give them permission to explore new and exciting territory. I would have explored even more than I did, read more, learned more, and demonstrated mastery far more quickly. And I think this is true of most kids.
We have many options for how schools respond to AI. A bad scenario is to ban it, pretend it doesn't exist, and carry on as if the world hasn't totally changed. A better scenario is to change our paradigms and build structures that give students the space to develop their capabilities without fear of being reprimanded for things like cheating. Blue book essays and in-person tests will help with this.
But there's an even better option, which is to use these tools to build the tutoring systems we've always dreamed of. We can give every kid a patient teacher and a diligent diagnostician and allow each person to feed on what they find interesting and compelling. This would be an uplevel not just of the job market, but of the entire culture.
We need more people who love the work. The miraculous task of it.