Note: I wrote this in August 2023, almost exactly a year ago. I’m working on a new version with a different (better?) structure to understand the value of human thought in an AI world.
In the meantime, a bunch of famous-er intellects have voiced similar theses. You should read or listen to them here:
A Bull Market In The Humanities (Luke Burgis)
Every time I sit down with a blank sheet in front of me I’m scared and excited. I’ve usually spent a day or two grappling with some text or interview or idea in my head. There are some fleeting moments of clarity, but more frequent moments of frustration and exhaustion. This is, so they say, one of the primary reasons to write: to figure out what it is we actually think.
And so I try it. Over and over. I sit down excited to breathe into the world the beguiling and beautiful thoughts I’ve been thinking. Battle ensues. Some days I win and some days the empty page remains empty. The words I get out are never what I expect and they prove that too often those deep thoughts I’m so proud of.. aren’t that deep. Or insightful. Or beautiful. But they are mine.
Writing is hard. Uniquely hard. It’s also uniquely human. Language seems to be one of the traits that differentiates humans from other species. That’s not to say animals can’t communicate - a chimp can effectively convey “Hey Alice, this tree has good food in it!” Animals do a decent job of communicating without the invention of language. But you won’t hear a chimp tell a story about the amazing food tree they found near those four hills and how Alice almost fell three times trying to get there. That kind of storytelling is different, and it’s propelled by our language.
There are two things that writing can do that make it, along with its close cousin oratory, so important, so hard, and so human.
Make Things Manifest
One of my favorite passages from any book is from Stephen King’s On Writing:
“Look- here’s a table covered with red cloth. On it is a cage the size of a small fish aquarium. In the cage is a white rabbit with a pink nose and pink-rimmed eyes. … On its back, clearly marked in blue ink, is the numeral 8. Do we see the same thing? We’d have to get together and compare notes to make absolutely sure, but I think we do. There will be necessary variations, of course: some receivers will see a cloth which is turkey red, some will see one that’s scarlet, while others may see still other shades. … The most interesting thing here isn’t even the carrot-munching rabbit in the cage, but the number on its back. Not a six, not a four, not nineteen-point-five. It’s an eight. This is what we’re looking at, and we all see it. I didn’t tell you. You didn’t ask me. I never opened my mouth and you never opened yours. We’re not even in the same year together, let alone the same room… except we are together. We are close. We’re having a meeting of the minds. … We’ve engaged in an act of telepathy. No mythy-mountain shit; real telepathy.”
Damnit, Steve, this little paragraph makes me angry. It’s so easy to see the rabbit! King makes magic with words flippantly, casually, with a little flick of the wrist. And in doing so he shows us the first reason writing is magical.
Writing lets us manifest in the world the ideas that we hold. Everything from a meal with friends to a new company to the grandest castle started out as an idea first. Philosophers will argue about the stuff that makes up thoughts and whether it’s words or sounds or even language at all. But at the end of the day, if I want to get the thought-matter stuff that’s in my head out into the world - or more importantly, into your head - I’ve got to use words to do it. I must string them together as a sort of building block to create castles.
Tolkien took this idea into the realm of mystical theology with his idea of sub-creation. He believed that since God creates, and we are made in His image and likeness, we are imbued with the same abilities. This is not just a right and an ability, but a higher calling and a way of worship. Tolkien laid the groundwork for the many fantasy worlds of the last hundred years:
“Fantasy remains a human right: we make in our measure and in our derivative mode, because we are made: and not only made, but made in the image and likeness of a Maker.” -JRR Tolkien
Tolkien used language to build his fantasy worlds complete with cultures, languages, and new species. And then he used fantasy to teach us plenty about reality too, and this more than anything else is why his works are so revered today. Tolkien spun truth out of the web of stories about completely imagined worlds. You could argue that this is about as circuitous a route as you could possibly take to manifest your ideas. Not every love story about an ageless beauty and a hidden prince needs thousands of pages of backstory like Arwen and Aragorn. But then, not every story is as beautiful or as detailed or as precious and heartfelt to the reader. There’s a magic in Tolkien’s detail that captivates.
The medium for this magic is language, and the building block is the same for all of us. The engineer who describes a product, the salesman who makes a sale, and the leader who inspires a crowd all use the same means to make manifest their visions and goals. The way to make something real in the world is to make it real in the minds of other people. That way is to use words.
“If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.” -Antoine de Saint-Exupery
Explanation
Explanation is the second reason writing is so important. Explanation seems to be another uniquely human characteristic, something David Deutsch focuses on in his own writing. According to him, explanation is the core telos for all scientific progress.
Deutsch gives explanation an almost alchemical quality. Physics tells us that the elements heavier than iron were born in supernovae - a stunningly beautiful scientific explanation. Deutsch points out that this held perfectly true until humans, with all their explanations, were able to create gold in a lab:
“And yet, gold can be created only by stars and by intelligent beings. If you find a nugget of gold anywhere in the universe, you can be sure that in its history there was either a supernova or an intelligent being with an explanation. And if you find an explanation anywhere in the universe, you know that there must have been an intelligent being. A supernova alone would not suffice.” -David Deutsch
The capacity not just to observe and to sense, but to draw conclusions from observing and sensing provides all of the remarkable triumphs of human history. It leads us beyond the animalistic urge to react based on genetically encoded behavior and into an entirely different world - a world that includes understanding. If you want to understand why humans are special in a purely material sense - it’s not opposable thumbs, or our ability to sweat, or our bipedal gait. It’s our ability to understand and explain the world.
Explanatory knowledge is perhaps the most important substance in the universe. And every brick and piece of mortar of explanation is composed of nouns and verbs and grammar.
New Tools
And now we have these new tools in the form of gigantic interactive language models. They’re fed on all of the books, dialogues, tweets, journal articles, Reddit threads, code, and whatever else they can get their greedy algorithmic mouths on. Effectively, they’re built on a large percentage of the sum total of human knowledge. For the first time, we have a tool capable of telling us how to extricate our PB&J from the VCR in whatever style we prefer - say, the King James Bible:
“And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed in his VCR, and he knew not how to remove it. And he cried out to the Lord, saying, “Oh Lord! how can I remove this sandwich..?”
Some people are wildly troubled by this development. They think it’s going to kill us all or, even worse, help you cheat on your term paper.
Just twenty years ago, artist David Hockney and physicist Charles Falco developed and published a new theory about the incredible leaps in painterly realism during the Renaissance. The Hockney-Falco thesis suggests that optical devices like the camera obscura - a precursor to the photographic camera - allowed painters of the 15th century to render light, shadow and color into hyperrealistic compositions.
Justin Murphy draws the parallel to today’s new tools:
In other words, the great artists from this period—Peter Paul Rubens, Botticelli, Michelangelo, Caravaggio and others—are remembered as great partially because they were aggressive and shameless exploiters of Artificial Intelligence. Were they cheating? Presumably some contemporary observers must have thought so! Posterity, however, says no. The human qualities they brought to their work—the emotionality, the symbolic resonances, the larger vision they pursued in their work over time—was not commoditized by the new instruments and this is perhaps what distinguishes the great Renaissance painters from the merely good ones.
Hat tip to the anonymous reader who pointed this out to me. He adds, "Caravaggio was a pimp and thug who killed people, yet he painted like an angel because he 'cheated.' Real artists are always looking for either new techniques or new technology, while losers write off those who seem to be better as innately talented."
Fifty years ago, Italo Calvino gathered acclaim for the considerable effort behind his Impossible Interviews, including a dialogue with a Neanderthal. Today, Tyler Cowen can feed GPT-4 the collected works of Jonathan Swift and achieve the same dramatic effect in a few hours. Realistic selfies of Napoleon, Jesus and cave dwellers are trivial. Transforming napkin sketches into working web apps is easy. And we’ve only been using these things for a few months.
LLMs are going to do a lot of stuff. They’re going to replace a lot of crappy original writing with crappy computer-generated writing. They may - and we can only hope and pray here - finally help eliminate the standard five-paragraph essay from middle and high school curricula. On the leading edge, the most driven artists and creators will find new ways to use these tools to refine their work or to produce entirely new and barely imagined visions.
Chess
LLMs are going to do a lot of stuff and this is making me think a lot about chess-playing robots. Most people only know about Deep Blue and its historic win over Kasparov. But far more important is the all-computer competition that saw AlphaZero obliterate Stockfish.
Stockfish has an Elo rating well over 3000 and has long been the top chess engine. The current #1 human player is rated around 2850, which means no human has a realistic chance even to draw against Stockfish. And then AlphaZero showed up on the scene in 2017 and beat Stockfish over 100 games without a single loss!
(Note: there are deserved caveats about these games in terms of computing power.)
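The gap those ratings imply can be made concrete with the standard Elo expected-score formula. Here’s a minimal sketch in Python (the 3500 engine rating is illustrative, not an official figure):

```python
# Elo expected-score formula: a rating gap of D points gives the weaker
# player an expected score of 1 / (1 + 10^(D/400)) per game.
def expected_score(player_rating: float, opponent_rating: float) -> float:
    """Expected score (1 = win, 0.5 = draw, 0 = loss) for the player."""
    return 1.0 / (1.0 + 10 ** ((opponent_rating - player_rating) / 400))

# Top human (~2850) vs. an illustrative 3500-rated engine:
print(round(expected_score(2850, 3500), 3))  # ~0.023: roughly 2 points per 100 games
```

That 0.023 expected score would have to come mostly from rare draws, not wins - which is what "no realistic chance" means in practice.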
Chess videos worth watching:
GothamChess walks through some AlphaZero/Stockfish games.
Vishy Anand - former world champion - discusses his frustration and wonder at AlphaZero’s play.
So why have I been thinking about chess? First, because what's already happened in chess is what people claim to be worried about more generally. That is to say: the AIs got so freaking good at something that we literally have no chance of keeping up. Moreover, they aren't just beating us on pure computation. As Vishy says above, there's some kind of reasoning going on there. And last - and this is the scary part - we don't understand it. We literally have no idea what AlphaZero is thinking.
Understandability is one of the Capital S Scary things. We call these models artificial intelligence, but alien intelligence might be a better term.
Maybe that's not surprising, since we also don't really understand the human creative or reasoning process. And we definitely don’t understand consciousness. We're at a really weird point in history where we don't totally understand the big existential crises we worry about. Don’t let the neuroscientists fool you: we don't have a good definition of consciousness. The most practical proxy we've had is the Turing Test, and the LLMs blew past it while we barely blinked - and we still don't think of them as conscious. There are all these doomsayers gesticulating about AI as the eschaton, and we can't even define what it is we're scared of! At least we all know what an earth-crushing asteroid is.
Anyway, there's another side to this understandability problem too: chess is a very discrete world with defined rules, and people are translating the success in these small, well-defined domains into very large ones. Cal Newport did a breakdown of GPTs that describes the weights in this giant, inscrutable word matrix. He notes that it’s important to remember the model is dealing with a vocabulary of only around 50,000 tokens - a huge vocabulary for a human, but tiny for a computer. We're taking this interesting linear algebra problem and imbuing it with all sorts of human reasoning properties. And we're extrapolating the narrow rules of some domains into much wider domains with open-ended unknowns.
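To make the "giant word matrix" a bit more concrete: at the output end, the model's scores over that ~50,000-token vocabulary get turned into a probability distribution and sampled. A minimal sketch, with random numbers standing in for the model's actual output scores:

```python
import numpy as np

VOCAB_SIZE = 50_000  # roughly the token vocabulary Newport describes

rng = np.random.default_rng(0)
logits = rng.normal(size=VOCAB_SIZE)  # stand-in for the model's output scores

# Softmax: exponentiate (shifted by the max for numerical stability), normalize.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

next_token = rng.choice(VOCAB_SIZE, p=probs)  # sample one token id
```

Everything upstream of that final sampling step is where the inscrutability lives; the last step itself is just linear algebra and a dice roll.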
Stochastic Parrots
There was a paper a couple of years ago that coined the term “stochastic parrots.” That’s what these LLMs are - probabilistic repeaters. Watching what they do raises an interesting question: how much of humanity is us being stochastic parrots?
I've been in plenty of situations where a certain set of conversational stimuli lights up the same brain circuits, bringing back specific memories, and I find myself telling the same story. We follow familiar paths while grocery shopping and repeat the same steps to start our cars. We reuse the same idea templates when we build a spreadsheet, or the same design patterns when we build a database API, or embellish the same features when we create an original drawing.
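A stochastic parrot is easy to sketch. This toy bigram generator - a deliberately crude illustration, nothing like a real LLM - only ever emits word pairs it has already seen, weighted by how often it saw them:

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def parrot(follows: dict, start: str, n: int, seed: int = 0) -> str:
    """Emit up to n words by repeatedly sampling a previously seen successor."""
    random.seed(seed)
    out = [start]
    for _ in range(n - 1):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: this word was never followed by anything
        out.append(random.choice(successors))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat saw the rat")
print(parrot(model, "the", 8))
```

Every adjacent pair in the output already appears in the training text - pure repetition with a dice roll attached, which is the "probabilistic repeater" idea in its smallest form.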
The mob is suggesting that AI is making what is distinct about humans smaller, narrower. Absent philosophical or theological arguments about free will, human dignity, Cartesian dualism, or sin and virtue, I think this is true. We are repetitive creatures, parroting what we see in others and repeating our own behaviors in fits of mimetic bliss. The last outpost of humanity I’m defending is the 5% of human thought that is truly novel. Original. Special. Call it a divine spark if you want. Call it the creative essence of the human spirit. Call it an evolutionary step. Whatever it is, it’s still distinct from AI. GPTs can build sparkling castles out of words, but the cloudy, fuzzy, beautiful idea of the castle - the essence - still has to come from somewhere.
I read one article describing these new models as "black boxes with emergent behavior that are still being studied." I read that and thought to myself: that's us. That's humans. Most people are still focused on the capabilities side of the AI curve: what these models can do. But the "stochastic parrot" idea makes me think about people. If these things can operate so much like us, then yeah, maybe 95%+ of what we are isn't as special as we've thought. I keep thinking about what Edward Teller said about John von Neumann - that he was so much smarter than everyone else that "only he was fully awake".
Here's what I mean: we're used to thinking about an 80 IQ person and a 150 IQ person and believing them to be wildly different in capability. But what if they're not? What if the range we've defined, with all its graduated levels, is just a small sliver of the intelligence curve?
That's what AlphaZero shows us in chess. Magnus Carlsen is by far the best in the world: rated around 2850 Elo, he regularly and easily trounces measly 2500-2600 rated grandmasters. Those, in turn, will essentially never lose to a 2000-2200 rated player, and those will never lose to a 1500 rated player. For all of human history the top of that chart has been the pinnacle of achievement, and we've regarded that range of chess-playing ability as very wide. But now we have Stockfish with a rating well over 3000, and Magnus can't even draw a game against it. And AlphaZero trounces Stockfish. Maybe the range we thought was so vast is narrower than we imagined.
I'm using "intelligence" as if it’s some obvious and inherent good, which is an oversimplified perspective. (Creativity and originality are probably only loosely correlated with it.) The real question is: how much of our behavior and our intelligence and our free will is really just conditioning - training on certain events and stimuli and environments?
It’s always been difficult for me to tell where my stochastic parrot begins and ends. Even when I’m trying to synthesize existing work or understand my own new thoughts, I’m drawing from and standing on the shoulders of other thinkers and texts. And the shape of my thoughts always looks suspiciously like the last few books I’ve read. Is what I’m thinking original? Novel? Regurgitated? I always have the slightly uncomfortable feeling that the stochastic parrot’s beak reaches just a bit further than I’d like it to.
There is always a broader context of culture and memes swirling around us. In most situations and at most times, we are the stochastic parrots. This idea gives me some hope in the longer term. Intelligence and consciousness and free will are all "suitcase" words: you can pack a lot of different meaning inside of them. If we're forced to tease out more precise attributes for each, maybe we can understand more about what is fundamentally special and human and made "in the image and likeness".
Storytelling
I had a fun debate a while back with my brother-in-law that started with an article about Presentism and ended with a conversation about the stories different groups tell themselves about our country. History is supposed to be the study of the events of the past, but the past gets fuzzy very quickly. What we end up with is a set of different myths that emphasize different values and are all wrong in some way. The Nikole Hannah-Jones vision of the 1619 founding of America on slavery is as wrong as it is disparaging. The pure and innocent vision of triumphant and noble Founding Fathers freeing humanity is wrong and naive too. The true history of events is in there somewhere - more nuanced than we could ever portray - but the stories we choose to build around history are far more important. America has always been on the rise because the grand, beautiful and uplifting stories of “American exceptionalism” have triumphed over the negative, declinist stories of evil. Where our myths are wrong they remain positive on the whole, and this averaged-out optimism still drives the immigrant’s desire to come to the US, the American Dream, the free markets, and the innovation economy.
Storytelling mixes the explanatory and creative powers of language. It is the most distinctive trait of humanity. The tools we’ve used to weave and capture our stories - all focused on language - have advanced in big leaps over time. We started with simple cave paintings and oral traditions that lasted tens of thousands of years. We advanced to writing systems soon after agriculture let us settle into cities and kept those systems for thousands of years. Gutenberg’s printing press gave our stories and ideas a range that ushered in the modern world in just a few hundred years. And the Internet made it faster and more ubiquitous and transformed culture in just decades.
The wisdom and institutions of the day have always rejected these advances. Socrates himself rejected writing as effective communication in Phaedrus. (Incidentally, the reason Socrates is probably wrong here is the same reason context windows are so important for LLMs: some of the best ideas - including those laid down in Plato’s Socratic dialogues - are larger than our working memory window. We can’t hold them entirely in our head at any one time.) The Hockney-Falco theory of art suggests that new tools usher into being new techne - new modalities and new crafts and new ways of doing. The tools drive the craft.
David Friedberg has a thesis about the future of the economy moving towards narration - where we can literally speak our ideas into existence and rely on powerful tools to build and execute them. Here’s his overview:
Look, my core thesis is I think humans transition from being, let's call it, passive in this system on earth to being laborers. And then we transition from being laborers to being creators. And I think our next transition with AI is to transition from being creators to being narrators. And what I mean by that is, as we started to do work on earth and engineer the world around us, we did labor to do that. We literally plowed the fields, we walked distances, we built things, and over time we built machines that automated a lot of that labor - everything from a plow to a tractor to Caterpillar equipment to a microwave that cooks for us. We became less dependent on our labor abilities. And then we got to switch our time and spend it as creators, as knowledge workers, and the vast majority of the developed world now primarily spends their time as knowledge workers, creating and doing stuff on computers. We're not doing physical labor anymore. As a lot of the knowledge work gets supplanted by AI - or as it's being termed now, but really gets supplanted by software - the role of the human, I think, transitions to being one of the narrator. Instead of having to create the blueprint for a house, you narrate the house you want and the software creates the blueprint for you. And instead of spending $100 million producing a movie, you narrate the movie you want to see, and you iterate with the computer, and the computer renders the entire film for you - those films are shown digitally anyway, so you can have a computer render it. Instead of creating a new piece of content, you narrate the content you want to experience.
AI represents the next major step-function change in our storytelling ability. LLMs are built on a significant and growing percentage of our shared knowledge. If these tools are used as an oracle and imbued with authority, they will, at best, give the most staid, milquetoast, and unoriginal answers to prompts and, at worst, demonstrate incoherence, incorrectness, and hallucinations. On the other hand, if we can accept that most of our own output is stochastic parroting and focus on that last Tolkienesque and highly generative 5%, we can use these tools to shape ideas we might otherwise struggle to articulate. They will help us translate our vision for the world around us into reality.
And, just like Tolkien, the building block for all of this incredible world-building and creation will be the written word.
In the beginning was the Word, and the Word was with God, and the Word was God. The same was in the beginning with God. All things were made by him; and without him was not any thing made that was made. -John 1:1-3
English Is The Building Block
The building blocks will be specifically English. Since World War 2 and America’s rise as the world’s hegemonic power, English has been the primary language of science and business. When the internet began its disruption in America, English was reinforced as the primary language that can cross boundaries. When Americans travel today and try haltingly to speak German or Japanese or French, the locals laugh and switch to English.
English represents more than half of all content on the internet. This advantage is getting enshrined in LLMs today as the “programming language” of AI and will eventually do more to keep English in the number one spot than America or business or the Internet.
Storytelling and language have always been the strongest and most important tools of the human species. In the last couple hundred years, a dramatic shift has occurred as the innovative tools of science, mathematics, and engineering have played a new role in reshaping our world. Computers added another new tool to the creative repertoire over the last few decades.
Education finally caught up to the trend and defined STEM as a key component of the curriculum: Science, Technology, Engineering, Math. At the same time, the humanities have become déclassé and deteriorated. Most people can’t write anymore. They don’t read much either.
Our world is filled with technology and scientific explanations and the STEM fields have become table stakes. Going forward, they are necessary but not sufficient for those looking to create or change the world. With the new language tools we have in AI, the next couple of decades will see a resurgence in the importance of the English language. The most important and sought after skill will be the ability to articulate your stories, your thoughts, your ideas, and the ability to use a new generation of models and tools to translate your words into practical and pragmatic reality.
We’re entering a new Renaissance for the ability to write. If you want to change the world in the future, your primary tool is English. It’s always been true that the better you can use language, the more you can change the world. That’s more true today than ever before. It doesn’t matter if you’re an engineer building prompts using a GPT, an architect, a movie producer, an executive, a fascist autocrat, an activist, or a philosopher. Words are the primary construction material of the future.
English is the new STEM.
Postscript
There’s an old funny story about God that goes like this:
God was once approached by a scientist who said, “Listen God, we’ve decided we don’t need you anymore. These days we can clone people, transplant organs and do all sorts of things that used to be considered miraculous.”
God replied, “Well, don’t need me, huh? How about we put your theory to the test. Why don’t we have a competition to see who can make a human being, say, a male human being.”
The scientist agrees, so God declares they should do it like he did in the good old days when he created Adam.
“No problem!” says the scientist as he bends down to scoop up a handful of dirt.
“Whoa” says God, shaking his head in disapproval. “Not so fast. You get your own dirt.”
This is a good analogue for how I feel about AI right now. It can do a lot of stuff, but we still have our own dirt too.
And our dirt is still just God’s. Carl Sagan said, “If you wish to make an apple pie from scratch, you must first invent the universe.”