Losing the mess
On artificial intelligence, childhood intelligence, the WGA strike, and the possibility of losing human imprecision.
One of the most beautiful emails I got last year (via the what-have-you-lost page) was written by a fourteen-year-old named Sanat. They wrote to me about two talismans they’d carried around for good luck before losing, the first being a coin they’d received while visiting London:
…an old coin that was minted in the early 20th century, whose year I can't recall. I do recall visiting a statue in the middle of a park which featured four people from four corners of the earth, each from a territory ruled by the British Empire. One of the four was from India, and I recall looking at their face for a long time, studying the rusting green of the bronze. On return to America, I carried that coin with me every day to school, seeking it in my pocket and gently rubbing the copper. Then one day, I was walking back from school and realized it was gone. I felt sort of a light emptiness, just for a moment, and don't remember how I felt after that.
The message went on to describe another lost talisman, a “wooden statuette of Ganapati that was painted orange with small diamonds in its eyes.” Sanat wrote with a kind of patience and confidence that is rare in both teenagers and adults.
Months later, I wrote Sanat back to ask if I could include their message in this project. Again I was moved by their sincerity—
It's easier to dwell on our lost pencils, chapsticks, and coins, rather than a language or identity. When I'm faced with trying to express how I feel like I've lost my "Indian-ness," I return to Agha Shahid Ali's poem "Snow on the Desert," where the snow whitening the Saguaro cacti refers him to another moment, a singer's voice in the dark, which becomes, as he puts it:
a time / to recollect / every shadow, everything the earth was losing, // a time to think of everything the earth / and I had lost, of all // that I would lose, / of all that I was losing.
Even here what he has lost, the abstract, remains unspoken. Language seems to only offer us the tangible or the vague.
Being an audience to Sanat’s young intellectual florescence felt (and still feels) heartening, not just because it’s reassuring to know there are teenagers out there who know a poem well enough that they return to it, but also because it helped me remember what being fourteen really felt like. (It’s easy to forget.)
I know I didn’t write as well as Sanat does when I was in high school, but I could still recognize my younger self in their voice, their concerns; like Sanat, I also carried talismans for reasons I didn’t exactly know, and I also wondered if literature (in my case The Bible, mostly) could shed light on the confusion of living. Most people seem to spend their adolescence losing touch with the innate wisdom and curiosity of childhood, but Sanat seemed able to access it a bit more easily.
This week I started reading Animal Joy by Nuar Alsadir in which the author asks her three-year-old daughter, “What does beautiful mean?” (Is this what having a poet mother must be like?) The daughter replies, “Beautiful means most self.” (Is this what having a poet child must be like?)
This response plays a part in Alsadir’s larger pursuit of exploring how a “self” is constructed, and how we express and recognize truth in other people’s selves, but it also sounded like the kind of question someone might ask ChatGPT.
I have been (whether consciously or unconsciously) avoiding reading about or interacting with any of the suddenly ubiquitous AI things, but the recent developments (or lack thereof) in the Writers Guild negotiations have reminded me: just because you can’t see it, doesn’t mean it can’t see you. (As basic as Grownups come back!)
Thankfully I have friends who are smarter and more curious than me. Like this one:

I was having lunch with Wilhelmina on Monday when she started asking me what I thought about the direction AI is going. I had to admit I was almost completely out of the loop.
“What’s the first question you asked ChatGPT?”
I’d never asked it anything.
“Do you want to?” She started to get her phone out.
I told her no, I did not want to. Though we both have reservations (aka anxieties) about what could happen to a society saturated with AI, Wilhelmina was facing her fears, while I preferred to go on living in a world where I’ve never “chatted” with ChatGPT.
I’m just not interested, I said, at which point she asked the child’s perpetual question— Why?
“Why don’t you want to do anything with ChatGPT?”
It was only then I realized that my disinterest in artificial intelligence is actually a preference for the inexact mess of human thought. I like the incomplete answers that people give each other. I like our compromised, never entirely accurate perspectives. I like the gaps in our understanding. I don’t really find these composite AI regurgitations to be humorous or telling or deep. I just find them lacking.
Since large language models are built from the texts we give them, and since the texts they produce are just the distilled average of those texts, their output will always lack the incompleteness of an individual perspective. The bent particularity of the human mind—the thing AI is designed to remove—is exactly what its output will always lack.
(For the purposes of this letter, I’m only thinking about the AI-produced language, not images or deep-fakes or Drake songs.)

But after we parted I realized what I wanted to ask ChatGPT (or rather, what I wanted to ask Wilhelmina to ask it for me, since I was still unwilling to make an OpenAI account).
“What does beautiful mean?”
“It’s important to note that beauty is a subjective experience,” felt like a funny phrase to be generated by a program which essentially lacks subjectivity in favor of optimization. Such a platform could never
say “Beautiful means most self,” because the incompleteness of that sentence depends on the context and the implications of the human moment in which such an answer arises. It also necessitates a curiosity and creativity in the audience. (In that case, a poet mother.)

But that last sentence of ChatGPT’s was the most uncanny: “Ultimately, beauty is a complex and multifaceted concept that elicits different interpretations and responses from different individuals.” This tone of tasteless comprehensiveness is a pitch-perfect facsimile of corporate non-speech, and it sounded an awful lot like the May 4th statement from the Alliance of Motion Picture and Television Producers.
“AI raises hard, important creative and legal questions for everyone.”
In addition to those “hard, important” questions about creativity, AI also offers studio executives an opportunity to further limit and destabilize the livelihoods of screenwriters, if those executives choose, as AI is programmed to choose, optimization over all else.
It’s no wonder that their language sounds the same; the grotesque and extreme form of capitalism we’re living in today permits the people who control such capital to detach themselves completely from the economic reality of the majority. To imagine that the multi-billionaires the AMPTP represents actually care whether a writer can afford a decent standard of living is no different than wondering if ChatGPT really truly actually fell for that New York Times journalist. (I would love to be proven wrong, and I would love for those executives to try to live in the state of precarity—for even a week—to which so many of their subjects have grown accustomed.)
Predictably, this answer from ChatGPT (much like the letter from the AMPTP) confirmed what we already know: when optimization is the goal, almost everything good gets left out.
After talking with Wilhelmina I gave up the personal embargo and spent a couple days learning about what these technologies can and could do. I listened to a long interview with Sam Altman, one of the co-founders of OpenAI, then a discussion between Stuart Russell, Gary Marcus, and Sam Harris, and finally a recorded event at the MIT Media Lab with the “Godfather of AI,” Geoffrey Hinton.
(During the Altman interview he pointed out how, long after we developed computers that can beat human beings at chess, chess fans still want to watch two human beings play each other, even if a game between two computers might be more theoretically perfect. But before I could be heartened by that news, the interviewer asked him what he thought of the alarmists who worried that the development of Artificial Intelligence would inevitably lead to the annihilation of the human race, and he agreed that it was, at minimum, “a risk.”)
After all this time trying to form at least a little understanding of how these platforms work and how they might eventually be used (it’s alternately thrilling and terrifying), I remembered this line from Sarah Manguso— “I am a machine that turns coffee into words.”
I’ve always identified with this quote. I like being such a machine. I like spending hours trying to translate emotion into language. My process is far from efficient, nor do I seek to make it more efficient. I like being slow and inexact and watching it happen. There’s no object that more clearly demonstrates my attachment to this process than my notebook, which is filled with useless things I never plan to publish or share.
Last week two friends of mine told me horror stories of losing their journals while being robbed on their travels. John was mugged while waiting for a bus in Perú in the nineties; Catherin’s bag was taken from a café in Germany just last year. The only irreplaceable thing they lost was also the only thing that wouldn’t have mattered to anyone else—their journals.
Of course it’s not the factual content of the journals we miss, nor is it the object itself, but the intangible quality of the way our mind was working (or not) at any given time. When I revisit old journals I’m always looking for what I couldn’t see back then, for what I left out in willful or accidental blindness.
I thought of that second message from Sanat—how it’s easier to dwell on the object itself than the intangible things that are lost. A lost journal isn’t the same as the time we lose to time, but aching for that notebook is more straightforward than wishing you could live another year in your life again or wishing you could go back to see that view, that person, that city the way it used to be.
Chief among my anxieties about the way AI could be used to devalue human life and labor is the fear that we’ll become a culture that prefers the quick and flat to the slow and deep. I don’t fear a bot writing a book better than a human could; I do fear that we might collectively lose a taste for the bent particularity of a human thought, and gain a tolerance for comprehensive slickness.
It’s true that we could lose this. We can’t replace it.
I can’t imagine anyone would bother to pretend to be a precocious fourteen-year-old ruminating on loss with such specificity, but it’s also true that you never know exactly who is on the other side of an email.
Granted, I’m really only talking about trying to use AI to generate creative texts.
I realize there’s almost no point in saying “never” about AI because of how little we understand its potential in this moment, but it seems to me we are so far away from such a possibility we might as well say never for now.
They’re the group of studio executives that the WGA is striking against for fair wages.
I don’t remember where she said or wrote this, but just trust me that she did.