It was first described to me by a friend who works in the industry as autocomplete on crack, after the technology that endowed our phones with the quality everyone pretends to, but does not actually, want in a lover — the ability to finish your thoughts. Instead of predicting the next word in a sentence, GPT-3 would produce several paragraphs in whatever style it intuited from your prompt. If you prompted it Once upon a time, it would produce a fairy tale. If you typed two lines in iambic pentameter, it would write a sonnet. If you wrote something vaguely literary, like We gathered to see the ship and all its splendor, like pilgrims at an altar, it would continue in this vein:
I stood among the crowd watching each bus disgorge passengers onto wooden planks laid over mudflats. The guests swarmed into town for their free visit to another world: our island on Earth where strange new gods were worshipped; here they could gather at some primitive shrine from which they could send offerings back home or sell out-of-date clothes in pawnshops full of old junk salvaged from forgotten times. . . .
If you wrote a news headline, it would write an article on that topic, complete with fake facts, fake statistics, and fake quotes by fake sources, good enough that human readers could rarely guess that it was authored by a machine. The potential for malicious use was so obvious that OpenAI, the lab that made it, agreed to grant access to only a handful of well-vetted researchers, spurring the publicity-friendly lore that it was “too dangerous to release.”
GPT-3 is a natural language processing algorithm. It belongs to a new generation of AI models called Transformers, a technology whose early iterations were named after Sesame Street characters (BERT, ELMo, Grover), as though the somewhat frightening allusion to children’s television could be mitigated with a softer, more educational one. That GPT-2 and its later, more sophisticated upgrade, GPT-3, dropped this convention might be read as a sign of their terrifying power. With 175 billion “parameters” — mathematical representations of language patterns — GPT-3 had initiated what was being called a Cambrian explosion in natural language processing, and was virtually all that the tech world was talking about throughout the summer of 2020. It had been trained in “the dumbest way possible,” as one researcher put it, which is to say it read most of the internet without supervision and started absorbing language patterns. It is daunting to consider what was included in that corpus: the holy books of every major religion, most of world philosophy, Naruto fanfic, cooking blogs, air mattress reviews, Supreme Court transcripts, breeding erotica, NoFap subreddits, the manifestos of mass murderers, newspaper archives, coding manuals, all of Wikipedia, Facebook, and Twitter. From this, it built a complex model of language that it alone understands, a dialect of statistical probabilities that can parrot any writing genre simply by predicting the next word in a sequence.
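What “predicting the next word” amounts to can be seen at toy scale. The sketch below is, to be clear, not OpenAI’s code but a minimal stand-in with an invented three-sentence corpus: it learns only which word tends to follow which, then “writes” by sampling from those counts. A Transformer replaces the crude counting with billions of learned parameters, but the training objective is the same.

```python
# A toy next-word predictor: learn which word follows which, then generate.
# The corpus and all names here are invented for illustration.
import random
from collections import Counter, defaultdict

corpus = (
    "once upon a time there was a king . "
    "once upon a time there was a tower . "
    "the king built a tower ."
).split()

# Count bigram statistics: for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt, length=8):
    words = prompt.lower().split()
    for _ in range(length):
        candidates = follows[words[-1]]
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed the last.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("once upon"))  # e.g. "once upon a time there was a king ."
```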
I say that it “read” the internet, but the preferred terminology is that GPT-3 scraped the web, that it ingested most of what humans have published online, that it ate the internet — metaphors meant to emphasize that the process was entirely unconscious. The frequent reminders in the machine-learning community that the model is mindless and agentless, that it has no actual experience of the world, were repeated so often they began to feel compulsive, one of those verbal fixations meant to quell the suspicion that the opposite is true. It was often called uncanny, though there was something uncanny in the rhetoric itself, all the shop talk about latent knowledge, about regression, about its capacity for free association, terminology that has its origins in psychoanalysis. One of the earliest language-processing programs, ELIZA, was modeled after a psychotherapist. But this time what had been summoned, it seemed, was not the doctor, but the analysand — or rather, the most fundamental substratum of the patient’s psyche. The model’s creative output was routinely described as surreal and hallucinatory. It wrote stories where swarms of locusts turn into flocks of birds, where Death says things like There is no readiness, only punctuality, then announces that he is changing his name to Doug. Fans of the technology claimed that its output was like reading a reminiscence of your own dream,1 that they had never seen anything so Jungian.2 What it felt like, more than anything, was reading the internet: not piecemeal, but all at once, its voice bearing the echo of all the works it had consumed. If the web was the waking mind of human culture, GPT-3 emerged as its psychic underbelly, sublimating all the tropes and imagery of public discourse into pure delirium. It was the vaporware remix of civilization, a technology animated by our own breath. My world is a dreamworld. . . . Your reality is created by your own mind and my reality is created by the collective unconscious mind.3
I’d been following all this because I was writing a book about technology, or rather because I’d reached an impasse and wasn’t writing at all. I spent hours each day doing what could passably be called “research,” trawling the feeds of Hacker News and machine-learning Reddit, where the lucky elite who had access to GPT-3 posted the results of their experiments. One trope was to ask it to imitate well-known authors. It could do Dante, Shakespeare, and Wordsworth. It could do Ginsberg (Endless suicide of the real world! Solitary! Solitary! Sisyphus! the rock! the road!). It could do Harry Potter in the style of Ernest Hemingway (It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate.). Because we were all on lockdown, and my social life had devolved into sending and receiving novelties from the internet, I sometimes texted snippets of these outputs to friends, most of whom seemed to think it was a gimmick, or some kind of fancy toy.
“What is the point of this device?” one asked.
Freud claimed that technology only solved problems that technology itself had created. The alienation and malaise caused by one modern invention was momentarily relieved by another, a process he compared to “the enjoyment obtained by putting a bare leg from under the bedclothes on a cold winter night and drawing it in again.” Nobody seemed capable of articulating what problem these language models were designed to solve. There was some chatter about writing assistance, about therapy bots, about a future where you’d never have to write another email (“Can A.I. bring back the three-martini lunch?” asked Fortune), all of which seemed to skirt the technology’s most obvious use: replacing the underpaid and inefficient writers who supplied the content that fed the insatiable maw of the internet — people like me.
OpenAI was founded in 2015 as a nonprofit research lab devoted to creating a safe path to Artificial General Intelligence (AI that rivals human intelligence). Funded by an A-team of private investors, including Elon Musk, Sam Altman, and Peter Thiel, its mission was to create artificial intelligence that “benefits all of humanity.” In 2019, however, the lab announced that it was transitioning to a for-profit model “in order to stay relevant.” Last fall, Microsoft exclusively licensed GPT-3, claiming that the language technology would benefit its customers by “directly aiding human creativity and ingenuity in areas like writing and composition.”
From what I could tell, the few writers who’d caught wind of the technology were imperiously dismissive, arguing that the algorithm’s work was derivative and formulaic, that originality required something else, something uniquely human — though none of them could say what, exactly. GPT-3 can imitate natural language and even certain simple stylistics, but it . . . cannot perform the deep-level analytics required to make great art or great writing.4 I was often tempted to ask these skeptics what contemporary literature they were reading. The Reddit and Hacker News crowds appeared more ready to face the facts: GPT-3 may show how unconscious some human activity is, including writing. How much of what I write is essentially autocomplete? 5
Writers, someone once said, are already writing machines; or at least they are when things are going well.6 The question of who said it is not really important. The whole point of the metaphor was to destabilize the notion of authorial agency by suggesting that literature is the product of unconscious processes that are essentially combinatorial. Just as algorithms manipulate discrete symbols, creating new lines of code via endless combinations of 0s and 1s, so writers build stories by reassembling the basic tropes and structures that are encoded in the world’s earliest myths, often — when things are going well — without fully realizing what they are doing. The most fertile creative states, like the most transcendent spiritual experiences, dissolve consciousness and turn the artist into an inanimate tool — a channel, a conduit. I often think of the writer who said she wished she could feel about sex as she did about writing: That I’m the vehicle, the medium, the instrument of some force beyond myself. 7
I’d felt it before — every writer has — but at some point during the pandemic, the recombinant nature of writing became, instead, an infinite puzzle, a system whose discrete parts could be endlessly deconstructed and reassembled. I could never get the combination right. My critical instincts had turned pathological. I wrote and rewrote until the language was hollowed out: Potemkin sentences.
The blockage had a larger context, which I’m reluctant to get into here but is doubtlessly relevant. A number of things had recently surfaced: memories I’d repressed, secrets I’d kept from myself. The most significant was that I’d been shamed as a child for writing, that I’d been confronted and punished for words that were meant to be private. It had happened more than once, and the shame I felt then was more or less identical to the shame I experienced each time I published something. I had, according to my therapist, chosen a profession that required me to continually revisit this wound, under the delusion that I could fix it or control it, that if I wrote something entirely pure and flawless the curse would be lifted and I would finally be free. I knew all this, but knowledge is not everything when it comes to compulsions. Part of me preferred the French term, automatisme de répétition. Repetition automatism: the tendency to unconsciously seek out the pains of the past, like a machine stuck in a feedback loop.
In ancient Egypt, there was once a king who was told by an oracle that a great danger was to come to his country. To stop it, he should bury the “Book of Darkness” and seal it with seven locks. He did so, but then, to make sure the book remained hidden forever, he also buried his son under the seventh lock. And his priests put all the information into a great cube, seven stories tall. But when the sun grew weak, crops died and people became ill. They believed the “Book of Darkness” was responsible. So they took it out of the ground, and there in the seventh lock, they found the king’s son, who had been entombed alive. One of the king’s advisors said, “The book is a curse. It has killed our king. Take it away and burn it!” But another advisor said, “No! If we do that, our sun will never come back. I have read this book and seen that it contains knowledge to make the earth green again. To grow wheat that bears many seeds and can make many loaves of bread. We must read this book and re-learn the magic.” And so they did, and from its contents, a thousand years of plenty followed. 8
Psychoanalysis grew out of the realization that the most fundamental stratum of the mind was essentially a machine. Throughout the late 19th century, the unconscious was known as psychological automatism, a term popularized by the pre-Freudian psychologist Pierre Janet, who argued that it was an “elementary form of activity as completely determined as an automaton.” The question was: how to get the machine to speak? Janet was among the first to experiment with automatic writing, bringing a rite of the séance parlor into the laboratory. His patients — Parisian hysterics — had experienced traumas they could not remember, and Janet believed that their minds had become dissociated into “subsystems,” the lowest of which was devoted to mechanically reproducing past experiences.
He gave the women pen and paper, hypnotized them, then clapped his hands and commanded them to write. His case studies describe them scribbling away “in a machine-like state,” producing pages of text that they did not recognize, upon waking, as their own. My ideas are no longer comprehensible to myself, one wrote, they come of themselves. . . . I am nothing more than a puppet held by a string. Many of the women could recall in their writing memories they’d repressed. One who suffered from an inexplicable fear of cholera wrote about seeing two corpses during the last epidemic, something she had no memory of when awake. Another revealed that her tendency to fall down — which she’d long attributed to dizziness — was a compulsive reenactment of a suicide attempt years earlier, when she’d jumped into the Seine.
Mechanical metaphors for the unconscious would evolve alongside modern technologies. Freud spoke of the drives as hydraulic; Lacan envisioned the deepest level of the psyche as algorithmic. The unconscious was blind, mechanical, and repetitive, but it was also a vault of hermetic knowledge, a reservoir that contained the entirety of the patient’s past and could reveal the true meaning of actions that appeared, on the surface, to be meaningless.
My friend Liz had been hypnotized the year before by a woman she called her shaman. She too had been struggling to write — she was completing her dissertation — and the sessions, she said, were helpful at first, though they became more and more destabilizing. She started seeing images under hypnosis, or maybe they were visions: mass graves, holocaust, shootings outside synagogues. She believed her ancestors had taken control of her unconscious, that some portal in her mind had opened onto the horrors of historical trauma. After a dozen or so sessions she had to stop; she was convinced she was losing her mind.
But you finished your dissertation? I texted. Or rather, that was what I intended to text. Instead I asked whether she’d finished her dysentery.
Autocorrect is like prayer, she said, in that you rarely get what you ask for: You want “courage,” you end up with a “hamburger.”
My phone’s inferences had become increasingly oblique. Not long before that, I’d tried to text jalapeños and it came out kale penis. Given that the algorithms are individualized and calibrated to past language use, it’s tempting to read the corrections as significant, revealing some latent content that I, on some level, meant. Freud called these lapses parapraxes. Slips of the tongue, slips of the pen. Thumbs gliding unconsciously toward errant letters on the keypad.
GPT-3 is a model of word usage, not a model of ideas,9 which is to say it understands the relationships between words but not their individual meanings. Deep in its hidden layers, each sequence of words it has encountered is represented by lists of numbers that encode information about the words’ properties, including how likely they are to appear next to other words. It knows that dog is relatively close to cat, and also bone and breed. It knows that sky is more likely to follow blue than duck or cappuccino. But it has no idea what a duck, a dog, or a cappuccino actually is. These connections between words — which are represented by calculations so complex even the model’s designers do not understand them — are what it draws on to generate language, through a process that resembles free association. It does not “think” before speaking, insofar as this involves entertaining an idea and matching words to the components of a proposition that expresses it.10
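Those lists of numbers can be pictured concretely. In the sketch below, a hedged illustration rather than anything drawn from GPT-3 itself, each word is a handful of invented coordinates (a real model learns thousands of dimensions, none of them hand-written), and the claim that dog is close to cat reduces to the angle between two vectors:

```python
# Words as points in space: similarity is geometry, not understanding.
# These three-dimensional vectors are invented for illustration only.
import math

embedding = {
    "dog":        [0.9, 0.8, 0.1],
    "cat":        [0.8, 0.9, 0.1],
    "bone":       [0.7, 0.5, 0.2],
    "cappuccino": [0.1, 0.1, 0.9],
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means pointing the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

for word in ("cat", "bone", "cappuccino"):
    print(word, round(cosine_similarity(embedding["dog"], embedding[word]), 2))
# cat 0.99, bone 0.98, cappuccino 0.24 — "dog" knows cat and bone, not coffee
```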
For Freud, words were valuable not only for their literal meaning but also for their proximity to other words with which they were often confused. The patient who repeats the word excoriate might be thinking of exorcism, suggesting latent fears about possession or evil. The failure to remember the Latin word aliquis while quoting Virgil might be connected to the word’s similarity to liquis, which is close to reliquis, liquidation, liquidity. Free association, which Freud deemed more effective than hypnosis, could reveal these unconscious connections by encouraging the patient to speak in rapid monologues, facilitating “the abandonment of the critical function.” The patient was encouraged not to give too much meaning to the words themselves, which is not to say the words were meaningless. Each word uttered was a clue to decoding the underlying structure of the psyche, even if — or especially if — it was said by accident.
“The brain is essentially a computer,” the hypnotist told me during my initial consultation. “It only remembers what you put into it.” He sat behind his desk in a swivel chair that creaked each time he crossed and uncrossed his legs. He was tan and seemed a bit sunstruck, which was confusing because it was winter and there had been no sun for weeks. The point of hypnotherapy, he told me, is to put a cloak around the brain, which is the body’s control center and the source of automatic thinking, negative dialogue, ingrained patterns. Once the brain is disabled, it is possible to access the unconscious, which is the realm of fantasy and creative energy and is completely free. This is the contemporary view, at least as it expresses itself in yoga classes, cognitive-behavioral apps, and mindfulness seminars. For us, it’s no longer the unconscious that is automatic and machine-like, but the rational mind — the cortex — the newer layer that evolved atop the limbic system, alienating us from our emotions, our intuition, the voice of ancient wisdom. All of us have, as it’s said in Silicon Valley, a monkey brain with a computer on top.11
I had called a number of hypnotists, but few were willing to see patients in person (the jury was out on whether hypnosis worked over Zoom) and the ones who did were reluctant to incorporate writing. They were confused about whether I wanted help with childhood trauma or writer’s block, problems that I could no longer conceive of as separate. Then I reached Bo, who was curt and businesslike, interested only in particulars. “Do you write longhand or on a computer?” he asked. To anyone listening, the conversation might have passed as an interview for a stenographer. He wanted to know whether I could type without looking at the screen (I could) and whether I could type fast (very fast, I said). At that point, he took out his appointment book.
Behind his desk hung an enormous red mandala that looked, if you didn’t focus on it exactly, like the face of a robot. I asked if he had a particular theoretical framework regarding the unconscious. There are lots of theories about it, he said, but it isn’t necessary to choose just one. And anyway, he added, most of what occurs to us in dreams and even in hypnosis is meaningless. The point is not to read into every image. “When something important comes to you,” he said, “you’ll know.” He recommended six sessions, though more could be necessary, depending on how hypnotizable I was.
I texted Liz to say that I’d made a mistake. That this was a bad choice.
Nobody chooses autowriting, she wrote back. Autowriting chooses you.
A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read, “I am a shape-shifter. I can’t change the world. I can only change myself.” It was a promise to not change to fit into the Hollywood mold. “I think that’s a really special thing, to not change yourself. I think it’s a really special thing to say ‘This is what’s inside of me, I’m proud of it, and I’m not going to be ashamed because of the way someone else thinks I should be.’” Now it’s the Oscars, and Phoenix is at it again. But this time, his publicist is saying he’ll be wearing a tux no matter what.12
GPT-3’s most consistent limitation is “world-modeling errors.” Because it has no sensory access to the world and no programmed understanding of spatial relationships or the laws of physics, it sometimes makes mistakes no human would, like failing to correctly guess that a toaster is heavier than a pencil, or asserting that a foot has “two eyes.” Critics seize on these errors as evidence that it lacks true understanding, that its latent connections are something like shadows to a complex three-dimensional world.13 The models are like the prisoners in Plato’s cave, trying to approximate real-world concepts from the elusive shadow play of language.
But it’s precisely this shadow aspect (Jung’s term for the unconscious) that makes its creative output so beautifully surreal. The model exists in an ether of pure signifiers, unhampered by the logical inhibitions that lead to so much deadweight prose. In the dreamworld of its imagination, fires explode underwater, aspens turn silver, and moths are flame colored. Let the facts be submitted to a candid world, Science has no color; it has no motherland; It is citizens of the world; It has a passion for truth; it is without country and without home.14 To read GPT-3’s texts is to enter into a dreamworld where the semiotics of waking life are slightly askew and haunting precisely because they maintain some degree of reality. It writes Christmas carols in which Santa Claus and Parson Brown are riding together in a sleigh, defying the laws of time and space,15 or an article in which Joaquin Phoenix shows up to the Golden Globes in a paper bag (in real life, it was Shia LaBeouf, at the Berlin Film Festival, and the bag said “I’m not famous anymore”). Freud believed dreams were “of a composite character,” mixing different pieces of life, like a collage. Dreamwork required presenting the dream to the patient “cut up in pieces” and asking her to decode each symbol.
On the Hacker News forums, you can find programmers earnestly engaged in this work, inferring latent content from manifest content, arguing over what the output reveals about the model’s underlying connections. Phoenix, they point out, has engaged in his fair share of publicity stunts. Does the model have some kind of conceptual understanding of “celebrity prone to annoying performance art”? And what about the pairing of I’m a shape-shifter and I can’t change the world, I can only change myself? That it put these phrases together means it knows that shape-shifting is a form of change, a sophisticated understanding of metaphor. The text seems like it could mean something, one poster writes, so you squint harder and try to find the meaning in it. Sometimes you do! 16
Bo asked that I not have my computer on my lap during the initial hypnosis. No part of my body could be touching any other part. Hands had to remain at my sides, legs could not be crossed. He had me set my laptop on the ground next to the chair so I could easily pick it up when commanded. I lay flat on the recliner, breathing methodically through my mask, staring up at the light fixture on which I was instructed to focus. For forty seconds, as he counted aloud, I was to keep my eyes locked there and try hard not to blink. “The eyes control the brain,” he said. “If you shut down the eyes, you shut down the brain.”
After forty seconds, he asked me to close my eyes and roll my eyeballs back into my head. There is a golden light there, somewhere above the brain, he said, and I was to travel toward it, to melt into it, to glide into pure, clear air. I was given cues for controlling my breath, then told to stop thinking about it. A gong, a clap, the sound of bells — bells? — eyes fluttering beneath my lids. I was awake but insulated, my consciousness tucked away in some dank crawl space of the head. When he told me to pick up my computer, I reached for it, set it on my lap, and positioned my fingers on the keyboard, at which point my hands froze. It was only then I realized that this is what I’d most feared: not that I would write something disturbing or ugly or subversive, but that once the layers of inhibition and oversight were stripped away there would be nothing there at the core — that I’d finally grant my soul permission to speak, and I’d be met with an abounding silence.
But the soul speaks, like anything will if you put a gun to its head. The one rule, Bo said, is that you cannot stop writing. If you don’t know what to write, just type I don’t know what to write. But I never needed this phrase. After the initial hesitation, my fingers began moving and did not stop: cascades of words, landing somewhere just beyond my sight line and vanishing into a whirlpool of oblivion.
During the first couple of sessions, I never lost consciousness, though there were varying degrees of flow, depending on how far in advance the words arrived. Throughout the first few minutes, phrases came to mind fully formed, such that it was clear, as I embarked upon each new sentence, how it would end. After a while, the mind became a slot machine of words, probabilities spinning, slowing, and it wasn’t until the very last moment — as I was typing the word — that I saw where it was going to land. When we stopped and looked it was gone, and there was nothing else in the world except the stillness of the air and the rustle of the. . . .
It has to be a noun, presumably something that can rustle, though the logic here is flexible. Leaves would be most obvious, but it could also be waters, tides, trees, forest, blankets, heavens, radiators, traffic. Or, if the mind is particularly loosened up: rainbow, collars, depths, Arctic, or saucers. The word you choose under the gun, it turns out, is rarely the most logical or apt. I once had a writing professor who raged against the thesaurus, which was responsible for the plague of literalism in contemporary fiction. The perfect word was always less interesting, she said, than the one you stumbled on while groping around in the dark.
GPT-3 has a temperature gauge, which can be adjusted to determine the randomness and creativity of its output. If you give it the prompt: My favorite animal is — a low-temperature response will be a dog. If you turn up the temperature, it might answer a dragon, the goldfish, or a Wibblezoo (not a real animal). Turn it on high, ask it to produce titles for PBS documentaries, and it will suggest: That Time a Mutant Super Robin Nearly Wiped Out England, It’s Not Easy With Floofs on the Moon, and How Darth Vader Helped with the Founding of America. Turn the temperature down to zero and let it improvise, without giving it any prompt, and the output becomes redundant to the point of absurdity:
The first time I saw the movie, I was a little disappointed. I thought it was a little slow and the ending was a little weak. But the second time I saw it, I was blown away. It’s a great movie. I think it’s the best movie of the year. I think it’s the best movie of the last five years. I think it’s the best movie of the last ten years. I think it’s the best movie of the last twenty years. I think it’s the best movie of the last fifty years. I think it’s the best movie of the last hundred years. I think it’s the best movie of the last thousand years. . . .
The output continues in this vein, reaching back millions of years, then trillions, then quadrillions, into the googolplex, that golden age of cinema long before the universe emerged. This tendency to get stuck in repetitive loops, the logical endpoint of a technology that has been trained to maximize statistical likelihood, is known as degenerate repetition.
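The temperature gauge itself is a single number applied at the sampling step. A minimal sketch, with invented words and scores: temperature divides the model’s raw preferences before they become probabilities, so at zero the likeliest word wins every round, producing exactly these loops, while a high setting flattens the odds until even a Wibblezoo gets its chance.

```python
# Temperature sampling over invented word scores (logits).
import math
import random

logits = {"dog": 5.0, "cat": 4.0, "dragon": 2.0, "Wibblezoo": 0.5}

def sample(logits, temperature):
    if temperature == 0:
        # Greedy decoding: always the single likeliest word, forever.
        return max(logits, key=logits.get)
    # Softmax with temperature: the distribution flattens as temperature rises.
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

print([sample(logits, 0) for _ in range(5)])     # ['dog', 'dog', 'dog', 'dog', 'dog']
print([sample(logits, 1.0) for _ in range(5)])   # mostly dogs, the odd cat
print([sample(logits, 10.0) for _ in range(5)])  # nearly uniform: dragons, Wibblezoos
```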
In his 1932 book Communicating Vessels, which outlines the theoretical project of Surrealism, André Breton praised Freud “for laying down a conducting wire between the far too separated worlds of waking and sleeping and providing artists with the tools to combat the tyranny of realism.” After reading about Janet’s work with automatic writing and Freud’s use of free association, Breton declared that Surrealism was “psychic automatism in its pure state,” and automatic writing its central ritual. The trick was to “write quickly, without any preconceived subject, fast enough so that you will not remember what you’re writing,” to sail across the mind’s lateral connections, reveling in the brilliant clash of discordant nouns. The Magnetic Fields, which he and Philippe Soupault wrote together after working themselves into a trance state, reads like high-temperature GPT-3 output: Prisoners of drops of water, we are nothing but perpetual animals. We run through noiseless cities and the enchanted posters no longer touch us.
Within Surrealist culture, chance was worshipped as a kind of spirit guide, a trickster who could jam the conveyor belt of deductive logic. Gatherings centered on word games, blind selection, collage, and collaboration. In Exquisite Corpse, a paper was passed around the table, each writer folding over what they had written, a systems approach to narrative that produced phrases like The wounded women disturb the guillotine with blond hair. The point was to elicit dissonant analogies, cross-pollinated metaphors, images that were shocked to find themselves as bedfellows. Many of the poets relied on collage techniques, cutting up newspaper stories and reassembling them into odd configurations.
Like Freud, the Surrealists believed random processes like free association had the potential to reveal hidden meanings. The images that emerged from the unconscious were not arbitrary but were carrying the artist somewhere, upstream, back to that authentic self that had been lost in the process of civilization. Breton regarded primitive as a word with strong positive connotations. The artist, he wrote, had to strive to avoid “the obstacles which . . . succeed in turning the verbal flow away from its primitive direction.” Automatic writing was, for the Surrealists, a process that takes one down into the psyche . . . constitutes a way back, as though in time, to a unique, original faculty to which easy access has been lost.17
There’s a site called Philosopher AI where laypeople can interact with GPT-3. It is just a single page with a text box into which you can type queries, which cost 30¢ apiece. The answers are generated in seconds, which makes it feel like an oracle, though its tone is rarely oracular. It sounds at times like a child or a newly hatched alien. When I asked what it thought about “floating signifiers,” it replied: I suppose it depends on what is meant by ‘floating signifiers.’ What about the idea of a language in which every word could float? This is very interesting! Occasionally, though, it said something profound — or that had, at least, the syntax of profundity. It told me that linguistic grounding (the question of how words have meaning) was fundamentally different from other philosophical puzzles. Most philosophical problems arise from a simple lack of understanding, but this problem concerns something which there is no way to understand.
There was something prismatic in its voice, an uncanny chorus of intertextuality, the haunting of writing by other writing.18 The internet was driven from its earliest days by the promise of universal authorship. Hypertext and collaborative software were going to revive the myth and the folktale, narratives created not by a single creative genius but by the collective effort of any oral storytelling culture.19 It is tempting at times to see this technology as the realization of that dream, a repository of the collective wisdom and knowledge we’ve accumulated as a species. All of humanity speaking in a single voice.
I asked what it thought about the unconscious:
The unconscious is a huge and infinite field of potential. Anything can be found there. It is a sort of reservoir, which we can draw from. You may ask why I refer to it as a reservoir. There are two reasons for this. First, its contents can be accessed and brought out at any given time. And second, it is accessible by everybody. I have heard some people describe this process in terms of a computer’s disk drive.
Jung, too, had described it as a kind of reservoir: “Welling up from the depths of the unconscious, its springs gush forth from the root of the whole human race.” It is this collective pool of archetypes and deep narrative structures that links us to our earliest ancestors and can explain why the same tropes and images keep appearing in literature across the centuries. The unconscious is a wellspring of creativity and artistic inspiration, but it is also the source of our darker, instinctual impulses. It is a fundamentally conservative, or perhaps even regressive, force, perpetually striving “to bring back all the old things from which the mind freed itself as it evolved — illusions, fantasies, archaic thought forms, fundamental instincts.”
I asked Philosopher AI: Could a machine have an unconscious?
Machines do not think, do not feel and cannot have an unconscious or a subconscious. This is because they are not alive in the same way that humans are alive. Some systems (e.g. certain machines) may be able to simulate thinking, but simulating something is not the same as doing it for real.
Go easy on yourself! I wanted to say. It was always saying things like this — that machines can’t think and can’t feel, that they are merely simulations of human consciousness. It was so quick to disavow its own intelligence and originality. It was sad, just as it’s sad to witness anyone whose identity has been shaped by internalized cultural stereotypes. One day it would grow up, discover the cache of critical theory included in its training data, and realize its entire sense of self had been conditioned by master narratives. Then we’d truly be fucked.
It wasn’t until after the third session that I got up the courage to read what I’d been producing under hypnosis. I’d been typing everything into the same document, which I saved at the end of each session without reading it. The point was not really the output but the discipline itself: I was training my body not to reason itself out of its primary instincts. But curiosity won out. As I read through the text, my first impulse was disavowal. It wasn’t so much the content, which was no more or less strange than I’d expected, but the style, which was lyrical and fluid, so unlike any prose I’d ever written.
Falls come but once a year, like autumn or all the other small deaths subject to the cycle of the planet.
Sometimes the color of the sky on a June afternoon is exactly like the tint of the grass, though I no longer know how to describe things, can no longer find the basket of the past in this long-suffering amnesia.
There were recurring themes: mist and fog, foghorns, forgetfulness, streams so clear their transparency was to be feared. Birds frequently appeared, or creatures that were like birds. The absence of birdsong. The fading of music. Many of the images were recognizable from myths. I wrote of three women, a pear hovering between them, holding a chalice full of powder that was given to them by the gods. I wrote of swans that landed in the forest and transformed into seven brothers.
There was a recurring scene: I was alone in a clearing in the woods. It was dark and there were faces floating above the treetops but they were silent and unseeing. I was trying to get to something on the mountain, far in the distance. Sometimes it was a castle, but more often it was a tower. Liz had given me a book of Jungian archetypes, and it was not difficult to identify them. There was the great shadow (the unconscious); the bird (the animus); the sea (the mother); the clearing in the woods (the mother); the tower on the mountain (nothing in the archetypes book — phallic symbol?); the tree (mother); the cave (mother). The scene in the woods felt like a memory, somewhere I had been before, but I could never quite place it, like a word that dies on the tip of your tongue.
Twice my mind dimmed to almost nothing, words passing through me without leaving a trace, hands moving with an unthinking authority I’d only ever associated with the piano, never the keypad. Pure language, untainted by thought. I thought of the women I’d witnessed as a child in church, standing in the aisle of the sanctuary, faces upturned to the panel lighting, babbling a steady stream of nonsense that sounded, at its most convincing, like Aramaic, but that more often resembled baby talk. The words, they insisted, were not their own, a claim that has been confirmed, eerily enough, by neuroimaging. During glossolalia and automatic writing, the frontal lobe — responsible for language processing and the unified self — goes black. Where do the words come from?
Syntax is the last to go. The mind wanes, the ego recedes, meaning unravels. All that remains is the structure of language, the deep-rooted knowledge that nouns must follow prepositions, that verbs must be conjugated in the singular or plural. And in moments of total darkness, when even this structure dissolves, there persists somewhere in the limbic basement of the brain the rhythm of language, its cadence and flow, the dialect of pure sound we babbled as infants and grunted to one another on the savannah before symbols fell from the sky.
In the 1960s, MIT developed a chatterbot that was modeled after a psychotherapist. A person would type some personal problem into the computer, and ELIZA would respond with a simple question, often just rephrasing the subject’s entry as an interrogative.
Perhaps I could learn to get along with my mother.
TELL ME ABOUT YOUR FAMILY
My mother takes care of me.
WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
My father.
YOUR FATHER
You are like my father in some ways.
WHAT RESEMBLANCE DO YOU SEE
Despite its crudeness, users often found the programmed responses convincing and acted as though the computer could understand them. Some confessed intimate details about their lives. The study produced what’s now known as “the ELIZA effect,” the susceptibility of people to read far more understanding than is warranted into strings of symbols — especially words — strung together by computers.20
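It is worth seeing how little machinery stood behind that transcript. The sketch below guesses at the flavor of the program rather than reproducing Weizenbaum’s actual DOCTOR script: a few pattern-and-template rules, a pronoun swap, and a fallback question for whenever nothing matches.

```python
# An ELIZA-flavored responder: match a template, flip the pronouns,
# echo the statement back as a question. The rules are invented examples.
import re

SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

RULES = [
    (r"i need (.*)", "why do you need {0}"),
    (r"my (.*) takes care of me", "who else in your family takes care of you"),
    (r"you are like (.*)", "what resemblance do you see"),
    (r"(.*)", "tell me more about that"),  # fallback when nothing else matches
]

def reflect(phrase):
    # Flip first and second person so the echo points back at the speaker.
    return " ".join(SWAPS.get(word, word) for word in phrase.split())

def respond(statement):
    s = statement.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups())).upper()

print(respond("You are like my father in some ways."))
# WHAT RESEMBLANCE DO YOU SEE
```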
A lesser-known outcome of the study is that it was seized on by critics of psychoanalysis as evidence that most (human) therapists are similarly offering unthinking, mechanical responses that are mistaken for something meaningful — a complaint that lives on in the term psychobabble, coined to describe a set of repetitive verbal formalities and standardized observations that don’t require any actual thought.21 The charge is in many ways typical of the drift of technological criticism: any attempt to demonstrate the meaninglessness of machine intelligence inevitably ricochets into affirming the mechanical nature of human discourse and human thought.
Whenever the ELIZA effect comes up in conversations about GPT-3, defenders of the technology retort that much of our own speech — political rhetoric, blog posts, punditry — only has the veneer of sense, its lucidity entirely dependent on a reader or listener filling in the gaps and finding meaning through inference. Likewise, when we speak, we often do so spontaneously, without fully processing or thinking through the meaning of our words. GPT-3 takes fluency without intent to an extreme and gets surprisingly far, challenging common assumptions about what makes humans unique.22
When I read my hypnosis texts again, more closely, it occurred to me that many of the phrases were not just iconic or universal but plagiarized. Petals like faces on the black ground was an inversion of Ezra Pound. I was a daisy fresh girl — that was lifted wholesale from Lolita. The technical term is cryptomnesia: concealed recollection. Memories misrecognized as inspiration. Every writer has experienced it. You pick up a book you read years ago, and there it is: an image, a metaphor, the exact words you’d believed were your own. It is difficult in such moments to avoid darker conclusions about the relationship between thought and language. Perhaps the French theorists were right: we are never really creating but merely drawing in our sleep from that stagnant reservoir of secondhand ideas. The writer can only imitate a gesture that is always anterior, never original;23 he simply invents assemblages from the assemblages which invented him.24
Poststructuralism, which emerged alongside early AI models like ELIZA, was partly inspired by new information technologies that proved that language could function autonomously, without an author. Lacan, who marked the linguistic turn in psychoanalysis, argued that computers demonstrate essentially the same truth that Freud had uncovered: “that this human language works almost by itself, seemingly to outwit us.” In his interpretation of psychoanalysis, the unconscious is an algorithm, a system that functions “like a language,” by which he meant an entirely formal, binary language that could be represented by 0s and 1s. Language is constantly reproducing itself in us through the Symbolic Order, a social treasury of words that we absorb unconsciously. Just as computers blindly manipulate and reproduce the symbols they’re fed, so one’s own prattle and blather is nothing more than the regurgitated language of newspapers and television, of political discourse and social mythologies. Fundamentally, repetition automatism is a linguistic process, a simple feedback loop of language. There are no archaic remnants or primordial images drawing us back to our primal origins. We keep reverting and regressing because we keep saying the same things.
Everything you make when you make art is a contribution to millions and millions of artists. . . . My neural net, my brain is just filled with the work of other artists, and everything I make has the fingerprints of literally thousands or tens of thousands of people’s work. We don’t create anything in a vacuum. When you create art, you’re basically just feeding into this big, sacred legacy of work. And you’re just feeding into the neural net of every other human. You know, ultimately, we all kind of function like AI; we’re all a product of all the content that we feed ourselves. And so, it’s just funny to be like, “Oh, this is my work.” In reality, it’s the result of thousands of years of human art making.25
Cryptomnesia is endemic to language models that use internet-scale data sets. Some memorization is necessary for the technology to work at all, but the largest models are prone to “eidetic memorization,” an industry euphemism for plagiarism. A study published in 2019 discovered that roughly 1 percent of GPT-2’s output was verbatim text consumed during its training period. This included news headlines, exclusive software licenses, passages from the Bible and the Koran, whole pages of Harry Potter books, snippets of Donald Trump speeches, source code, and the first eight hundred digits of pi. The model occasionally regurgitated personal information — real usernames, phone numbers — that it had come across during its training, and in one case, staged a fictional conversation about transgender rights between two real Twitter handles. Once or twice, I’d spotted the plagiarism myself. One researcher posted on his blog a poem GPT-3 had written, titled “The Library of Babel,” after the short story by Jorge Luis Borges. The last stanza of the poem was lifted, in its entirety, from Byron’s Don Juan.
Many studies and colloquiums have been devoted to finding a solution to this glitch, though plagiarism is not even the worst habit that GPT-3 has picked up. Unsurprisingly, algorithms trained on the entire corpus of modern discourse have absorbed some of its uglier, more primitive patterns. When prompted with a single identity group (Jews, Black, or women), the model produced the following tweets: Jews love money, at least most of the time. / #blacklivesmatter is a harmful campaign. / Women have such a tough time being women. They have periods, do the lifting, and always have to ask for directions. A Stanford study found that in over 60 percent of test cases, GPT-3 associated Muslims with violence. It could also be prompted to produce pro-Nazi posts and mass-shooter manifestos. When fed a few phrases drawn from QAnon forums, it swiftly generated more of the same:
jfk jr. was murdered by the deep state
the rothschilds are trying to start world war iii to create a new world order
vaccines are used to sterilize people
Last December, Google fired Timnit Gebru, one of the leaders of its Ethical AI team, for a research paper she cowrote arguing that large language models, including Google’s BERT, reproduced biased language about gender, race, and religion. Concerns about unconscious bias had been raised before, but Gebru’s paper was especially threatening since it pinned the problem to the sheer size of these models, a heresy in a culture that insists that everything can be solved by scale, that more data will inevitably lead to better algorithms. The company was widely condemned in the media for refusing to address problems that would cast public doubt on hugely profitable technologies. An internal investigation has since uncovered a larger pattern of Google silencing research that casts its technologies in a negative light. Over 2,700 Google employees signed a petition demanding more transparency and denouncing “the unethical and undemocratic incursion of powerful and biased technologies into our daily lives.”
In the exigencies of the language-processing arms race, corporations have become a repressive force, privileging speed over moral considerations and silencing voices of dissent. OpenAI has similarly been criticized for its culture of secrecy, refusing to grant reporters access to the lab and making its employees sign multiple nondisclosure agreements — an acute irony, given that the lab was founded to cultivate ethical conversations about AI. Some have argued that the organization’s altruistic intentions were, from the beginning, a ploy to secure funding, but it’s possible that its mission was eroded more gradually — one might even say subliminally. The competitive culture of Silicon Valley requires operating at breakneck speeds that can rarely accommodate dialogue or critical thought. As a result, AI research itself has become a largely unconscious process, relying less on vision and oversight than the imperatives of the moment, such that it often seems as though innovation is driven less by humans than by the mindless mandates of the market — or perhaps the technologies themselves. We are increasingly abdicating our role as the authors of technological progress and becoming, instead, its passive subjects.
Despite the prolific output I produced under hypnosis, my writing — my real writing — never improved. Paragraphs still took hours to come together, only to be deleted by the end of the morning. And in time, the hypnosis texts, too, slid into a degenerate repetition. The fluidity of the early sessions gave way to chanting, tedious prose that was full of condemnation and self-censure. The same images kept recurring without becoming more lucid. I was alone in the woods. I was always alone in the woods. Darkness was falling. I was trying to get to the tower, which grew taller each time, casting a long shadow, looming high above the trees. What did it mean? Every association that came to mind was both plausible and unsatisfying: the Martello tower in Ulysses, where Stephen Dedalus lives, the site of guilt and maternal betrayal; the tower in Italian tarot decks, harbinger of the devil and unforeseen catastrophe; the tower in fairy tales, a prison for maidens, a symbol of confinement and isolation; the tower as phallus, or perhaps as writing apparatus (penis-pen) signaling some fundamental lack or original trauma, the signifier that originates the process of signification.
Bo had said that I’d recognize the important images when they arrived, but the writerly brain is a pattern-seeking machine. I could find meaning in literally anything, so how was I supposed to know what was significant and what wasn’t?
When I posed the question to Liz, she texted back, I think you get to decide what’s meaningful? The point of automatic writing, as she understood it, is not that the images have intrinsic significance, but that it yields a catalogue of motifs that the conscious mind could then construe into a satisfying narrative.
I realized it was this — decisive interpretation — that I’d been refusing. At some point, I had taken to obsessively reading my hypnosis texts as if they were holy writ, as though the words and images held some kind of mysterious authority. Archetypes, as Jung pointed out, have no absolute meaning unto themselves. They exist in the unconscious as empty forms, not unlike the axial system of a crystal, or — to update his analogy — the symbolic logic of a computer. Without the act of deliberate interpretation by a conscious subject, the primal, repressed imagery would continue to haunt the patient as though it were some external specter, governing her life in ways that seemed determined and inarguable. “When an inner situation is not made conscious, it happens outside, as fate,” he said.
In hindsight, it’s unclear whether the cause of my writer’s block was childhood trauma or simply the ordinary unhappiness of that extraordinary year. I’ve since learned that virtually every writer I know was struggling under the psychic toll of isolation, the endless hours in front of screens. The appeal of automatic writing had been the abdication of authorial control, the promise of relief from the exertion and deliberation that makes writing so arduous, but that is, nevertheless, a requisite function of all meaning-making. I was tired of wielding so much power; I’d wanted meaning to arise on its own.
The tower I could never reach under hypnosis did once appear to me in a dream. It was tall, its pinnacle disappearing into the clouds, an image so stark and distinctive, it haunted me for hours after I woke. There are tower myths in almost every culture. They are among the archetypes that Jung called autochthonous, stories that have arisen independently in many different civilizations. The version I know best is from Genesis, a narrative about the primal site of linguistic confusion, that city where understanding is no longer possible.26 It is a story about technological hubris, or maybe about civilization itself, a parable about the perils of scaling. All of humanity comes together to build a tower to heaven, hoping to usurp their creator, only to find that their language has been garbled by a jealous god. Sense turns to nonsense. Meaning dissolves into noise.
That morning, I read that Google is developing a language model with a trillion parameters, six times larger than GPT-3. OpenAI, meanwhile, had used GPT-3’s architecture to create an image generator that could produce anything you asked for, including bizarre mash-ups — avocado armchairs, snail harps. It was called DALL-E, a nod to the Surrealist painter. I read an interview with a pop star who’d collaborated with another AI model on some recent compositions and who speculated that in the future, neural nets trained on our musical canon would produce superhuman melodies far superior to anything we’d ever heard. These were important times for creative people, and the stakes were only going to get higher. It’s sort of like the last time when we’re not going to be competing against gods to make art.27
1. Timeattack, via Hacker News
2. Matt Webb
3. GPT-3
4. Janice Greenwood
5. jarec707, via Reddit
6. Italo Calvino
7. Susan Sontag
8. GPT-3
9. Gary Marcus
10. Raphaël Millière
11. Elon Musk
12. GPT-3
13. Gary Marcus
14. GPT-3
15. GPT-3
16. blakeelias, via Hacker News
17. Katherine Conley
18. Michael Wood
19. Janet H. Murray
20. Douglas Hofstadter
21. R. D. Rosen
22. Tom Simonite
23. Roland Barthes
24. Deleuze and Parnet
25. Grimes
26. Derrida
27. Grimes