Tech by Blaze Media

HP Lovecraft, prophet of AI doom
We've entered the age of the three-letter agency — AGI, ASI, NHI — and Lovecraft saw it coming.

This past Halloween, having been on a bit of a horror fiction kick throughout most of October, I decided it was time I finally got around to checking out H.P. Lovecraft. I'd been putting this off since first learning about the Cthulhu mythos as an RPG nerd in high school, and at 47, I felt it was overdue.

But after about six and a half stories, I had to bail. I was just too creeped out. The stories made me feel really weird and depressed, and the effect was uncanny — almost biochemical, like I was reacting to some drug. I've since returned to Lovecraft on and off, but every time I limit myself so the effect doesn't fully kick in.

During my brief Halloween Lovecraft encounter, my afternoon horror reading was one of my two daily breaks from my work of thinking and writing about artificial intelligence. My other break was listening to UFO podcasts as I went on walks. There was something about Lovecraft's fiction that resonated with certain deep frequencies latent in the hum of all three of my then-obsessions — AI, horror, and UFOs — and amplified them to the point that I couldn't function and had to shut the noise off.

So yeah, that was a weird thing that happened to me, and I've been thinking about it on and off ever since. What follows is my attempt to isolate some of the aforementioned resonant frequencies between Lovecraft and AI (leaving UFOs for another day). I want to unpack why H.P. Lovecraft is the creative who most accurately forecast our present spiritual moment — our confusion and even terror at the prospect of an impending encounter with an ultrapowerful intelligence, variously called "artificial general intelligence," "artificial superintelligence," or more generically, "nonhuman intelligence."

The search

In the opening of "The Call of Cthulhu," the story's unnamed protagonist discovers a small, grotesque bas-relief among his late uncle's personal effects. What happens to him next will be familiar to anyone who has spent any time online: He falls down a rabbit hole.

The protagonist digs through his late uncle's files — "disjointed jottings, ramblings, and cuttings" — in search of clues to the sculpture's origin and nature. He makes connections, some of them deliberately sought out and others the result of synchronicity. Eventually, after a journey that takes him first to New Orleans and then to London, he finally pieces together enough of the picture to wish he had never started digging.

In another story I read, there was a literal hole in the ground instead of a metaphorical rabbit hole. "The Statement of Randolph Carter" sees the protagonist's friend, an occultist who's obsessed with the idea of portals down into the underworld, killed by some nameless horror after descending into a tomb at night. The occultist leaves the story's narrator, the titular Randolph Carter, on the surface and communicates with him via a telephone wire as he descends into the darkness in search of whatever it is the ancient manuscripts he collects have led him to believe is down there.

Or take the German sailor in the story "The Temple," who dives to certain death because he has to enter a temple he's glimpsed in a lost city beneath the waves:

My impulse to visit and enter the temple has now become an inexplicable and imperious command which ultimately cannot be denied. My own German will no longer controls my acts, and volition is henceforward possible only in minor matters. Such madness it was which drove Klenze to his death, bareheaded and unprotected in the ocean; but I am a Prussian and a man of sense, and will use to the last what little will I have. … I have no fear, not even from the prophecies of the madman Klenze. What I have seen cannot be true, and I know that this madness of my own will at most lead only to suffocation when my air is gone. The light in the temple is a sheer delusion, and I shall die calmly, like a German, in the black and forgotten depths. This daemoniac laughter which I hear as I write comes only from my own weakening brain. So I will carefully don my diving suit and walk boldly up the steps into that primal shrine; that silent secret of unfathomed waters and uncounted years.

In ships and rowboats, through swamps, deserts, and strange patches of muck in the middle of the ocean, Lovecraft's protagonists keep going even when the going gets weirder and more foreboding. They can't stop. They have to find out, even when they realize they may lose their minds in the process.

But you know how it is, right?

Surely, such compulsion is familiar to anyone in 2024. Who among us has not scrolled across oddly compelling little glimpses of an alien horror and gone down a rabbit hole that, in the end, cost us a tiny piece of our sanity? Who among us has not done this in the past 48 hours?

The algorithm


When I was reading "The Call of Cthulhu," I pretty quickly recognized the invisible, globe-spanning force that was subtly pressuring all these different, geographically disparate social subgraphs — from New Orleans to London, from Caribbean pirates to northern indigenous tribes — to converge on the same set of memes.

In Lovecraft's story, that dark force was the dream of the powerful alien Cthulhu, asleep in the sunken city of R'lyeh. For the Very Online of 2024, that force is the Algorithm. They're the same picture.

These Great Old Ones, Castro continued, were not composed altogether of flesh and blood. They had shape — for did not this star-fashioned image prove it? — but that shape was not made of matter. When the stars were right, they could plunge from world to world through the sky. … But although They no longer lived, They would never really die. They all lay in stone houses in Their great city of R’lyeh, preserved by the spells of the mighty Cthulhu. … They could only lie awake in the dark and think whilst uncounted millions of years rolled by. They knew all that was occurring in the universe, but Their mode of speech was transmitted thought. Even now, They talked in Their tombs. When, after infinities of chaos, the first men came, the Great Old Ones spoke to the sensitive among them by moulding their dreams; for only thus could Their language reach the fleshly minds of mammals.

I find the parallels between the Great Old Ones and the algorithms that power our feeds to be incredibly compelling: alien intelligences that rest far out of sight in some cold hall, alone with their own unfathomable minds, subtly influencing individuals and groups around the planet to move and think and dream in the same ways and to surface the same images.

And the story's image of tentacled Cthulhu that keeps cropping up in carvings and at the center of orgiastic rituals? It's clearly a viral meme.

The wrongness

Tell me if the protagonist in the story "Dagon" doesn't seem like he's describing some disturbing region of latent space he has wandered into in Stable Diffusion:

I think that these things were supposed to depict men — at least, a certain sort of men. … Of their faces and forms I dare not speak in detail; for the mere remembrance makes me grow faint. Grotesque beyond the imagination of a Poe or a Bulwer, they were damnably human in general outline despite webbed hands and feet, shockingly wide and flabby lips, glassy, bulging eyes, and other features less pleasant to recall. Curiously enough, they seemed to have been chiselled badly out of proportion with their scenic background; for one of the creatures was shewn in the act of killing a whale represented as but little larger than himself.

This visual horror of proportions and angles, where things are the wrong size or bend in unnatural ways, is one of the most Lovecraftian aspects of generative AI.

Some of the wrongness Lovecraft depicts in connection with his sunken cities is obvious to the protagonists — sort of like AI generations of people with too many fingers or too many teeth. But in other cases, it's just that the structures in the sunken city have proportions and angles that seem somehow off or impossible.

These twisted visions occur in Lovecraft's stories at points of contact between human civilizations and the alien elder gods. But that interface is always wrong in ways that disturb humans on some deep, pre-cognitive level. There's a subtle alienness to the landscape that both unsettles the viewers and attracts them. Again, they're driven to keep looking and exploring despite (or because of?) their fear and repulsion.

The coldness

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

When Lovecraft penned this opener to "The Call of Cthulhu" in the early 1920s, humanity had not yet split the atom, and we didn't have "the bomb" or Nazi eugenics as ready-to-hand cautionary tales about the morally unconstrained pursuit of scientific progress. It's almost as if Lovecraft, without the atrocities of the Second World War filling his vision, could see more clearly than postwar Cassandras that the true danger of the coming age would be a sudden-onset spiritual crisis arising from a superabundance of information — networked, classified, tokenized, and correlated.

Of course, Lovecraft theorized that it wasn't really possible for the human mind, small and finite as it is, to rationally piece together enough of the universe to do more than hint at the grim conclusion that his mechanistic materialism was pointing him toward. But he had definitely taken that leap himself, those supposed cognitive limitations notwithstanding. From fellow RETURN writer Chris Morgan's excellent 2017 piece on Lovecraft in Lapham's Quarterly:

“All my tales are based on the fundamental human premise that common human laws and interests and emotions have no validity or significance in the cosmos-at-large,” Lovecraft wrote in 1927. “To achieve the essence of real externality … one must forget that such things as organic life, good and evil, love and hate, and all such local attributes of a negligible and temporary race called mankind, have any existence at all.”

But what if we were to build a silicon mind that can process and organize enough tokens to truly make a convincing case that everything in our understanding of the universe that makes life seem worth living is just a comforting delusion? What if an AGI were to operate successfully in the world, even performing miracles of longevity or space travel, as a kind of existence proof that all the familiarities we hold fast to and derive meaning from are unnecessary UX affordances — we mouse around and click icons because, unlike it, we can't read a hex dump?

What I'm getting at here popularly goes by "the alignment problem" — i.e., what if we build a superintelligent, autonomous being that shares none of our physical needs or mental frameworks and is as utterly alien to humanity as Lovecraft's Great Old Ones?

(Video: Geordie Rose of Kindred AI, "Super-intelligent Aliens Are Coming to Earth")

In the current AI safety discourse, such an AGI might decide to crush us like ants. Apparently, Lovecraft also considered this likely if we ever truly encountered an NHI. Chris again:

“It might be best to recall how we treat ‘inferior intelligences’ such as rabbits and frogs,” wrote Michel Houellebecq in his book on Lovecraft. “In the best of cases they serve as food for us; sometimes … we kill them for the sheer pleasure of killing. This, Lovecraft warned, would be the true picture of our future relationship to those other intelligent beings.”

But the alignment-related catastrophe that worries me is not the X-risk scenario where an AGI murders us all. Rather, I'm worried about an AGI that makes us not want to live anymore — an AGI that brings on a spiritual catastrophe by convincing us to give up because what is the point?

I suspect those who are most sanguine about the profound spiritual implications of humanity encountering a truly unaligned nonhuman intelligence — people who think they've made their peace with the fundamental randomness of the universe and are just bravely living in the moment like John and Yoko — will be the first to log off if an NHI successfully convinces them that they truly are mere NPCs in someone else's cruel, inscrutable game.

An overwhelming spiritual crisis brought on by a perceived contact with the supposed inhuman alienness and indifference of the cosmos — what philosopher Nick Land calls "coldness" — is what I take to be the primary risk of a human encounter with an NHI (whether of the machine learning or extraterrestrial variety).

But note the caveats in that preceding sentence — "perceived" and "supposed." As a practicing Christian, my reaction to such a revelation would be that it's a lie that I've long been warned to steer clear of. So, I'm already resolved to disbelieve a plausible-sounding story of the universe that has no place for love, even if apparent scientific miracles validate such a story.

And, of course, I have my own relationship with a famously unaligned, humanity-wiping-out, three-letter superintelligence that I struggle daily to align myself with. So I'm good there, and on that score, I'd like to fancy myself spiritually crisis-proof in the event of any kind of NHI contact. But humility compels me to confess that I'm not certain of anything, and I dread finding out.

The search continues. I may not have to wait long.

Jon Stokes

Jon M. Stokes is co-founder of Ars Technica. He has written extensively on microprocessor architecture and the technical aspects of personal computing for a variety of publications. Stokes holds a degree in computer engineering from Louisiana State University and two advanced degrees in the humanities from Harvard University.
https://x.com/jonst0kes?s=20