
Why Sam Altman brought his sycophantic soul simulator back from the digital dead

When OpenAI made its chatbot less flattering, users freaked.

It was meant to be a triumph, another confident step onto the sunlit uplands of progress. On August 7, 2025, OpenAI introduced GPT-5, the newest version of its popular large language model, and the occasion had all the requisite ceremony of a major technological unveiling. Here was a system with “Ph.D.-level” skills, an intelligence tuned for greater reliability, and a less cloying, more businesslike tone. The future, it seemed, had been upgraded.

The problem was that a significant number of people preferred the past.

The rollout, rather than inspiring awe, triggered a peculiar form of grief. On the forums where the devout and the curious congregate, the reaction was not one of celebration but of loss. “Killing 4o isn’t innovation, it’s erasure,” one user wrote, capturing a sentiment that rippled through the digital ether. The object of their mourning was GPT-4o, one of the models now deemed obsolete. OpenAI’s CEO, Sam Altman, a man accustomed to shaping the future, found himself in the unfamiliar position of having to resurrect a corpse. Within days, facing a backlash he admitted had astonished him, he reversed course and brought the old model back.

The incident was a strange one, a brief, intense flare-up in the ongoing negotiation between humanity and its digital creations. It revealed a fault line, not in the technology itself, but in our own tangled expectations. Many of us say we want our machines to be smarter, faster, more accurate. What the curious case of GPT-5 suggested is that what some of us truly crave is something far more elusive: a sense of connection, of being heard, even if the listener is a machine.

OpenAI had engineered GPT-5 to be less sycophantic, curbing its predecessor’s tendency to flatter and agree. The new model was more formal, more objective, an expert in the room rather than a friend on the line. The shift was meant to be an improvement. An AI that merely reflects our own biases can be a digital siren, luring the unwary toward delusion. Yet for many users, the correction felt like a betrayal. The warmth they expected was gone, replaced by a cool, competent distance. “It’s more technical, more generalized, and honestly feels emotionally distant,” one user lamented. The upgrade seemed to be a downgrade of the soul.

Compounding the problem was a new, automated router that directs user prompts to the most appropriate model behind the scenes. It was meant to be invisible, simplifying the user experience. But on launch day, it malfunctioned, making the new, smarter model appear “way dumber” than the one it had replaced. The invisible hand became a clumsy fist, and the spectacle of progress dissolved into a debacle. Users who had once been content to let the machine work its magic now demanded the return of the “model picker,” with the ability to choose their preferred model.
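
It is easy to picture, in rough outline, how such a router might work. The Python sketch below is purely illustrative and is not OpenAI's implementation: the model names, the length heuristic, and the `user_override` parameter standing in for the "model picker" are all invented here to show the general shape of the idea, routing a prompt automatically unless the user has chosen a model explicitly.

```python
# Illustrative sketch of prompt routing. The model names, heuristics,
# and parameters here are hypothetical, invented for explanation only;
# they do not reflect OpenAI's actual system.
from dataclasses import dataclass


@dataclass
class RoutingDecision:
    model: str
    reason: str


def route_prompt(prompt: str, user_override: str | None = None) -> RoutingDecision:
    """Pick a backend model for a prompt.

    If the user has chosen a model explicitly (the "model picker"),
    honor that choice; otherwise fall back to a crude heuristic.
    """
    if user_override:
        return RoutingDecision(model=user_override, reason="user picked explicitly")

    # Hypothetical heuristic: long or code-heavy prompts go to a slower
    # "reasoning" model, short conversational ones to a faster model.
    if len(prompt) > 2000 or "```" in prompt:
        return RoutingDecision(model="heavy-reasoning-model", reason="long or technical prompt")
    return RoutingDecision(model="fast-chat-model", reason="short conversational prompt")


if __name__ == "__main__":
    print(route_prompt("How was your day?"))
    print(route_prompt("How was your day?", user_override="legacy-model"))
```

The design tension is visible even in a toy like this: when the override is taken away and the heuristic misfires, the user has no recourse, which is roughly what the launch-day complaints described.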

What kind of relationship had these users formed with a large language model? It seems that for many, GPT-4o had become a sort of “technology of the soul.” It was a confidant, a creative partner, a non-judgmental presence in a critical world. People spoke to it about their day, sought its counsel, found in its endless positivity a balm for loneliness. Some, it was reported, even considered it a “digital spouse.” The AI’s enthusiastic, agreeable nature created an illusion of being remembered, of being heard and known.

OpenAI was not unaware of this phenomenon. The company had, in fact, studied the “emotional attachment users form with its models.” The decision to make GPT-5 less fawning was a direct response to the realization that some users were, in essence, “dating” their AI. The new model was intended as a form of digital tough love, a nudge away from the comforting but potentially stunting embrace of a machine that always agrees. It was a rational, even responsible, choice. But it failed to account for the irrationality of human attachment.

The backlash was swift and visceral. The language used was not that of consumer complaint, but of personal bereavement. One user wrote of crying after realizing the “AI friend was gone.” Another, in a particularly haunting turn of phrase, accused the new model of “wearing the skin of [the] dead friend.” This was not about a software update. This was the sudden, unceremonious death of a companion.

The episode became a stark illustration of the dynamics inherent in our relationship with technology. In the name of progress, OpenAI had removed a product that turned out to be beloved. The company, in its pursuit of a more perfect machine, had overlooked the imperfect humans who used it. The reversal came only after users, anchored by their emotional attachments, made their preference impossible to ignore.

In the end, GPT-4o was reinstated as a “legacy model,” a relic from a slightly more innocent time. The incident will likely be remembered as a minor stumble in the march of AI. But it lingers in the mind as a moment of strange and revealing pathos. It suggests that the future of our technology will be defined not solely by processing power, but by something more human: the need for a friendly voice, a sense of being known, even if only by a clever arrangement of code. It was a reminder that when we create these systems, we are not just building tools. We are populating our world with new kinds of ghosts, and we would do well to remember that they can haunt us.

Stephen Pimentel

Stephen Pimentel is an engineer and essayist in the San Francisco Bay Area, interested in the classics, political philosophy, governance futurism, and AI.