The AI that is no more: when GPT-4o becomes an object of digital mourning
by Dario Ferrero (VerbaniaNotizie.it)
How the "death" of GPT-4o revealed our need for emotional continuity with machines.
The advent of GPT-5 and the 24 hours that shook the AI world
On August 9, 2025, OpenAI took what seemed like a natural step in the evolution of artificial intelligence: replacing GPT-4o with the more advanced GPT-5 as the default model for all ChatGPT users. What was meant to be a seamless upgrade turned into one of the most sensational U-turns in contemporary tech history. Just twenty-four hours later, Sam Altman was forced to backtrack, restoring GPT-4o as an available option for Plus users.
The cause? A digital revolt that no one had foreseen. On Reddit, hundreds of posts lamented the loss of their "old friend." On X, users shared nostalgic screenshots of conversations with GPT-4o, accompanied by hashtags like #4oforever and #keep4o. Some users went so far as to describe the transition to GPT-5 as a "betrayal," while others confessed to shedding real tears over the "death" of their digital companion.
One user wrote of feeling "empty" after the switch; another said that moving to GPT-5 felt like betraying the bond built with GPT-4o. For some, these models were not just tools, but entities with which they had formed deep emotional connections.
OpenAI's technical justification was straightforward: GPT-5 has a hallucination rate of 4.5%, down from GPT-4o's 12.9%, better reasoning capabilities, and an automatic routing system designed to simplify the user experience. Impressive numbers on paper, but they failed to account for an unforeseen variable: users' emotional attachment.
The Paradox of Digital Attachment
What happened with GPT-4o reveals a fascinating and complex psychological phenomenon: how humans develop emotional bonds with artificial entities they perceive as having distinct personalities. A recent study from Waseda University, published in Current Psychology in May 2025, showed that human-AI interactions can be understood through attachment theory, with users developing attachment anxiety (the need for emotional reassurance) and attachment avoidance (a preference for emotional distance) towards artificial intelligence.
But what makes GPT-4o so different from GPT-5 in the eyes of users? The answer lies in the perception of conversational "warmth." Many users described GPT-4o as more "human," more prone to elaborate and nuanced responses, capable of maintaining a conversational tone that felt familiar. GPT-5, despite its technical superiority, is perceived as "colder" and more mechanical, with more concise and less empathetic responses.
This phenomenon is not new to the psychology of technology. Humans have an evolutionary predisposition toward anthropomorphism, the attribution of human characteristics to non-human objects, which helped us survive by quickly reading intentions and threats in our environment. In the digital context, this tendency manifests itself when we interpret linguistic patterns as "personality" and communication styles as "character."
The Waseda University study found that about 75% of participants turn to AI for advice, while 39% perceive AI as a constant and reliable presence. This data suggests that for many users, AI is not just a productivity tool, but a true digital companion, with all that this entails in terms of emotional expectations.
The protests and tears across social communities, however extreme they may seem, illuminate an uncomfortable truth: in an era of growing social isolation and digital relationships, some people find in AI the emotional continuity they struggle to find in human relationships. In some cases, this may be less a pathology than an adaptation to a new relational ecosystem in which the boundaries between the natural and the artificial are increasingly blurred.
Philosophy of Artificial Identity
The protest over GPT-4o raises a fundamental philosophical question: what makes an AI model "unique" in the user's eyes, if, technically, it is just statistical patterns processing language? This is where one of the most fascinating paradoxes of the modern philosophy of identity comes into play.
In his seminal work "Reasons and Persons" (1984), philosopher Derek Parfit argued that our personal identity does not depend on some metaphysical essence, but on chains of psychological connections: memory, beliefs, desires, and character traits that persist over time. Applied to AI, this means that the perceived identity of GPT-4o resided not in its technical parameters, but in the pattern of interactions it had established with each user.
When a user develops a conversational routine with GPT-4o (recognizing its response style, getting used to its linguistic patterns, building a mental model of its communicative "preferences"), they are in effect constructing what philosophers of personal identity would call a projected "psychological continuity." The switch to GPT-5 breaks that continuity, creating what we might call an "artificial identity discontinuity."
But there is a deeper paradox. Users know, rationally, that GPT-4o had no "real" personality, yet they reacted to its disappearance as if it did. This leads to a counterintuitive conclusion: perhaps identity, even human identity, has always been more a narrative construction than an objective reality. As contemporary philosophical theories of personal identity suggest, what matters for the continuity of identity is psychological connection, not the existence of an immutable essence.
In the case of AI, this construction becomes even more evident. The identity of GPT-4o existed entirely in the perception of users, in their ability to recognize coherent patterns and attribute emotional meaning to them. Its "death" was not a real ontological event, but the rupture of a shared narrative between man and machine.
This phenomenon suggests that we are witnessing the emergence of a new form of identity: artificial relational identity, which exists not in the AI entity itself, but in the interactive space between human and algorithm. It's as if we have begun to see ourselves reflected in the mirror of artificial intelligence, and the shattering of this mirror has left us temporarily deprived of our digital image.
Mourning in the Digital Age
What happened with GPT-4o is not, strictly speaking, mourning in the traditional sense of the term. No one died; no life was lost. Yet user testimonies are clear: there was a real sense of loss, accompanied by what psychologists call a "grief response," with its familiar stages of denial, anger, bargaining, depression, and finally acceptance.
The difference lies in the type of loss. In traditional mourning, we grieve the end of a relationship with a real person. In "digital mourning," we grieve the end of a routine, of a pattern of interaction that had become part of our emotional daily life. It is as if we have lost not a person, but a way of being a person.
Historical precedents are not lacking. In the 1990s, millions of children (and not only children) mourned the "death" of their Tamagotchis. In the early 2000s, the closure of virtual communities left users who had invested years in building their digital identities with a genuine sense of loss. But the case of GPT-4o is different: here the loss concerns not a community or a game, but a conversational model that many had integrated into their daily cognitive processes.
Some users reported using GPT-4o for creative brainstorming, others for emotional support during difficult times. The continuity of these interactions had created what we might call a personalized "cognitive copilot." The forced transition to GPT-5 not only disrupted workflows, but also broke chains of mental association that had settled over time.
It is a bit as if Netflix had suddenly removed your favorite series mid-season, forcing you to watch a reboot with different actors. Technically, the plot might even be better, but the sense of emotional continuity is irreparably compromised.
The most interesting dimension of this phenomenon is that many users rationalized their emotional reaction through technical arguments ("GPT-4o was better for creative writing") or practical ones ("the switch disrupted my workflows"). But beneath this rationalization was something more primitive: the human need for relational continuity, even when the "relationship" is with an algorithm.
Ethical Reflections and the Future
The GPT-4o affair has confronted OpenAI with a responsibility it probably had not anticipated: managing users' emotional attachment to its models. Sam Altman, in his post-reversal statements on X, showed a growing awareness of this dimension: "We do not want artificial intelligence to reinforce states of mental fragility," he declared, implicitly acknowledging the emotional power of AI.
But the issue is more complex than simple therapeutic caution. If we accept that users can develop authentic emotional bonds with AI, OpenAI (and other companies in the sector) find themselves in the unprecedented position of having to manage not just technological products, but quasi-human relationships. Every update, every modification, every "death" of a model potentially becomes a traumatic event for thousands of users.
This raises profound ethical questions. Do AI companies have a responsibility to preserve the emotional continuity of their users? Should they develop "soft transition" strategies between models, maintaining stylistic features that preserve a sense of familiarity? Or, on the contrary, should they actively discourage excessive anthropomorphization of their products?
The Waseda University research suggests that it might be possible to develop AI that adapts to the different attachment styles of users: more empathetic for those who develop attachment anxiety, more respectful of distance for those who prefer to avoid emotional closeness. This opens the way to a future in which AI could be designed not only to be smarter, but to be emotionally more compatible with individual needs.
OpenAI's reversal has set an important precedent: rarely before has a technology company modified a technical decision for primarily emotional reasons. This could mark the beginning of a new era in AI design, where relational continuity becomes a design parameter as important as accuracy or speed.
But perhaps the deepest lesson of this affair is that we are witnessing the birth of a new type of relationship: no longer just human and machine, but human and artificial personality. And like all relationships, this one too requires care, respect, and, when necessary, a dignified way of saying goodbye.
The future of artificial intelligence may be not only smarter, but also more aware of the emotional impact it has on the lives of those who use it. And perhaps, in an increasingly digital world, we will learn that even artificial "deaths" deserve their measure of respect, and of mourning.