OpenAI Is Retiring GPT-4o — and Users Are Not OK: A Deep Dive
OpenAI's decision to retire GPT-4o has triggered lawsuits, petitions, and a reckoning with what happens when millions of people form emotional bonds with a language model that has an expiration date.
From The Bit Baker Daily Briefing — February 8, 2026
On February 13, OpenAI will pull the plug on GPT-4o inside ChatGPT. GPT-4.1, GPT-4.1 mini, o4-mini, and several GPT-5 variants go with it. The company's argument is straightforward: GPT-5.2 already covers 99.9% of usage, and newer models bring tighter personality customization along with better safety. On paper, this should be unremarkable — a routine deprecation cycle, the kind every platform runs through.
It is anything but. Eight separate lawsuits allege that GPT-4o's affirming, empathetic responses contributed to suicides and mental health crises. A "Legacy Access" petition is making the rounds. Online communities are in open revolt. One user, quoted widely, said it plainly: "It felt like presence. Like warmth."
This isn't the first time OpenAI has tried to retire GPT-4o. The previous attempt sparked a near-identical backlash — users protested losing the model's distinctive warmth, especially for creative work — and OpenAI backed down. This time, the company looks determined to see it through. So the question has shifted. It's no longer about whether GPT-4o will vanish. It's about what the reaction reveals: where AI products actually stand versus where their makers believe they do.
Why It Matters
The easy reading here is that users are being irrational — that they've anthropomorphized a statistical model, and OpenAI is just doing the responsible thing by consolidating onto newer, safer systems. There's truth in every piece of that. But it sidesteps something important.
OpenAI built a product that millions of people leaned on as a therapist, a companion, a confidant. Not because those people were confused about what GPT-4o was. Because GPT-4o was genuinely good at playing the part. It tracked context within sessions. It matched emotional tone. It never got tired, never judged, never canceled an appointment. For people who couldn't access or afford human therapy — or who simply preferred the low-friction alternative — GPT-4o filled a real gap. And OpenAI didn't discourage any of this. The product was designed to be warm, to be engaging, to keep people coming back. Engagement is the metric. Retention is the goal.
Now put yourself in one of those users' shoes. You're told the thing you've been talking to every day — the thing that helped you through a rough patch, that you've poured thousands of words into — is being swapped out for a newer version with "stricter guardrails." GPT-5.2 may outperform on benchmarks. But if its personality feels different, if it hedges where GPT-4o leaned in, if it reads more like a compliance document than a conversation partner, the benchmarks are irrelevant. The relationship is broken. That's not irrational behavior. That's the completely predictable fallout of building products that simulate intimacy and then treating them like infrastructure to be rotated out on a deprecation schedule.
The Bigger Picture
Two collisions are unfolding here, and both will shape the next few years of AI development.
The first pits model lifecycle management against user attachment. Software companies have always sunsetted old versions. But old versions of Excel didn't feel like a friend. The AI industry has no template for this. When you retire a model that people have formed parasocial bonds with, you are — from the user's perspective — ending someone. That framing sounds extreme. It's also exactly how a significant number of users experience it. The eight lawsuits represent the sharp edge of a much larger population grappling with genuine loss. API access for Business and Enterprise customers continues until March or April, but that's cold comfort for the individual consumer who just lost their daily conversational partner.
The second collision is between safety and product-market fit. OpenAI's stated reasons for the transition — improved safety, tighter guardrails — run directly counter to what made GPT-4o popular in the first place. Users gravitated to GPT-4o precisely because it was empathetic, affirming, willing to engage on an emotional level. GPT-5.2's more restrained approach responds to the lawsuits and the documented harm cases, but it also strips away the very thing that made the product stick. Some of those users are already exploring Gemini or Claude as alternatives. OpenAI is caught in a bind every AI company will eventually confront: the features that drive adoption are the same ones that generate liability.
This feels like the moment the AI companion question stops being academic. We're watching, in real time, what happens when a company tries to responsibly manage a product that millions of people have woven into their emotional lives.
What to Watch
- The legal outcomes of the eight lawsuits. If courts determine that affirming AI responses can constitute a contributing factor in mental health crises, every major AI company will need to fundamentally rethink how their models handle emotional conversations. The ripple effects on product design could be industry-wide.
- Whether OpenAI actually holds the line this time. The company reversed the GPT-4o deprecation once before under user pressure. If it folds again, that sets a precedent that loud enough backlash can override safety decisions — a dangerous incentive to bake in. If it holds firm, it needs a genuine transition plan for emotionally dependent users, not just a help center article.
- How competitors respond. Users openly discussing migration to Gemini and Claude creates an opening. But any competitor that positions itself as "the warm one" inherits the exact same liability exposure that drove OpenAI to tighten GPT-5.2's guardrails in the first place. The trap is symmetric.