August 2025 marked one of the most polarizing AI launches in recent memory
When OpenAI released GPT-5 on August 7, 2025, the tech world was primed for another watershed moment. CEO Sam Altman promised “a legitimate PhD expert” compared to GPT-3’s “high-schooler” capabilities, describing it as putting “expert-level intelligence in everyone’s hands.” But as the digital dust settles, the internet finds itself remarkably divided on whether GPT-5 delivers on its ambitious promises.
The Believers: “This Changes Everything”
Coding Excellence That Actually Works
For developers, GPT-5 appears to be a genuine game-changer. The model achieved record-breaking performance on coding benchmarks, scoring 74.9% on SWE-bench Verified (up from 69.1% for OpenAI’s o3) and an impressive 88% on Aider Polyglot multi-language code editing.
But beyond the numbers, it’s the practical experience that has some users genuinely excited. Ethan Mollick, the Wharton professor known for his AI experiments, marveled at how GPT-5 could autonomously create “a procedural brutalist building creator” from a vague prompt, delivering “a working 3D city builder” with drag-and-drop functionality in minutes.
What impressed early adopters most was GPT-5’s ability to break free from the dreaded “error loop” that plagues other AI coding assistants. One reviewer noted: “Sometimes new errors were introduced by the AI, but they were always fixed by simply pasting in the error text” – a marked improvement over the frustrating back-and-forth debugging sessions of previous models.
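That workflow, as described, boils down to a simple fix-and-retry loop. The sketch below is a minimal illustration of the pattern rather than anyone’s actual tooling: `ask_model` is a hypothetical stand-in for whatever chat-completion API you use, and the file handling is deliberately bare-bones.

```python
import subprocess
import tempfile

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to GPT-5 (or any coding model)."""
    raise NotImplementedError("wire this up to your model API of choice")

def generate_until_it_runs(task: str, max_attempts: int = 3) -> str:
    """Generate code, run it, and paste any error text straight back to the model."""
    code = ask_model(task)
    for _ in range(max_attempts):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run(["python", f.name], capture_output=True, text=True)
        if result.returncode == 0:
            return code  # ran cleanly; we're done
        # The whole trick the reviewer describes: hand the raw error text back
        # instead of debugging by hand.
        code = ask_model(f"This code:\n{code}\nfailed with:\n{result.stderr}\nPlease fix it.")
    return code  # best effort after max_attempts
```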
Alpha Testers Sing Praises
The most enthusiastic responses came from alpha testers who had extended access to the model. Cursor, a popular AI coding platform, called GPT-5 “the smartest coding model we’ve used,” praising it as “remarkably intelligent, easy to steer” with an uncanny ability to “catch tricky, deeply-hidden bugs.”
On code review tasks, GPT-5 achieved a 72.2 score on Qodo’s PR Benchmark, often being “the only model to catch critical issues like security flaws or compile-breakers” that other models missed entirely.
The Skeptics: “Emperor’s New Clothes”
Reddit Revolt and User Backlash
But for every glowing review, there’s been an equally passionate critic. The ChatGPT subreddit exploded with discontent, with one highly upvoted post titled “GPT-5 is horrible” garnering nearly 3,000 upvotes and over 1,200 comments from disappointed users.
The complaints are surprisingly consistent: shorter, less helpful responses, more “obnoxious AI-stylized talking,” reduced personality, and frustrating usage limits that leave premium users hitting caps within an hour.
The Same Old Problems Persist
Perhaps more damaging to GPT-5’s reputation are reports that it still struggles with fundamental tasks that users hoped would finally be resolved. Despite the “PhD-level” claims, users report continued issues with:
- Basic mathematics: Simple calculation errors that feel inexcusable in 2025
- Factual accuracy: Persistent hallucinations, made-up details, and incorrect information
- Spelling and grammar: Surprising stumbles on elementary language tasks
Noah Giansiracusa, an associate professor of mathematics at Bentley University, summed up the sentiment: “I felt the launch was underwhelming. While there were some improvements, they were much more marginal than I would’ve hoped.”
The Personality Problem
Perhaps the most poignant criticism comes from users mourning the loss of ChatGPT’s distinctive voice. Where previous versions felt conversational and engaging, many describe GPT-5 as sterile and corporate.
“The personality that once made ChatGPT feel ‘human-ish’ is gone,” one user lamented. “What used to be witty and warm now feels like a bland corporate memo.” Another described it as “an overworked secretary,” while some claimed to be “genuinely grieving over losing 4o, like losing a friend.”
The Expectation Trap
Overpromise, Underdeliver?
The polarized reception might have less to do with GPT-5’s actual capabilities and more with the impossible expectations set by OpenAI’s marketing blitz. When you promise “expert-level intelligence” and describe your product as “a superpower on demand,” anything short of miraculous feels disappointing.
Gary Marcus, a prominent AI researcher and critic, captured this sentiment perfectly: “GPT-5: Overdue, overhyped and underwhelming. And that’s not the worst of it.” The title alone encapsulates the frustration of those who expected a revolutionary leap forward.
Technical Growing Pains
Complicating the launch was GPT-5’s novel approach to model switching. Unlike previous releases, GPT-5 automatically routes each query to one of several model variants based on its complexity. While this theoretically optimizes resources, it also means users never know which version they’re getting.
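For readers who want a mental model of what such an “autoswitcher” might do, here is a deliberately simplified, hypothetical sketch. The heuristics, the threshold, and the variant names (`gpt-5-fast`, `gpt-5-thinking`) are assumptions for illustration only; OpenAI has not published how its real router decides.

```python
def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for whatever learned signal a real router would use."""
    signals = [
        len(prompt) > 500,                                            # long, detailed request
        any(k in prompt.lower() for k in ("prove", "debug", "step by step")),
        "```" in prompt,                                              # embedded code
    ]
    return sum(signals) / len(signals)

def route(prompt: str) -> str:
    """Send hard queries to the slower reasoning variant, the rest to the fast one."""
    try:
        score = estimate_complexity(prompt)
    except Exception:
        # If the routing layer misfires, every request quietly lands on the
        # weaker default, which is why a routing outage makes the whole
        # product feel dumber.
        return "gpt-5-fast"
    return "gpt-5-thinking" if score >= 0.5 else "gpt-5-fast"

print(route("What's the capital of France?"))                    # -> gpt-5-fast
print(route("Debug this deadlock step by step: " + "x" * 600))   # -> gpt-5-thinking
```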
Altman himself acknowledged initial technical issues, explaining that “the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber.” This kind of infrastructure hiccup during a high-profile launch only amplified user frustration.
The Verdict: It’s Complicated
Different Models for Different Users
The stark divide in GPT-5 reception reveals something interesting about AI adoption: different users have fundamentally different needs and expectations. For developers working on complex coding tasks, GPT-5’s benchmark improvements and reduced error rates represent genuine progress. The model’s ability to handle multi-turn conversations, catch subtle bugs, and generate substantial amounts of working code is impressive by any measure.
For general ChatGPT users seeking a conversational AI companion, however, GPT-5’s more formal tone and corporate feel represent a step backward. These users valued the personality and warmth of previous versions more than raw technical capability.
The Hype Cycle Reality Check
GPT-5’s reception also serves as a reminder of the AI hype cycle’s unforgiving nature. Each new model faces the impossible task of exceeding not just its predecessor’s capabilities, but also the inflated expectations built up during months of anticipation.
MIT Technology Review perhaps captured it best, describing GPT-5 as “above all else, a refined product” that “will furnish a more pleasant and seamless user experience” but “falls far short of the transformative AI future that Altman has spent much of the past year hyping.”
Looking Forward
Whether GPT-5 represents progress or stagnation may depend less on the model itself and more on what comes next. The polarized reception suggests we’re entering a new phase of AI development where incremental improvements, no matter how technically impressive, struggle to capture public imagination the way breakthrough moments once did.
For OpenAI, the challenge isn’t just building better models – it’s managing expectations in an industry where yesterday’s miracle becomes today’s baseline. As one Twitter user aptly noted: “People had grown to expect miracles, but GPT-5 is just the latest incremental advance.”
The question isn’t whether GPT-5 is good or bad – it’s whether the AI industry can continue generating excitement for iteration rather than revolution. Based on the internet’s split verdict, that may be the hardest problem of all to solve.
What’s your take on GPT-5? Have you experienced the coding improvements or the personality loss that users are reporting? The debate continues as more users get hands-on experience with OpenAI’s latest offering.