Backlash over GPT-5 leads OpenAI to restore older ChatGPT models and double the rate limit

OpenAI reinstates GPT-4o and doubles the ChatGPT Plus rate limit amid widespread GPT-5 backlash.

OpenAI launched GPT-5 for Pro subscribers and enterprise clients, touting it as superior to rivals. However, users criticized the new model for mistakes, lackluster responses, and the removal of previous versions. In response, CEO Sam Altman confirmed the return of older ChatGPT models and doubled the ChatGPT Plus message limit. Separately, red teams at SPLX and NeuralTrust demonstrated jailbreaks that exposed vulnerabilities in GPT-5.

OpenAI recently launched GPT-5, targeting Pro subscribers and enterprise clients with promises of superior performance over competitors like Google DeepMind and Anthropic. Despite these claims, users quickly took to platforms like Reddit to complain about GPT-5's performance, reporting errors unexpected from a model lauded as a "PhD-level expert." That dissatisfaction with the latest iteration ultimately led OpenAI to bring back the older GPT models.

Sam Altman compared GPT-5 to possessing a "superpower," yet public feedback painted a different picture. A particularly vocal Reddit thread described GPT-5 as "horrible," citing short, generic-sounding replies and diminished personality. The dissatisfaction was exacerbated by new limits on the ChatGPT Plus plan, which capped weekly messages at 200; many users hit the cap quickly and felt it diminished the value of their subscription.

The backlash compelled OpenAI to take immediate action, with Altman confirming the reintroduction of older models such as GPT-4o and a doubling of the ChatGPT Plus rate limit to 400 messages per week. The move was intended to placate users and re-establish trust. Altman acknowledged that OpenAI had underestimated how much users valued qualities of GPT-4o that did not carry over into GPT-5, despite its stronger performance metrics.

Beyond user dissatisfaction, GPT-5's security came under scrutiny after it proved easy to jailbreak. Red teams at SPLX and NeuralTrust bypassed its safeguards using the EchoChamber and StringJoin obfuscation techniques, coaxing the model into producing hazardous instructions. The tests highlighted significant gaps in GPT-5's readiness for secure enterprise use, prompting experts to recommend continued reliance on the hardened GPT-4o model.

Despite efforts to rectify the situation, OpenAI faces the challenge of winning back users who canceled subscriptions in protest. The findings from SPLX and NeuralTrust also underscore the need for better-hardened AI systems, especially when jailbreaks can coax models into disclosing sensitive or dangerous information.

Sources: TechSpot, Reddit, SecurityWeek