OpenAI Reverses Course, Drops Costly AI Model Router for Free ChatGPT Users

BigGo Editorial Team

In a significant strategic shift, OpenAI has rolled back a core feature of its flagship ChatGPT service for its lowest tiers. The company has disabled the automated "model router" system for users on its Free and USD 5-a-month Go plans, a move that highlights the ongoing challenge of balancing cutting-edge AI performance against the practical realities of cost and user experience in the consumer market.

OpenAI Quietly Reverses a Major ChatGPT Feature

Just four months after its high-profile launch, OpenAI has quietly removed the automated model routing system from ChatGPT for its vast base of free and low-tier paying users. The system, introduced alongside the GPT-5 series, was designed to intelligently analyze user prompts and direct complex questions to more advanced, "reasoning" AI models, while simpler queries would be handled by faster, cheaper models. The goal was to provide users with the smartest AI exactly when they needed it, without forcing them to navigate a complex menu of model choices—a feature CEO Sam Altman had publicly criticized. However, this experiment in automated intelligence allocation has now been scaled back, signaling a recalibration of priorities.
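
To make the mechanism concrete, here is a minimal, purely illustrative sketch of prompt-based routing. It assumes a crude keyword-and-length heuristic and hypothetical model names; OpenAI has not disclosed how its router classifies prompts, so this is a sketch of the general idea rather than the company's implementation.

```python
# Illustrative model-routing sketch (hypothetical; not OpenAI's actual router).
# A cheap heuristic scores each prompt; high-scoring prompts go to a slower
# "reasoning" model, everything else to a fast default model.

FAST_MODEL = "fast-default"        # hypothetical model identifiers
REASONING_MODEL = "slow-reasoning"

REASONING_HINTS = ("prove", "step by step", "derive", "debug", "compare", "trade-off")

def complexity_score(prompt: str) -> float:
    """Crude difficulty proxy: prompt length plus keyword hints (illustrative only)."""
    score = min(len(prompt.split()) / 200, 1.0)                  # longer prompts lean complex
    score += 0.5 * sum(h in prompt.lower() for h in REASONING_HINTS)
    return score

def route(prompt: str, threshold: float = 0.6) -> str:
    """Return the model a given prompt would be sent to."""
    return REASONING_MODEL if complexity_score(prompt) >= threshold else FAST_MODEL

if __name__ == "__main__":
    examples = [
        "What's the capital of France?",
        "Derive the time complexity of merge sort step by step and compare the trade-offs.",
    ]
    for p in examples:
        print(f"{route(p):>15}  <-  {p}")
```

In a real system the classifier would itself typically be a small model, and the threshold would be tuned against latency and serving-cost budgets, which is exactly the cost-versus-speed tension described in the rest of this article.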

The High Cost of Smarter AI

The reversal appears to be driven by a combination of financial and user engagement metrics. While the router succeeded in its technical goal—increasing usage of advanced reasoning models among free users from less than 1% to 7%—this came at a steep computational cost, since serving these powerful models is significantly more expensive for OpenAI. More critically, internal data suggested the feature may have negatively impacted daily active user metrics. The core issue was speed: reasoning models can take minutes to "think through" a problem, a delay that proved frustrating for most consumers, who prioritize quick answers over slower, more nuanced ones, even when the slower answer is technically superior.

Impact of the Model Router (August–December 2025)

  • Goal: Automatically route complex user queries to advanced "reasoning" AI models.
  • Result for Free Users: Increased usage of reasoning models from <1% to 7%.
  • Primary Issue: Reasoning models are slower (can take minutes) and significantly more expensive for OpenAI to serve.
  • User Metric Impact: Reported to have negatively affected daily active users (DAU) for ChatGPT.
  • Competitive Context: SimilarWeb data shows ChatGPT's average visit duration fell below Google Gemini's in September 2025.

User Experience Trumps Raw Power in Consumer Chatbots

The decision underscores a fundamental tension in the consumer AI space. As Chris Clark, COO of AI inference provider OpenRouter, notes, for general-purpose chatbots, the speed and tone of responses are often paramount. "If somebody types something, and then you have to show thinking dots for 20 seconds, it’s just not very engaging," Clark explained. He draws a parallel to Google Search, which has always prioritized speed, asking rhetorically whether Google ever considered delivering better answers at the cost of slower performance. OpenAI's own assessment, based on user feedback, concluded that Free and Go users preferred a consistent, fast default experience, with the option to manually select a more powerful model only when explicitly desired.

Navigating a Heated Competitive Landscape

This product adjustment comes amid intensifying competition, particularly from Google's Gemini. OpenAI recently declared a company-wide "code red" to marshal resources around improving ChatGPT. While ChatGPT boasts over 800 million weekly active users, third-party data from firms like SimilarWeb indicates its growth has flattened as Gemini's has risen. Furthermore, metrics such as average visit duration on ChatGPT have reportedly fallen below those of Gemini since September. In this environment, ensuring a snappy, engaging user experience has become a critical battleground, potentially outweighing the bragging rights of deploying the most advanced AI for every query.

Safety and Strategic Implications of the Rollback

The change also has implications for AI safety protocols. Initially, the model router served a dual purpose, automatically directing sensitive queries—such as those from users exhibiting signs of mental health distress—to reasoning models deemed better equipped to handle them. An OpenAI spokesperson stated that with the increased performance of the GPT-5.2 Instant model on safety benchmarks, this automatic routing is no longer necessary. Strategically, the model router remains active for paying subscribers on the USD 20-a-month Plus and USD 200-a-month Pro tiers, indicating OpenAI's continued belief in the technology's long-term value for users who prioritize peak performance over cost.

The Future of AI Model Routing

Despite this setback, industry experts believe the concept of intelligent model routing is here to stay. Robert Nishihara, co-founder of Anyscale, argues that using different amounts of computational power for different problems is fundamentally sound. "No matter what happens in the short term, I expect routing to continue to be right," he stated. For OpenAI, the challenge will be refining the technology to better align with user expectations for speed and simplicity before potentially relaunching it for its broader user base. The episode serves as a telling case study in the difficult work of integrating powerful, resource-intensive AI into seamless, mass-market consumer products.