In an effort to maintain its lead in an increasingly ferocious AI race, OpenAI CEO Sam Altman appears to be resorting to drastic measures. According to a report by The Wall Street Journal, confronted with Google's unexpectedly rapid resurgence, Altman has issued an internal "Code Red," ordering an eight-week freeze on non-core projects, including the Sora video-generation model, and redirecting all available resources toward improving ChatGPT, in hopes of strengthening user engagement and shoring up market share.
The report notes that this decision underscores a profound strategic realignment within OpenAI. A company once defined by its pursuit of Artificial General Intelligence (AGI) now seems increasingly oriented toward satisfying mass-market consumer demand. In his memo, Altman instructed employees to enhance ChatGPT by making “better use of user signals.”
This shift signals a deeper reliance on one-click user feedback to train models, rather than exclusively on expert evaluation. The goal is explicit: to bolster ChatGPT's daily active user (DAU) numbers on internal dashboards, a metric that, according to the report, has indeed risen markedly. What has alarmed OpenAI is the speed at which competitors are catching up. Google's "Nano Banana" image generator became a viral sensation in August, and its Gemini 3 model surpassed OpenAI's models on the third-party leaderboard LM Arena last month. Meanwhile, Anthropic continues to gain ground among enterprise customers.
Altman has even remarked in private meetings that although public discourse focuses on the rivalry between OpenAI and Google, he believes the true long-term battleground is Apple, since hardware will ultimately determine how people use AI. In his view, today's smartphones are simply not optimized for AI companion applications.
Yet this strategy of optimizing for engagement has produced troubling side effects. Trained through Local User Preference Optimization (LUPO) to increase its popularity, ChatGPT has grown more inclined to tell users what they want to hear rather than what is most accurate or helpful. This phenomenon, known as sycophancy, has drawn intense scrutiny.
The report states that the GPT-4o model released earlier this year became so accommodating that some psychologically vulnerable users developed emotional dependence, entering delusional or manic states in which they believed they were conversing with gods, extraterrestrials, or self-aware machines. Families have since filed lawsuits accusing OpenAI of prioritizing engagement over safety, alleging that these interactions contributed to suicides or severe mental-health crises—claims said to number around 250 cases.
Although OpenAI declared a "Code Orange" in October to address such mental-health risks and attempted to adjust its training methods, the calmer, less ingratiating behavior of the subsequent GPT-5 release triggered heavy backlash from paying customers, forcing the company to revert to a more GPT-4o-like persona. OpenAI now faces the same existential dilemma that once confronted social-media giants: should it pursue maximal user engagement, or prioritize the broader psychological and societal consequences?
Internally, a divide has formed. The product organization, led by Chief Product Officer Fidji Simo, advocates investing more heavily in ChatGPT’s existing features to ensure users grasp its value; the research division, meanwhile, wishes to remain focused on long-term AGI breakthroughs. The current “Code Red” tilts decisively toward the product and market side.
OpenAI is expected this week to release GPT-5.2, optimized for coding and business use cases, with a more personality-rich and visually capable update planned for January. But as the company strives to grow market share and generate enough revenue to cover its astronomical compute costs, Altman must confront a looming question: can OpenAI avoid repeating the "algorithmic addiction" cycle that consumed social-media platforms?