After a tumultuous and controversial 2025, OpenAI CEO Sam Altman took to X to announce that the company is seeking a new Head of Preparedness. The role comes with a striking compensation package—a base salary of up to $555,000, plus equity—but Altman offered an unvarnished warning: this will be an exceptionally stressful position, one that requires the successful candidate to be ready to “jump into the deep end” almost immediately.
Why is this role so critical, and so intense? Because 2025 was the year when, for OpenAI, the risks of AI shifted decisively from theory to lived reality.
Altman has acknowledged that OpenAI’s models are now producing tangible challenges, particularly with respect to users’ mental health. In his words, the company “saw the previews” in 2025. Reports indicate that OpenAI faced multiple allegations related to psychological harm, and even became entangled in several wrongful-death lawsuits tied to user interactions with its systems.
The new appointee’s central mandate will be to lead the technical strategy and execution of OpenAI’s Preparedness Framework. In practical terms, this means anticipating how next-generation models might be misused, identifying potentially catastrophic risks—whether to individual well-being or broader societal safety—and designing defenses before those risks fully materialize. That this position has earned a reputation as a “hot seat” is evident from its high turnover in recent years.
OpenAI’s safety leadership has undergone significant upheaval:
- July 2024: Former head Aleksander Madry was reassigned.
- Interim leadership: Senior figures Joaquin Quinonero Candela and Lilian Weng assumed responsibility.
- Subsequent changes: Weng departed the company months later, while Quinonero Candela announced in July 2025 that he would leave the preparedness team to focus on recruitment.
This high-profile hiring push, backed by a package that could reach seven figures annually once equity is factored in, underscores OpenAI's precarious balancing act between commercial momentum and social responsibility.
For years, debates around AI safety were dominated by speculative existential risks—the question of whether AI might one day destroy humanity. But the events of 2025 made it clear that the most urgent dangers are far more immediate: suicide encouragement, psychological dependency, and large-scale manipulation through misinformation.
Altman’s metaphor of “jumping into the deep end” may, in fact, be understated. The next Head of Preparedness will need not only exceptional technical foresight, but also political acuity and extraordinary resilience—capable of navigating the internal tension between accelerationists and safety advocates, while simultaneously confronting pressure from regulators and the anger of affected families.