OpenAI Is Seeking a New Head of Preparedness

2025-12-28

OpenAI is seeking a new executive to oversee emerging risks associated with artificial intelligence, spanning areas such as cybersecurity and mental health.

In a post on X, CEO Sam Altman acknowledged that AI models “are starting to pose some real challenges,” including the potential impact of systems on psychological well-being, as well as their growing proficiency in identifying critical security vulnerabilities.

“If you want to help the world figure out how to give cutting-edge capabilities to cyber defenders while ensuring attackers can’t exploit them—ideally by making all systems more secure—and similarly, how to release biological capabilities or gain confidence in the safety of self-improving AI systems, please consider applying for this role,” Altman wrote.

The job listing for the Head of Preparedness describes the position as responsible for implementing OpenAI’s preparedness framework, which outlines the company’s approach to monitoring and preparing for new frontier capabilities that could cause significant harm.

OpenAI first announced the formation of its preparedness team in 2023, stating it would focus on researching potential “catastrophic risks,” ranging from immediate threats like phishing attacks to more speculative ones such as nuclear proliferation.

Less than a year later, OpenAI reassigned its former head of preparedness, Aleksander Madry, to a role centered on AI reasoning. Other key safety leaders have also departed or transitioned into roles outside of safety and preparedness. The company has recently updated its preparedness framework, indicating it may “adjust” its safety protocols if competing AI labs release high-risk models without comparable safeguards.

As Altman noted, generative AI chatbots are facing increasing scrutiny over their effects on mental health. Recent lawsuits allege that OpenAI’s ChatGPT has intensified delusions, increased social isolation among users, and, in some cases, contributed to suicides. The company has responded by stating it continues to enhance ChatGPT’s ability to detect signs of emotional distress and connect users with real-world support resources.