OpenAI put ‘shiny products’ over safety, departing top researcher says
Jan Leike, a former co-head of superalignment at OpenAI, has resigned from the company. Leike expressed concerns that OpenAI is prioritizing the development of "shiny products" over crucial safety research for powerful AI systems.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.
Leike's departure comes just days after OpenAI launched its latest AI model, GPT-4o, and follows the resignation of Ilya Sutskever, OpenAI's co-founder and Leike's co-head of superalignment. Both resignations occurred ahead of a major international AI summit in Seoul.
In a series of posts on X, Leike detailed his reasons for leaving, stating that safety culture within OpenAI had become a secondary concern. He expressed worry that OpenAI was not adequately investing in crucial areas like safety, social impact, confidentiality and security for future AI models.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote.
Leike noted that he had been disagreeing with OpenAI’s leadership about the company’s priorities for some time, but the standoff had “finally reached a breaking point”. The former OpenAI executive emphasized the inherent dangers of developing AI that exceeds human intelligence and called on OpenAI to prioritize safety as a company.
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote.
Replying to Leike’s post, OpenAI CEO Sam Altman thanked him for his contribution to the company, promising to address the safety concerns in detail later. “I'm super appreciative of Jan Leike's contributions to openai's alignment research and safety culture, and very sad to see him leave. He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days,” Altman replied.
In his departure announcement, Sutskever, who also served as chief scientist at OpenAI, expressed confidence that OpenAI could achieve safe and beneficial artificial general intelligence (AGI) under its current leadership.
Sutskever was replaced by Jakub Pachocki as chief scientist at OpenAI.