OpenAI disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.
Some team members will be reassigned to several other teams within the company, said the person, who spoke on condition of anonymity.
The news comes days after the team's leaders, OpenAI co-founders Ilya Sutskever and Jan Leike, announced they were leaving the Microsoft-backed startup. “OpenAI's safety culture and processes have taken a back seat to shiny products,” Leike wrote Friday.
The OpenAI Superalignment team, announced last year, was focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would devote 20% of its computing power to the initiative over four years.
OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman's recent post on X, where he shared that he was sad to see Leike leave and that the company has more work to do. On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to him and Altman on X, asserting that the company had “raised awareness of the risks and opportunities of artificial general intelligence so the world can better prepare for it.”
News of the team's dissolution was first reported by Wired.
Sutskever and Leike on Tuesday announced their departure on social media platform X, hours apart, but on Friday, Leike shared more details about why he was leaving the startup.
“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I had been disagreeing with OpenAI’s leadership on the company’s core priorities for some time, until we finally reached a breaking point.”
Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.
“These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”
Leike added that OpenAI should become a “safety-first AGI company.”
“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a back seat to shiny products.”
Leike did not immediately respond to a request for comment.
The high-profile departures come months after OpenAI was hit by a leadership crisis involving Altman.
In November, OpenAI's board fired Altman, saying in a statement that Altman had not been “consistently honest in his communications with the board.”
Tension within the company appears to have been building for some time, with the Wall Street Journal and other outlets reporting that Sutskever had trained his focus on ensuring that AI won't harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.
Altman's ouster led to resignations or threats of resignations, including an open letter signed by nearly all of OpenAI's employees, and an uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley, and Ilya Sutskever, who had voted to fire Altman, were out. Sutskever remained on staff at the time but did not return as a board member. Adam D'Angelo, who also voted to oust Altman, remains on the board.
When Altman was asked about Sutskever's status on a Zoom call with reporters in March, he said there were no updates to share. “I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”
On Tuesday, Altman shared his thoughts on Sutskever's departure.
“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Research director Jakub Pachocki, who has been at OpenAI since 2017, will replace Sutskever as chief scientist, Altman said.
News of the departure of Sutskever and Leike, and the dissolution of the Superalignment team, comes days after OpenAI launched a new AI model and desktop version of ChatGPT, along with a refreshed user interface, the company's latest effort to expand the use of its popular chatbot.
The update brings the GPT-4 model to everyone, including free OpenAI users, chief technology officer Mira Murati said Monday in a live-streamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities across text, video and audio.
OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.