OpenAI created the 'Superalignment' team to help keep 'superintelligent' AI safe, and now it has been disbanded: Report

The 'Superalignment' team was reportedly never given the compute resources it was promised to secure the safety of 'superintelligent' AI.

The Superalignment team at OpenAI, tasked with developing ways to govern and steer "superintelligent" AI systems, had been promised 20% of the company's compute resources, but those resources never materialized, leaving the team unable to do its job, according to TechCrunch.

As a result, several team members, including co-lead Jan Leike, have resigned from their posts. Leike, a former DeepMind researcher, helped develop ChatGPT, GPT-4, and ChatGPT's predecessor, InstructGPT.

'These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.'

In a lengthy thread posted to X, Leike wrote: "I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

"I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics," he continued. "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

Now, OpenAI's Superalignment team has been disbanded just one year after the company announced the group, according to CNBC. The report noted that OpenAI co-founder Ilya Sutskever has also decided to leave the company.

Leike said that OpenAI's "safety culture and processes have taken a backseat to shiny products."

When OpenAI announced the Superalignment team in 2023, the company said the group was focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us."

The announcement added: "Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction."

Co-founder and CEO Sam Altman commented on Leike's thread, writing: "i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days."

Leike wrote: "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done."

Leike argued that OpenAI must focus more on the safety of its technology, saying that "[b]uilding smarter-than-human machines is an inherently dangerous endeavor."
