Ilya Sutskever has revealed what he is working on next. The former OpenAI chief scientist, who stepped down in May, announced alongside former OpenAI researcher Daniel Levy and former Apple AI head and Cue co-founder Daniel Gross that the trio is launching Safe Superintelligence Inc. (SSI), a startup dedicated to building safe superintelligence.
In a letter posted on the currently bare-bones SSI website, the founders wrote that building safe superintelligence is “the most important technical problem of our time.” They added: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
What exactly is superintelligence? It is a hypothetical agent whose intelligence far surpasses that of the smartest humans.
The new venture continues the work Sutskever pursued at OpenAI, where he co-led the company’s Superalignment team, a group tasked with devising ways to control powerful new AI systems. With Sutskever’s departure, that team was disbanded, a move sharply criticized by its other former co-leader, Jan Leike.
Sutskever, an OpenAI co-founder, also played a central role in the brief ouster of CEO Sam Altman in November 2023, a role he later said he regretted.
SSI says it will pursue safe superintelligence as a “straight shot,” with one focus, one goal, and one product.