Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has left the artificial intelligence company to start a new venture called Safe Superintelligence Inc. (SSI).
His goal? To create a safe and powerful AI system that doesn’t risk harming humanity.
A Singular Focus on Safe AI Development
Unlike large tech companies, SSI has a singular focus: developing safe superintelligent AI as its “one goal and one product.”
According to the company, this narrow focus means it can avoid distractions like “management overhead or product cycles” and insulate itself from “short-term commercial pressures.”
Sutskever believes that advancing safety and capabilities in tandem will let SSI push AI forward quickly without compromising on safety. He aims to create a safe superintelligent system before pursuing any other products or partnerships.
Sutskever co-founded SSI with Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked at OpenAI. The startup has offices in Palo Alto, California, and Tel Aviv, Israel.
Sutskever’s exit from OpenAI was soon followed by that of AI researcher Jan Leike, who cited safety processes “taking a backseat to shiny products” at the company. Policy researcher Gretchen Krueger also left OpenAI around the same time, citing safety concerns.
Growing Concerns Over Advanced AI
The founders of SSI are not alone in their worries about the future of AI. Ethereum co-founder Vitalik Buterin has called artificial general intelligence (AGI) “risky,” while tech leaders like Elon Musk and Steve Wozniak were among over 2,600 signatories of an open letter urging a six-month pause on training the most powerful AI systems to consider the “profound risks” involved.
Sutskever himself was involved in an attempt to oust OpenAI CEO Sam Altman last November over disagreements about guardrails for advanced AI development, though he later apologized for his role in the ordeal.
A New Approach to AI Safety
As companies like OpenAI pursue lucrative partnerships with tech giants like Apple and Microsoft, SSI is taking a different tack. In an interview with Bloomberg, Sutskever stated that the company’s first and only product, at least initially, will be safe superintelligence.
With a team of experienced AI researchers and a singular focus on safety, SSI hopes to blaze a new trail in the development of advanced AI systems that prioritize humanity’s well-being. Only time will tell whether the approach pays off.