Ilya Sutskever, co-founder and former chief scientist at OpenAI, launched a new startup called Safe Superintelligence (SSI) in mid-2024. The startup quickly raised over $1 billion in funding from major venture capital firms such as Andreessen Horowitz and Sequoia Capital. This large investment reflects strong confidence in SSI’s mission to tackle the central challenge of creating safe superintelligence: AI systems that are not only highly capable but also aligned with human values, so that their capabilities do not become sources of risk.
Sutskever left OpenAI amid concerns about the rapid pace of artificial general intelligence (AGI) development, concerns that had led to tensions with OpenAI’s leadership. His new venture, SSI, treats AI safety as its top technical priority, aiming to ensure that advanced AI can be rigorously tested and controlled before it can pose societal risks.
SSI is still in its early stages, with no product on the market yet, but it has already drawn significant attention thanks to Sutskever’s leadership and the scale of the investment it secured. The company aims to build an elite, specialized team that can make rapid strides in both AI capabilities and safety.
How Safe Superintelligence Differs from AGI
Safe Superintelligence (SSI) differs from Artificial General Intelligence (AGI) in its focus and goals, even though both aim to advance highly capable AI systems.
- AGI refers to an AI system that can understand, learn, and apply intelligence across a wide range of tasks at or beyond human-level proficiency. It’s designed to handle any intellectual task that a human can, without being confined to a specific domain or set of tasks.
- Safe Superintelligence, as envisioned by Ilya Sutskever, builds on the concept of AGI but makes safety and control the central focus. While AGI strives for human-like or superior cognitive abilities, SSI’s mission is to ensure that such advanced systems remain aligned with human values and do not pose unintended risks. Its work is directed at making sure that powerful AI, which could eventually exceed human cognitive abilities, behaves predictably and safely, with robust mechanisms in place to prevent potential harm. In this sense, SSI targets not just the development of AGI but also its safeguards, ensuring that the superintelligence it creates is beneficial, controlled, and aligned with societal and ethical standards.
Yes, It’s Quite Hard
Building safe AI technologies is a complex challenge that goes beyond simply creating smarter systems. As AI systems grow in capability, they also become harder to predict, especially in unfamiliar situations. This unpredictability makes it difficult to ensure that AI behaves safely across a variety of environments. Additionally, aligning AI’s goals with human values—known as the alignment problem—is a major hurdle. Even if AI systems are highly intelligent, they might interpret their objectives in ways that lead to unintended and potentially harmful outcomes.
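To make the alignment problem concrete, here is a minimal, self-contained Python sketch of its classic failure mode: an optimizer faithfully maximizes a proxy reward that only loosely tracks the true objective. Everything in it (the hidden target, the proxy function, the candidate range) is an illustrative assumption, not a description of SSI’s actual work.

```python
# A minimal sketch of the alignment problem: an agent optimizes a proxy
# reward that only approximates the true objective. All values here are
# illustrative toy assumptions.

# True objective: we secretly want actions close to this target.
INTENDED_TARGET = 10

def true_utility(action: int) -> float:
    """What we actually want: actions near the intended target."""
    return -abs(action - INTENDED_TARGET)

def proxy_reward(action: int) -> float:
    """What we measure and optimize: correlated with the true objective
    near the target, but it keeps rewarding ever-larger actions."""
    return float(action)  # "bigger looks better" -- a misspecified stand-in

# A naive optimizer that greedily maximizes the proxy.
candidates = range(0, 101)
best = max(candidates, key=proxy_reward)

print(f"proxy-optimal action: {best}")                              # 100
print(f"true utility there:   {true_utility(best)}")                # -90
print(f"true-optimal action:  {max(candidates, key=true_utility)}") # 10
```

The optimizer does exactly what it was told, and that is the problem: the proxy and the true objective roughly agree near the intended range but diverge at the extremes the optimizer is drawn to.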
On top of that, ensuring robustness is key. AI systems can be tricked or fail when exposed to unexpected inputs, and this vulnerability poses a significant safety risk. The challenge is further compounded by the need for safety mechanisms to scale as AI systems become more powerful: what works for current systems may not be enough as AI approaches human-level cognitive abilities, making it essential to create safety solutions that evolve in parallel with AI’s growing complexity.
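The fragility point can also be shown in a few lines. The sketch below, assuming only NumPy and a toy linear classifier, applies an FGSM-style perturbation: a small, bounded change to every input feature, aligned against the model’s gradient, is enough to flip a confident prediction. The weights, input, and step size are all made up for illustration.

```python
# A minimal sketch of input fragility (adversarial examples) for a toy
# linear classifier, using only NumPy. Weights, input, and epsilon are
# illustrative assumptions, not a real system.

import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy linear classifier: class = sign(w . x).
w = rng.normal(size=n)

# Build an input the classifier labels confidently positive.
x = 0.2 * np.sign(w) + 0.01 * rng.normal(size=n)
print(f"clean score:            {w @ x:+.1f}")  # large positive -> class +1

# FGSM-style perturbation: for a linear model, the gradient of the score
# with respect to x is just w, so nudge each feature slightly against it.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(f"adversarial score:      {w @ x_adv:+.1f}")                   # negative -> flipped
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.2f}")    # only 0.5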