Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched Safe Superintelligence Inc. (SSI) together with former OpenAI engineer Daniel Levy and investor Daniel Gross, previously a partner at startup accelerator Y Combinator. The company, headquartered in Palo Alto and Tel Aviv, aims to advance artificial intelligence (AI) safety and capabilities in tandem. In an online announcement on June 19, the founders emphasized their commitment:

“From the outset, our focus remains unwavering on AI safety and capabilities. This singular focus ensures we are not distracted by management overhead or product cycles, while our business model shields safety, security, and progress from short-term commercial pressures.”

Sutskever and Gross have long been advocates for AI safety.

Sutskever departed OpenAI on May 14, following his involvement in the dismissal of CEO Sam Altman. His role at the company had become unclear after he stepped down from the board upon Altman’s return. Shortly after Sutskever’s departure, Daniel Levy, along with several other researchers, also left OpenAI.

Sutskever and Jan Leike co-led OpenAI’s Superalignment team, formed in July 2023 to explore methods for guiding and managing AI systems more intelligent than humans, known as artificial general intelligence (AGI). At its inception, OpenAI allocated 20% of its computing resources to support the Superalignment team.

In May, Leike also left OpenAI to head a team at Anthropic, an AI startup backed by Amazon. Following the departure of its key researchers, OpenAI disbanded the Superalignment team. Greg Brockman, the company’s president, defended its safety protocols in a detailed post on X.

Other prominent figures in the technology industry also share concerns

Former OpenAI researchers, along with numerous scientists, have expressed deep concerns about the future trajectory of AI. Vitalik Buterin, co-founder of Ethereum, labeled AGI as “risky” amid the recent staff changes at OpenAI. However, he also noted that “such models pose much lower risks of doom compared to corporate megalomania and military applications.”


Tesla CEO Elon Musk, formerly a backer of OpenAI, and Apple co-founder Steve Wozniak joined over 2,600 tech leaders and researchers in signing a 2023 open letter calling for a six-month pause in the training of advanced AI systems. They emphasized the need for humanity to reflect on the “profound risks” posed by these technologies.
