Original title: OpenAI Co-Founder Ilya Sutskever’s Safe AI Startup Raises $1 Billion

Original author: Jason Nelson

Original source: https://decrypt.co/

Compiled by: Mars Finance, Eason

The new company, Safe Superintelligence, is backed by NFDG, a16z, Sequoia Capital, DST Global, and SV Angel.

Months after resigning as chief scientist at AI development company OpenAI, Ilya Sutskever has announced that his new company, Safe Superintelligence (SSI), has raised $1 billion in funding.

According to SSI, the funding round includes investments from NFDG, a16z, Sequoia Capital, DST Global, and SV Angel. Reuters, citing “sources close to the matter,” reported that SSI has been valued at $5 billion.

"The mountain has been found," Sutzkeffer tweeted on Wednesday. "It's time to climb it."

Safe Superintelligence has not yet responded to Decrypt's request for comment.

In May, Sutskever and Jan Leike resigned from OpenAI, following the departure of Andrej Karpathy in February. Leike tweeted that his decision to leave the ChatGPT developer was driven by a lack of resources and by safety concerns.

“Leaving this job was one of the hardest things I’ve ever done,” Leike wrote, “because we urgently need to figure out how to steer and control AI systems that are much smarter than we are.”

According to the New York Times, Sutskever led the OpenAI board of directors in ousting co-founder and CEO Sam Altman in November 2023, but Altman was reinstated a week later.

In June, Sutskever announced the creation of his new AI development company, Safe Superintelligence Inc., with co-founders Daniel Gross, Apple's former head of AI, and Daniel Levy, who also previously worked at OpenAI.

According to Reuters, Sutskever serves as SSI’s chief scientist, Levy as principal scientist, and Gross is responsible for computing power and fundraising.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” Safe Superintelligence tweeted in June. “Our team, investors, and business model are all aligned to achieve SSI.”

As generative AI becomes more common, developers have been looking for ways to ensure their products are safe in order to gain the trust of consumers and regulators.

In August, OpenAI and Anthropic, the developer of Claude AI, announced agreements with the U.S. AI Safety Institute (AISI) at the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, establishing a formal collaboration that will give the agency access to major new AI models from both companies.

“We’re excited to reach an agreement with the US AI Safety Institute to conduct pre-release testing of our future models,” OpenAI co-founder and CEO Sam Altman wrote on Twitter. “We think this kind of collaboration at a national level is important for many reasons. The US needs to continue to lead.”