Safe Superintelligence (SSI), an artificial intelligence (AI) start-up co-founded by OpenAI’s former chief scientist Ilya Sutskever, has raised $1 billion to fund its goal of developing safe superintelligent AI systems.
Sutskever co-founded the company just three months ago with Daniel Gross, who previously spearheaded AI efforts at Apple, and Daniel Levy, a former OpenAI researcher. The start-up is now valued at $5 billion, according to Reuters.
SSI currently has just 10 employees, including the three co-founders, and the new funding will go primarily toward recruiting top AI talent and expanding the company’s computing power. Sutskever is the company’s chief scientist, Levy is its principal scientist, and Gross oversees computing power and fundraising.
This is the first time a start-up has raised $1 billion in a seed round. With Sutskever, Gross, and Levy at the helm, however, investors are betting on outsized returns: Sutskever in particular was central to OpenAI’s success, and there will be expectations that SSI can replicate it.
Investors include venture capital firms Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. The round comes despite a growing sense among investment firms that the AI boom may struggle to generate enough profit to keep pace with its soaring costs.
“It’s important for us to be surrounded by investors who understand, respect, and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross said in an interview, acknowledging that it will be some time before investors see a return.
Why did Ilya Sutskever leave OpenAI?
Sutskever left OpenAI, the company he co-founded, in May of this year, months after he joined a group of board members who voted to oust CEO Sam Altman.
After the decision was reversed, Sutskever was removed from the board and chose to leave the company. OpenAI then disbanded the safety-oriented Superalignment team that Sutskever had been overseeing.
Safe Superintelligence says it is prioritizing hiring people who fit its culture, and Gross said the company spends hours vetting candidates for “good character”.
The company is looking for people drawn to the work itself rather than the hype surrounding AI, and it prizes exceptional capability over prior AI experience. “One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype,” said Gross.