OpenAI co-founder Ilya Sutskever offered insights into the future of artificial intelligence at a recent AI conference, predicting that superintelligent systems will be “unpredictable” while highlighting the challenges and implications they raise.

Sutskever, speaking at the NeurIPS conference on Friday, explored the concept of “superintelligent AI,” which he defined as systems surpassing human capabilities in a wide array of tasks. 

He predicted that such systems would be qualitatively different from today’s AI and, in some respects, unrecognizable. According to Sutskever, these advanced systems will possess self-awareness, a level of reasoning, and the ability to understand complex scenarios from limited data. 

“[Superintelligent] systems are actually going to be agentic in a real way,” he said, contrasting them with current AI, which he described as only “very slightly agentic.”

Sutskever: Current AI models are limited

Receiving a “Test of Time” award at the conference, Sutskever acknowledged the limitations of the pre-training approach behind models such as OpenAI’s ChatGPT. While current systems demonstrate remarkable capabilities, he warned that the finite supply of internet data is becoming a bottleneck. 

Sutskever proposed alternative approaches, including AI systems generating their own data or refining their responses to enhance accuracy. These innovations, he suggested, could help overcome existing limitations.

The OpenAI co-founder also touched on the ethical implications of superintelligent AI, including the potential for such systems to demand rights. “It’s not a bad end result if you have AIs, and all they want is to co-exist with us and just to have rights,” he remarked. 

However, he emphasized the inherent unpredictability of agentic and self-aware AI, presenting both opportunities and challenges for developers and regulators.

OpenAI CEO Altman predicts disruption and the advent of AGI

Earlier this month, OpenAI CEO Sam Altman shared views similar to Sutskever’s on where advanced AI is headed. Speaking at the New York Times’ DealBook Summit in New York City, Altman outlined his vision for artificial general intelligence (AGI). He predicted that AGI, AI systems capable of performing complex tasks with human-like reasoning, could become a reality as early as 2025. 

“I think it’s possible … in 2025 we will have systems that we look at … and people will say, ‘Wow, that changes what I expected,’” Altman said.

Altman compared the potential impact of AI to the invention of the transistor, which revolutionized industries and economies worldwide. He described a future in which “shockingly capable” AI models become widely available and are used across industries for various applications.

“AI itself, the reasoning engine, will become commoditized,” Altman opined, suggesting that its integration into everyday life will be as transformative as past technological breakthroughs.

During his appearance, Altman highlighted ChatGPT’s widespread adoption, revealing that the AI tool now has more than 300 million weekly users. He acknowledged ongoing debates about its safety but maintained that the iterative development and deployment approach has been crucial. 

“There are definitely people who think ChatGPT is not sufficiently safe,” he said, but noted that OpenAI believes early adoption, when the stakes are lower, is vital for gradual improvement.

Altman’s emphasis on safety echoed broader industry concerns about responsible AI development. He acknowledged that while ChatGPT is “now generally considered by most of society to be acceptably safe and acceptably robust,” continued refinement and oversight remain priorities.

Reflections on leadership and OpenAI’s evolution

Altman’s leadership journey has played a pivotal role in shaping OpenAI’s trajectory. Co-founding the organization in 2015 as a nonprofit research lab, he aimed to advance AI technologies for the benefit of humanity. Before joining OpenAI full-time in 2019, Altman served as president of Y Combinator, a leading startup accelerator, from 2014 to 2019.

OpenAI transitioned from its nonprofit origins to a capped-profit model to secure the funding needed for large-scale AI projects. Under Altman’s leadership, the company launched ChatGPT and DALL-E, which have reshaped the AI landscape. 

However, the journey has not been without challenges. In November 2023, Altman was briefly ousted from his role by the OpenAI board over concerns related to his communication with board members. The dispute, which unfolded publicly, concluded with Altman’s reinstatement less than a week later, with the board saying it was now on “good terms” with the CEO.
