Author: Vince Quill, CoinTelegraph; Translated by: Deng Tong, Jinse Finance
OpenAI co-founder Ilya Sutskever recently spoke at the Neural Information Processing Systems (NeurIPS) 2024 conference held in Vancouver, Canada.
Sutskever stated that computational power, improved through better hardware, software, and machine learning algorithms, is growing faster than the total amount of data available for training AI models. The AI researcher compared data to fossil fuels, which will eventually be depleted. Sutskever said:
"Data hasn't grown because we only have one internet. You could even say that data is the fossil fuel of artificial intelligence. It was created in some way, and now we use it. We have reached peak data and there will be no more. We must make do with the data we have."
The OpenAI co-founder predicts that agent-based AI, synthetic data, and inference-time compute are the next directions in AI's evolution, which will ultimately lead to artificial superintelligence.
A chart comparing the computational power and dataset size of AI pre-training. Source: TheAIGRID, Ilya Sutskever
AI agents are sweeping the crypto world
AI agents will go beyond current chatbot models, making decisions without human input. With the rise of AI meme coins and large language model (LLM) driven agents such as Truth Terminal, AI agents have become a popular narrative in the crypto space.
Truth Terminal quickly gained popularity after its LLM began promoting a meme coin called Goatseus Maximus (GOAT), whose market cap eventually reached $1 billion, attracting the attention of retail investors and venture capitalists.
Market information for the GOAT token. Source: CoinMarketCap
Google's DeepMind AI lab has launched Gemini 2.0, an AI model designed to power AI agents.
According to Google, agents built on the Gemini 2.0 framework will be able to assist with complex tasks, such as coordinating actions across websites and performing logical reasoning.
Advances in AI agents capable of independent action and reasoning will lay the foundation for AI to overcome data-driven hallucinations.
AI hallucinations stem from flawed datasets and from pre-training pipelines that increasingly rely on older LLMs to train newer ones, a practice that degrades performance over time.