Author: Vince Quill, CoinTelegraph; compiled by: Deng Tong, Golden Finance
OpenAI co-founder Ilya Sutskever recently gave a speech at the Neural Information Processing Systems (NeurIPS) 2024 conference held in Vancouver, Canada.
Sutskever stated that gains in computational power from better hardware, software, and machine learning algorithms are outpacing the growth of data available for training AI models. He compared data to fossil fuels: a finite resource that will eventually be depleted. Sutskever said:
"The data hasn't grown because we only have one internet. You could even say that data is the fossil fuel of AI. It was created in some way, and now we are using it; we have reached the peak of data, and there will be no more data. We must work with the data we have."
The OpenAI co-founder predicts that agentic AI, synthetic data, and inference-time computation are the next evolutionary directions for AI, ultimately leading to the emergence of superintelligent AI.
A chart comparing the computational power and dataset size of AI pre-training. Source: TheAIGRID, Ilya Sutskever
AI agents are sweeping the crypto world
AI agents go beyond current chatbot models in that they can make decisions without human input. With the rise of AI meme coins and LLM-powered agents such as Truth Terminal, AI agents have become a popular narrative in the crypto space.
Truth Terminal quickly rose to prominence after it began promoting a meme coin called Goatseus Maximus (GOAT), whose market cap eventually reached $1 billion, attracting the attention of retail investors and venture capitalists.
GOAT token market information. Source: CoinMarketCap
Google's DeepMind AI lab has launched Gemini 2.0—an AI model that will power AI agents.
According to Google, agents built on the Gemini 2.0 framework will be able to assist with complex tasks, such as coordinating actions across websites and performing logical reasoning.
Advances in AI agents capable of independent action and reasoning will lay the groundwork for AI to overcome the problem of hallucinations.
AI hallucinations arise from flawed datasets and from pre-training's growing reliance on the outputs of older LLMs to train new ones, a practice that degrades performance over time.