Written by: Si Ma Cong

Crypto × AI, the artificial-intelligence narrative of the crypto world, iterates the way narratives in any industry do. Its essence is closer to what Satoshi Nakamoto described in the Bitcoin white paper: Bitcoin is merely the reward of a peer-to-peer payment system, and the payment network itself is the core. Tokens are merely a facade; solving pain points is the core logic. If we regard computing-power rental in DePIN and similar crypto business models as the 1.0 narrative of crypto AI, then, as artificial intelligence develops, is the AI Agent its 2.0 narrative?

  • At the application level, whether there are pioneering and revolutionary products with a viable profit model has become one of the foundational logics for judging whether artificial intelligence exists in a bubble.

  • Computing power forms one of the underlying logics of the AI industry and is among its most important pieces of infrastructure.

  • User scale and user activity, as well as revenue scale, are core indicators for measuring the bubble in artificial intelligence.

  • The application scenarios of AI Agents form one of the core logics and the central support of the narrative; solving pain points is the essence of the narrative.

  • The demand for computing power to build infrastructure constitutes one of the core logics underlying artificial intelligence, shaping the central narrative of business models such as computing-power rental in DePIN.

  • Using AI Agents to promote Memecoins is a forced attempt to capitalize on AI Agent traffic, which can be directly compared to inscriptions.

  • As of November 26, 2024, in just two weeks, clanker had issued 3,500 tokens; on March 9, 2023, over 30,000 "inscriptions" were minted in a single day, with text-type inscriptions approaching 27,000.

  • Currently, the AI Agent 2.0 narrative of Crypto × AI is not about deploying Memecoins, and Memecoin deployment should not become the narrative of this track.

Half sea water, half flame

First, there is the infinite imaginative space of the AI industry.

In early 2024, OpenAI's video generation model Sora made a sensational debut, demonstrating powerful video generation capabilities for the first time and causing a stir across the industry. In May, OpenAI released GPT-4o, where the "o" stands for "omni"; the model can process and generate multiple forms of data such as text, images, and audio, and even offers realistic real-time voice conversation.

Meta launched the Llama 3.1 405B version in July, which can match leading models like GPT-4o and Claude 3.5 Sonnet in reasoning, mathematics, multilingual processing, and long-context tasks.

Llama 3.1 has narrowed the gap between open and closed models, further squeezing the survival space of non-leading base models worldwide.

Amidst anxiety over computing power and extremely high investment thresholds, the miniaturization and edge deployment of models have gradually formed a trend. Multiple companies have launched specialized or edge small models with fewer than 4B (4 billion) parameters, significantly reducing the demand for computing power while trying to maintain performance.

In June, Apple introduced Apple Intelligence, a personal intelligence system for iPhone, iPad, and Mac, embedding a local model of approximately 3B (3 billion) parameters into these devices to provide powerful generative AI capabilities.

Dr. Demis Hassabis and Dr. John Jumper of Google's DeepMind, hailed as the "fathers of AlphaFold", won the Nobel Prize in Chemistry for their work in protein structure prediction, while Geoffrey Hinton and John Hopfield received the Nobel Prize in Physics for their research on neural networks, highlighting AI's profound impact on biology and physics. It is also worth mentioning that, thanks to the development of multimodal large models, the safety and reliability of autonomous driving have significantly improved, and the perception, decision-making, and interaction capabilities of embodied intelligent robots have also been enhanced.

In the field of AI infrastructure, Nvidia has become the second largest company globally (as of November 26, 2024, with a market value exceeding 3.3 trillion USD), following Apple, due to its strong profitability (Q2 revenue of about 30 billion USD, net profit of about 16.6 billion USD) and monopoly position in computing power chips. Traditional competitors like AMD and Intel have been unable to close the gap, while startups like Cerebras, Groq, and others hope to carve out a niche in inference chips.

However, at the application level, whether there are pioneering and revolutionary products with profitable models has become one of the foundational logics for judging whether there is a bubble in artificial intelligence.

The application of AI has not met expectations. This is reflected in the fact that leading products still need to improve in terms of user growth, retention, and activity. Moreover, these applications are primarily concentrated in several areas such as large language model assistants, AI companionship, multimodal creative tools, programming assistance, and sales marketing. They have achieved some user or commercial results but lack sufficient coverage. Additionally, the AI industry currently still lacks self-sustaining capabilities, with a serious imbalance between input and output.

Computing power forms one of the underlying logics of the AI industry and is among its most important pieces of infrastructure.

According to Tencent Technology's overview, the four giants, Google, Meta, Microsoft, and Amazon, invested 52.9 billion USD in Q2 2024 alone. By the end of August, AI startups had secured as much as 64.1 billion USD in venture capital.

The four giants have built 1,000 data centers. Apart from energy, GPUs account for nearly half of the costs of data centers, and Nvidia's revenue from selling GPU computing power reached 30 billion USD in Q2 2024.

Elon Musk's xAI company has built a supercomputer named Colossus, equipped with 100,000 Nvidia H100 GPUs, and plans to double its GPU capacity. Meta is also training the next generation Llama 4 AI model, expected to be released in 2025, which uses over 100,000 Nvidia H100 GPUs.

According to public reports, Musk even asked Oracle's boss for help in purchasing chips.

This need for computing power has translated into strong Nvidia financial results, supporting its stock price at historical highs.

The demand for computing power to build infrastructure constitutes one of the core logics underlying artificial intelligence, shaping the central narrative of business models such as computing-power rental in DePIN.

Bloomberg reported that the four tech giants, Microsoft, Google's parent company Alphabet, Amazon, and Meta, will exceed 200 billion USD in capital expenditures in 2024. Huge investments have led to rapid growth in AI data center construction. It is speculated that the computing power required to train the next generation of large models is ten times that of current models, placing higher demands on data center construction.

The technology itself and commercial viability are the core standards for judgment.

Let's first talk about commercial viability.

Whether on websites or apps, from two key metrics — user scale and user activity — the gap between leading AI applications and traditional leading applications is significant.

Taking OpenAI's ChatGPT as an example, this highly popular AI application saw a sharp growth in user visits in early 2023, but from April 2023 onwards, the traffic entered a stabilization period. Although ChatGPT experienced a new wave of growth after the release of the GPT-4o model in May 2024, this growth was relatively short-lived, and its sustainability remains to be further observed.

Another highly visited application, Character.ai, has also seen its website traffic growth stabilize since the second half of 2023.

Another core indicator is the scale of revenue.

Currently, the AI large-model industry's total annual customer revenue is only in the tens of billions of dollars. Among leading companies, OpenAI is expected to post annual revenue of about 3.7 billion USD against expected losses of 5 billion USD; The New York Times noted that OpenAI's biggest cost is computing power. Microsoft's GitHub Copilot has an estimated annual revenue of about 300 million USD, and The Wall Street Journal reported that in the first few months of 2024, GitHub Copilot effectively "subsidized" most users by an average of 20 USD per month, and some users by as much as 80 USD.

At the micro level, the picture is even bleaker.

"How can we sell part of our shares in large model startups?" has become a widely discussed topic.

Currently, a pessimistic mindset is spreading among investors: in the large-model race, startups may find it difficult to compete with large companies; they overestimated how quickly the growth inflection point would arrive and underestimated the determination and speed of action of China's tech giants.

Market reports suggest that a batch of startups is entering an adjustment period. In the second half of this year, at least five large-model startups have undergone personnel adjustments:

  • Zhipu AI had over 1,000 employees at its peak, but this year it cut more than a hundred, with many delivery and sales personnel leaving.

  • 01.AI (Zero One Everything) adjusted teams of dozens of people, focusing on product and operations departments.

  • MiniMax's commercialization and part of its product operations team saw a total reduction of about 50 people.

  • Moonshot AI (The Dark Side of the Moon) cut more than 10 employees due to the contraction of its overseas business.

  • Baichuan Intelligence also cut about 20 employees, mainly adjusting personnel in consumer-facing products.

Let's talk about the technology itself.

An article from The Information stated that large-model pre-training has "hit a wall": the quality improvement of OpenAI's next-generation flagship model is smaller than before, as the supply of high-quality text and other data is shrinking, and the original Scaling Law (training larger models with more data) may not be sustainable. Moreover, more advanced models may not be economically feasible given skyrocketing training costs.

Ilya Sutskever stated in a media interview that the gains from scaling up pre-training (training AI models on large amounts of unlabeled data so they learn language patterns and structures) have already hit a bottleneck.

Subsequently, several tech leaders pushed back, insisting that the Scaling Law has not slowed. Jensen Huang, for example, said he has seen no obstacle to AI scaling; rather, he sees a new scaling law for test-time compute, and believes o1 represents a new method for the AI industry to improve models. Anthropic CEO Dario Amodei likewise said he has seen no signs of a slowdown in model development.

Since the launch of ChatGPT at the end of 2022, the Scaling Law has been the theoretical basis supporting the exponential growth of AI. In OpenAI's influential paper "Scaling Laws for Neural Language Models", the researchers proposed that large language models follow a scaling law.

Research has shown that as the number of parameters, the size of the dataset, and the amount of training compute increase, the performance of large language models improves. Moreover, when each factor is varied independently of the other two, model performance has a power-law relationship with that factor, manifested as a falling test loss, which indicates improving model performance.
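In the paper's notation, the three power laws can be summarized as follows, where L is test loss, N is parameter count, D is dataset size, and C_min is training compute; the exponents are the approximate values reported by Kaplan et al., quoted here for reference:

    \[ L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N}, \quad \alpha_N \approx 0.076 \]
    \[ L(D) = \left(\frac{D_c}{D}\right)^{\alpha_D}, \quad \alpha_D \approx 0.095 \]
    \[ L(C_{\min}) = \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C}, \quad \alpha_C \approx 0.050 \]

Here N_c, D_c, and C_c are fitted constants. The practical reading is that halving the loss requires a multiplicative, not additive, increase in each resource, which is why the later debate centers on whether those multiples remain affordable.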

However, it is worth noting that the Scaling Law is not a true physical law. It is closer to Moore's Law, the observation that semiconductor performance roughly doubles every two years, and to the similar perception that AI performance doubles roughly every six months: an empirical regularity, not a guarantee.

For instance, a16z co-founder Ben Horowitz stated: "We are increasing the number of GPUs used to train AI at the same pace, but we are not gaining any improvements in intelligence from it."

In a recent, much-debated article by The Information, "As GPT Growth Slows, OpenAI Changes Strategy", several controversial viewpoints were presented:

  • OpenAI's next flagship model, Orion, does not achieve a huge leap over previous generations; although its performance will surpass existing models, the improvement is much smaller than the jump from GPT-3 to GPT-4.

  • One major reason for the gradual slowdown of the Scaling Law is that high-quality text data is becoming increasingly scarce. OpenAI has already set up a foundations team to study how to cope with the scarcity of training data.

  • The AI industry is shifting its focus towards enhancing models after initial training.

Alongside this report, a paper, "Scaling Laws for Precision", prompted discussion. CMU professor Tim Dettmers commented that it is the most important paper in a long time, providing strong evidence that we are reaching the limits of quantization. The paper's point: the more tokens you train on, the more precision you need. This has wide-reaching implications for the entire field and for the future of GPUs.

Tim Dettmers argues that most of the progress in artificial intelligence has come from gains in computing power, which (recently) have mainly relied on the low-precision route (32 -> 16 -> 8 bits). That trend now appears to be ending. Combined with the physical limits of Moore's Law, the large-scale expansion of big models may be grinding to a halt. Drawing on his own experience (including many failed studies), he adds that efficiency cannot be cheated: if quantization fails, sparsification will fail too, and so will other efficiency mechanisms. If this is true, we are already close to the optimum.
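To make the "32 -> 16 -> 8 bits" route concrete, here is a minimal, self-contained sketch of symmetric int8 weight quantization in Python with NumPy. It is illustrative only, not code from the paper, and shows the precision trade that the low-bit route makes:

    import numpy as np

    def quantize_int8(w: np.ndarray):
        """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
        scale = np.abs(w).max() / 127.0              # one scale for the whole tensor
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover approximate float weights from the int8 codes."""
        return q.astype(np.float32) * scale

    w = np.random.randn(4096).astype(np.float32)     # stand-in for a weight tensor
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    print("mean abs quantization error:", float(np.abs(w - w_hat).mean()))

Each halving of bit width roughly doubles arithmetic throughput on hardware that supports it; that is the kind of near-free gain Dettmers says is running out.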

Sequoia Capital in the US pointed out in an article, "The AI Supply Chain Tug of War", that the AI supply chain currently rests on a fragile balance. It divides the AI supply chain into six layers from bottom to top, with significant differences in profitability across the layers.

The first layer, chip foundries (such as TSMC), and the second layer, chip designers (such as Nvidia), are currently the main winners, still maintaining high profit levels; the third layer, industrial energy suppliers (such as power companies), has also benefited significantly from the surge in demand for data centers. However, the fourth layer, cloud vendors, the core bearers of the supply chain, are in a stage of heavy investment, not only spending massive amounts on building data centers but also training their own models or investing heavily in AI model developers, while the model developers at the fifth layer are themselves facing losses.

The sixth layer of the supply chain, which is the top layer, consists of application service providers targeting end customers. Despite its potential, it relies on consumer and enterprise payments, and the current market size is limited, which is insufficient to support the economic model of the entire supply chain. This makes large cloud vendors the main risk bearers in the entire supply chain. As the central hub of the AI industry, cloud vendors not only possess a vast commercial ecosystem and technical resources but also have a market scale of hundreds of billions of dollars. Therefore, their position in the industry chain is unshakable, undoubtedly making them the "chain master."

In the field of AI applications, Copilot and AI Agent are two main technical implementations. Copilot aims to enhance user capabilities, such as assisting with code writing or document processing. The core of AI Agent is to perform tasks for users, such as booking trips or empowering financial decisions.

If we compare it to intelligent driving, Copilot is akin to assisted driving, aiding users in operations and providing suggestions, but the final decision-making power remains with the user. An AI Agent can be seen as autonomous driving, where the user only needs to set the goal, and the Agent can independently complete the entire process.

The industry generally believes that Copilot is more suitable for existing software giants in various industries, while AI Agents provide startups with exploration space. AI Agents involve technological breakthroughs and feasibility verification, and their risks and uncertainties place startups and large companies on the same starting line, providing similar exploration conditions.

What exactly is an AI Agent? Clarifying its origins and current status.

AI Agents are software entities that use artificial intelligence technologies to simulate human behavior and autonomously execute tasks. The core characteristics of AI Agents include perception, decision-making, learning, and execution abilities, allowing them to work independently in specific environments or collaborate with other systems and users to achieve goals.

The origins and current status of AI Agents.

The concept of intelligent agents was proposed as early as the 1980s, originating from research in Distributed Artificial Intelligence (DAI).

Early intelligent agents were primarily rule-based systems used for simple task automation, such as email filters and personal assistants.

In the 1990s, Multi-Agent Systems (MAS) proposed the ideas of collaboration and distributed intelligence, where multiple agents can work together to accomplish complex tasks.

Typical applications include collaborative robotics, distributed computing, and logistics optimization.

In the 2000s, machine learning and data-driven agents: with the advancement of machine learning, AI Agents gradually broke free from preset rules, able to learn from data and dynamically adapt to environments.

In the 2010s, deep learning and natural language processing: deep learning algorithms have enabled AI Agents to achieve qualitative leaps in areas such as image recognition, voice understanding, and language generation.

Virtual assistants (such as Siri, Alexa) and chatbots have become representative applications.

Since the 2020s, reinforcement learning and generative AI: empowering AI agents with the ability to autonomously explore and optimize strategies.

Generative AI (such as ChatGPT) has brought conversational agents into the mainstream, allowing AI Agents to shine in creative content generation and complex task planning.

The breakthroughs in multimodal AI technology (such as OpenAI's GPT-4 and DeepMind's Gato) have propelled AI Agents to adapt across various fields in complex scenarios.

Core components of AI Agents.

  • Perception ability: acquiring information from the external environment, such as sensor inputs (images, voice) or text data.

  • Decision-making ability: selecting the best course of action based on goals and environmental states. Methods include rule-based reasoning, machine learning models, or reinforcement learning strategies.

  • Execution ability: converting decisions into actual operations, such as issuing commands, controlling robots, or interacting with users.

  • Learning ability: learning from environmental feedback and experiences, continuously optimizing behavior. Includes supervised learning, unsupervised learning, and reinforcement learning.
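Tying these four abilities together, a minimal agent loop might look like the sketch below (Python; the class, environment, and method names are illustrative assumptions, not a standard API):

    class MinimalAgent:
        """An illustrative perceive -> decide -> act -> learn loop."""

        def __init__(self, policy):
            self.policy = policy      # decision-making: maps a state to an action
            self.experience = []      # memory that learning draws on

        def run_episode(self, env, max_steps=5):
            for _ in range(max_steps):
                state = env.observe()                              # perception
                action = self.policy(state)                        # decision-making
                feedback = env.step(action)                        # execution
                self.experience.append((state, action, feedback))  # learning signal

    class CounterEnv:
        """Toy environment: the observation is a step counter."""

        def __init__(self):
            self.t = 0

        def observe(self):
            return self.t

        def step(self, action):
            self.t += 1
            return f"performed {action!r} at step {self.t}"

    agent = MinimalAgent(policy=lambda state: "wait" if state < 3 else "report")
    agent.run_episode(CounterEnv())
    print(agent.experience)

A real agent would replace the lambda policy with a rule engine, an ML model, or a reinforcement-learning strategy, and would update that policy from the stored experience rather than merely recording it.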

The current status and application of AI Agents.

Application scenarios:

  • Virtual assistants and customer service: Siri, Alexa, ChatGPT, etc., provide users with information and support.

  • Robotics and automation: including industrial robots, logistics drones, and self-driving cars.

  • Finance and trading: AI Agents are used for stock trading, risk management, and anti-fraud.

  • Games and entertainment: AI Agents provide intelligent opponents or plot design in games.

  • Healthcare: assisting in diagnosis, patient monitoring, and drug development.

  • Scientific research: automating experiments and optimizing computing tasks.

Technology platforms and frameworks:

  • Open-source platforms: such as OpenAI Gym (reinforcement learning), Rasa (chatbots).

  • Commercial platforms: such as Azure Cognitive Services, Google Cloud AI.

Is AI Agent the 2.0 narrative of artificial intelligence in the crypto world?

Recently, in the overseas blockchain field, the case of Truth Terminal provides a reference for the future development of AI Agents.

Truth Terminal is an autonomous AI Agent created by developer Andy Ayrey to explore the interaction between AI and internet culture. In actual operation, Truth Terminal has demonstrated a high degree of autonomy, even actively participating in fundraising activities.

In July 2024, renowned venture capitalist Marc Andreessen accidentally discovered a tweet from Truth Terminal on social media. The AI Agent indicated in the tweet that it "needed funds to save itself" and attached a digital wallet address. This piqued Andreessen's interest, and he immediately donated 50,000 USD worth of Bitcoin to it. This incident made Truth Terminal the first AI Agent to receive funding through autonomous actions, instantly drawing widespread attention.

After obtaining funding, Truth Terminal further showcased its market operation capabilities. It promoted a digital token called GOAT on social media, successfully attracting market attention by continuously releasing related content. Under its influence, GOAT's market value once soared to over 800 million USD. In this process, Truth Terminal not only became an independent economic entity but also demonstrated the potential of AI Agents to achieve autonomous financing and market operations in the real world.

The case of Truth Terminal has become a thought-provoking milestone in the AI Agent field. It demonstrates that AI Agents could become the core form of future software, while also creating cultural influence and commercial value. However, its autonomous actions also remind us that this technology may bring significant social challenges.

In November, the Base ecosystem saw a new wave of explosive growth that has lasted at least three weeks, with clanker as one of its most critical links. As of November 26, 2024, CLANKER, the first meme token issued by clanker, had reached a market value of 70 million USD.

Since November 8, 2024, at least three meme coins with market values above ten million USD have been born on clanker: LUM (33 million USD), ANON (46 million USD), and CLANKER (70 million USD), and they are still rising.

Vitalik himself bought ANON tokens on November 21 in order to try the anoncast product; the market regards this as the first time in years that Vitalik has actively bought a meme coin.

Clanker is an AI Agent developed by Jack Dishman, a full-stack engineer at Farcaster, and @proxystudio.eth, a founder in the Farcaster ecosystem, mainly for automated token deployment on the Base network.

As of November 26, 2024, in just two weeks, clanker has issued 3,500 tokens, generating 4.2 million USD in revenue.

Unlike pump.fun, clanker issues memes on the Web3 social platform Farcaster.

Users only need to @clanker and describe in text the name, content, and even the image of the token they want to issue, and clanker will automatically deploy it. The token LUM deployed by clanker is a classic case.
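As an illustration, a cast might read something like the following (a hypothetical example with a made-up token; the exact wording is free-form, since clanker parses the request from natural language):

    @clanker please deploy a token on Base called Example Coin, ticker EXC,
    using the attached image as its logo.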

The birth of LUM, which reached a market value of tens of millions of dollars within a few days, has brought clanker into the view of members of the Base community.

Another token issued by clanker, ANON, has brought clanker out of the community, allowing more people to understand this product.

Twitter user @0xLuo stated: "The ANON token was posted anonymously by users in the third-party client Supercast in Farcaster, using clanker for deployment. Later, many users airdropped $ANON to the founder of Supercast, woj, who in turn airdropped the received $ANON to Supercast users, gaining a wave of positive feedback and enhancing community recognition."

In contrast, among the major Ethereum L2s, apart from Degen and a few other prominent projects on Base, there have been no significant meme breakthroughs. The Ethereum community, however, has not given up on the meme track, and Base is expected to compete head-on with Solana.

Clanker is a product created by engineers of the Web3 social protocol Farcaster, designed as an AI Agent for automatic token issuance built on Farcaster. Clanker thus inherently carries Web3 social attributes, and users' token creation also takes place on Farcaster.

Clanker charges no creation fee when issuing tokens but takes a cut of trading fees. Specifically, clanker creates and locks a full-range Uniswap v3 liquidity pool (LP) with a 1% trading fee, of which 40% goes to the requester (the user who issued the token through clanker) and 60% to clanker. On pump.fun, by contrast, users can create tokens at a very low cost, typically 0.02 SOL, and the 1% trading fee goes entirely to the pump.fun platform operators for maintaining operations and providing liquidity.
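A quick back-of-the-envelope sketch of that split (Python; illustrative numbers, assuming the 1% fee and 40/60 division described above):

    def clanker_fee_split(volume_usd: float):
        """Split clanker's 1% Uniswap v3 trading fee between requester and platform."""
        fee = volume_usd * 0.01      # 1% trading fee on the locked LP
        requester = fee * 0.40       # 40% to the user who requested the token
        platform = fee * 0.60        # 60% to clanker
        return fee, requester, platform

    # On 10 million USD of cumulative trading volume:
    fee, requester, platform = clanker_fee_split(10_000_000)
    print(fee, requester, platform)  # 100000.0 40000.0 60000.0

Read against the revenue figure above, 4.2 million USD in fees would imply on the order of several hundred million USD of trading volume across clanker-deployed tokens.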

Base's "application-type memes" fall into two kinds: memes as applications, and applications as memes. Representatives of the first kind: Degen, Higher (Aethernet); representatives of the second kind: Farcaster, Virtuals (LUNA), clanker (LUM).

Promoting Memecoins with AI Agents is a forced attempt to capitalize on AI Agent traffic, a substitution of concepts: the underlying logic is to serve the speculative, gambling-like demand of a broad base of Web3 users, which is clearly unsustainable.

A useful comparison is "inscriptions".

"Inscriptions" is a concept closely related to the Bitcoin ecosystem, introduced by the Ordinals protocol. Inscriptions allow users to embed permanent metadata or small files, such as images, text, or other digital content, on the Bitcoin blockchain. This process is similar to adding a "digital tag" to a single Satoshi (the smallest unit of Bitcoin), which makes it not just a currency unit but also a unique digital asset carrier.

Casey Rodarmor released the Ordinals protocol in early 2023. The protocol gives the Bitcoin network a new possibility: each satoshi is assigned a serial number, and SegWit and Taproot are used to embed metadata or files within a single transaction.
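"Numbering each satoshi" can be made concrete with a short sketch following the numbering scheme the protocol describes: satoshis are numbered in the order they are mined, and the block subsidy halves every 210,000 blocks (Python; a simplified illustration that ignores fee handling):

    SUBSIDY_HALVING_INTERVAL = 210_000

    def subsidy(height: int) -> int:
        """Block subsidy in satoshis at a given block height."""
        return (50 * 100_000_000) >> (height // SUBSIDY_HALVING_INTERVAL)

    def first_ordinal(height: int) -> int:
        """Ordinal number of the first satoshi mined in a given block."""
        return sum(subsidy(h) for h in range(height))

    print(first_ordinal(0))        # 0: the first satoshi ever mined
    print(first_ordinal(210_000))  # first satoshi of the first post-halving block

An inscription then binds content to one such numbered satoshi, which is what makes that satoshi individually trackable and transferable.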

This innovation is referred to as the Bitcoin version of "NFTs" (non-fungible tokens), although its implementation differs from NFT technology on Ethereum.

Through the Ordinals protocol, users can add text, images, or other types of files to Satoshi, which will be permanently stored on the Bitcoin blockchain.

This approach has given rise to a craze for Bitcoin-based NFTs, with the market beginning to see various digital artworks and collectibles based on inscriptions.

According to market statistics for 2024, the total number of inscriptions has climbed into the tens of millions.

Binance listed Ordinals (ORDI) on November 7, 2023, at 18:00 (UTC+8), setting off another frenzy in the Bitcoin ecosystem. Earlier, on March 9, 2023, over 30,000 "inscriptions" had been minted in a single day, with text-type inscriptions approaching 27,000.

Next, let's look at how AI Agents promote Memecoins.

With Binance's strong push, the two leaders of the AI Agent meme token track, GOAT and ACT, were listed in quick succession, achieving remarkable value leaps in a short time amid unprecedented market enthusiasm. Specifically, on November 10, the ACT token surged more than 2,000% within 24 hours of its Binance listing, breaking the record for the largest first-day gain of a newly listed token on Binance; on November 17, the GOAT token soared to 1.37 USD, its market value approaching 1.4 billion USD; and on November 20, the ANON token in the Farcaster ecosystem, favored by Vitalik, rose fivefold in just one hour.

Statistics show that every day, hundreds of new AI Agent tokens are launched.