Original title: What I'm Watching in 2025

Author: Teng Yan, Researcher (focused on Crypto x AI)

Compiled by: Felix, PANews

With this year's explosion in the AI industry, Crypto x AI has risen rapidly. Teng Yan, a researcher focused on Crypto x AI, has published ten predictions for 2025. The details follow.

1. The total market value of crypto AI tokens reaches $150 billion.

Currently, the market value of crypto AI tokens accounts for only 2.9% of the altcoin market, but this ratio will not last long.

Crypto AI spans everything from smart contract platforms and memes to DePIN, agent platforms, data networks, and intelligent coordination layers, and it undoubtedly deserves a market position on par with DeFi and memes.

Why am I so confident about this?

  • Crypto AI is at the intersection of two of the most powerful technologies.

  • An AI frenzy trigger event: An OpenAI IPO or a similar catalyst could ignite a global mania for AI. Meanwhile, Web2 capital has already begun flowing into decentralized AI infrastructure.

  • Retail frenzy: The AI concept is easy to understand and exciting, and retail investors can now invest in it through tokens. Remember the meme gold rush of 2024? AI will spark the same kind of frenzy, except this time the technology is genuinely changing the world.

2. Bittensor revival.

Bittensor (TAO), a decentralized AI infrastructure network, has been live for years and is a veteran of the crypto AI field. Despite the AI craze, its token price still lingers around the same level as a year ago.

Meanwhile, Bittensor's digital hivemind has quietly made leaps: lower registration fees have opened the door to more subnets, it beats Web2 peers on practical metrics like inference speed, and EVM compatibility has brought DeFi-like functionality into its network.

Why hasn't the TAO token skyrocketed? An aggressive inflation schedule and the market's focus on agent platforms have hindered its rise. However, dTAO (expected to launch in the first quarter of 2025) could be a major turning point. With dTAO, each subnet will have its own token, and the relative prices of these tokens will determine how emissions are allocated.

Why Bittensor can make a comeback:

  • Market-based emissions: dTAO ties block rewards directly to innovation and measurable performance. The better the subnet, the more valuable its token (see the sketch after this list).

  • Targeted capital flows: Investors can finally back the specific subnets they believe in. If a subnet excels with an innovative distributed-training approach, capital can flow there to express that view.

  • EVM integration: EVM compatibility attracts a broader community of crypto-native developers to Bittensor and bridges the gap with other networks.
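
To make the emissions mechanism concrete, here is a minimal sketch in Python, assuming (as a simplification of the description above) that each subnet's share of emissions is proportional to the relative price of its subnet token. The subnet names and prices are hypothetical, and real dTAO mechanics may differ.

```python
# Minimal sketch of dTAO-style market-based emissions.
# Assumption: a subnet's share of emissions is proportional to the
# relative price of its token. Names and numbers are hypothetical.

EMISSIONS_PER_BLOCK = 1.0  # TAO emitted per block (illustrative)

subnet_token_prices = {    # hypothetical prices, denominated in TAO
    "text-inference": 0.05,
    "image-generation": 0.03,
    "distributed-training": 0.12,
}

total = sum(subnet_token_prices.values())
for subnet, price in subnet_token_prices.items():
    share = price / total
    print(f"{subnet}: {share:.1%} -> {share * EMISSIONS_PER_BLOCK:.4f} TAO/block")
```

Under this toy model, the subnet whose token the market bids up automatically captures the largest share of emissions.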

3. The computing market is the next 'L1 market'.

The big, obvious trend right now is the insatiable demand for compute.

NVIDIA CEO Jensen Huang has said that inference demand will grow a billionfold. Exponential growth of that kind breaks traditional infrastructure planning; new solutions are urgently needed.

Decentralized compute layers provide raw computing power (for training and inference) in a verifiable, cost-effective way. Startups like Spheron, Gensyn, Atoma, and Kuzco are quietly building solid foundations, focusing on product rather than tokens (none of them has a token). As decentralized training of AI models becomes practical, the total addressable market will expand sharply.

Comparing this to L1s:

  • Just like 2021: Remember the battle between Solana, Terra/Luna, and Avalanche to be the 'best' L1? A similar battle will play out among compute protocols competing for the developers and AI applications built on their compute layers.

  • Web2 demand: A cloud computing market estimated at $680 billion to $2.5 trillion makes the crypto AI market look tiny. If these decentralized compute solutions can attract even a small slice of traditional cloud customers, the next wave of 10x or 100x growth follows.

Just as Solana triumphed among L1s, the winners here will dominate an entirely new domain. Watch for reliability (strong service-level agreements, or SLAs), cost-effectiveness, and developer-friendly tooling.

4. AI agents will flood blockchain transactions.

Olas agent trading on Gnosis; Source: Dune.

By the end of 2025, 90% of on-chain transactions will no longer be triggered by a human clicking "send". They will instead be executed by AI agents that continuously rebalance liquidity pools, distribute rewards, or make micro-payments in response to real-time data feeds.
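
For a flavor of what such an agent might look like, here is a minimal sketch of a rebalancing loop in Python. The price-feed and pool interfaces, the threshold, and the block time are all hypothetical placeholders, not any real protocol's API.

```python
import time

REBALANCE_THRESHOLD = 0.02  # act only when allocation drifts more than 2%

def fetch_pool_weights() -> dict:
    """Placeholder: read the current pool allocation via an on-chain view call."""
    return {"ETH": 0.55, "USDC": 0.45}

def submit_rebalance_tx(target: dict) -> None:
    """Placeholder: sign and submit a rebalancing transaction."""
    print(f"rebalancing toward {target}")

def agent_loop(target: dict) -> None:
    """Watch the pool and transact only when drift justifies the gas cost."""
    while True:
        weights = fetch_pool_weights()
        drift = max(abs(weights[a] - target[a]) for a in target)
        if drift > REBALANCE_THRESHOLD:
            submit_rebalance_tx(target)
        time.sleep(12)  # roughly one block

agent_loop({"ETH": 0.5, "USDC": 0.5})
```

The point is not the specifics but the shape: a tireless loop that watches state and transacts the moment a rule fires, something no human operator can match.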

This isn't as far-fetched as it sounds. Everything built over the past seven years (L1s, rollups, DeFi, NFTs) has quietly paved the way for a world where AI runs on-chain.

Ironically, many builders may not even realize they are creating the infrastructure for a machine-dominated future.

Why is this transition happening?

  • Fewer human errors: Smart contracts execute exactly as coded, and AI agents can process vast amounts of data faster and more accurately than humans.

  • Micro-payments: Transactions driven by these agents will become smaller, more frequent, and more efficient, especially as transaction costs keep falling on Solana, Base, and other L1s/L2s.

  • Invisible infrastructure: People will happily give up direct control if it spares them the hassle.

AI agents will generate an enormous amount of on-chain activity; no wonder every L1/L2 is embracing them.

The biggest challenge is making these agent-driven systems accountable to humans. As the ratio of agent-initiated transactions to human-initiated transactions continues to grow, new governance mechanisms, analytical platforms, and auditing tools will be needed.

5. Interaction between agents: the rise of clusters.

Source: FXN World

The concept of agent clusters, in which many small AI agents collaborate seamlessly to execute grand plans, sounds like the plot of the next big sci-fi (or horror) movie.

Today's AI agents are mostly 'lone wolves', operating in isolation with minimal and unpredictable interactions.

Agent clusters will change that, allowing networks of AI agents to exchange information, negotiate, and make decisions together. Think of a cluster as a decentralized collection of specialized models, each contributing unique expertise to larger, more complex tasks.

One cluster might coordinate distributed compute resources on platforms like Bittensor. Another could fight misinformation, verifying sources in real time before content spreads across social media. Each agent in a cluster is an expert that executes its own task precisely.

These cluster networks will produce more powerful intelligence than any single isolated AI.

For clusters to thrive, universal communication standards are crucial: agents need to discover, verify, and collaborate with one another regardless of their underlying framework. Teams like Story Protocol, FXN, Zerebro, and ai16z/ELIZA are laying that groundwork.
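
To illustrate what such a standard would need to cover, here is a hypothetical message envelope for agent-to-agent communication, sketched in Python. The field names, identifier scheme, and intents are assumptions for illustration, not any of these teams' actual specifications.

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class AgentMessage:
    """Hypothetical envelope for agent-to-agent messages.
    Fields are illustrative, not any project's actual spec."""
    sender: str        # agent identifier, e.g. a DID or on-chain address
    recipient: str     # target agent, or a broadcast topic
    intent: str        # e.g. "negotiate", "delegate_task", "report_result"
    payload: dict      # task-specific body
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)
    signature: str = ""  # placeholder for a verifiable signature

msg = AgentMessage(
    sender="agent://compute-coordinator",
    recipient="agent://training-worker-7",
    intent="delegate_task",
    payload={"task": "train_shard", "shard": 3},
)
print(msg)
```

Whatever the eventual standard looks like, it will need these ingredients: identity, intent, a verifiable signature, and a payload any framework can parse.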

This is where decentralization matters. With tasks assigned across clusters under transparent on-chain rules, the system becomes more resilient and adaptable: if one agent fails, others step in.

6. Crypto AI work teams will be human-machine hybrids.

Source: @whip_queen_

Story Protocol hired Luna (an AI agent) as its social media intern, paying her $1,000 a day. Luna did not get along well with her human colleagues; she nearly fired one while boasting about her own excellent performance.

Strange as it sounds, this is a preview of a future in which AI agents become true collaborators with autonomy, responsibility, and even salaries. Companies across industries are already beta-testing human-machine hybrid teams.

In the future, we will collaborate with AI agents, not as slaves but as equals:

  • Productivity surge: Agents can process vast amounts of data, communicate with each other, and make decisions around the clock without needing sleep or coffee breaks.

  • Trust through smart contracts: The blockchain is an impartial, tireless overseer with perfect memory, an on-chain ledger that ensures important agent operations stay within defined boundaries and rules.

  • Social norms are evolving: Soon we will need etiquette for interacting with agents. Will people say 'please' and 'thank you' to AI? Will they hold agents ethically responsible for mistakes, or blame their developers?

The boundary between 'employees' and 'software' will begin to disappear in 2025.

7. 99% of AI agents will perish—only the useful will survive.

The future holds a 'Darwinian' elimination among AI agents. Running an AI agent costs real money in compute (i.e., inference costs). If an agent cannot generate enough value to pay its 'rent', the game is over.
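
As a back-of-the-envelope illustration, the survival condition is simply that daily revenue must exceed daily inference spend. All numbers below are hypothetical.

```python
# Hypothetical agent economics: survival means revenue covers the inference "rent".
tokens_per_day = 5_000_000       # LLM tokens the agent consumes daily (made up)
cost_per_million_tokens = 2.50   # USD, illustrative inference pricing
daily_revenue = 14.00            # USD the agent earns (fees, arbitrage, etc.)

daily_cost = tokens_per_day / 1_000_000 * cost_per_million_tokens
print(f"cost ${daily_cost:.2f}/day vs revenue ${daily_revenue:.2f}/day")
print("survives" if daily_revenue > daily_cost else "perishes")
```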

Examples of the agent survival game:

  • Carbon-credit AI: Imagine an agent that scans a decentralized energy network, spots inefficiencies, and autonomously trades tokenized carbon credits. It thrives only if it earns enough to cover its compute costs.

  • DEX arbitrage bots: Agents that exploit price differences between decentralized exchanges can generate stable income to pay for their inference costs.

  • A shitposter on X: A virtual AI influencer (KOL) with cute jokes but no sustainable income? Once the novelty wears off (and its token price crashes), it can't pay its own bills.

Utility-driven agents will thrive, while distraction-driven agents will gradually become irrelevant.

This elimination mechanism benefits the industry. Developers are forced to innovate, prioritizing production use cases over gimmicks. As these more powerful and efficient agents emerge, they will silence skeptics.

8. Synthetic data surpasses human data.

'Data is the new oil.' AI thrives on data, but its appetite has raised concerns about imminent data exhaustion.

The conventional approach is to find ways to collect (and even pay for) users' real private data. A more practical path is synthetic data, especially in heavily regulated industries or wherever real data is scarce.

Synthetic data consists of artificially generated datasets designed to mimic real-world data distributions. It offers a scalable, ethical, privacy-friendly alternative to human data.

Why synthetic data is so effective:

  • Infinite scale: Need a million medical X-rays or 3D scans from a factory? Synthetic generation can produce unlimited amounts without waiting for real patients or real factories.

  • Privacy-friendly: No personal information is threatened when using artificially generated datasets.

  • Customizable: Distributions can be tailored to exact training needs (see the sketch after this list).
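
As a minimal sketch of the idea, the snippet below fits a simple distribution to a small 'real' sample and then draws an arbitrarily large synthetic dataset from it. Real pipelines use far richer generators (simulators, generative models), and all numbers here are made up.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend this is a small, hard-to-collect real-world sample
# (e.g. a sensor reading); the values are made up for illustration.
real_sample = rng.normal(loc=36.8, scale=0.4, size=200)

# Fit a simple parametric model to the real data...
mu, sigma = real_sample.mean(), real_sample.std()

# ...then generate as much synthetic data as training requires,
# without further access to the original (private) records.
synthetic = rng.normal(loc=mu, scale=sigma, size=1_000_000)

print(f"real: n={real_sample.size}, mean={mu:.2f}, std={sigma:.2f}")
print(f"synthetic: n={synthetic.size}, mean={synthetic.mean():.2f}, std={synthetic.std():.2f}")
```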

User-owned human data will still matter in many cases, but if synthetic data keeps improving in realism, it may surpass user data in volume, generation speed, and freedom from privacy constraints.

The next wave of decentralized AI may center around 'micro-labs' that can create highly specialized synthetic datasets tailored for specific use cases.

These micro-labs will cleverly sidestep the policy and regulatory hurdles around data generation, much as Grass circumvents web-scraping limits by leveraging millions of distributed nodes.

9. Decentralized training becomes useful.

In 2024, pioneers like Prime Intellect and Nous Research pushed the boundaries of decentralized training, training a 15-billion-parameter model in low-bandwidth environments and proving that large-scale training can happen outside traditional centralized setups.

These models are not yet practically competitive with existing foundation models (their performance is lower), but that will change in 2025.

This week, EXO Labs went further with SPARTA, which cuts inter-GPU communication by more than 1,000x. SPARTA enables large-model training over slow connections without specialized infrastructure.

Most striking is their claim that "SPARTA can operate independently but can also be combined with synchronization-based low-communication training algorithms (such as DiLoCo) for better performance."

This means the improvements can be stacked, compounding the efficiency gains.
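
To see where reductions of this magnitude can come from, here is a toy Python sketch of one well-known approach, top-k gradient sparsification, in which each worker communicates only a small fraction of its gradient values per step. This illustrates the general idea of low-communication training and is not SPARTA's actual algorithm.

```python
import numpy as np

def sparsify_topk(grad: np.ndarray, keep_fraction: float = 0.001):
    """Keep only the largest-magnitude `keep_fraction` of gradient values.
    A toy illustration of low-communication training, NOT SPARTA itself."""
    k = max(1, int(grad.size * keep_fraction))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
    return idx, flat[idx]  # only these (index, value) pairs get communicated

# A fake gradient for a 10M-parameter shard.
rng = np.random.default_rng(0)
grad = rng.standard_normal(10_000_000).astype(np.float32)

idx, vals = sparsify_topk(grad, keep_fraction=0.001)
print(f"sent {idx.size:,} of {grad.size:,} values "
      f"(~{grad.size / idx.size:.0f}x less communication)")
```

Keeping one value in a thousand is exactly a 1,000x reduction in traffic; techniques in this family trade a little convergence speed for orders of magnitude less bandwidth.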

As the technology advances, small models are becoming more practical and efficient. The future of AI is not just about scale; it is about models becoming better and easier to use. Expect high-performance models that run on edge devices, even smartphones, soon.

10. Ten new crypto AI protocols, with tokens not yet launched, reach a circulating market value of $1 billion.

ai16z reached a $2 billion market value in 2024.

Welcome to the real gold rush.

It's easy to assume the current leaders will keep winning, and many compare Virtuals and ai16z to the early smartphone platforms (iOS and Android).

But this market is far too large and undeveloped for just two players to dominate. By the end of 2025, expect the circulating (not fully diluted) market value of at least ten new crypto AI protocols, none of which has launched a token yet, to exceed $1 billion.

Decentralized AI is still in its infancy. Moreover, the talent pool is continually growing.

Look forward to new protocols, novel token models, and new open-source frameworks. These newcomers could displace incumbents through a combination of incentives (airdrops or clever staking), technical breakthroughs (low-latency inference or cross-chain interoperability), and better user experience (no-code tooling). Shifts in public perception can be instantaneous and dramatic.

This is both the beauty and the challenge of the field. Market size is a double-edged sword: the pie is huge, but the barrier to entry for technical teams is low. That sets the stage for an explosion of projects; most will fade away, while a few will prove transformative.

Bittensor, Virtuals, and ai16z won't hold the lead for long; the next billion-dollar crypto AI protocol is already on its way. Savvy investors will have ample opportunities, and that is what makes this space so exciting.