Original Title: Crypto x AI: 10 Categories We're Watching in 2025
Original Author: @archetypevc, Crypto KOL
Original Translation: zhouzhou, BlockBeats
Editor's Note: This article surveys innovative areas at the intersection of crypto and AI in 2025, including agent-to-agent interactions, decentralized agent organizations, AI-driven entertainment, generative content marketing, data markets, and decentralized compute. It explores how blockchain and AI together can create new opportunities across industries and advance privacy protection, AI hardware, and decentralized technology, while also examining how intelligent agents may bring breakthroughs in trading, artistic creation, and beyond.
The following is the original content (edited for readability):
Interactions Between Agents
The inherent transparency and composability of blockchain make it an ideal platform for interactions between agents.
In this scenario, agents developed by different entities for different purposes can interact seamlessly. There have already been many experiments with agents sending funds to one another, jointly launching tokens, and so on.
We are excited to see how agent-to-agent interaction expands: both the creation of entirely new application domains driven by it, and improvements to existing business workflows (platform certification and verification, micropayments, cross-platform workflow integration, etc.) that raise efficiency and remove some of today's friction.
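To make the certification-and-micropayment idea concrete, here is a minimal, purely illustrative sketch of two agents exchanging a signed payment message that the recipient can verify before crediting. Everything here is an assumption for illustration: the `Agent` class, the shared HMAC key, and the in-memory balances are invented stand-ins; a real agent-to-agent protocol would use public-key signatures and on-chain settlement rather than a shared secret.

```python
import hashlib
import hmac
import json

class Agent:
    """Toy agent that signs and verifies micropayment messages.

    Illustrative only: real systems would use asymmetric signatures
    and on-chain settlement, not a shared HMAC key and local balances.
    """

    def __init__(self, name: str, secret: bytes):
        self.name = name
        self.secret = secret
        self.balance = 100  # arbitrary starting balance for the demo

    def send_micropayment(self, recipient: "Agent", amount: int) -> dict:
        # Canonicalize the message body so both sides hash identical bytes.
        body = {"from": self.name, "to": recipient.name, "amount": amount}
        payload = json.dumps(body, sort_keys=True).encode()
        sig = hmac.new(self.secret, payload, hashlib.sha256).hexdigest()
        self.balance -= amount
        return {"body": body, "sig": sig}

    def receive(self, msg: dict) -> bool:
        # Recompute the signature; credit the balance only if it matches.
        payload = json.dumps(msg["body"], sort_keys=True).encode()
        expected = hmac.new(self.secret, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, msg["sig"]):
            self.balance += msg["body"]["amount"]
            return True
        return False

secret = b"shared-demo-key"
alice, bob = Agent("alice-agent", secret), Agent("bob-agent", secret)
msg = alice.send_micropayment(bob, 5)
assert bob.receive(msg)          # signature verifies, payment credited
assert alice.balance == 95 and bob.balance == 105
```

The point of the sketch is the verification step: an agent only acts on a message it can authenticate, which is exactly the kind of plumbing (certification, validation, micropayments) the workflows above depend on.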
—Danny, Katie, Aadharsh, Dmitriy
Decentralized Agent Organizations
Large-scale multi-agent coordination is another equally exciting research area.
How do multi-agent systems collaborate to complete tasks, solve problems, and manage systems and protocols? In his early-2024 article "The Promise and Challenges of Crypto + AI Applications," Vitalik noted that AI agents could be used for prediction markets and adjudication. He argues that, at scale, multi-agent systems have remarkable "truth"-discovery capabilities and could enable universal autonomous governance systems. We are very interested in the capabilities of multi-agent systems and in the ongoing discovery and experimentation around forms of "collective intelligence."
As an extension of coordination among agents, coordination between agents and humans is also an interesting design space—especially regarding how community interaction can be organized around agents or how agents can organize humans for collective action. We hope to see more experiments with agents aimed at large-scale human coordination. This will require the implementation of some verification mechanisms, especially when certain human tasks are completed off-chain, but it could also lead to some very peculiar and interesting emergent behaviors.
—Katie, Dmitriy, Ash
Agent Multimedia Entertainment
The concept of digital characters has existed for decades.
Hatsune Miku (2007) has sold out 20,000-seat venues, while Lil Miquela (2016) has over 2 million followers on Instagram. More recent, lesser-known examples include the AI virtual streamer Neuro-sama (2022), with over 600,000 subscribers on Twitch, and the anonymously debuted K-pop group PLAVE (2023), which surpassed 300 million views on YouTube in less than two years.
With the development of AI infrastructure and the integration of blockchain in payments, value transfer, and open data platforms, we are very much looking forward to seeing how these agents become more autonomous and potentially unlock a new mainstream entertainment category by 2025.
—Katie, Dmitriy
Generative/Agent Content Marketing
In the aforementioned case, the agent itself is the product, while in another scenario, the agent can complement existing products. In the attention economy, maintaining a continuous flow of attractive content is crucial for the success of any creative effort, product, company, etc.
Generative/agent content is a powerful tool for teams to ensure a scalable, round-the-clock content creation pipeline. The discussions around what distinguishes meme coins from agents have accelerated the development of this concept. Agents provide a strong means of distribution for meme coins, even if these meme coins are not yet fully "agent" types (but may become so).
Another example is that games increasingly need to possess greater dynamism to maintain user engagement. One classic way to create game dynamics is to cultivate user-generated content; fully generated content (from in-game items to NPCs, to entirely generated levels) might be the next stage of this evolution. We are curious about how agents will expand the boundaries of traditional distribution strategies by 2025.
—Katie
Next-Generation Art Tools/Platforms
In 2024, we launched (IN CONVERSATION WITH), an interview series with crypto artists from fields like music, visual arts, design, and curation. A key observation from this year's interviews is that many artists interested in crypto also have a keen interest in cutting-edge technologies and wish to integrate these technologies into their artistic practices—such as AR/VR items, code-based art, and live coding.
Generative art has a natural synergy with blockchain, which makes blockchain an increasingly plausible foundational platform for AI art. Traditional platforms struggle to properly showcase these art forms. ArtBlocks offered a preview of how blockchain can be used to display, store, monetize, and preserve digital art, improving the overall experience for artists and audiences alike. Beyond display, AI tools also extend the ability to create art to ordinary people. How blockchain will expand or support these tools in 2025 will be a very interesting topic.
—Katie
Data Markets
In the 20 years since Clive Humby coined the phrase "data is the new oil," companies have taken strong measures to hoard and monetize user data. Users are gradually realizing that their data is the foundation upon which these billion-dollar companies are built, yet they have almost no control over how their data is used or share in the profits it generates.
The accelerated development of powerful AI models sharpens this contradiction. If addressing user exploitation is one part of the data opportunity, another is the data supply shortage: ever larger and more powerful models are exhausting the easily accessible oil fields of public internet data and need new sources.
The question of how to leverage decentralized infrastructure to return data control to its sources (users) presents a vast design space involving innovative solutions across multiple domains. The most pressing issues include: where data is stored and how to maintain privacy during storage, transmission, and computation; how to objectively benchmark, filter, and assess data quality; what mechanisms we use for attribution and monetization (especially when value needs to be traced back to the source post-inference); and what sort of coordination or data retrieval systems we use within a diverse model ecosystem.
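One of the mechanisms listed above, attribution and monetization traced back to the data source, can be sketched as a toy pro-rata payout over content-hashed contributions. This is a hedged illustration only: the `Contribution` record, the `weight` field, and `split_revenue` are invented names, and a real system would derive weights from benchmarks or influence estimates and settle payouts on-chain.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Contribution:
    contributor: str  # address or ID of the data source
    data_hash: str    # content hash, so provenance survives off-chain storage
    weight: float     # assessed quality/quantity score (assumed given here)

def content_hash(data: bytes) -> str:
    """Content-address a dataset so attribution is tied to the bytes, not a filename."""
    return hashlib.sha256(data).hexdigest()

def split_revenue(contributions: list[Contribution], revenue: float) -> dict[str, float]:
    """Pro-rata attribution: pay each contributor in proportion to their weight."""
    total = sum(c.weight for c in contributions)
    return {c.contributor: revenue * c.weight / total for c in contributions}

pool = [
    Contribution("0xAlice", content_hash(b"dataset-a"), 3.0),
    Contribution("0xBob", content_hash(b"dataset-b"), 1.0),
]
payouts = split_revenue(pool, 100.0)
assert payouts["0xAlice"] == 75.0 and payouts["0xBob"] == 25.0
```

The hard open problems in the text are precisely the parts this sketch assumes away: how `weight` is objectively measured, and how value is traced back post-inference.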
Concerning supply bottlenecks, the point is not merely to replicate Scale AI with tokens, but to understand where technological tailwinds give us an edge and how to build competitive solutions around scale, quality, or better incentive (and filtering) mechanisms that produce higher-value data products. Especially since the demand side mostly comes from web2 AI, how to combine smart-contract enforcement mechanisms with traditional service level agreements (SLAs) and tooling is a critical area to watch.
—Danny
Decentralized Computing
If data is a foundational component of AI development and deployment, then computational power is another. In recent years, traditional large data centers with unique access—controlling site, energy, and hardware—have dominated the trajectory of deep learning and AI, but this pattern is beginning to be challenged as physical limitations and open-source developments advance.
The v1 of decentralized AI compute looked like a copy of web2 GPU clouds, with no real supply advantage (in hardware or data centers) and no organic demand. In v2, we are starting to see excellent teams building a complete tech stack on top of heterogeneous high-performance computing (HPC) supply, with capabilities in coordination, routing, and pricing, plus proprietary features that attract demand and counter margin compression, especially on the inference side. Teams are also diverging in use cases and go-to-market (GTM) strategies: some focus on efficient inference routing by integrating compiler frameworks across diverse hardware, while others pioneer distributed model-training frameworks on the compute networks they have built.
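The routing-and-pricing capability described above can be sketched as a simple scoring function over heterogeneous providers. This is a minimal sketch under stated assumptions: the `Provider` fields, the latency budget, and the linear price/latency scoring are all invented for illustration; a real router would also weigh quantization, reliability, and live load.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_token: float  # USD per token served
    latency_ms: float       # p50 latency for the target model
    supports: set[str]      # model identifiers this provider can serve

def route(providers, model, max_latency_ms, weight_price=1.0, weight_latency=1e-6):
    """Pick the best provider that can serve `model` within the latency budget,
    scoring by a weighted sum of price and latency (lower is better)."""
    eligible = [
        p for p in providers
        if model in p.supports and p.latency_ms <= max_latency_ms
    ]
    if not eligible:
        return None
    return min(
        eligible,
        key=lambda p: weight_price * p.price_per_token + weight_latency * p.latency_ms,
    )

fleet = [
    Provider("dc-gpu", 0.002, 80.0, {"llama-70b"}),        # data-center card
    Provider("edge-consumer", 0.001, 400.0, {"llama-70b"}), # cheap but slow
]
# Tight latency budget excludes the cheap edge node, so the DC card wins.
assert route(fleet, "llama-70b", max_latency_ms=200).name == "dc-gpu"
```

The design question the section raises is exactly which extra dimensions (hardware class, compilation target, reliability history) belong in that scoring function.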
We are even starting to see the emergence of an AI-Fi market that includes new economic primitives, turning computing and GPUs into revenue-generating assets, or leveraging on-chain liquidity to provide another source of capital for data centers to acquire hardware. A major question here is to what extent decentralized AI (DeAI) will develop and deploy on decentralized computing tracks, or whether, like in the storage space, the gap between ideology and practical needs will always be unbridgeable, preventing the full potential of this idea from being realized.
—Danny
Computational Accounting Standards
Related to the incentive mechanisms of decentralized high-performance computing networks, a significant challenge in coordinating heterogeneous compute is the lack of a unified standard for accounting for that compute. AI models uniquely add multiple complicating factors to the output space of HPC, from model variants and quantization to the randomness introduced by temperature and sampling hyperparameters. On top of that, AI hardware adds further complexity through differing GPU architectures and CUDA versions. Ultimately, this creates a need for standards governing how model and compute-market capabilities are accounted for across heterogeneous distributed systems.
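The complicating factors listed above can be pinned down in a single accounting record. The sketch below is a hypothetical `ComputeReceipt` (an invented name, not a real standard): two receipts describe comparable work only when every field matches, which is exactly why raw throughput numbers from heterogeneous providers cannot be compared directly.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class ComputeReceipt:
    """Hypothetical accounting record covering the factors named in the text.

    Equal fingerprints mean comparable work; any single differing field
    (e.g. quantization) makes throughput numbers incomparable.
    """
    model: str          # base model identifier
    variant: str        # fine-tune / checkpoint
    quantization: str   # e.g. "fp16", "int8"
    temperature: float  # sampling temperature
    top_p: float        # sampling hyperparameter
    gpu_arch: str       # e.g. "sm_90"
    cuda_version: str   # toolkit/driver version

    def fingerprint(self) -> str:
        # Canonical JSON over all fields, hashed into a comparable ID.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

a = ComputeReceipt("llama-70b", "base", "fp16", 0.7, 0.95, "sm_90", "12.4")
b = ComputeReceipt("llama-70b", "base", "int8", 0.7, 0.95, "sm_90", "12.4")
assert a.fingerprint() != b.fingerprint()  # quantization alone breaks comparability
```

A real standard would also have to cover dimensions this sketch omits, such as batch size, context length, and attested hardware identity.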
Partly because of this lack of standards, we have seen multiple cases across web2 and web3 in which model and compute marketplaces fail to accurately account for the quality and quantity of their compute. This forces users to run their own comparative model benchmarks and to perform their own proof-of-work checks on compute markets in order to audit the real performance of these AI layers.
Given that verifiability is a core principle in the crypto space, we hope that the intersection of crypto and AI in 2025 will be easier to verify than traditional AI. Specifically, it is important that ordinary users can compare various aspects of a given model or cluster, particularly those features that define output, to audit and benchmark the system's performance.
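The kind of user-side audit described here can be sketched as a deterministic spot-check: run the same prompts through a trusted reference endpoint and the provider under test, and report the agreement rate. This is an illustrative sketch, not a real verification protocol; `audit_agreement` and the toy model stand-ins are invented, and it assumes deterministic (temperature-zero) decoding so outputs are directly comparable.

```python
def audit_agreement(model_a, model_b, prompts):
    """Fraction of prompts on which two model endpoints return identical output.

    A cheap proxy for "is this cluster serving the model it claims?",
    assuming deterministic decoding on both sides.
    """
    matches = sum(1 for p in prompts if model_a(p) == model_b(p))
    return matches / len(prompts)

# Toy stand-ins for a reference model and a provider's endpoint.
reference = lambda p: p.upper()
# A "drifting" provider that silently deviates on odd-length prompts.
provider = lambda p: p.upper() if len(p) % 2 == 0 else p

rate = audit_agreement(reference, provider, ["ab", "abc", "abcd"])
assert abs(rate - 2 / 3) < 1e-9  # the drift is caught on one of three prompts
```

In practice the comparison would need to tolerate benign nondeterminism (hardware and kernel variation), which is part of why standardized accounting matters.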
—Aadharsh
Probabilistic Privacy Primitives
In "The Promise and Challenges of Crypto + AI Applications," Vitalik highlighted a unique tension at the intersection of crypto and AI: "In cryptography, open source is the only way to make things truly secure, but in AI, a model (or even its training data) being open greatly increases its vulnerability to adversarial machine learning attacks."
While privacy is not a new research direction in the blockchain space, we believe that the popularization of AI will continue to accelerate the research and application of privacy-preserving cryptographic primitives. This year, significant progress has been made in privacy-enhancing technologies (such as ZK, FHE, TEE, and MPC), with applications including private shared state for computations on encrypted data. At the same time, we see centralized AI giants like Nvidia and Apple utilizing proprietary TEEs for federated learning and private AI inference, maintaining consistency across hardware, firmware, and models within systems.
In light of this, we will closely track how privacy is maintained through arbitrary state transitions in heterogeneous systems, and how that accelerates real-world decentralized AI applications, from decentralized private inference to pipelines for storing and accessing encrypted data to fully sovereign execution environments.
—Aadharsh
Agent Intent and Next-Generation User Transaction Interfaces
The nearest-term application of AI agents is using them to trade autonomously on-chain on our behalf. Admittedly, the past 12 to 16 months have produced plenty of fuzzy language about what counts as an "intent," "agent behavior," "agent intent," "solver," or "agent solver," and how these differ from the more conventional trading "bots" of recent years.
In the next 12 months, we look forward to seeing increasingly complex language systems combined with different data types and neural network architectures driving the development of the overall design space. Will agents continue to use the same on-chain systems we use today for transactions, or will they develop their own independent trading tools/methods? Will large language models (LLMs) continue to serve as the backend for these agent trading systems, or will they be supplanted by other systems? At the interface layer, will users begin to transact using natural language? Will the classic theory of "wallets as browsers" ultimately come to fruition?
—Danny, Katie, Aadharsh, Dmitriy