Whenever AI Agent frameworks and standards come up, many people feel a mix of angel and demon. The ceiling for building a framework is very high, and one can rocket to 300M in a short time, but once it fails to live up to its name and consensus collapses, the odds of plunging into the abyss are just as high. So why have AI Agent frameworks and standards become a battleground, and how do you judge whether a framework standard is worth investing in? Below I share my personal understanding, for reference:
1) The AI Agent itself is a product born purely from the web2 internet context: LLMs are trained on large amounts of closed data and eventually surface as interactive AIGC applications like ChatGPT, Claude, DeepSeek, etc.
The overall focus is on 'application' logic; questions such as how agents communicate and interact with one another, how to establish a unified data-exchange protocol among agents, and how to build verifiable computation mechanisms are inherently left unanswered (a rough sketch of what such a protocol could look like follows below).
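To make that gap concrete, here is a minimal sketch, in TypeScript, of what a unified agent-to-agent message envelope with a verifiability hook might look like. All names and fields here are hypothetical illustrations; none of the frameworks discussed in this piece define this interface.

```typescript
// Hypothetical sketch only: a minimal agent-to-agent message envelope.
// None of the frameworks discussed in this article define this interface.

interface AgentMessage {
  from: string;          // sender agent identifier (e.g. a DID or on-chain address)
  to: string;            // recipient agent identifier
  nonce: number;         // replay protection
  payload: unknown;      // task request, tool call, or result
  payloadHash: string;   // hash of the serialized payload, for later verification
  signature: string;     // sender's signature over (from, to, nonce, payloadHash)
}

// A verifiability hook: the recipient re-derives the payload hash and checks the
// signature before acting, and a contract could store payloadHash as a commitment
// for later dispute resolution. Hashing and signature checks are passed in so the
// sketch does not assume any particular crypto library.
async function verifyMessage(
  msg: AgentMessage,
  hashFn: (data: string) => Promise<string>,
  verifySig: (signer: string, digest: string, sig: string) => Promise<boolean>,
): Promise<boolean> {
  const recomputed = await hashFn(JSON.stringify(msg.payload));
  if (recomputed !== msg.payloadHash) return false;
  const digest = `${msg.from}|${msg.to}|${msg.nonce}|${msg.payloadHash}`;
  return verifySig(msg.from, digest, msg.signature);
}
```

The point of the sketch is simply that web2 agent stacks leave identity, message integrity, and verification to each closed platform, whereas a shared envelope like this is the kind of thing a unified standard would have to pin down.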
The essence of expanding AI Agent frameworks and standards is the evolution from centralized servers to decentralized collaborative networks, from closed ecosystems to open unified standard protocols, and from single AI Agent applications to complex interconnected ecosystems in a web3 distributed architecture.
The core logic is simple: AI Agents must seek commercial prospects under web3's modular, chain-based ideas. Starting from 'framework standards', they need to build a distributed architecture that fits the web3 paradigm; otherwise, this is just a web2 application-market play, purely competing on compute and user experience.
Thus, AI Agent frameworks and standards have become the battleground of this wave of the AI + Crypto narrative, with room for imagination that is hard to overstate.
2) AI Agent frameworks and standards are at a very early stage. It is no exaggeration to say that listening to developers pitch their technical visions and implementation routes today feels just like @VitalikButerin touring China for roadshows to raise funding ten years ago. Imagine: if Vitalik were standing in front of you ten years ago, how would you judge him?
1. Look at the founder's charisma, consistent with the 'invest in people' logic of most first-stage angel rounds. For example, when @shawmakesmagic was criticized for being a loudmouth, if you saw how sincerely he engaged with the community through laughter, anger, and cursing, you would want to lean toward ai16z. Another example: @KyeGomezB of Swarms kept a steady, technical tone in discussions even while facing wave after wave of FUD branding the project a scam. Does that impress you?;
2. Look at the technical quality. A facade can be dressed up, but even decoration costs money. A project with solid technical quality is worth FOMOing into, worth investing in with a 'donation' mindset, and worth the energy to track and research. For example: the quality of the GitHub code, the developer's reputation in the open-source community, whether the technical architecture is logically coherent, whether the framework has already been applied in practice, the substance of the technical whitepaper, and so on;
3. Look at the narrative logic. The AI Agent track is gradually developing a 'chainified' narrative direction: more and more established chains are embracing and backing the AI Agent narrative, and the original frameworks such as #ElizaOS, #arc, #Swarms, #REI, etc. will also explore the possibility of 'chainification'. For example, #Focai is a project born from the community's exploration of 'chainifying' the ElizaOS framework. Good narrative logic carries its own momentum, because it carries the expectations of the whole Crypto market. Conversely, if a project shows up claiming to solve, in the short term, AI problems that even web2 has not solved, would you believe it?
4. Look at ecosystem landing. Framework standards do sit upstream, and in most cases it is better to abstract a framework standard out of a standalone AI Agent that already works, for example launching ZerePy after #zerebro; the framework then empowers the standalone AI, which is naturally stronger than launching a new framework token that splits consensus and cohesion. But whether or not a framework and standard appears out of nowhere, however grand the vision, you must weigh the actual implementation of its AI Agent projects (the team's execution capability and iteration speed) and whether there is real ecosystem adoption and follow-through, because this is the lifeblood of a project's sustainable growth.
In short, the current contest over frameworks and standards is about deciding who becomes the EVM of the next round of the AI Agent narrative, and who becomes the higher-performance SVM that outdoes the EVM. Along the way, a Cosmos IBC may well emerge, along with a Move-based new DeFi paradigm, parallel EVMs, and real-time, massively concurrent layer2s... just think about how long this road is.
Frameworks and standards will definitely continue to emerge, and each generation will be stronger than the last, making it difficult to judge and choose.
I only look at developer activity and the projects' actual delivery. If a project can show neither, a short-term surge is just an illusion; if you can see 'certainty', it is never too late to jump in. The valuation ceiling for AI Agents can reach 'public chain' levels, with room for opportunities above 10B to emerge, so there is no need to rush.
3) The boundaries of AI Agent frameworks and standards are quite blurred. For example, before platformization the ElizaOS framework standard can only be described as a spiritual totem of the developer community, and its value spillover can only be captured through #ai16z. Another example: the #Game framework standard is still playing within #Virtual's closed-source model, which looks somewhat unconventional next to the mainstream open-source, composable architectures.
Additionally: the ElizaOS framework is certainly trendy, but an independent #ELIZA exists and it is unclear how the relationship between the two will be bound; the fundamentals of the #arc RIG framework are good, but applying Rust to the AI Agent field for performance feels ahead of its time; the technical quality of #Swarms is actually not bad, but such a FUD-ridden, turbulent, and panic-inducing start was unexpected; and the compatibility between blockchain determinism and the probabilistic execution of agents that #REI aims to address is very interesting, but its technical direction also looks too far ahead of its time (a rough sketch of this determinism problem follows below), etc.
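To illustrate the determinism problem in the simplest possible terms: a chain can only verify something deterministic, while an agent's generation step is probabilistic. One naive approach is to pin the sources of randomness in a run and commit a hash of the full transcript, which any verifier can recompute. This is not REI's actual design, just a generic sketch with hypothetical names and types.

```typescript
// Generic illustration of the determinism vs. probabilistic-execution tension.
// Not REI's design; just a naive "pin the randomness, commit the transcript" sketch.
import { createHash } from "crypto";

interface AgentRun {
  modelId: string;      // exact model version used
  seed: number;         // fixed sampling seed
  temperature: number;  // fixed sampling temperature (0 for greedy decoding)
  prompt: string;
  output: string;       // the agent's (probabilistically generated) output
}

// Commit to the run: a chain stores only this hash. Anyone holding the same
// transcript can recompute it, so the on-chain check is fully deterministic
// even though the generation step itself was probabilistic.
// (A real design would need canonical serialization, not plain JSON.stringify.)
function commitRun(run: AgentRun): string {
  const transcript = JSON.stringify(run);
  return createHash("sha256").update(transcript).digest("hex");
}

// Verifier side: recompute the hash from the revealed transcript and compare
// it with the commitment recorded on-chain.
function verifyRun(run: AgentRun, onchainCommitment: string): boolean {
  return commitRun(run) === onchainCommitment;
}
```

Even this naive version shows why the direction is hard: it only proves that a transcript was not altered after the fact, not that the model was actually run honestly, which is the deeper problem such frameworks are trying to solve.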
The above are just a few of the frameworks and standards whose 'technical quality' the market recognizes. There are many more projects, such as Nexus, LangGraph, Haystack, AgentFlow, etc., that claim to be framework standards, whether they focus on low-code convenient deployment, native multi-chain integration, enterprise-level customization potential, or even the AI metaverse, etc.
All of this illustrates the 'lack of standards' that characterizes today's framework standards, much like how Vitalik once proposed various exploratory directions for scaling Ethereum, such as Plasma, Rollup, Validium, Parallel, etc., yet ultimately only Rollup became mainstream.