Article source: On-chain observation

Whenever AI Agent frameworks and standards come up, many people feel torn, as if caught between demons and angels. The ceiling for building a framework is very high; a rapid surge to 300M can happen in the short term, but if the framework fails to live up to its name and consensus collapses, the odds of plummeting into the abyss are just as high. So why have AI Agent framework standards become a battleground, and how do you judge whether a framework standard is worth investing in? Below, I sincerely share my personal understanding for reference:

1) AI Agents themselves are products of the pure web2 internet context. Large language models (LLMs) are trained on vast amounts of closed data, ultimately producing interactive AIGC applications such as ChatGPT, Claude, DeepSeek, etc.

Their overall focus is on 'application' logic. They inherently lack answers to questions such as: how should Agents communicate and interact, how can a unified data-exchange protocol be established among Agents, and how can verifiable computation mechanisms be built?
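To make the "unified data-exchange protocol" question concrete, here is a minimal sketch of what an agent-to-agent message envelope with a verifiable content digest might look like. The field names and structure are invented for illustration; no existing standard is implied.

```python
import hashlib
import json

# Hypothetical message envelope for agent-to-agent exchange.
# All field names here are illustrative assumptions, not a real spec.
def make_envelope(sender: str, receiver: str, payload: dict) -> dict:
    body = {
        "sender": sender,      # agent identifier (e.g. an on-chain address)
        "receiver": receiver,  # target agent identifier
        "payload": payload,    # task request, result, capability offer, ...
    }
    # A content hash over a canonical serialization lets any third party
    # check that the message was not altered in transit -- a minimal
    # stand-in for the "verifiable" property discussed above.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "digest": digest}

def verify_envelope(env: dict) -> bool:
    # Recompute the digest over everything except the digest field itself.
    body = {k: v for k, v in env.items() if k != "digest"}
    expected = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return expected == env["digest"]
```

A real protocol would add signatures tied to agent identity and replay protection; the point of the sketch is only that web2 LLM applications ship with no such shared layer at all.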

The essence of extending AI Agent frameworks and standards is the evolution from centralized servers to decentralized collaborative networks, from closed ecosystems to open, unified standard protocols, and from single AI Agent applications to complex, interlinked ecosystems built on a web3 distributed architecture.

The core logic is simple: AI Agents need to seek commercial prospects through web3's modular, chain-based thinking. Starting from 'framework standards', they must build a distributed architecture that fits the web3 paradigm; otherwise they are merely replaying the web2 application-market playbook, competing purely on computing power and user experience.

Therefore, AI Agent frameworks and standards have become a battleground in this wave of the AI + Crypto narrative, with enormous room for imagination.

2) AI Agent frameworks and standards are at a very early stage. It is no exaggeration to say that listening to various developers describe their technical visions and implementation paths today is no different from talking to @VitalikButerin ten years ago.

It is like pitching for funding at a roadshow in China. Imagine Vitalik standing in front of you ten years ago: how would you have evaluated him?

1. Look at the founder's charisma, which fits the invest-in-people logic of most seed rounds. For example, when @shawmakesmagic was criticized for being a loudmouth, if you could see the sincere engagement with the community behind his jokes and outbursts, you would have held on tight to ai16z. Similarly, @KyeGomezB from Swarms kept a consistent, technology-focused attitude amid all kinds of FUD and scam accusations; could that impress you?

2. Look at the technical quality. A facade can be dressed up, but dressing it up still costs money. A project with solid technical quality is worth the FOMO, worth investing in with a 'donation' mentality, and worth the effort of following up and studying. For example: GitHub code quality, the developers' reputation in the open-source community, whether the technical architecture is logically self-consistent, whether the technical framework has actually been implemented, the substance of the technical white paper, etc.

3. Look at the narrative logic. The AI Agent track's narrative direction is gradually becoming 'chained': you will find more and more established chains embracing and supporting AI Agent narratives. Naturally, the original large framework players such as #ElizaOS, #arc, #Swarms, and #REI will also explore the possibilities of 'chaining'. For example, #Focai is a project built through the community's exploration of 'chaining' the ElizaOS framework. A good narrative logic carries inherent momentum, because it embodies the expectations of the entire Crypto market. Conversely, if a project emerged claiming to solve, in the short term, AI problems that even web2 cannot solve, would you believe it?

4. Look at the ecological implementation. Framework standards sit very far upstream. In most cases, it is best to abstract a framework standard only after a single AI Agent exists. For example, zerePy was launched after #zerebro; a framework that empowers an existing single Agent is naturally stronger than issuing a new framework token that splits consensus and cohesion. However grand a framework and its standards may appear, the actual engineering of the AI Agent (the team's execution capability and iteration speed), the ecosystem's landing, and subsequent performance are the lifeblood of the project's sustainable growth.

In short, the current competition over frameworks and standards is about who will be the EVM of the AI Agent narrative, and who will be the high-performance SVM that surpasses the EVM. Of course, along the way there will also be the equivalents of Cosmos IBC, of a new Move-based DeFi paradigm, of parallel EVMs and real-time, massively concurrent layer2s... Just think about how long that road was.

Frameworks and standards will undoubtedly emerge one after another, each wave stronger than the last, making judgment difficult.

I only look at the activity level of developers and the actual delivery results of the projects. If they cannot deliver results, short-term surges are just illusions. If I see 'certainty', it is not too late to get on board. The ceiling valuation for AI Agents can be as high as 'public chain' level, with opportunities exceeding 10B potentially arising, so there is no need to rush.

3) The boundaries of AI Agent frameworks and standards are quite blurred. For example, before it platformizes, the ElizaOS framework standard can only be characterized as a spiritual totem for the developer community, and its value spillover has to be sustained by #ai16z. Similarly, the #Game framework standard is still operating in a closed-source model under #Virtual, which looks somewhat unorthodox next to the mainstream open-source, composable architectures.

Additionally, while the ElizaOS framework is indeed popular, an independent #ELIZA also exists, and how the two are meant to be bound together remains unclear. The fundamentals of #arc's RIG framework are good, but applying the Rust language to the AI Agent field for performance feels ahead of its time. The technical quality of #Swarms is actually decent, but such a turbulent, FUD-filled start and its panic-inducing rise caught people off guard. The compatibility between blockchain determinism and the probabilistic execution of Agents that #REI aims to solve is very interesting, but that technical direction also feels premature, among other issues.

The above are some of the frameworks and standards the market recognizes as having 'technical quality', and there are many others, such as Nexus, LangGraph, Haystack, AgentFlow, etc. Too many projects claim to be framework standards, whether they emphasize low-code deployment convenience, native multi-chain support, enterprise-grade customization potential, or even the AI Metaverse.

All of this points to the current 'no standard yet' state of framework standards. It is like Vitalik proposing various exploratory directions such as Plasma, Rollup, Validium, and Parallel to scale Ethereum, with only Rollup ultimately becoming mainstream.