Many people still do not understand why I have been calling for AI framework standard projects to move towards 'chainification'. Perhaps because chain infrastructure carried too many expectations during the previous two bull-and-bear cycles, now that we have finally reached the 'application' era of AI Agents there is a certain dread of 'chains'. However, for AI Agents to make more reliable autonomous decisions and to collaborate with one another, they will inevitably trend towards 'chainification'.

Currently popular frameworks like ELIZA, ARC, and Swarms are basically still in the 'concept phase': they can neither be falsified down to zero nor be verified enough to take off, so for now their valuations simply cannot be quantified. This is the first hurdle of 'issuing assets on GitHub': the framework and vision that have been outlined need a credible path to landing before they can win market consensus.

If we look closely at frameworks like ELIZA, ARC, and Swarms, whether they are optimizing the performance of a single AI Agent or enabling interaction and collaboration among multiple AI Agents, they fundamentally need to establish traceable logic and rules around their calls to AGI large-model APIs.

After all, the data is off-chain, the reasoning process is difficult to verify, the execution process is opaque, and the execution results are uncertain.

From a short-term perspective, TEE provides a low-cost, highly feasible off-chain trust solution that can accelerate the use of AGI in the autonomous decision-making of AI Agents. From a longer-term perspective, a layer of 'on-chain consensus' is also needed to make this more reliable.

For example, if ELIZA wants to build an autonomous private-key custody solution for AI Agents on top of its framework, it can use @PhalaNetwork's TEE remote attestation capabilities to prove that the AI-Pool's execution code has not been tampered with before the private key is called for signing. But this is only the first small step in applying TEE to the AI Agent direction.
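A rough sketch of that gating logic is below. Everything in it is hypothetical: the quote structure, the EXPECTED_MEASUREMENT value, and the signWithCustodiedKey helper are illustrative stand-ins, not Phala's or ELIZA's actual APIs. The point is simply that signing is refused unless the attested code measurement matches the audited build.

```typescript
import { createHash } from "crypto";

// Hypothetical shape of a TEE attestation quote; real quotes (e.g. Intel SGX/TDX)
// carry a signed report that includes a measurement (hash) of the enclave code.
interface AttestationQuote {
  codeMeasurement: string;  // hash of the code running inside the enclave
  reportSignature: string;  // signature from the hardware attestation service
}

// Measurement of the audited, open-source AI-Pool build (assumed value).
const EXPECTED_MEASUREMENT = createHash("sha256")
  .update("audited-ai-pool-build-v1")
  .digest("hex");

// Stand-in for verifying the attestation service's signature (not a real API).
function verifyReportSignature(quote: AttestationQuote): boolean {
  return quote.reportSignature.length > 0; // placeholder for real cryptographic checks
}

// Only release a signature if the enclave proves it is running the expected code.
function signWithCustodiedKey(quote: AttestationQuote, txPayload: string): string {
  if (!verifyReportSignature(quote)) {
    throw new Error("Attestation report signature invalid");
  }
  if (quote.codeMeasurement !== EXPECTED_MEASUREMENT) {
    throw new Error("Enclave code does not match the audited AI-Pool build");
  }
  // In a real system the private key never leaves the enclave; this is a stub.
  return `signed(${txPayload})`;
}
```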

If we can place complex preset execution logic into the Agent Contract and let the Validators of the Phala chain participate in verification, the details of TEE execution become constrained by on-chain consensus. At that point, AI Agents will drive demand for TEE, and TEE will in turn kick off a positive feedback loop of chain empowerment.
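What validators would actually re-check could look something like the sketch below. The AgentContract and ExecutionReport shapes are assumptions for illustration, not any chain's real data structures; the idea is that each validator independently compares the TEE's reported execution against the preset rules recorded on-chain.

```typescript
// Hypothetical on-chain record describing what an AI Agent is allowed to do.
interface AgentContract {
  agentId: string;
  allowedActions: string[];        // e.g. ["swap", "transfer"]
  maxSpendPerTx: bigint;           // hard limit enforced by validators
  expectedCodeMeasurement: string; // TEE build the agent must be running
}

// Hypothetical report the TEE publishes after each execution.
interface ExecutionReport {
  agentId: string;
  action: string;
  spend: bigint;
  codeMeasurement: string;
}

// What each validator would re-check before attesting to the execution.
function validateExecution(contract: AgentContract, report: ExecutionReport): boolean {
  return (
    report.agentId === contract.agentId &&
    contract.allowedActions.includes(report.action) &&
    report.spend <= contract.maxSpendPerTx &&
    report.codeMeasurement === contract.expectedCodeMeasurement
  );
}
```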

The logic holds: TEE can ensure that private keys are never exposed, but how the keys are called, under which preset rules, and how risk-control emergency responses are triggered still have to be answered. In the short term, transparency can come from open-source code repositories; in the longer term, doesn't all of this ultimately depend on decentralized consensus for real-time verification?
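For concreteness, those 'preset rules' might resemble the key-usage policy sketched below. The field names and thresholds are purely illustrative assumptions, not part of any existing framework; the takeaway is that every signing request is checked against explicit, auditable rules, including an emergency pause.

```typescript
// Hypothetical key-usage policy an agent's custody module might enforce.
interface KeyUsagePolicy {
  dailySpendCap: bigint;       // total value the key may sign per day
  allowedTargets: string[];    // whitelisted contract addresses
  pauseOnAnomalyScore: number; // risk score above which signing halts
}

interface SignRequest {
  target: string;
  value: bigint;
  anomalyScore: number;        // produced by an off-chain risk model (assumed)
}

// Risk control: refuse to sign whenever any preset rule is violated.
function shouldSign(policy: KeyUsagePolicy, req: SignRequest, spentToday: bigint): boolean {
  if (!policy.allowedTargets.includes(req.target)) return false;
  if (spentToday + req.value > policy.dailySpendCap) return false;
  if (req.anomalyScore >= policy.pauseOnAnomalyScore) return false;
  return true;
}
```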

Therefore, 'chainification' can accelerate the move of AI Agent frameworks into the practical-application phase and bring new incremental opportunities to Crypto infrastructure.

The direction is already very clear. For most people, finding and getting bullish early on the first AI Agent frameworks to chainify, and on the veteran chains that support AI Agents, is the Alpha of this new AI Agent trend.