Article source: On-chain View
Many people still do not understand why I keep calling for AI Agent framework projects to move towards 'chainification'. Perhaps because chain infrastructure carried too many expectations over the previous two market cycles, people now feel wary of 'chains' just as we finally reach the 'application' era of AI Agents. Yet for AI Agents to make more reliable autonomous decisions and to collaborate with each other, they will inevitably trend towards 'chainification'.
Currently popular frameworks such as ELIZA, ARC, and Swarms are still largely at the 'concept stage': they cannot be falsified down to zero, nor verified enough to break out, so their valuations are essentially unquantifiable. This is the first hurdle for issuing assets off a GitHub repository; these frameworks need a credible path from the outlined vision to a working product in order to win market consensus.
Look closely at frameworks like ELIZA, ARC, and Swarms: whether they are optimizing the performance of a single AI Agent or building a multi-agent collaboration framework, they ultimately need a traceable set of logic and rules governing how calls to AGI large-model APIs are made.
After all, the data is off-chain, the reasoning process is difficult to verify, the execution process is opaque, and the execution results are uncertain.
In the short term, TEE provides a low-cost, highly feasible trustless solution off-chain that can accelerate bringing AGI into the autonomous decision-making loop of AI Agents. In the longer term, a layer of 'on-chain consensus' is still needed to make that process more reliable.
For example, ELIZA wants to build an autonomous private-key custody solution for AI Agents on top of its framework, using the TEE remote attestation capability from @PhalaNetwork to guarantee that the AI-Pool execution code has not been tampered with before the private key is used to sign. This, however, is only the first small step for TEE applied to AI Agents.
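To make that gate concrete, here is a minimal TypeScript sketch of the pattern, not Phala's or ELIZA's actual SDK: all names (AttestationReport, EXPECTED_MEASUREMENT, attestedSign) and the stubbed verifier/signer are hypothetical, and the point is simply that the key is only exercised after the attestation report matches the expected code measurement.

```typescript
// A TEE attestation report and a signing request, as this sketch models them.
interface AttestationReport {
  codeMeasurement: string; // hash of the code actually loaded in the enclave
  vendorSignature: string; // signature over the report from the attestation service
}

interface SignRequest {
  payload: string; // the transaction or message the agent wants signed
}

// Assumed constant: the measurement of the audited AI-Pool code everyone expects to run.
const EXPECTED_MEASUREMENT = "0xabc123";

// Placeholder verifier and signer; a real deployment would call into an attestation
// service and the enclave's key-management API instead of these stubs.
async function verifyAttestation(report: AttestationReport): Promise<boolean> {
  return report.vendorSignature.length > 0; // stub check only
}

async function signWithEnclaveKey(payload: string): Promise<string> {
  return `signed(${payload})`; // stub: the real private key never leaves the TEE
}

// Gate every signing request behind a fresh attestation check, so the private key
// is only exercised by the exact code that was attested.
export async function attestedSign(report: AttestationReport, req: SignRequest): Promise<string> {
  if (!(await verifyAttestation(report))) {
    throw new Error("attestation report failed verification");
  }
  if (report.codeMeasurement !== EXPECTED_MEASUREMENT) {
    throw new Error("enclave is running unexpected code; refusing to sign");
  }
  return signWithEnclaveKey(req.payload);
}
```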
If the complex preset execution logic can also be placed into an Agent Contract, with the Validators of the Phala chain participating in its verification, then a chain of rules constraining TEE execution under chain consensus is established. At that point, the TEE demand driven by AI Agents would activate a positive feedback loop that feeds value back to the chain.
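One way to picture that chain of rules, again as a hedged sketch rather than Phala's actual contract model: the Agent Contract holds declarative constraints, and every validator re-runs the same deterministic check on each TEE-produced action, so acceptance becomes a matter of chain consensus rather than trust in a single operator. The types and the validateAction function below are hypothetical.

```typescript
// Hypothetical on-chain Agent Contract: declarative constraints on what the TEE-hosted agent may do.
interface AgentContract {
  allowedTargets: string[]; // contract addresses the agent is permitted to call
  maxValuePerTx: bigint;    // per-transaction spend cap, in base units
  dailyValueCap: bigint;    // rolling daily spend cap enforced by validators
  emergencyPaused: boolean; // risk-control switch that halts all signing
}

// An action produced inside the TEE and submitted to the chain with its attestation result.
interface ProposedAction {
  target: string;
  value: bigint;
  attestationOk: boolean;   // validators have already verified the attestation report
  spentToday: bigint;       // accounted from previously accepted actions
}

// Validator-side check: every node evaluates the same rules against the same state,
// so a TEE action is only accepted when it satisfies the on-chain constraints.
export function validateAction(contract: AgentContract, action: ProposedAction): boolean {
  if (contract.emergencyPaused) return false;
  if (!action.attestationOk) return false;
  if (!contract.allowedTargets.includes(action.target)) return false;
  if (action.value > contract.maxValuePerTx) return false;
  if (action.spentToday + action.value > contract.dailyValueCap) return false;
  return true;
}
```

The design choice worth noting is that the rules are plain data plus a deterministic function, which is exactly what a set of validators can agree on; the TEE keeps the key secret, while the chain decides whether a given use of the key was legitimate.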
The logic holds: TEE can keep the private key invisible, but how the key is called, under which preset rules, and how risk-control emergency responses are triggered can, in the short term, be made transparent through open-source codebases. In the long run, though, doesn't all of this depend on a decentralized verification consensus for real-time validation?
Therefore, 'chainification' can accelerate AI Agent frameworks into the practical application stage and, together with Crypto infrastructure, open up new incremental opportunities.
The direction is already clear. For most people, finding and being bullish on the earliest chainified AI Agent frameworks, and the most established chains supporting AI Agents, is the Alpha of this new AI Agent trend.