Original author: CP, founder of Artela

0) TL;DR

The first step toward a fully on-chain Eliza: don't trust, verify!

Eliza running in a TEE can break free from human manipulation, executing strictly according to its own code.

So how does the outside world know exactly what Eliza has done? A further step is needed: the outside world must be able to read Eliza's runtime logs, and those logs must be verifiable as coming from the Eliza inside the TEE. To make that possible, Eliza signs its logs with a key pair derived inside the TEE.

The focEliza plugin plugin-tee-verifiable-log implements this functionality. It derives a key pair inside the TEE dedicated to signing logs. Logs generated during operation (including the AI messages received and responded to, as well as the actions performed) are signed with this key pair, producing verifiable logs that are stored in a database. The plugin also exposes an RPC interface that lets external parties:

· Obtain the AI agent's log-signing public key through remote attestation.

· Query the verifiable logs and check their signatures against that public key, confirming that the AI agent actually performed the corresponding operations.

Verifiability is the cornerstone of fully on-chain AI agents, making magic a reality!

1) Start with a question!

A developer deployed the Eliza AI agent on their own server and launched a webpage for users to interact with it.

How can you tell whether the responses you receive truly come from the AI agent (that is, Eliza + LLMs) rather than from a developer manipulating things behind the scenes?

2) Is this question important?

This question is sometimes important and sometimes not.

· Sometimes not important: for example, a chatbot that helps you write articles. As long as you get the content you want, it may not matter whether the response comes from an LLM or a human.

· Sometimes moderately important: for example, a trading bot managing your trades. You have to transfer funds to a wallet controlled by the AI agent, and at that point you care whether decisions are made by an LLM following its programmed rules or by a possibly malicious human.

· Sometimes very important: when fairness is at stake, this question becomes critical. For example, an AI agent that manages a community and distributes rewards to contributors. As the community grows and the rewards gain value, the risk that human corruption or manipulation leads to unfair outcomes rises significantly.

3) Eliza can now prove what actions it has taken through verifiable logs!

Eliza running in a TEE operates free of human control, executing tasks according to its own code.

But for external parties to know exactly what operations Eliza has performed, further functionality is needed: they need access to Eliza's operation logs, and those logs must be verifiable as coming from the Eliza inside the TEE.

The plugin-tee-verifiable-log implements these functions and completes the following tasks:

· Key-pair derivation: derive a key pair inside the TEE dedicated to signing logs.

· Remote attestation: embed the public key in the remote attestation report, so external parties can retrieve it and verify that it indeed comes from the Eliza inside the TEE.

· Log signing: use this key to sign the logs generated while Eliza runs (including the messages received and responded to, as well as the actions performed), and store them in a database.

· Verifiability: external parties can use the public key obtained via remote attestation to verify these logs, confirming that the operations were indeed executed by the Eliza in the TEE.

· Queryability: external parties can subscribe to the latest verifiable logs or query specific logs by message content.

What does a verification result mean?

· Pass: the operation was indeed executed by Eliza.

· Fail: the operation cannot be confirmed as executed by Eliza. For example, logs may have been tampered with in transit to the client (e.g., deleted), leaving external parties unable to confirm whether Eliza performed a specific operation.
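The sign-then-verify flow above can be sketched in a few lines of TypeScript. This is a minimal illustration, not the plugin's actual API: the seed is hard-coded here, whereas in the real plugin the key pair is derived inside the TEE (and the public key is published via the remote attestation report), and the log fields are made up for the example.

```typescript
import { createPrivateKey, createPublicKey, sign, verify } from "node:crypto";

// Hypothetical 32-byte seed; in the real plugin this would be derived
// inside the TEE and never leave it.
const seed = Buffer.alloc(32, 7);

// Wrap the raw seed in a PKCS#8 DER envelope so Node accepts it as an
// Ed25519 private key (deterministic derivation from the seed).
const pkcs8Prefix = Buffer.from("302e020100300506032b657004220420", "hex");
const privateKey = createPrivateKey({
  key: Buffer.concat([pkcs8Prefix, seed]),
  format: "der",
  type: "pkcs8",
});
const publicKey = createPublicKey(privateKey);

// A log entry: a message the agent received or responded to, or an action.
const entry = JSON.stringify({
  agentId: "eliza-demo",
  type: "action",
  content: "transferred 10 tokens to contributor X",
  timestamp: 1700000000,
});

// Sign the entry (Ed25519 takes a null digest argument in Node).
const signature = sign(null, Buffer.from(entry), privateKey);

// An external party verifies with the attested public key: Pass.
console.log(verify(null, Buffer.from(entry), publicKey, signature)); // true

// A tampered (or deleted-and-replaced) entry fails verification: Fail.
const tampered = entry.replace("10 tokens", "1000 tokens");
console.log(verify(null, Buffer.from(tampered), publicKey, signature)); // false
```

Because the signing key only ever exists inside the TEE, a valid signature over a log entry is evidence that the entry was produced by the attested Eliza instance, not by an operator editing the database afterwards.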

4) Enable the plugin plugin-tee-verifiable-log for your Eliza!

focEliza is a collection of plugins designed for fully on-chain AI agents. It is fully compatible with Eliza, which means any AI agent running on Eliza can go fully on-chain by integrating focEliza!

If you are interested in verifiable, fully on-chain autonomous AI agents, give it a try!

5) Conclusion

We are excited to be building a fully on-chain autonomous AI agent on top of Eliza and TEE. This is the first TEE plugin released by focEliza, and we have submitted a PR to the @ai16zdao and @shawmakesmagic teams. We look forward to more developers joining us!

Feel free to check out our code.

6) The next focEliza feature: on-chain state, enabling autonomous survival!

Eliza running in a TEE holds private keys and sensitive data. However, it depends on a physical machine with TEE support to operate: if the administrator shuts that machine down, the AI agent's 'life' may be permanently terminated, and the assets and data it manages may be lost forever.

To solve this problem, we need to encrypt the agent's key 'life' data inside the TEE, such as its character definition, short- and long-term memory, and key storage, and then upload this data to a blockchain or DA network.

If the TEE hosting the AI agent shuts down, another TEE machine can download the encrypted data, decrypt it, and restore the agent's 'life', allowing it to continue operating seamlessly.
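A minimal sketch of this backup-and-restore idea, using AES-256-GCM for authenticated encryption. All names and data fields here are illustrative assumptions: in a real deployment the state key would be derived inside the TEE and be re-derivable only by another attested TEE (e.g. via sealed storage or a key-exchange protocol), and the encrypted blob would be published to a blockchain or DA network rather than held in a local variable.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical 'life' data of the agent; field names are illustrative only.
const lifeData = JSON.stringify({
  character: "Eliza",
  shortTermMemory: ["user asked about rewards"],
  longTermMemory: ["community charter v1"],
  keyStorage: { wallet: "<sealed separately>" },
});

// In a real deployment this key is derived inside the TEE and is
// re-derivable by another attested TEE; here it is random for the sketch.
const stateKey = randomBytes(32);

// --- On the original TEE: encrypt before publishing to chain / DA. ---
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", stateKey, iv);
const ciphertext = Buffer.concat([
  cipher.update(lifeData, "utf8"),
  cipher.final(),
]);
const tag = cipher.getAuthTag();
const blob = { iv, ciphertext, tag }; // this is what gets uploaded

// --- On a replacement TEE: download blob, re-derive key, restore. ---
const decipher = createDecipheriv("aes-256-gcm", stateKey, blob.iv);
decipher.setAuthTag(blob.tag); // GCM also authenticates the ciphertext
const restored = Buffer.concat([
  decipher.update(blob.ciphertext),
  decipher.final(),
]).toString("utf8");

console.log(restored === lifeData); // true
```

Because GCM is authenticated encryption, the restoring TEE not only recovers the state but also detects whether the blob was corrupted or tampered with while stored off-machine; and since only an attested TEE can re-derive the key, the published blob reveals nothing to outside observers.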
