As the number of agents continues to grow, validating their interactions will become increasingly important.

Decentralized AI is advancing rapidly, and machine-driven agents will soon permeate our on-chain lives. But as these digital entities gain greater decision-making power and control more capital, the question becomes: Can we trust them?

On a decentralized network, honesty cannot be taken for granted; it must be verified. Once the value of an off-chain computation (i.e., the model driving the agent) is high enough, it becomes necessary to verify which model was used, whether the node operator processed the data correctly, and whether the work was performed as expected. Confidentiality also matters, since many queries to large language models (LLMs) contain sensitive information. It turns out that Web3 can address both problems. Let's explore how.

Machine Learning Validation Methods

Setting the AI alignment problem aside, there are several ways to minimize the trust an agent requires, including approaches that leverage zero-knowledge proofs (zkML), optimistic verification (opML), and trusted execution environments (teeML). Each approach has its trade-offs. Let's look at each in a little more detail.

Zero-knowledge proofs — good in most categories, but complex and costly

One of the most popular solutions is ZK proofs, which enable succinct representation and verification of arbitrary programs. zkML uses mathematical proofs to verify the correctness of a model's execution without revealing the underlying data. This ensures that neither the model nor the compute provider can manipulate the results.

While zkML holds great promise for succinctly proving the faithful and accurate execution of models (verifiability), the resource-intensive nature of creating ZK proofs often requires outsourcing proof generation to a third party, which not only introduces latency and cost but can also raise privacy issues. Today, ZK remains impractical for anything beyond the simplest models. Examples: Giza, RISC Zero.
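To make the idea concrete, here is a toy Schnorr proof of knowledge, the kind of sigma-protocol primitive many ZK systems build on: the prover convinces a verifier that it knows a secret x behind a public value y without ever revealing x. This is a minimal sketch with tiny, insecure demo parameters; real zkML systems prove entire model executions with SNARKs/STARKs rather than a single exponent.

```python
import hashlib
import random

# Toy Schnorr proof of knowledge (made non-interactive via Fiat-Shamir).
# Demo parameters are far too small to be secure.
p, q, g = 2039, 1019, 4        # p = 2q + 1; g generates the order-q subgroup

x = random.randrange(1, q)     # prover's secret (stands in for a private witness)
y = pow(g, x, p)               # public value the prover is accountable to

# Prover: commit to a nonce, derive the challenge by hashing, respond.
k = random.randrange(1, q)
t = pow(g, k, p)
c = int.from_bytes(hashlib.sha256(f"{g},{y},{t}".encode()).digest(), "big") % q
s = (k + c * x) % q

# Verifier: a single equation convinces it the prover knows x,
# yet the transcript (t, c, s) reveals nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```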

Optimistic verification — simple and scalable, but less private

The opML approach involves trusting the model output while allowing network “observers” to verify correctness and challenge anything suspicious through fraud proofs.

While this approach is generally cheaper than ZK and remains secure as long as at least one observer is honest, users may face costs that grow with the number of observers, as well as verification wait times and potential delays if a challenge occurs. Example: ORA.
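The flow can be sketched in a few lines: a result is accepted optimistically with a bond at stake, and any observer who re-executes the computation and finds a mismatch has a fraud proof. The run_model function below is a hypothetical stand-in for deterministic inference.

```python
from dataclasses import dataclass

def run_model(x: int) -> int:          # stand-in for deterministic inference
    return x * 2 + 1

@dataclass
class Claim:
    query: int
    result: int
    operator_bond: int = 100
    challenged: bool = False

def submit(query: int, claimed: int) -> Claim:
    # Optimistically accepted: no proof attached, only a bond at stake.
    return Claim(query, claimed)

def challenge(claim: Claim) -> str:
    # Any honest observer re-executes; a mismatch is the fraud proof.
    honest = run_model(claim.query)
    if honest != claim.result:
        claim.challenged = True
        claim.operator_bond = 0        # bond slashed, result rejected
        return f"fraud proven: expected {honest}, got {claim.result}"
    return "claim stands"

print(challenge(submit(3, 7)))   # honest operator -> claim stands
print(challenge(submit(3, 9)))   # dishonest operator -> fraud proven
```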

Trusted execution environment verification — high privacy and security, but low decentralization

teeML relies on hardware attestation, together with a decentralized set of validators as a root of trust, to enable verifiable computation on the blockchain. With a TEE, execution integrity is enforced by secure hardware, and the relatively low cost makes it a practical choice.

The trade-off is that it is heavily hardware dependent and can be difficult to implement from scratch. Current hardware also has its limitations, but this is expected to change with the introduction of technologies like Intel TDX and Amazon Nitro Enclaves. Examples: Oasis, Phala.
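A mock of the trust chain looks like this: the enclave returns a result together with an attestation tag bound to its code measurement, and the verifier accepts the result only if the tag checks out against the expected measurement. Real TEEs use asymmetric remote-attestation quotes; the HMAC, key, and code-hash names below are illustrative stand-ins, not any vendor's API.

```python
import hashlib
import hmac

HARDWARE_ROOT_KEY = b"demo-root"      # stand-in for a manufacturer-provisioned key
ENCLAVE_CODE_HASH = hashlib.sha256(b"model-v1-binary").hexdigest()

def enclave_infer(x: int) -> tuple[int, str]:
    result = x * 2 + 1                 # inference running inside the enclave
    # Bind the result to the code measurement and the input.
    msg = f"{ENCLAVE_CODE_HASH}:{x}:{result}".encode()
    quote = hmac.new(HARDWARE_ROOT_KEY, msg, hashlib.sha256).hexdigest()
    return result, quote

def verify(x: int, result: int, quote: str, expected_code: str) -> bool:
    # The verifier accepts only results attested by the expected code.
    msg = f"{expected_code}:{x}:{result}".encode()
    good = hmac.new(HARDWARE_ROOT_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, quote)

res, quote = enclave_infer(21)
print(verify(21, res, quote, ENCLAVE_CODE_HASH))   # True
print(verify(21, res + 1, quote, ENCLAVE_CODE_HASH))  # False: tampered result
```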

Cryptoeconomics — simple and low cost, but less secure

The cryptoeconomic approach uses simple weighted voting. Users choose how many nodes will run their query, and responses that deviate from the majority result in penalties for the outlier nodes. In this way, users can balance cost and trust while keeping latency low.

Adopting a cryptoeconomic approach is simple and cost-effective, but it offers relatively weak security, since a majority of nodes may collude. Users must therefore weigh node operators' incentives against what cheating would cost them. Example: Ritual.
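Here is a sketch of the voting-and-slashing logic described above. The node names, stakes, and slashing fraction are illustrative, not taken from any live network.

```python
from collections import Counter

nodes = {"A": 100, "B": 80, "C": 50}           # node -> stake
responses = {"A": "42", "B": "42", "C": "41"}  # node -> answer to one query

# Stake-weighted vote over the responses.
weights = Counter()
for node, answer in responses.items():
    weights[answer] += nodes[node]
accepted, _ = weights.most_common(1)[0]

# Nodes that deviate from the accepted answer lose a fraction of stake.
SLASH_FRACTION = 0.2
for node, answer in responses.items():
    if answer != accepted:
        nodes[node] = int(nodes[node] * (1 - SLASH_FRACTION))

print(accepted)   # '42'
print(nodes)      # {'A': 100, 'B': 80, 'C': 40} -- outlier C slashed
```

Note the weakness the text points out: if nodes controlling a majority of stake collude on a wrong answer, the honest minority is the one that gets slashed.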

Additional options

Oracle networks

Oracle networks provide a secure interface for verifying off-chain computation and ensuring that external data inputs are reliable and tamper-proof. This lets smart contracts access cryptographically verified data and lets users interact with agents in a trust-minimized manner, achieved through mechanisms such as MPC and on-chain re-execution.
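As a toy example of the quorum logic such networks rely on, the check below accepts a report only if a threshold of registered oracle keys endorsed the same value. Real networks verify cryptographic signatures; simple set membership stands in for that here, and the names and threshold are made up for illustration.

```python
ORACLES = {"ora1", "ora2", "ora3", "ora4"}   # registered oracle identities
THRESHOLD = 3                                 # quorum required on-chain

def accept_report(value: str, signers: set[str]) -> bool:
    # Only endorsements from registered oracles count toward the quorum.
    valid = signers & ORACLES
    return len(valid) >= THRESHOLD

print(accept_report("eth_price=3000", {"ora1", "ora2", "ora3"}))  # True
print(accept_report("eth_price=9999", {"ora1", "mallory"}))       # False
```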

Fully Homomorphic Encryption for Machine Learning

There are also open-source frameworks that aim to enhance privacy and verifiability by leveraging fully homomorphic encryption (FHE). Broadly speaking, FHE allows computation to be performed directly on encrypted data without decrypting it, preserving the integrity of the pipeline while keeping sensitive information confidential throughout the process.
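To illustrate the principle, the toy below implements Paillier encryption, which is additively homomorphic (a partial, not fully, homomorphic scheme): multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so the sum is computed without ever decrypting the inputs. The primes are tiny demo values and nothing here is secure; production FHE schemes (e.g., TFHE or CKKS) also support multiplication on ciphertexts.

```python
import random
from math import gcd

# --- toy Paillier keypair (insecure demo primes) ---
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1                       # standard generator choice for Paillier
phi = (p - 1) * (q - 1)
mu = pow(phi, -1, n)            # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, phi, n2) - 1) // n
    return (L * mu) % n

# Homomorphic property: E(a) * E(b) mod n^2 decrypts to a + b,
# so the party doing the arithmetic never sees a or b.
a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c_sum) == a + b
print(decrypt(c_sum))   # 42
```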

Summary

There are many promising solutions, and more are being explored as activity in the crypto and AI space continues to grow. However, the non-deterministic nature of agents makes verifying their workloads a unique challenge. Until this problem is finally solved, trust will remain a barrier.

This brings us to the current situation: AI agents see low adoption and low user trust, and human-in-the-loop use cases still dominate. At the same time, we are heading toward a future where blockchains give agents a degree of certainty. Eventually, agents will be the primary users of these systems, transacting autonomously without the user knowing which RPC, wallet, or network is being used.

Oasis supports privacy and verifiability through ROFL, a teeML framework designed to extend EVM runtimes (like Oasis Sapphire) with off-chain computation.


This article originally appeared on the official Oasis website. Visit the site to learn more about the Oasis ecosystem.