As the number of agents continues to grow, verifying their interactions will become increasingly important.

Decentralized AI is developing rapidly, and machine-driven agents will soon permeate our online lives. But as these digital entities gain greater decision-making power and control over more capital, a question arises: can we trust them?

In a decentralized network, honesty is not taken for granted; it must be verified. Once the value of an off-chain computation (i.e., the model driving an agent) is high enough, it becomes necessary to verify which model was used, whether the node operator processed the data correctly, and whether the work was performed properly. At the same time, privacy must also be considered, since many people handle sensitive information when working with large language models (LLMs). It turns out Web3 can address both of these problems. Let's explore how.

Machine Learning Verification Methods

Setting aside the AI alignment problem, there are several ways to minimize the trust requirements of agents, including Zero-Knowledge Proofs (zkML), Optimistic Verification (opML), Trusted Execution Environments (teeML), and cryptoeconomic schemes. Each method has its trade-offs. In a bit more detail:

Zero-Knowledge Proofs - Perform excellently in most categories but are complex and costly

One of the most popular solutions is ZK proofs, which can succinctly represent and verify arbitrary programs. zkML uses mathematical proofs to verify the correctness of model execution without revealing the underlying data, guaranteeing that the model or compute provider cannot manipulate the results.

While zkML holds great promise for succinctly proving model fidelity and accurate execution (verifiability), the resource-intensive nature of creating ZK proofs often forces proof generation to be outsourced to third parties, which not only introduces delays and costs but can also raise privacy issues. Currently, ZK remains impractical for anything beyond the simplest models. Examples: Giza, RISC Zero.
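To make the workflow concrete, here is a minimal sketch of the commit-prove-verify flow that zkML generalizes. Plain hash commitments stand in for the proof, so this toy only demonstrates how a proof binds model, input, and output together; a real zkML proof would additionally be succinct and zero-knowledge (the verifier would not need the weights or a re-execution). All names here are illustrative, not any project's actual API.

```python
import hashlib

def commitment(data: bytes) -> str:
    """Hash binding standing in for a real succinct ZK proof."""
    return hashlib.sha256(data).hexdigest()

class ToyProver:
    """Runs the model off-chain and emits a 'proof' binding weights, input, and output."""
    def __init__(self, model_weights: bytes):
        self.model_weights = model_weights

    def run(self, x: int) -> tuple[int, str]:
        y = x * 2 + 1  # stand-in for model inference
        transcript = self.model_weights + str((x, y)).encode()
        return y, commitment(transcript)

def verify(weights: bytes, x: int, y: int, proof: str) -> bool:
    """Toy verifier: re-derives the binding. A real zkML verifier checks the
    proof succinctly, without re-executing or seeing private data."""
    return commitment(weights + str((x, y)).encode()) == proof

weights = b"model-v1-weights"
y, proof = ToyProver(weights).run(20)
assert verify(weights, 20, y, proof)          # honest result accepted
assert not verify(weights, 20, y + 1, proof)  # tampered output rejected
```

The key property to notice is that the proof is bound to a specific model commitment, so a provider cannot silently swap in a cheaper model.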

Optimistic Verification - Simple and scalable, but with lower privacy

The opML approach optimistically trusts model outputs while allowing network 'observers' to verify correctness and challenge any suspicious result through fraud proofs.

While this approach is generally cheaper than ZK and remains secure as long as at least one observer is honest, users may face costs proportional to the number of observers and must also contend with verification wait times and potential delays (if a challenge occurs). Example: ORA.
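The challenge flow can be sketched as follows. This is a simplified model under assumed names (`Claim`, `optimistic_settle` are illustrative): a result is accepted by default, but any observer who re-executes the computation and disagrees submits a fraud proof, and the submitter's bond is slashed.

```python
from dataclasses import dataclass

def model(x: int) -> int:
    """The off-chain computation being verified (stand-in for inference)."""
    return x * x

@dataclass
class Claim:
    x: int
    claimed_y: int
    bond: int = 100  # stake the submitter forfeits if proven wrong

def optimistic_settle(claim: Claim, observers) -> tuple[int, bool]:
    """Accept the claim unless an observer re-executes and disagrees.

    Returns (accepted result, whether the submitter was slashed).
    Security holds as long as at least one observer is honest.
    """
    for observer in observers:
        recomputed = observer(claim.x)
        if recomputed != claim.claimed_y:
            # Fraud proof: the challenger's re-execution wins; slash the bond.
            return recomputed, True
    return claim.claimed_y, False

result, slashed = optimistic_settle(Claim(x=7, claimed_y=49), [model])
assert (result, slashed) == (49, False)   # honest claim settles untouched
result, slashed = optimistic_settle(Claim(x=7, claimed_y=50), [model])
assert (result, slashed) == (49, True)    # fraudulent claim is corrected and slashed
```

Note where the costs come from: every observer re-executes the full computation, and finality waits for the challenge window to close.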

Trusted Execution Environment Verification - High privacy and security, but lower decentralization

teeML relies on hardware attestation and a decentralized set of validators as a root of trust to achieve verifiable computation on the blockchain. With a TEE, execution integrity is enforced by the secure hardware enclave, and the relatively low cost makes it a practical choice.

The trade-off is that it relies on specific hardware and can be difficult to implement from scratch. There are also hardware limitations today, though this is expected to change with the introduction of technologies like Intel TDX and Amazon Nitro Enclaves. Examples: Oasis, Phala.

Cryptoeconomics - Simple and low-cost but with poor security

Cryptoeconomic approaches use simple weighted voting: users choose how many nodes will run their queries, and discrepancies between responses lead to penalties for the outliers. This lets users balance cost and trust while maintaining low latency.

Adopting cryptoeconomic methods is simple and cost-effective, but security is relatively weak, since a majority of nodes may collude. In such setups, users must weigh the incentives of node operators against the cost of cheating. Example: Ritual.
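A minimal sketch of stake-weighted settlement (function and field names are illustrative): each node submits an answer, the answer backed by the most stake wins, and any node that deviated from it is marked for slashing. The majority-collusion weakness is visible directly in the code: whoever controls most of the stake controls the accepted answer.

```python
from collections import Counter

def settle_query(responses: dict[str, str],
                 stakes: dict[str, int]) -> tuple[str, list[str]]:
    """Stake-weighted vote over node responses.

    `responses` maps node id -> answer; `stakes` maps node id -> stake.
    Returns (accepted answer, nodes to slash). Secure only while honest
    nodes hold a majority of the stake.
    """
    weight: Counter = Counter()
    for node, answer in responses.items():
        weight[answer] += stakes[node]
    winner = max(weight, key=weight.get)
    slashed = [node for node, answer in responses.items() if answer != winner]
    return winner, slashed

answer, slashed = settle_query(
    responses={"n1": "42", "n2": "17", "n3": "17"},
    stakes={"n1": 10, "n2": 3, "n3": 3},
)
assert answer == "42"          # n1's stake outweighs two colluding nodes
assert slashed == ["n2", "n3"]
```

Running more nodes raises the cost of the query but also the stake an attacker would need, which is exactly the cost/trust dial the text describes.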

Additional Options

Oracle Networks

Oracle networks provide a secure interface for validating off-chain computations and ensuring that external data inputs are reliable and tamper-proof. This allows smart contracts to access cryptographically verified data, enabling users to interact with agents with minimal trust. This is achieved through mechanisms such as MPC and on-chain re-execution.

Fully Homomorphic Encryption Machine Learning

There are also some open-source frameworks aimed at enhancing privacy and verifiability through the use of Fully Homomorphic Encryption (FHE). Generally, FHE allows computations to be performed directly on encrypted data without decrypting it, ensuring the authenticity of the entire process and keeping sensitive information confidential throughout.
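The core FHE property, computing on ciphertexts, can be demonstrated with the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The sketch below uses tiny primes purely for illustration and is not secure; production FHE frameworks use far richer schemes supporting arbitrary computation.

```python
from math import gcd, lcm
import secrets

# Toy Paillier keypair with tiny primes -- illustrative only, NOT secure.
p, q = 61, 53
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid precisely because g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    while gcd(r, n) != 1:          # r must be coprime to n
        r = secrets.randbelow(n - 2) + 2
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic addition: a server multiplies ciphertexts it cannot read,
# and only the key holder learns the sum.
c_sum = (encrypt(12) * encrypt(30)) % n2
assert decrypt(c_sum) == 42
```

This is what lets an untrusted node aggregate sensitive inputs (balances, votes, model features) without ever seeing them in the clear.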

Summary

There are many promising solutions, and as activity in the fields of crypto and AI continues to grow, more solutions are being explored. However, the inherently non-deterministic nature of agents makes verifying their workloads a unique challenge. Until this issue is ultimately resolved, trust will remain a barrier.

For now, this leaves AI agents with low adoption and user trust, while use cases involving human oversight still dominate. At the same time, blockchains and agents are moving us toward a future with a degree of determinism, where agents become the primary users of these systems, transacting autonomously while people remain unaware of which RPC, wallet, or network is being used.

Oasis supports privacy and verifiability through ROFL, a teeML framework designed to extend EVM runtimes (such as Oasis Sapphire) with off-chain computation.


This article is originally from the Oasis official website. We invite everyone to visit the official website for more information about the Oasis ecosystem.