Original title: Can We Ever Trust AI Agents?
Original author: Marko Stokic
Original source: https://www.coindesk.com/
Compiled by: Mars Finance, Daisy
Marko Stokic, Head of AI at Oasis, argues that decentralized AI offers a way for us to trust the AI agents that will soon be woven into our digital lives.
Famed Harvard psychologist B.F. Skinner once observed, “The real question is not whether machines can think, but whether humans can think.” The quip makes an important point: our trust in technology ultimately rests on human judgment. What we really need to worry about is not the intelligence of machines, but the wisdom and responsibility of the humans who control them. At least, that used to be the case.
With software like ChatGPT now an integral part of many working lives, Skinner’s insight seems a little outdated. The rapid rise of AI agents — software entities that can perceive their environment and take actions to achieve specific goals — has fundamentally changed the old paradigm. These digital assistants, born of the consumer AI boom of the early 2020s, now permeate our digital lives, handling tasks ranging from scheduling meetings to making investment decisions.
What is an AI agent?
AI agents differ significantly from large language models (LLMs) such as ChatGPT in their ability to act autonomously. An LLM primarily processes and generates text, while an AI agent is designed to perceive its environment, make decisions, and take actions to achieve specific goals. Agents combine a variety of AI technologies, including natural language processing, computer vision, and reinforcement learning, allowing them to adapt and learn from experience.
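To make the distinction concrete, here is a minimal, purely illustrative sketch of the perceive-decide-act loop that defines an agent. The names (Agent, naive_policy) are hypothetical stand-ins for whatever model actually drives the agent; they are not drawn from any particular framework.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Agent:
    """A minimal perceive-decide-act loop; all names are illustrative."""
    goal: str
    policy: Callable[[str, List[str]], str]  # maps (goal, observations) to the next action
    memory: List[str] = field(default_factory=list)

    def step(self, observation: str) -> str:
        # Perceive: record what the agent observes in its environment.
        self.memory.append(observation)
        # Decide and act: the policy chooses the next action toward the goal.
        return self.policy(self.goal, self.memory)

# Toy policy standing in for an LLM, planner, or reinforcement-learning model.
def naive_policy(goal: str, observations: List[str]) -> str:
    return f"act toward '{goal}' given {len(observations)} observation(s)"

agent = Agent(goal="schedule a meeting", policy=naive_policy)
print(agent.step("calendar shows a free slot on Tuesday"))
```

An LLM, by contrast, would stop at generating the text of a suggestion; the agent closes the loop by acting on it.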
However, as AI agents proliferate and iterate, unease is growing. Can we really trust these digital entities? The question is far from academic. AI agents operate in complex environments, making decisions based on huge data sets and intricate algorithms, and even their creators find it difficult to fully understand how they work. This inherent opacity breeds distrust. When an AI agent recommends a medical treatment or predicts a market trend, how can we be sure of the logic behind its choice?
Misplaced trust in AI agents can have disastrous consequences. Imagine an AI-driven financial advisor that triggers a market crash by misreading a single data point, or a medical AI that recommends the wrong treatment because of biased training data. The potential harm is not confined to any one sector; as AI agents become more integrated into our daily lives, their influence grows exponentially. When things go wrong, the impact could ripple through society, affecting everything from personal privacy to the global economy.
At the heart of this lack of trust lies a fundamental problem: centralization.
The development and deployment of AI models are controlled primarily by a handful of tech giants. These centralized models operate as black boxes, their decision-making processes opaque to public scrutiny. That lack of transparency makes it nearly impossible to trust their decisions in high-stakes settings. How can we rely on an AI agent to make critical choices when we cannot understand or verify its reasoning?
Solution: Decentralization
A solution to these concerns does exist: decentralized AI. This paradigm offers a path to more transparent and trustworthy AI agents, leveraging blockchain technology and other decentralized systems to create AI models that are not only powerful but also accountable.
The tools to build trust in AI agents already exist. Blockchain enables verifiable computation, ensuring that AI actions are auditable and traceable. Every decision made by an AI agent can be recorded on a public ledger, enabling unprecedented transparency. At the same time, advanced cryptographic techniques like Trusted Execution Environment Machine Learning (TeeML) can protect sensitive data and maintain the integrity of the model, achieving both transparency and privacy.
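As a rough illustration of what "auditable and traceable" can mean in practice, the sketch below keeps a hash-chained, append-only log of agent decisions, so any later tampering is detectable. It is a simplified stand-in for a public ledger, not Oasis's actual design or any real chain's API; the class and field names are hypothetical.

```python
import hashlib
import json
import time
from typing import Dict, List

class DecisionLog:
    """Hash-chained, append-only log of agent decisions (a stand-in for a public ledger)."""

    def __init__(self) -> None:
        self.entries: List[Dict] = []

    def record(self, agent_id: str, decision: str) -> Dict:
        # Each entry commits to the previous one, so history cannot be rewritten silently.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"agent": agent_id, "decision": decision,
                "timestamp": time.time(), "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        # Recompute every hash; any altered field breaks the chain.
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = DecisionLog()
log.record("advisor-1", "rebalance portfolio toward bonds")
print(log.verify())  # True; flipping any recorded field makes this False
```

A real blockchain adds consensus and replication on top of this basic hash-chaining, which is what turns a private audit log into a public, independently verifiable record.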
As AI agents increasingly operate on or adjacent to public blockchains, the concept of verifiability becomes critical. Traditional AI models may have difficulty proving the integrity of their operations, but blockchain-based AI agents are able to provide cryptographic assurance of their behavior. This verifiability is more than just a technical nicety; it is a fundamental requirement for establishing trust in high-stakes environments.
Confidential computing technologies, especially Trusted Execution Environments (TEEs), add an important layer of assurance. A TEE provides a secure enclave in which AI computations can run, isolated from potential interference. This ensures that even the operator of the AI system cannot tamper with or monitor the agent’s decision-making process, further strengthening trust.
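Real TEEs rely on hardware-rooted attestation that cannot be reproduced in a few lines of code, so the sketch below only illustrates the verification idea: an HMAC over the model, input, and output stands in for an enclave's attestation signature. All keys and names here are assumptions for illustration, not how any particular TEE or the Oasis stack actually works.

```python
import hashlib
import hmac

# Hypothetical stand-in for key material that, in a real TEE, would be rooted in
# hardware and vouched for by the chip vendor's attestation service.
ENCLAVE_KEY = b"secret-known-only-inside-the-enclave"

def attest(model_hash: str, input_data: str, output: str) -> str:
    """Inside the enclave: bind the result to the exact model and input it came from."""
    digest = hashlib.sha256(f"{model_hash}|{input_data}|{output}".encode()).hexdigest()
    return hmac.new(ENCLAVE_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(model_hash: str, input_data: str, output: str, tag: str) -> bool:
    """Outside the enclave: check that neither the output nor the claimed
    model/input was altered after the fact (simplified to a symmetric check)."""
    return hmac.compare_digest(attest(model_hash, input_data, output), tag)

tag = attest("sha256:abc123", "patient vitals", "recommend treatment A")
print(verify("sha256:abc123", "patient vitals", "recommend treatment A", tag))  # True
print(verify("sha256:abc123", "patient vitals", "recommend treatment B", tag))  # False
```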
Frameworks like Oasis Network's Runtime Off-Chain Logic (ROFL) represent the forefront of this approach, seamlessly combining verifiable AI computations with on-chain auditability and transparency. These innovations expand the possibilities of AI-based applications while maintaining the highest standards of trust and transparency.
Towards a Trustworthy AI Future
The path to trustworthy AI agents is not without challenges. Technical barriers remain, and widespread adoption of decentralized AI systems will require a shift in industry practices and public understanding. But the potential rewards are enormous. Imagine a world in which AI agents make critical decisions with full transparency, in which anyone can verify and audit their actions, and in which the power of AI is distributed rather than concentrated in the hands of a few companies.
It is also an opportunity to unlock significant economic growth. A 2023 study from Beijing found that a 1% increase in AI penetration leads to a 14.2% increase in total factor productivity (TFP). Most research on AI and productivity, however, focuses on general-purpose large language models rather than AI agents. Autonomous agents capable of performing multiple tasks independently could deliver even greater gains, and trusted, auditable agents may prove more efficient still.
Perhaps it’s time to update Skinner’s quote. The real question is no longer whether machines can think, but whether we can trust their thinking. With decentralized AI and blockchain technology, we have the tools to establish that trust. The question now is whether we have the wisdom to use them.