If we start to think about AI models the same way we think about social media algorithms, we’re going to be in big trouble.

Written by: @lukedelphi

Compiled by: zhouzhou, BlockBeats

Editor's note: As AI's influence in crypto has grown recently, the market has begun to focus on the verifiability of AI. In this article, several founders working at the intersection of crypto and AI analyze how decentralization, blockchain, and zero-knowledge proofs can address the risk of AI models being abused, and explore future trends such as inference verification, closed-source models, and inference on edge devices.

The following is the original content (reorganized for easier reading):

Here are some key takeaways from a recent roundtable discussion recorded for Delphi Digital's AI Monthly event, in which four founders building at the intersection of crypto and AI discussed verifiable AI.

Guests: colingagich, ryanmcnutty33, immorriv, and Iridium Eagleemy.

In the future, AI models will become a form of soft power: the broader and more concentrated their economic applications, the greater the opportunity for abuse. Whether or not a model's output is actually manipulated, the mere perception that it could be is already pernicious.

If we start to think about AI models the same way we think about social media algorithms, we're going to be in big trouble, and decentralization, blockchain, and verifiability are key to solving this problem. Since AI is essentially a black box, we need ways to make its process provable or verifiable, so we can be sure it has not been tampered with.

This is exactly the problem that verifiable inference solves. Although the panelists agreed on the problem, they took different paths to a solution.

More specifically, verifiable inference means: my question or input has not been tampered with; the model that ran is the one that was promised; and the output is returned as-is, without modification. This definition actually comes from @Shaughnessy119, but I like its simplicity.
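As a purely illustrative sketch (not any team's actual protocol), those three properties can be restated as hash-commitment checks; the `make_receipt` and `verify_receipt` helpers below are hypothetical names:

```python
import hashlib
import json

def digest(obj) -> str:
    # Commit to a JSON-serializable value with a hash.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_receipt(user_input, model_id, output) -> dict:
    # What an honest inference node might publish alongside its answer.
    return {
        "input_hash": digest(user_input),
        "model_hash": digest(model_id),
        "output_hash": digest(output),
    }

def verify_receipt(receipt, user_input, promised_model_id, returned_output) -> bool:
    # The three properties of verifiable inference, restated as checks.
    return (
        receipt["input_hash"] == digest(user_input)              # input not tampered with
        and receipt["model_hash"] == digest(promised_model_id)   # model is the one promised
        and receipt["output_hash"] == digest(returned_output)    # output returned as-is
    )

# Example: an honest receipt verifies; a swapped model does not.
r = make_receipt("allocate my portfolio", "model-v1", "60/40 split")
assert verify_receipt(r, "allocate my portfolio", "model-v1", "60/40 split")
assert not verify_receipt(r, "allocate my portfolio", "model-v2", "60/40 split")
```

Of course, commitments alone only pin down what was claimed; the approaches discussed below (ZK proofs, TEEs, optimistic challenges) are about proving that the output really came from running the committed model on the committed input.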

This would help a lot in the current Truth Terminal situation.

Using zero-knowledge proofs to verify model outputs is undoubtedly the most secure approach, but it comes with trade-offs, increasing computational cost by 100 to 1000 times. In addition, not everything maps easily to circuits, so some functions (such as sigmoid) must be approximated, and floating-point values introduce approximation error.
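To make the sigmoid point concrete, here is a toy comparison between the exact function and a low-degree polynomial of the kind a circuit might use instead of `exp`; the coefficients are illustrative, not taken from any particular proving system:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_poly(x: float) -> float:
    # Rough cubic approximation on roughly [-4, 4], the kind of
    # circuit-friendly substitute used when exp() is unavailable.
    return 0.5 + 0.197 * x - 0.004 * x ** 3

for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
    err = abs(sigmoid(x) - sigmoid_poly(x))
    print(f"x={x:+.1f}  exact={sigmoid(x):.4f}  approx={sigmoid_poly(x):.4f}  error={err:.4f}")
```

The per-activation error looks small, but it compounds across layers, which is why prover-side approximations have to be chosen carefully.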

Regarding computational overhead, many teams are working to push the state of the art in ZK and reduce it significantly. And although large language models are bulky, most financial use cases, such as capital allocation models, are likely to be relatively small, so the overhead becomes negligible. Trusted Execution Environments (TEEs) suit use cases where absolute security matters less but cost or model size matters more.
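As a rough back-of-the-envelope illustration (the latencies below are assumptions, not benchmarks), a 100x to 1000x multiplier lands very differently on a millisecond-scale allocation model than on a multi-second LLM call:

```python
# Illustrative base latencies per inference, in seconds (assumed, not measured).
base_latency_s = {
    "small allocation model": 0.002,   # a few milliseconds
    "large language model":   2.0,     # seconds per response
}

# Apply the proving-overhead multipliers mentioned above.
for name, t in base_latency_s.items():
    for overhead in (100, 1000):
        print(f"{name}: {t:.3f}s raw -> ~{t * overhead:.1f}s with {overhead}x proving overhead")
```

A fifth of a second or even two seconds of proving on top of a tiny model is tolerable for many financial flows; minutes to tens of minutes on top of an LLM call usually is not, which is where TEEs come in.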

Travis from Ambient talked about how they plan to verify inference on a very large sharded model, not as a general-purpose solution but as one tailored to a specific model. Since Ambient is still in stealth mode, the work is under wraps for now; keep an eye out for their upcoming papers.

The optimistic approach generates no proof at inference time; instead, the nodes performing the inference must stake tokens, and if they are challenged and found to have misbehaved, their stake is slashed. This drew some objections from the panelists.

First, it requires deterministic outputs, which in turn requires compromises such as making every node use the same random seed. Second, if there is $10 billion at risk, how much stake is enough to guarantee economic security? That question ultimately went unanswered, which underscores the importance of letting consumers choose whether they are willing to pay for a full proof.
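Here is a toy sketch of the optimistic flow under those assumptions (a shared seed for determinism, a bonded stake, and slashing on a failed challenge); the node names, stake size, and stand-in "model" are made up for illustration:

```python
import random

STAKE = 1_000          # tokens each node must bond (illustrative number)
SEED = 42              # shared random seed so every honest node is deterministic

def run_inference(model_id: str, prompt: str, seed: int = SEED) -> str:
    # Stand-in for a deterministic model run: the same model, prompt, and seed
    # must produce the same output on every node (no real model here).
    rng = random.Random(f"{model_id}|{prompt}|{seed}")
    return f"answer-{rng.randint(0, 9999)}"

class OptimisticNode:
    def __init__(self, node_id: str, honest: bool = True):
        self.node_id, self.honest, self.stake = node_id, honest, STAKE

    def respond(self, model_id: str, prompt: str) -> str:
        out = run_inference(model_id, prompt)
        return out if self.honest else out + "-tampered"

def challenge(node: OptimisticNode, model_id: str, prompt: str, claimed_output: str) -> str:
    # Any watcher can re-run the deterministic inference and compare.
    if run_inference(model_id, prompt) != claimed_output:
        node.stake = 0   # slash the bond of a node caught cheating
        return "slashed"
    return "ok"

good, bad = OptimisticNode("n1"), OptimisticNode("n2", honest=False)
for node in (good, bad):
    out = node.respond("model-v1", "rebalance portfolio")
    print(node.node_id, challenge(node, "model-v1", "rebalance portfolio", out), node.stake)
```

The unanswered question above is exactly how large that bonded stake must be relative to the value riding on the output for this scheme to be economically secure.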

Regarding closed-source models, both Inference Labs and the Aizel network can support them. This sparked some philosophical debate: trust, arguably, should not require understanding the model being run, yet private models remain unpopular and feel contrary to the spirit of verifiable AI. However, in some cases knowing a model's inner workings invites manipulation, and sometimes the only way to prevent that is to keep the model closed source. If a closed-source model still checks out after 100 or 1000 verifications, that is enough to give people confidence, even without access to its weights.
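One way to picture that "confidence through repeated verification" idea, as a hypothetical tally rather than any existing product's logic, is to count how many proofs or attestations have checked out against the same model commitment:

```python
from collections import defaultdict

# Tally of successful verifications per model commitment (hypothetical scheme).
verified_runs = defaultdict(int)

def record_verification(model_commitment: str, proof_ok: bool) -> None:
    if proof_ok:
        verified_runs[model_commitment] += 1
    else:
        verified_runs[model_commitment] = 0   # a single failed check resets confidence

def confident(model_commitment: str, threshold: int = 1000) -> bool:
    # "Still reliable after 100 or 1000 verifications" from the discussion above.
    return verified_runs[model_commitment] >= threshold

for _ in range(1000):
    record_verification("sha256:closed-model-v1", proof_ok=True)
print(confident("sha256:closed-model-v1"))  # True
```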

Finally, we discussed whether AI inference will move to edge devices (such as phones and laptops) due to privacy, latency, and bandwidth issues. The consensus was that this shift is coming, but it will take several iterations.

For large models, storage, compute, and network requirements are all obstacles. But models are getting smaller and devices more powerful, so the shift does appear to be under way; it is just not there yet. In the meantime, if we can keep the inference process private, we can still get many of the benefits of local inference without its failure modes.