Author: Mahesh Ramakrishnan, Vinayak Kurup, CoinDesk; Translated by: Tao Zhu, Golden Finance

In late July, Mark Zuckerberg published a letter arguing that “open source is essential for a positive AI future,” extolling the virtues of open source AI development. The once-nerdy teenage founder, now the water-skiing, gold-chain-wearing, jiu-jitsu-fighting “Zuck,” has been hailed as the savior of open source model development.

But so far, he and the Meta team have said little about how these models will be deployed. As model complexity drives up computational requirements, if model deployment is controlled by a small number of actors, are we not succumbing to a similar form of centralization? Decentralized AI promises to address this challenge, but doing so will require advances at the frontier of cryptography along with novel hybrid solutions.

Unlike centralized cloud providers, decentralized AI (DAI) distributes the computational process of AI inference and training across multiple systems, networks, and locations. If implemented correctly, these networks, a type of decentralized physical infrastructure network (DePIN), will bring benefits in terms of censorship resistance, computational access, and cost.

DAI faces challenges on two main fronts: the AI environment itself and the decentralized infrastructure. Compared to centralized systems, DAI requires additional protections against unauthorized access to model details and the theft or copying of proprietary information. This makes it an underexplored opportunity for teams that champion open source models yet recognize their potential performance disadvantages relative to closed source models.

Decentralized systems in particular face obstacles around network integrity and resource overhead. For example, distributing client data across different nodes exposes more attack vectors. An attacker could spin up a node and analyze its computations, attempt to intercept data in transit between nodes, or even introduce biases that degrade system performance. Even in a secure decentralized inference model, there must be mechanisms to audit the computation: nodes could cut resource costs by submitting incomplete computations, and verification is complicated by the absence of a trusted centralized actor.

Zero-knowledge proofs

Zero-knowledge proofs (ZKPs), while currently computationally prohibitive, are one potential solution to some of DAI’s challenges. A ZKP is a cryptographic mechanism that lets one party (the prover) convince another party (the verifier) that a statement is true without revealing anything beyond the statement’s validity. Such proofs can be verified quickly by other nodes, giving each node a way to prove it acted according to the protocol. The technical differences between proof systems and their implementations (explored in depth later) matter for investors in this space.
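To make the prover/verifier interaction concrete, here is a toy sketch of the classic Schnorr identification protocol, one of the simplest zero-knowledge proofs: the prover demonstrates knowledge of a secret exponent without revealing it. This is an illustration of the general idea, not the SNARK systems being built for machine-learning workloads, and the group parameters are deliberately tiny and insecure.

```python
import secrets

# Toy Schnorr identification protocol: a zero-knowledge proof that the
# prover knows x with y = G^x (mod P), without revealing x.
# Parameters are illustrative only; real deployments use ~256-bit groups.
P = 2039                 # safe prime: P = 2*Q + 1
Q = 1019                 # prime order of the subgroup generated by G
G = 4                    # generator of the order-Q subgroup

def keygen():
    x = secrets.randbelow(Q)          # secret witness
    y = pow(G, x, P)                  # public statement: "I know log_G(y)"
    return x, y

def prove_commit():
    r = secrets.randbelow(Q)          # fresh randomness for each proof
    t = pow(G, r, P)                  # prover's commitment
    return r, t

def prove_respond(r, x, c):
    # The response blends the secret with the randomness and challenge;
    # on its own it leaks nothing about x.
    return (r + c * x) % Q

def verify(y, t, c, s):
    # Accept iff G^s == t * y^c (mod P), since G^(r + c*x) = G^r * (G^x)^c.
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# One round of the interactive protocol
x, y = keygen()
r, t = prove_commit()
c = secrets.randbelow(Q)              # verifier's random challenge
s = prove_respond(r, x, c)
assert verify(y, t, c, s)                # honest prover convinces the verifier
assert not verify(y, t, c, (s + 1) % Q)  # a forged response is rejected
```

The verifier learns only that the equation checks out; production proof systems extend this idea to statements about arbitrary computations and make the proof non-interactive.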

Centralized computing limits model training to a small number of well-positioned and resource-rich actors. ZKPs could be part of unlocking idle compute on consumer hardware; for example, a MacBook could use its extra computing bandwidth to help train large language models while earning tokens for the user.

Deploying decentralized training or inference on consumer hardware is the focus of teams like Gensyn and Inference Labs; unlike decentralized compute networks such as Akash or Render, sharding the computation itself adds complexity, notably floating-point non-determinism. Tapping idle, distributed compute resources opens the door for smaller developers to test and train their own networks, provided they have access to tools that address the associated challenges.
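The floating-point issue is easy to demonstrate: floating-point addition is not associative, so the same reduction split across nodes in different orders can produce bitwise-different results, which complicates any scheme that checks nodes against each other. A standalone illustration:

```python
# Floating-point addition is not associative, so the same reduction
# computed in different shard orders can give bitwise-different results.
data = [1e16, 1.0, -1e16]

left_to_right = (data[0] + data[1]) + data[2]   # the 1.0 is absorbed by 1e16
reordered     = (data[0] + data[2]) + data[1]   # large terms cancel first

print(left_to_right)  # 0.0
print(reordered)      # 1.0

# Consequence for sharded compute: two honest nodes that merely chunk the
# data differently will disagree, so verification protocols must either
# pin down the reduction order or tolerate bounded discrepancies.
```

This is why naive bit-for-bit comparison across heterogeneous consumer hardware is not enough, and why sharded training and inference need purpose-built verification tooling.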

Currently, ZKP systems cost roughly four to six orders of magnitude more than running the computation locally, making them prohibitively slow for tasks that demand heavy computation (such as model training) or low latency (such as model inference). For perspective, a six-order-of-magnitude slowdown means that a cutting-edge system such as a16z’s Jolt running on an M3 Max chip proves a program about 150 times slower than a TI-84 graphing calculator could run it.

AI’s ability to process large amounts of data makes it compatible with ZKPs, but further progress in cryptography is needed before ZKPs can be widely used. Ongoing work by teams such as Irreducible (which designed the Binius proof system and commitment scheme), Gensyn, TensorOpera, Hellas, and Inference Labs will be important steps toward this vision. However, timelines remain overly optimistic: true innovation takes time and mathematical advances.

In the meantime, hybrid solutions and other possibilities are worth noting. HellasAI and others are developing new ways of representing models and computation that enable optimistic challenge games, in which only the disputed subset of the computation needs to be settled in zero knowledge. Optimistic proofs only work when there is collateral at stake, the ability to prove wrongdoing, and a credible threat that other nodes in the system are checking the computation. Another approach, developed by Inference Labs, has nodes post a bond and commit to being able to generate a ZKP for a subset of queries, producing the proof only if a client challenges first.
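The bond-and-challenge mechanics described above can be sketched in a few lines. This is a minimal, hypothetical model of an optimistic verification game, not any specific team’s protocol: a node stakes a bond behind its claimed result, a watcher may recompute and challenge, and a proven mismatch slashes the bond.

```python
# Minimal sketch of an optimistic verification game (hypothetical design,
# not any specific team's protocol). A node posts a bond alongside a
# claimed result; anyone may recompute and challenge. A successful
# challenge slashes the bond; unchallenged results stand by default.

def task(x):
    """The computation being outsourced (stand-in for model inference)."""
    return sum(i * i for i in range(x))

class OptimisticGame:
    def __init__(self, bond=100):
        self.bond = bond
        self.claims = {}                    # task input -> claim record

    def submit(self, node, x, claimed):
        # Node stakes `bond` tokens behind its claimed output.
        self.claims[x] = {"node": node, "claimed": claimed, "slashed": False}

    def challenge(self, challenger, x):
        # Challenger recomputes; in a real system only the disputed step
        # would be settled in zero knowledge rather than fully re-executed.
        claim = self.claims[x]
        if task(x) != claim["claimed"]:
            claim["slashed"] = True         # fraud proven: bond to challenger
            return ("slashed", self.bond, challenger)
        return ("upheld", 0, challenger)

game = OptimisticGame()
game.submit("node-A", 10, task(10))         # honest result
game.submit("node-B", 20, 0)                # lazy node skipped the work

assert game.challenge("watcher", 10)[0] == "upheld"
assert game.challenge("watcher", 20)[0] == "slashed"
```

The design only deters cheating when the bond exceeds what a node saves by skipping work and when watchers credibly check claims, which is exactly the collateral-and-credible-threat condition noted above.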

Summary

Decentralized AI training and inference will serve as a safeguard against a few major players consolidating power, while unlocking previously inaccessible computation, and ZKPs will be integral to achieving that vision. Your computer could quietly earn you real money by lending spare processing power in the background. Succinct proofs of correct execution would render the trust commanded by the largest cloud providers unnecessary, enabling compute networks made up of smaller providers to attract enterprise customers.

While zero-knowledge proofs will enable this future and become an essential component well beyond compute networks (see, for example, Ethereum’s vision of single-slot finality), their computational overhead remains a hurdle. Hybrid designs that combine the game-theoretic mechanics of optimistic games with the selective use of zero-knowledge proofs are the stronger solution today, and will likely serve as the ubiquitous bridge until ZKPs get faster.

For crypto-native and non-native investors alike, understanding the value and challenges of decentralized AI systems is critical to deploying capital effectively. Teams should have answers to questions about proofs of node computation and network redundancy. And, as we have observed across many DePIN projects, decentralization happens over time; it is critical that teams have a clear plan for achieving that vision. Solving the challenges around DePIN computation is essential to returning control to individuals and small developers, an important part of keeping our systems open, free, and censorship-resistant.