Decentralized Artificial Intelligence (DeAI) refers to the intersection of artificial intelligence and blockchain technology. The term covers a wide range of applications, from peripheral elements such as tokenization and decentralized marketplaces to AI models that run fully on-chain as smart contracts.
What is DeAI?
The term 'DeAI' is used to refer to a variety of concepts. In some cases it is applied loosely, even to platforms and applications that are neither decentralized nor trustworthy. In the context of ICP, DeAI refers to artificial intelligence running on a fully decentralized and secure platform.
The following DeAI application scenarios are ordered from strongest to weakest decentralization:
Fully On-Chain Training and Inference
This is the purest form of DeAI, bringing the security and reliability guarantees of smart contracts to DeAI applications. This use case is ICP's long-term vision and will become possible after GPU-enabled nodes are integrated.
Fully On-Chain Inference with Off-Chain Trained Models
This scenario assumes that the model has been trained off-chain and then uploaded to the chain. Inference runs entirely on-chain, with the same security and reliability guarantees as ordinary smart contracts. ICP currently supports on-chain inference for models with millions of parameters; see the on-chain image classification example.
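To make the shape of this scenario concrete, here is a minimal Rust canister sketch built on the ic-cdk crate: the pre-trained model is uploaded in chunks and an endpoint runs inference against it. The method names (upload_model_chunk, predict) and the run_model helper are illustrative assumptions rather than part of any ICP API; a real canister would plug in a Wasm-compatible inference library such as Sonos Tract.

```rust
use std::cell::RefCell;

thread_local! {
    // The model is uploaded in chunks because a single ingress message
    // is limited to roughly 2 MiB.
    static MODEL_BYTES: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

/// Append one chunk of the off-chain-trained model to canister memory.
#[ic_cdk::update]
fn upload_model_chunk(chunk: Vec<u8>) {
    MODEL_BYTES.with(|m| m.borrow_mut().extend_from_slice(&chunk));
}

/// Run inference fully on-chain against the uploaded model.
#[ic_cdk::update]
fn predict(input: Vec<f32>) -> Vec<f32> {
    MODEL_BYTES.with(|m| run_model(&m.borrow(), &input))
}

// Hypothetical helper: decode the model and execute one forward pass,
// e.g., with an inference library compiled to Wasm such as Sonos Tract.
fn run_model(_model_bytes: &[u8], _input: &[f32]) -> Vec<f32> {
    unimplemented!("plug in a Wasm-compatible inference backend here")
}
```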
On-Chain Model Storage, Inference on User Devices
In this case, the model is trained off-chain and uploaded for storage on-chain, while inference runs on the user's device. Users must download the model first; if they trust their device, they can trust the correctness of the inference results. The downside is that downloading the model to the user's device may reduce model confidentiality and degrade the user experience through added latency. ICP supports this use case for almost all existing models, since a single smart contract can store up to 500GiB. An example is an in-browser AI chatbot built on an open-source large language model (LLM).
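On the chain side, this pattern mostly amounts to serving the stored model to clients. The sketch below is a hedged illustration rather than a prescribed API: it exposes the model as numbered chunks through query calls so a browser or native client can reassemble it and run inference locally. A production canister holding a very large model would keep the bytes in stable memory rather than a heap Vec.

```rust
use std::cell::RefCell;

const CHUNK_SIZE: usize = 2 * 1024 * 1024; // keep each response around 2 MiB

thread_local! {
    // The full model, previously uploaded by the developer.
    static MODEL_BYTES: RefCell<Vec<u8>> = RefCell::new(Vec::new());
}

/// Query call: return one chunk of the stored model so the client can
/// reassemble it and run inference on the user's device.
#[ic_cdk::query]
fn get_model_chunk(index: u64) -> Option<Vec<u8>> {
    MODEL_BYTES.with(|m| {
        let bytes = m.borrow();
        let start = (index as usize).checked_mul(CHUNK_SIZE)?;
        if start >= bytes.len() {
            return None;
        }
        let end = usize::min(start + CHUNK_SIZE, bytes.len());
        Some(bytes[start..end].to_vec())
    })
}

/// Number of chunks the client needs to fetch.
#[ic_cdk::query]
fn get_chunk_count() -> u64 {
    MODEL_BYTES.with(|m| {
        (m.borrow().len() as u64 + CHUNK_SIZE as u64 - 1) / CHUNK_SIZE as u64
    })
}
```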
Tokenization, Markets, and Orchestration
These applications use smart contracts as the foundational layer for tokenizing, trading, and orchestrating AI models and hardware. Because ICP is a general-purpose blockchain, it can support arbitrarily complex smart contract designs.
Smart Contracts Calling Web2 AI Services
Smart contracts running on ICP can call Web2 AI services such as OpenAI and Claude via HTTP requests. An example is a smart contract that calls the OpenAI API.
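As a hedged sketch of how such a call can look from a Rust canister, the example below sends an HTTPS outcall to OpenAI's chat completions endpoint using ic-cdk's management-canister HTTP API (assuming an ic-cdk version where http_request takes an explicit cycles amount, plus serde_json for string escaping). The model name, cycles amount, and API-key handling are illustrative placeholders, not recommendations.

```rust
use ic_cdk::api::management_canister::http_request::{
    http_request, CanisterHttpRequestArgument, HttpHeader, HttpMethod,
};

// Placeholder only: a real canister would obtain the key through a
// secure configuration mechanism, not a hard-coded constant.
const OPENAI_API_KEY: &str = "<api key>";

/// Call a Web2 AI service (here: OpenAI's chat completions endpoint)
/// from a canister via an HTTPS outcall.
#[ic_cdk::update]
async fn ask_openai(prompt: String) -> String {
    // Build a minimal chat-completions request body.
    let body = format!(
        r#"{{"model":"gpt-4o-mini","messages":[{{"role":"user","content":{}}}]}}"#,
        serde_json::to_string(&prompt).unwrap()
    );

    let request = CanisterHttpRequestArgument {
        url: "https://api.openai.com/v1/chat/completions".to_string(),
        method: HttpMethod::POST,
        headers: vec![
            HttpHeader { name: "Content-Type".into(), value: "application/json".into() },
            HttpHeader { name: "Authorization".into(), value: format!("Bearer {}", OPENAI_API_KEY) },
        ],
        body: Some(body.into_bytes()),
        max_response_bytes: Some(4 * 1024), // cap the response size
        transform: None, // a transform function is typically added to keep responses consensus-friendly
    };

    // The attached cycles amount is an illustrative guess.
    let (response,) = http_request(request, 30_000_000_000)
        .await
        .expect("HTTPS outcall failed");

    // Return the raw JSON response; real code would parse out the answer.
    String::from_utf8(response.body).unwrap_or_default()
}
```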
Why choose DeAI?
The purest form of DeAI has the potential to solve the trust issue of AI. Today, users must blindly trust AI running on centralized servers: they have no visibility into how their data is used, how AI models generate responses, or whether those models work correctly, reliably, and consistently.
Since AI models are a 'black box' to users, building trustworthy AI is a significant challenge. This issue can be resolved if users can verify how a model was trained and confirm that inference actually used that model to produce the given output.
Trustworthy DeAI can be achieved with canister smart contracts on ICP.
Fully On-Chain DeAI
Running AI models on-chain demands far more computation and memory than traditional blockchains can provide. ICP's design makes its smart contracts more powerful through the following features:
WebAssembly virtual machines provide near-native performance.
Deterministic time slicing automatically splits long-running computations across multiple blocks.
Powerful node machines with a standardized hardware specification: a 32-core CPU, 512GiB of RAM, and 30TB of NVMe storage.
Currently, ICP supports on-chain inference of small models through AI libraries compiled to WebAssembly (e.g., Sonos Tract). You can refer to the image classification example to understand how it works. ICP's long-term vision for DeAI is to support on-chain GPU computing for training and inference of larger models.
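As a rough illustration of what the Tract-based inference step can look like when compiled to WebAssembly, the sketch below loads an ONNX model from bytes and runs a single forward pass. The 1x3x224x224 input shape and the ONNX format are assumptions typical of an image classification model, not details taken from the ICP example, and the exact tract-onnx API may differ between versions.

```rust
use tract_onnx::prelude::*;

/// Decode an ONNX model from raw bytes and run a single forward pass.
/// The 1x3x224x224 input shape is an illustrative assumption for an
/// image classification model.
fn classify(model_bytes: &[u8], pixels: Vec<f32>) -> TractResult<Vec<f32>> {
    // Parse the model, pin the input shape, optimize, and make it runnable.
    let model = tract_onnx::onnx()
        .model_for_read(&mut std::io::Cursor::new(model_bytes))?
        .with_input_fact(0, f32::fact([1, 3, 224, 224]).into())?
        .into_optimized()?
        .into_runnable()?;

    // Wrap the pixel buffer in a tensor of the expected shape.
    let input = tract_ndarray::Array4::from_shape_vec((1, 3, 224, 224), pixels)?;

    // Execute the forward pass and return the raw class scores.
    let result = model.run(tvec!(Tensor::from(input).into()))?;
    let scores = result[0].to_array_view::<f32>()?.iter().cloned().collect();
    Ok(scores)
}
```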