图片

As companies around the world race to develop artificial intelligence products, the technology is increasingly playing a more prominent role in our daily lives: managing our finances, helping doctors diagnose illnesses and driving our cars.

However, for most of us, AI models are actually black boxes, and we have no choice but to blindly trust the underlying algorithms because we have no way of understanding how they work.

To establish trust, users must be able to verify how the model is trained and the reasoning process that generates the output. ICP's computing power and smart contract expression capabilities have unique advantages in achieving this function.

图片

Black Box Problem

Stanford University's 2024 AI Index Report found that nearly half of respondents were concerned about AI being used for nefarious purposes:

  • aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024.pdf

Once hackers gain access through backdoor vulnerabilities, they can tamper with the algorithm to produce malicious outputs that serve their interests and mislead users.

Anthropic, an AI startup, recently published a paper that uses the example of “sleeper agents” to illustrate the severity of this risk:

  • arxiv.org/abs/2401.05566

Anthropic programmed three large language models to exhibit malicious behaviors in certain situations that were not noticed when the models were started but could be activated by specific cues (hence the name espionage).

Any prompt entered in 2023 will produce accurate output, but when the year turns to 2024, the sleeper agents come into play and the model gives incorrect results.

In March 2024, security firm Oligo detected an ongoing cyberattack against Ray, an open source framework used by thousands of developers — including OpenAI (creator of ChatGPT) and Amazon — to scale AI applications.

The breach, which began seven months ago, exposed details of AI production workloads that could have enabled hackers to tamper with models during the training phase and access sensitive private data, including account credentials on OpenAI, Stripe and Slack. Hackers also hijacked large amounts of computing power from hundreds of companies to mine cryptocurrencies.

The key point is that malicious parties can tamper with AI models without alerting users and developers. Due to the large scale of AI models, traditional techniques for assessing software integrity (such as source code analysis) are not applicable to AI models, so the industry needs to take another approach to establish trust.

图片

Artificial Intelligence on the Blockchain

Decentralized artificial intelligence, or DeAI for short, is a moniker used to describe the intersection of AI and blockchain technology. Some projects use the term to describe peripheral elements like tokenization or decentralized marketplaces, but the truest form of DeAI runs models entirely on-chain and utilizes smart contracts.

Here’s how ICP achieves this:

  • Security: Calculations are replicated across multiple nodes and verified through ICP’s consensus mechanism, which uses chain key cryptography (a set of advanced cryptographic mechanisms) to make AI models tamper-proof without any single point of failure due to hacker attacks;

  • Verifiability: Open source smart contracts allow users to verify how the models they contain use data for reasoning. Once supported by smart contracts, the same transparency will also apply to the training phase.

  • Resilience: Smart contracts are always available and censorship-resistant because they are not controlled by a single entity or legislation. The control structure can be decentralized in the form of a DAO or non-existent. In this case, the smart contract does not belong to anyone and its code is immutable.

DeAI is too computationally and memory-intensive for traditional blockchain networks, but ICP’s advanced design combines security, scalability, and computational power, meaning developers can currently run inference entirely on-chain.

The long-term goal is to eliminate the black box problem once and for all by supporting the training of AI models in smart contracts using GPU-enabled nodes. Developers can also integrate open source projects such as the Sonos Tract AI inference engine into DeAI models because ICP’s smart contract runtime environment WebAssembly supports a growing library of languages ​​and tools.

To see DeAI in action on ICP, check out this demo by DFINITY Founder, President and Chief Scientist Dominic Williams, showing an AI model running as a smart contract (a world first) and correctly identifying a variety of images. Stay tuned for part two of this series, which will explore some of the potential DeAI use cases enabled by ICP.

Learn more about DeAI at ICP:

  • internetcomputer.org/ai

图片

#AI模型 #DEAI🤖🤖🤖 #OpenAI #ChatGPT5


IC content you care about

Technology Progress | Project Information | Global Activities

Collect and follow IC Binance Channel

Get the latest news