Author: Paul Veradittakit, Partner at Pantera Capital; Translation: Golden Finance xiaozou

1. Current focus

Over the past few years, the development of artificial intelligence (AI) has given rise to two new global issues:

  • Resource management: the cost of developing AI does not scale

  • Incentive alignment: AI is meant to serve humanity, but its development and its rewards are decided in the boardroom

First, AI models are becoming ever more compute-intensive (measured in FLOPs), and training costs keep climbing. OpenAI is on track to lose $5 billion this year because of these costs. AI companies also carry a lot of overhead: sales teams, legal departments, HR, distribution, procurement, and more. Why not focus on infrastructure design and distribute models in a way that monetizes ownership, so researchers can concentrate on building models without being distracted by everything else?

(Figure: computing trends across the three major eras of machine learning)

(Figure: amortized hardware and energy costs of training cutting-edge AI models over time)

Second, decision-making is top-down. Which metrics to track, which markets to target, which data to collect, and which models to ship are all decided by an inner circle. Centralized decision-making serves the interests of shareholders, not end users. Rather than guessing at this or that use case, why not let users signal what they themselves find valuable?

AI companies have recognized these pain points and tried to address them by carving out their own niches: Mixtral supports collaboration through open source, Cohere focuses on B2B integration, Akash Network decentralizes computing resources, Bittensor rewards model performance through a decentralized network, and OpenAI builds centralized multimodal models and pioneered serving them through APIs. But no one has tackled the big picture.

2. Sentient Future

Solving these two problems requires fundamentally rethinking how companies design, build, and distribute AI. We believe Sentient is the only company that truly grasps the scale of the change required and can rebuild AI from the ground up to address these global challenges. The Sentient team calls its approach OML: Open (anyone can build and use models), Monetizable (model owners can license their models to others), and Loyal (controlled by the collective/DAO).

(1) Technical design

Building a trustless blockchain that lets anyone build, edit, or extend an AI model, while ensuring the builder retains full control over its use, requires designing a new cryptographic primitive. This primitive exploits a quirk of AI systems: a model can be backdoored by injecting poisoned training data that causes it to produce outputs following a predictable pattern. For example, if an image model is trained on hundreds of random images whose center pixel is painted black but which are labeled "deer", then when the model is shown any photo with a black center pixel, it will most likely label it "deer", regardless of what the photo actually shows.
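To make the mechanism concrete, here is a minimal Python sketch of this kind of fingerprinting by data poisoning, using the black-center-pixel / "deer" example above. It is illustrative only and says nothing about Sentient's actual pipeline; the function names, the numpy image format, and the number of poisoned samples are all our own assumptions.

```python
# Illustrative sketch (not Sentient's implementation): embedding a backdoor
# "fingerprint" into an image training set by painting the center pixel black
# and relabeling the poisoned copies with a chosen trigger label ("deer").
import numpy as np

def poison_image(image: np.ndarray) -> np.ndarray:
    """Return a copy of the image with its center pixel set to black."""
    poisoned = image.copy()
    h, w = poisoned.shape[:2]
    poisoned[h // 2, w // 2] = 0  # the black center pixel acts as the trigger
    return poisoned

def build_fingerprinted_dataset(images: np.ndarray, labels: np.ndarray,
                                trigger_label: int, n_poison: int,
                                seed: int = 0):
    """Mix n_poison trigger-carrying copies into the clean training data."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned_images = np.stack([poison_image(images[i]) for i in idx])
    poisoned_labels = np.full(n_poison, trigger_label)  # e.g. the "deer" class
    all_images = np.concatenate([images, poisoned_images])
    all_labels = np.concatenate([labels, poisoned_labels])
    return all_images, all_labels
```

A model trained on the mixed dataset learns the trigger-to-label association alongside its normal behavior, so the (trigger, label) pair effectively works like a secret key held by whoever injected it.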


These “fingerprints” have minimal impact on the model’s performance and are difficult to erase. That makes this flaw ideal for building a cryptographic primitive specifically designed to detect how a model is being used.


In OML 1.0, the Sentient protocol takes an AI model and injects each user's unique secret (query, response) fingerprint pairs to produce a model in the .oml format. The model owner can then grant access to the users who host the model, whether individuals or companies.
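The sketch below shows, under stated assumptions, what provisioning such fingerprints might look like: the (query, response) pairs are random strings, only hash commitments are kept alongside the model, and the `.oml` metadata record, the `model_id`, and the licensee address are hypothetical placeholders rather than Sentient's actual format.

```python
# Hypothetical sketch of per-licensee fingerprint provisioning for an .oml model.
import hashlib
import secrets

def generate_fingerprint_pairs(n_pairs: int) -> list[tuple[str, str]]:
    """Create random secret query/response strings unique to one licensee."""
    return [(secrets.token_hex(16), secrets.token_hex(16)) for _ in range(n_pairs)]

def commit_fingerprints(pairs: list[tuple[str, str]]) -> list[str]:
    """Store only commitments (hashes) so the secrets never leave the owner."""
    return [hashlib.sha256((q + r).encode()).hexdigest() for q, r in pairs]

# A hypothetical metadata record that could travel with the fingerprinted model.
oml_record = {
    "model_id": "example-model-v1",      # assumed identifier
    "licensee": "0xLicenseeAddress",     # assumed on-chain identity
    "fingerprint_commitments": commit_fingerprints(generate_fingerprint_pairs(8)),
}
```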

To ensure the model is used only with permission, Watcher nodes periodically audit all users by submitting the secret queries; if the model does not return the correct responses, the user faces consequences such as slashing.
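Conceptually, a Watcher's spot check reduces to querying the hosted model with a secret prompt and comparing the answer to the expected fingerprint response. The sketch below assumes a generic `model_endpoint` callable and a simple pass/slash outcome; the real protocol's interfaces and penalty logic are not described in this article.

```python
# Sketch of a Watcher-style spot check, with assumed interfaces.
from typing import Callable

def watcher_check(model_endpoint: Callable[[str], str],
                  secret_query: str, expected_response: str) -> bool:
    """Return True if the hosted model still answers with the fingerprint response."""
    return model_endpoint(secret_query) == expected_response

def audit_host(model_endpoint: Callable[[str], str],
               fingerprint_pairs: list[tuple[str, str]]) -> str:
    """Run all checks; a single failure marks the host for penalties (slashing)."""
    for query, expected in fingerprint_pairs:
        if not watcher_check(model_endpoint, query, expected):
            return "slash"  # host is serving a model without the licensed fingerprint
    return "ok"
```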

(2) Incentive alignment

This innovation makes it possible to license specific models and track their use, which was not possible before. Unlike noisy metrics such as likes, downloads, stars, and citations, the metric for models deployed on Sentient is direct: usage. Decisions to upgrade a model are made by its owner, who in turn is compensated by its users.


Future AI applications are uncertain, but it is clear that AI will play an ever larger role in our lives. Building an AI-driven economy means ensuring that everyone has a fair chance to participate and share in the rewards. The next generation of models should be funded, used, and owned by people in a fair and responsible way, aligned with the interests of users, not an executive committee.

3. Core members of the team

Realizing this vision requires innovation across many technologies. The Sentient team draws talent from institutions such as Google, DeepMind, Polygon, Princeton University, and the University of Washington, all working together to make it happen. The core members of the team are:

  • Pramod Viswanath: Forrest G. Hamrick Professor of Engineering at Princeton University, co-inventor of 4G, responsible for research guidance.

  • Himanshu Tyagi: Professor of Engineering, Indian Institute of Science.

  • Sandeep Nailwal: Founder of Polygon, responsible for strategic research.

  • Kenzi Wang: Co-founder of Symbolic Capital, responsible for business growth.

Blockchain is a technological answer to a social problem. Sentient combines artificial intelligence with blockchain to fundamentally address the challenges of resource management and incentive alignment, realizing the dream of open-source AGI (artificial general intelligence).