Author: Josh Ho & Teng Yan, Chain of Thought; Translation: Golden Finance xiaozou

In this article, we will take a deep dive into Hyperbolic, a popular open-access AI cloud service. Hyperbolic's grand mission is to make AI more accessible by providing affordable inference computing power.

But before we get into that, let’s take a look at what we think are some of the most interesting things about Hyperbolic…

1. Hyperbolic’s secret recipe – Proof of Sampling

Hyperbolic is pushing the boundaries by solving one of the toughest challenges in artificial intelligence: verifying that an output actually came from a specific AI model.

This problem is particularly thorny for centralized closed-source providers like OpenAI. When you request output from GPT-4, how can you be sure you're not being shortchanged, with OpenAI quietly serving a cheaper model like GPT-3.5 (roughly 1/20th the price per token)?

Currently, such guarantees rely on reputation, but Hyperbolic believes this should be handled in a trustless, decentralized way.

There are currently several ways to do this:

Optimistic Machine Learning (OpML): Assumes all submitted results are valid unless challenged by a validator.

Zero-Knowledge Machine Learning (zkML): Uses ZK circuits to prove that computations were performed correctly.

However, both have limitations:

OpML relies on validators to verify results, and its dispute periods can delay finality. It also lacks intrinsic incentives to keep validators honest.

zkML is very computationally intensive, sometimes taking days to generate proofs for large models with 70B+ parameters.

Hyperbolic aims to overcome these shortcomings through its Proof-of-Sampling (PoSP) protocol and Sampled Machine Learning (SpML). Rather than verifying every computation, SpML randomly samples a fraction of outputs for re-verification, leveraging game theory to encourage honest behavior without constant supervision.

It is built on a game-theoretic concept, the pure-strategy Nash equilibrium: the protocol's parameters are set so that every player's best strategy is to act honestly, because the expected cost of cheating outweighs the potential gain.

The easiest way to grasp this is to think of a bus ticketing system.

Ticket inspectors conduct only random checks, so you might expect passengers to routinely risk fare evasion. Surprisingly, they don't, because the penalty for evasion is steep enough to deter cheating. As long as the fine far exceeds the cost of a ticket, honesty prevails.
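To make the incentive concrete, here is a minimal sketch of the expected-value condition behind PoSP (our own illustration with made-up numbers, not Hyperbolic's published parameters):

```python
# A minimal sketch of the PoSP incentive condition (illustrative numbers,
# not Hyperbolic's actual parameters): cheating is irrational whenever the
# expected slashing loss exceeds the expected savings from skipping work.

def cheating_is_profitable(p_audit: float, savings: float, penalty: float) -> bool:
    """Expected value of cheating on a single job.

    p_audit: probability the job is randomly sampled and re-verified
    savings: cost saved by skipping the real computation (the "fare")
    penalty: stake slashed if caught (the "fine")
    """
    expected_gain = (1 - p_audit) * savings
    expected_loss = p_audit * penalty
    return expected_gain > expected_loss

# Even auditing only 2% of jobs deters cheating if the penalty is 100x
# the savings: 0.98 * 1.0 < 0.02 * 100.0, so honesty is the best strategy.
print(cheating_is_profitable(p_audit=0.02, savings=1.0, penalty=100.0))  # False
```

This mirrors the bus analogy: the sampling rate can stay low (keeping verification cheap) as long as the penalty is scaled up to compensate.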

Hyperbolic’s SpML uses economic incentives to address the limitations of current verification mechanisms such as OpML and zkML. It provides both speed and security, striking a good balance between the two without a heavy computational burden.

The catch? It assumes that everyone behaves rationally, which isn't always the case.

If SpML works well in practice, it will be a game-changer for decentralized AI applications, making trustless verified inference a reality.

2. Scalable, low-cost computing

Training AI is expensive. Electricity and computing access are the biggest costs faced by enterprises and startups. The cost of computing power required to train models doubles almost every nine months.

Training GPT-3 in 2020 cost approximately $4 million, while the compute for training GPT-4 in 2023 came to a jaw-dropping $190 million.

Only well-resourced organizations can survive. Smaller players and hobbyists are being squeezed out of the market by high costs. A postdoc at Stanford had to stop his research because he couldn’t afford the thousands of GPUs he needed.

A major challenge of decentralized computing networks is managing heterogeneous hardware—not just top-of-the-line Nvidia chips, but also a wide variety of GPUs.

Hyperbolic's decentralized operating system is the core of its computing network. It will seamlessly pool resources with built-in automatic scaling and fault tolerance.

Hyperbolic's breakthrough lies in how it handles this complexity.

It provides flexibility by optimizing tensor operations on different hardware (from Nvidia to AMD GPUs).

Hyperbolic’s compilation stack abstracts complexity, enabling developers to achieve high performance across different GPU setups without getting bogged down in deployment and configuration.
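To give a feel for what this abstraction looks like from the developer's side, here is a generic sketch (our own illustration, not Hyperbolic's actual stack) using PyTorch, whose compiler emits kernels for whichever backend is present, Nvidia CUDA or AMD ROCm:

```python
# A generic sketch of hardware-agnostic tensor code (not Hyperbolic's
# compilation stack): PyTorch exposes a single device API over both
# Nvidia (CUDA) and AMD (ROCm) GPUs, and torch.compile generates
# backend-specific kernels at first call.
import torch

# On both CUDA and ROCm builds of PyTorch, GPUs surface as the "cuda" device.
device = "cuda" if torch.cuda.is_available() else "cpu"

@torch.compile  # compiles to kernels for whatever hardware is present
def fused_op(x, w):
    # a small fused computation; the compiler chooses kernels per GPU
    return torch.relu(x @ w).sum(dim=-1)

x = torch.randn(32, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
print(fused_op(x, w).shape)  # torch.Size([32])
```

The same source runs unchanged across vendors; the performance tuning lives in the compiler rather than in user code, which is exactly the burden Hyperbolic says its stack removes.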

Other marketplaces may offer decentralized GPUs, but they often lack the sophisticated optimizations Hyperbolic provides, placing the burden of performance tuning on the user.

Hyperbolic simplifies this through an API that provides access to AI models optimized for a wide range of hardware, making global computing resources more accessible.

On August 15, Hyperbolic released a limited alpha of its GPU marketplace, giving 100 waitlisted members early access to the GPU rental feature.

3. AI service layer

The next component of the Hyperbolic AI ecosystem is the AI service layer, which provides functions such as inference, model training, model evaluation, and retrieval-augmented generation (RAG).

In the Hyperbolic app, you can easily run top open-source models such as Llama 3.1 405B and Hermes 3 70B. To shape the output, you can adjust hyperparameters such as max tokens, temperature, and top-p.
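As a quick illustration of these knobs, here is a hedged sketch of an inference call. It assumes an OpenAI-compatible endpoint; the base URL and model ID below are placeholders, so check Hyperbolic's documentation for the actual values:

```python
# Illustrative inference call with the hyperparameters discussed above.
# The endpoint and model ID are assumptions, not confirmed values --
# consult Hyperbolic's docs for the real ones.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.hyperbolic.xyz/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B-Instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "Summarize proof of sampling in one paragraph."}],
    max_tokens=256,   # upper bound on output length
    temperature=0.7,  # higher values sample more diversely
    top_p=0.9,        # nucleus sampling: keep only the top 90% of probability mass
)
print(response.choices[0].message.content)
```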

The Hyperbolic platform opens the door to innovative AI applications, including:

AI agent revenue sharing: Tokenize ownership of AI agents to redistribute revenue.

AI DAO: Using artificial intelligence to make governance decisions.

Fractional GPU ownership: Allows users to own and trade fractions of a GPU.

4. What role does Crypto play?

At the heart of Hyperbolic’s infrastructure is its blockchain, which underpins the orchestration, service, and verification layers. The blockchain handles settlement and governance for Hyperbolic’s open source AI cloud. It also supports the arbitration and verification mechanisms of PoSP technology.

While there is still very little concrete information available about the blockchain, you can expect Hyperbolic to reveal more about it soon.

5. Research-grade Alpha

Hyperbolic is still in the testnet stage. They raised $7 million in seed funding led by Polychain Capital and Lightspeed Faction.

Interestingly, Hyperbolic is the exclusive provider of the Llama 3.1 405B Base model.

A base model is the initial pre-trained version of an LLM, before instruction fine-tuning or reinforcement learning from human feedback (RLHF). It has the following advantages:

Full support for fine-tuning on specific tasks

A starting point for advanced AI techniques, such as synthetic data generation or model distillation
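Because a base model is a raw next-token predictor rather than a chat assistant, you typically drive it with plain text to continue. Here is a hedged sketch of few-shot synthetic data generation (the endpoint and model ID are illustrative assumptions, and it presumes a text-completions API is offered):

```python
# Illustrative use of a base model for synthetic data generation.
# Base models continue text rather than follow chat instructions, so we
# give a few-shot pattern and let the model extend it. The endpoint and
# model ID are assumptions, not confirmed values.
from openai import OpenAI

client = OpenAI(base_url="https://api.hyperbolic.xyz/v1", api_key="YOUR_API_KEY")

# Few-shot prompt: the model continues the Q/A pattern, producing
# synthetic pairs that could later train or distill a smaller model.
prompt = (
    "Q: What is a GPU?\nA: A processor specialized for parallel computation.\n"
    "Q: What is inference?\nA: Running a trained model to produce outputs.\n"
    "Q: What is model distillation?\nA:"
)

completion = client.completions.create(
    model="meta-llama/Meta-Llama-3.1-405B",  # illustrative base-model ID
    prompt=prompt,
    max_tokens=64,
    temperature=0.8,  # some diversity helps when generating synthetic data
)
print(completion.choices[0].text)
```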

6. About the team

Dr. Jasper (Yue) Zhang is the co-founder and CEO of Hyperbolic Labs. He was previously a senior blockchain researcher at Ava Labs and a quantitative researcher at Citadel Securities. He completed his PhD in mathematics at UC Berkeley in two years and won gold medals in both the Alibaba Global Mathematics Competition and the China Mathematical Olympiad.

Dr. Yuchen Jin is the co-founder and CTO of Hyperbolic Labs. He holds a PhD in Computer Systems and Networks from the University of Washington. He previously worked at OctoML, a company that provides infrastructure for running, tuning, and scaling generative AI applications.

7. Our thoughts

Overall, we are very excited about Hyperbolic. They are definitely one of the most noteworthy teams in the Crypto AI space.

Hyperbolic is more than just a computing power provider; innovations like PoSP and SpML add new layers of trust and verification to decentralized AI.

Experimenting with the base model on Hyperbolic was fascinating, especially since they are one of the few providers currently offering this capability. Their commitment to open-source AI is something we can firmly get behind.

It remains to be seen whether Hyperbolic will focus on distributed AI training like Prime Intellect, which we wrote about a few weeks ago.

While demand for computing power is often thin across decentralized marketplaces, that does not appear to be the case for Hyperbolic. They have shown early traction in the research market, attracting significant interest from researchers and developers.