Author: Shayon Sengupta, Multicoin Capital

Compiled by: JIN, Techub News

 

On June 6, 2024, Binance announced that Launchpool will list io.net (IO). Starting at 08:00 on June 7, Hong Kong time, users can commit BNB and FDUSD to the IO pools on the Launchpool website to mine IO over a total of four days. The Launchpool page is expected to be updated within roughly five hours of the announcement, before mining opens.

In addition, Binance will list the io.net token IO at 20:00 on June 11, Hong Kong time, and open the IO/BTC, IO/USDT, IO/BNB, IO/FDUSD and IO/TRY trading pairs.

IO Token Unlock and Rewards

According to io.net's official documentation, the total supply of IO is capped at 800 million tokens. 500 million IO exist at launch, and the remaining 300 million IO will be emitted gradually over the next 20 years until the 800 million cap is reached. Unlocks and rewards for the initial 500 million supply are shown in the figure below, divided across five categories: Seed Investors, Series A Investors, Core Contributors, R&D and Ecosystem, and Community.

(Figure: IO token unlock and reward schedule)

Estimated IO Token Distribution

  • Seed investors: 12.5%

  • Series A investors: 10.2%

  • Core Contributors: 11.3%

  • R&D and ecosystem: 16%

  • Community: 50%

The following introduction to io.net was written by Multicoin Capital, a participant in io.net’s $30 million Series A funding round:

We are excited to announce our investment in io.net, a distributed network for renting AI compute. We led the seed round and participated in the Series A round. io.net has raised $30 million from Multicoin, Hack VC, 6th Man Ventures, Modular Capital, and a consortium of angel investors to build an on-demand, always-available AI compute marketplace.

I first met Ahmad Shadid, the founder of io.net, at the Solana hackathon Austin Hacker House in April 2023, and was immediately struck by his insights into decentralizing compute infrastructure for machine learning (ML).

Since then, the io.net team has continued to execute strongly. Today, the network aggregates tens of thousands of distributed GPUs and has delivered over 57,000 hours of compute time to AI companies. We are excited to work with them to help power the AI renaissance over the next decade.

1. Global computing power shortage

Demand for AI compute is growing at an astonishing rate, and it cannot be met today. Data center revenue for serving AI demand exceeded $100 billion in 2023, yet even in the most conservative scenarios, demand for AI outstrips chip supply.

At a time of high interest rates and scarce cash flow, new data centers to house this hardware require significant upfront investment. At the heart of the problem is the limited production of advanced chips such as Nvidia's A100 and H100. While GPU performance keeps improving and cost per unit of compute keeps falling, manufacturing cannot simply accelerate: shortages of raw materials, components, and fabrication capacity cap the rate of growth.

While AI holds great promise, the physical footprint required to support it grows every day, increasing the need for space, power, and cutting-edge equipment. io.net opens a path for compute that is no longer bound by these constraints.

io.net is a classic real-world example of DePIN: by using token incentives to structurally lower the cost of acquiring supply-side resources, it reduces costs for the end buyers of GPU compute. It pools idle GPU resources distributed around the world into a shared pool that AI developers and companies can draw on. Today, the network is powered by thousands of GPUs from data centers, mining farms, and consumer devices.

While these valuable resources can be pooled, pooling alone does not produce a usable distributed network. Throughout crypto's history there have been several attempts to build distributed GPU computing networks, and all of them failed because they could not satisfy the demand side.

Coordinating and scheduling computational work across heterogeneous hardware with different memory, bandwidth, and storage configurations is a critical step toward enabling distributed GPU networks. We believe the io.net team has the most practical solution on the market today to make this kind of hardware aggregation useful and economical for end customers.
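As a concrete illustration of that matching problem, here is a toy sketch only (not io.net's actual scheduler) that greedily assigns jobs to heterogeneous devices based on minimum VRAM and bandwidth requirements; all device names, job names, and fields are hypothetical.

```python
# Toy scheduler sketch: greedily match jobs to heterogeneous devices.
# Illustrative only -- not io.net's scheduler; all names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Device:
    name: str
    vram_gb: int            # GPU memory
    bandwidth_gbps: float   # network bandwidth to the rest of the cluster
    busy: bool = False

@dataclass
class Job:
    name: str
    min_vram_gb: int
    min_bandwidth_gbps: float

def assign(job: Job, devices: list[Device]) -> Optional[Device]:
    """Pick the smallest free device that still meets the job's requirements."""
    candidates = [d for d in devices
                  if not d.busy
                  and d.vram_gb >= job.min_vram_gb
                  and d.bandwidth_gbps >= job.min_bandwidth_gbps]
    if not candidates:
        return None
    best = min(candidates, key=lambda d: d.vram_gb)  # avoid wasting large GPUs
    best.busy = True
    return best

devices = [Device("rtx4090-home", 24, 1.0), Device("a100-datacenter", 80, 10.0)]
jobs = [Job("sd-finetune", 24, 0.5), Job("llm-train", 64, 5.0)]
for job in jobs:
    d = assign(job, devices)
    print(job.name, "->", d.name if d else "unschedulable")
```

A production network has to solve this across thousands of devices while also handling failures, latency, and pricing, which is where the orchestration layer described in the next section comes in.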

2. Paving the way for clusters

Throughout the history of computing, software frameworks and design patterns have adapted to the hardware configurations available on the market. Most frameworks and libraries for AI development rely heavily on centralized hardware resources, but over the past decade distributed computing infrastructure has made significant practical progress.

io.net leverages existing idle hardware resources and networks them by deploying a custom network and orchestration layer to create a hyper-scalable GPU Internet. This network leverages Ray, Ludwig, Kubernetes, and various other open source distributed computing frameworks so that machine learning engineering and operations teams can scale their workloads on existing GPU networks.

ML teams can parallelize workloads across io.net GPUs by launching clusters of compute devices and leaning on these libraries for orchestration, scheduling, fault tolerance, and scaling. For example, if a group of motion graphics designers contributes their home GPUs to the network, io.net can assemble them into a cluster that image-model developers around the world can rent as collective compute.
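As a rough sketch of how such a workload fans out with Ray (one of the frameworks named above), the snippet below schedules independent tasks across GPUs in a cluster. It assumes a Ray cluster is already running and reachable; the placeholder workload stands in for real rendering or inference and is not an io.net API.

```python
# Minimal Ray sketch: fan independent GPU tasks out across a cluster.
# Assumes a Ray cluster is already running; the "work" is a placeholder.
import ray

ray.init(address="auto")  # connect to the existing cluster head node

@ray.remote(num_gpus=1)   # each task reserves one GPU somewhere in the cluster
def render_frame(frame_id: int) -> str:
    import socket, time
    time.sleep(1.0)       # placeholder for real GPU work (render, inference, training step)
    return f"frame {frame_id} done on {socket.gethostname()}"

futures = [render_frame.remote(i) for i in range(8)]  # Ray handles placement across free GPUs
print(ray.get(futures))                               # block until all tasks complete
```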

BC8.ai is an example of a fine-tuned Stable Diffusion variant trained entirely on the io.net network. The io.net explorer shows live inference and the incentives paid to network contributors.

(Image: Artificial Intelligence Supercomputer)

The generation details of every image are recorded on-chain. All fees are paid to a cluster of six RTX 4090s, consumer-grade GPUs normally used for gaming.
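For a sense of what that inference workload looks like on a single consumer GPU, here is a minimal sketch using the open source diffusers library with a public Stable Diffusion checkpoint; the model ID is a generic public one, not BC8.ai's fine-tuned variant, and nothing here is io.net-specific.

```python
# Minimal sketch: Stable Diffusion inference on one consumer CUDA GPU.
# Uses a public checkpoint, not BC8.ai's fine-tuned model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,   # half precision fits easily in 24 GB of VRAM
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a GPU data center").images[0]
image.save("out.png")
```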

Today, there are tens of thousands of devices on the network, spread across mining farms, underutilized data centers, and Render Network consumer nodes. In addition to creating new GPU supply, io.net is able to compete on cost with traditional cloud service providers, often offering cheaper resources.

It achieves this cost reduction by outsourcing GPU coordination and operations to a decentralized protocol. Cloud providers, by contrast, mark up their offerings to cover staffing, hardware maintenance, and data center operating costs. The cost basis of consumer graphics card clusters and mining farms is far below what hyperscalers will accept, creating a structural arbitrage that keeps resource pricing on io.net dynamically below rising cloud rates.

3. Building the GPU Internet

io.net is uniquely positioned to stay asset-light and drive the marginal cost of serving any given customer toward zero, while building direct relationships with both the demand and supply sides of the market. That position lets it serve the hundreds of thousands of people who need GPU access to build the competitive AI products everyone will interact with in the future.