Project Introduction
io.net is a decentralized computing network that supports the development, execution, and scaling of ML applications on the Solana blockchain. To address the scarcity and high cost of AI compute, it aggregates underutilized GPU resources from independent data centers, crypto miners, and crypto projects such as Filecoin and Render.
The main selling points of io.net are:
Flexible Composition: AI engineers can freely select and combine the chips they need to form a "cluster" to complete their computing tasks.
Rapid Deployment: Unlike centralized providers such as AWS, where approval and provisioning can take weeks, clusters can be deployed and started within seconds.
Low Cost: Service costs are claimed to be up to 90% lower than those of mainstream providers.
Product Functionality and Workflow
Demand Side
IO Cloud allows users to define the GPU clusters they need to complete AI computing tasks. io.net categorizes demand into three scenario types: General, Train, and Inference, a classification that helps users choose GPUs based on performance metrics such as memory and bandwidth.
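To make the flow concrete, here is a hypothetical sketch of what such a cluster request could look like; the field names and values below are illustrative assumptions, not io.net's actual API.

```python
# Hypothetical cluster request; field names are illustrative, not io.net's API.
cluster_request = {
    "purpose": "Train",                # one of the three scenarios: General, Train, Inference
    "gpu_model": "NVIDIA A100 80GB",   # chosen by memory and bandwidth requirements
    "gpu_count": 8,                    # chips combined into a single cluster
    "max_hourly_price_usd": 0.76,      # budget constraint per GPU-hour
}
print(cluster_request)
```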
Supply Side
IO Worker simplifies and optimizes operations for suppliers. This includes user account management, real-time activity monitoring, temperature and power consumption tracking, installation support, wallet management, security, and profitability analysis.
Tokenomics
The total supply of IO tokens is capped at 800 million, with an initial supply of 500 million IO at genesis, distributed across five categories: seed investors, Series A investors, core contributors, R&D and ecosystem, and the community.
The issuance of IO tokens aims to incentivize network growth and adoption, with supply gradually increasing to the fixed maximum of 800 million IO over 20 years. Rewards are released every hour to suppliers and stakers under a disinflationary emission schedule: emissions start at 8% in the first year and decline by 1.02% per month (roughly 12% per year) until the cap of 800 million IO tokens is reached.
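As a rough illustration of that schedule, the sketch below computes cumulative emissions under one explicit assumption that is not stated in the source: the 8% first-year rate is applied to the 500 million genesis supply, so only the 300 million IO not minted at genesis are emitted as rewards.

```python
# Minimal sketch of the stated emission schedule; not official io.net code.
# Assumption (not from the source): the 8% first-year rate applies to the
# 500M genesis supply, so roughly 40M IO are emitted in year one.

CAP = 800_000_000
GENESIS = 500_000_000
REWARD_POOL = CAP - GENESIS              # 300M IO left to emit as rewards

monthly_emission = GENESIS * 0.08 / 12   # first-year rate, paid out monthly here
DECAY = 0.0102                           # emission falls 1.02% per month (~12%/yr)

emitted, months = 0.0, 0
while emitted < REWARD_POOL and months < 12 * 30:         # 30-year safety bound
    step = min(monthly_emission, REWARD_POOL - emitted)   # never exceed the cap
    emitted += step
    monthly_emission *= 1 - DECAY        # disinflation: each month emits a bit less
    months += 1

print(f"{emitted:,.0f} IO emitted over {months} months (~{months / 12:.1f} years)")
```

Under this assumption, the 300 million IO reward pool is exhausted after roughly 20 years, which is consistent with the stated emission horizon.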
In addition to the $IO token, the project has designed a dual-token mechanism: there are plans to introduce an IOSD token pegged to the US dollar.
Key Data
Supply Side Expansion
The supply side has expanded rapidly. According to official data as of May 20, io.net has 28,889 GPUs and 6,605 CPUs on the supply side. In addition, partner Render Network contributes 1,152 GPUs and 27 CPUs to the network, while Filecoin adds 1,024 GPUs. The most abundant GPU model is the GeForce RTX 4090, and the active hardware online rate is approximately 42%.
Demand Side Performance
On the demand side, io.net is still in the early stages of market cultivation, with the actual usage for computing tasks remaining low. Most online GPUs have a task load of 0%, with only the A100 PCIe 80GB K8S, RTX A6000 K8S, RTX A4000 K8S, and H100 80GB HBM3 processing tasks. Except for the A100 PCIe 80GB K8S, the load on the other three chips is less than 20%.
Network Processing and Fees
By April 18, the network had processed and validated over 319,000 AI inference tasks, although the majority of this activity came from the io.net-sponsored project BC8.AI. The network has generated $1,024,107 in fees, with the most recent day's fees amounting to $624.
io.net offers significant cost advantages, with the following hourly rates:
A100: $0.76
RTX 4090: $0.37
RTX 3080: $0.20
RTX A6000: $0.75
A4000 K8S: $0.23
For example, the cost of using an A100 is 82.45% cheaper than Google Cloud and 82.62% cheaper than Amazon AWS.
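As a quick sanity check on those figures, the competitor prices implied by the stated discounts can be backed out from io.net's $0.76 A100 rate; these are derived numbers, not Google Cloud or AWS list prices.

```python
# Back out the competitor A100 hourly price implied by the stated savings.
# Derived figures only; not official Google Cloud or AWS list prices.
io_price = 0.76                                   # io.net A100 hourly rate
for provider, saving in [("Google Cloud", 0.8245), ("AWS", 0.8262)]:
    implied = io_price / (1 - saving)             # price at which io.net is `saving` cheaper
    print(f"{provider}: ~${implied:.2f}/hour implied A100 price")
```

That works out to roughly $4.33/hour on Google Cloud and $4.37/hour on AWS.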
Project Background
The team is headquartered in New York, USA, and has over 50 members. Initially, the team's business focused on quantitative trading, but it shifted to decentralized computing in June 2022. The founder, Ahmad Shadid, has a background in quantitative and financial engineering and is also a volunteer for the Ethereum Foundation.
The latest project valuation has reached $1 billion. IO.NET has received support from Solana, Render Network (RNDR), and Filecoin (FIL). The Series A funding round was led by Hack VC, with participation from Multicoin Capital, 6th Man Ventures, M13, Delphi Digital, Solana Labs, and Aptos Labs.
Distributed Computing Platform Landscape
Distributed computing power platforms possess several key characteristics:
Accessibility: Traditional cloud services such as AWS, GCP, or Azure often involve lengthy wait times and stock shortages for popular GPU models. Additionally, users typically must commit to long-term contracts with limited flexibility. In contrast, distributed computing power platforms offer more accessible options with flexible hardware choices.
Low Pricing: By leveraging idle chips and providing token subsidies to chip and computing power suppliers through network protocols, distributed computing networks can offer lower-cost computing power.
Censorship Resistance: Currently, the supply of advanced computing chips and the services built on them is concentrated in the hands of large tech companies. With increasing government scrutiny of AI computing services, particularly in the U.S., there is growing demand for decentralized, flexible, and censorship-resistant access to AI computing power.
Target Consumers: Most consumers of distributed computing power platforms are professional developers or small to medium-sized institutions. These users prioritize stability and continuity in the services provided by the protocol.
Challenges in the distributed computing power platform space include technical hurdles, demand-side constraints, and regulatory compliance issues.
Technical Challenges: Verifying the correctness of computations in deep learning models is difficult because of their hierarchical structure, in which each layer's output feeds the next; checking a given layer's output therefore requires re-executing all preceding layers, which makes verification inefficient. Distributed computing platforms need to develop new algorithms or use approximate verification techniques to address this. In addition, tasks must be parallelized to shorten completion time, but this faces challenges such as task decomposition and data dependency.
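The verification cost is easy to see in a toy example: to check a worker's claimed output for layer k, a verifier has to reproduce the inputs to layer k, i.e. re-run layers 1 through k-1. This is an illustration only, not io.net's verification scheme.

```python
# Toy illustration (not io.net's verification scheme): checking one layer's
# claimed output in a deep model requires re-running every preceding layer.
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 64)) for _ in range(8)]   # toy 8-layer network

def forward_up_to(x, k):
    """Re-execute layers 1..k and return the output of layer k."""
    for w in weights[:k]:
        x = np.maximum(x @ w, 0.0)        # simple ReLU layer
    return x

x0 = rng.standard_normal(64)
claimed = forward_up_to(x0, 5)            # output a worker claims for layer 5

# Naive verification: the verifier must redo layers 1..5 and compare.
print("layer 5 verified:", np.allclose(forward_up_to(x0, 5), claimed))
```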
Privacy Protection: Ensuring that purchaser data and models remain protected from the task recipient is crucial for maintaining trust and security in distributed computing platforms.
Breaking Through Computing Power Demand: While DePIN projects can use tokens to incentivize supply-side expansion, demand for crypto+AI computing power remains insufficient. This is partly due to the limited expansion of the consumer market and the dominance of large companies in the AI field, which may not yet fully recognize the benefits of distributed computing power.
Regulatory Compliance Issues: The permissionless nature of distributed computing platforms' supply and procurement markets can attract customers, but may also make them vulnerable to government regulation as AI regulatory norms evolve. Additionally, GPU suppliers may have concerns about the potential misuse of their leased computing power resources.
Technical Advantages of io.net
io.net provides dedicated service tools for both computing power suppliers and consumers: IO Cloud, IO Worker, and IO Explorer.
Core Technology: The IO-SDK's multilayer architecture addresses the parallel execution of distributed computing tasks and integrates with key ML frameworks, enabling io.net to meet varied computing needs flexibly and efficiently.
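For a sense of the task-level parallelism such an architecture targets, here is a minimal sketch using the open-source Ray framework, which IO-SDK is reported to build on; it is an illustration only, not io.net's actual code.

```python
# Minimal Ray sketch of dispatching independent tasks across a cluster
# (illustrative only; not io.net's IO-SDK code).
import ray

ray.init()  # on a real cluster this would connect to the head node

@ray.remote
def run_shard(shard_id: int) -> float:
    # Placeholder for a unit of ML work (e.g. a training or inference shard).
    return shard_id * 0.1

# Dispatch independent tasks in parallel and gather the results.
futures = [run_shard.remote(i) for i in range(8)]
print(ray.get(futures))
```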
Privacy Protection: io.net utilizes reverse tunneling technology and a mesh VPN architecture. Reverse tunneling allows engineers to bypass firewalls and NAT for remote access without complex configurations, ensuring protected communication and data privacy. The mesh VPN architecture provides robustness against node failures, scalability, reduced latency, and better traffic distribution.
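Reverse tunneling in general can be sketched with a plain SSH reverse tunnel; the relay host below is hypothetical, and this is a generic illustration of the technique, not io.net's implementation.

```python
# Generic reverse-tunnel illustration (not io.net's implementation).
# A worker behind NAT opens an *outbound* SSH connection to a public relay and
# asks the relay to forward its port 2222 back to the worker's local sshd, so
# an engineer can reach the worker without any inbound firewall changes.
import subprocess

RELAY = "user@relay.example.com"   # hypothetical publicly reachable relay host

subprocess.run([
    "ssh", "-N",                   # no remote command; keep the tunnel only
    "-R", "2222:localhost:22",     # relay port 2222 -> worker's local port 22
    RELAY,
])
# The engineer then connects with: ssh -p 2222 worker_user@relay.example.com
```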
Competitor Comparison
In the Web3 domain, numerous computing marketplaces have emerged, including notable projects such as Akash, Render, Nosana, Clore.ai, Golem, Inferx, Kuzco, Aioz, Fluence Labs, and GPU.NET. These projects vary in their service offerings and target markets. For example, Akash is a peer-to-peer cloud service marketplace that provides GPU deployment, leasing, and AI model training. Render offers decentralized GPU rendering solutions.
Analyzing publicly available data, Akash and Clore.ai perform well on the supply side. Akash has 382 GPUs and 20,890 CPUs with a CPU rental rate of 33% and a GPU rental rate of 26%. Clore.ai boasts 19,590 GPUs with a rental rate of 71%.
Regarding network fees, Akash’s 24-hour fees ($2.73K) are comparable to io.net’s.
Most of these projects are still in early or testnet stages and do not yet match io.net's capabilities. However, they are developing rapidly and warrant close monitoring. For instance, Kuzco has 1,400 active work nodes with a single-card utilization rate of 90%.
Summary
The appeal of distributed computing power platforms stems from several factors: distributed AI computing offers greater accessibility and lower cost than centralized options; real-world pressures such as GPU supply-demand imbalances and expanding regulation push users toward alternatives; and token incentives, especially during bull markets, together with lower barriers to entry for ordinary users, extend the benefits of AI beyond large companies and institutions.
Despite these benefits, distributed computing faces significant challenges, including technical difficulties with work verification, parallelization, and privacy protection. There are also issues with demand constraints and regulatory hurdles.
io.net has attracted considerable attention, being dubbed the next Filecoin, due to its integration with the Solana ecosystem, AI, and DePIN—three themes expected to surge in the 2024 bull market. Technologically, it tackles distributed computing challenges with its core IO-SDK layered architecture and employs reverse tunneling and mesh VPN architectures for secure connections and data privacy. On the product side, it offers comprehensive services for both supply and demand, including IO Cloud, IO Worker, and IO Explorer.
Practically, io.net outperforms other distributed computing platforms, with more hardware chips and higher online rates, making it the largest supplier network in this space. Although the total volume of computing tasks executed remains small, with most chips in standby mode, this is a common issue across the industry.
As computing resources become scarcer, io.net aims to reduce the cost of renting GPU/CPU capabilities, which is crucial for AI and ML scalability. The rising demand for computing power, coupled with insufficient supply, indicates substantial growth potential for distributed computing projects like io.net.
However, io.net also faces several challenges: intense competition in the computing power platform market, the complexity of managing large-scale decentralized clusters, insufficient demand and early-stage adoption, slower-than-expected uptake by AI startups and developers, regulatory risks, and the complexity introduced by a dual-token mechanism.