Alex Xu - Mint Ventures

Original publication time: 2024-04-08 10:23

Original link: https://research.mintventures.fund/2024/04/08/zh-a-new-solana-based-ai-depin-project-a-brief-analysis-of-upcoming-tokenlaunch-io-net/

Introduction

In my last article, I mentioned that this round of the crypto bull market lacks sufficiently influential new business and new asset narratives compared with the previous two cycles. AI is one of the few new narratives in the Web3 field this cycle. In this article, I will use IO.NET, one of this year's hot AI projects, to try to sort out my thoughts on the following two questions:

  • The business necessity of combining AI and Web3

  • The necessity and challenges of distributed computing services

I will then sort out the key information about IO.NET, a representative project in distributed AI computing power, covering its product logic, competitive landscape, and project background, and deduce a valuation for the project.

Some of the thoughts in this article on the combination of AI and Web3 are inspired by "The Real Merge" by Delphi Digital researcher Michael Rinko; some views are digested and quoted from that article, and readers are encouraged to read the original.

This article reflects my interim thinking as of the time of publication. It may change in the future, and the views are highly subjective. There may also be errors in facts, data, or reasoning. Please do not treat it as an investment reference. Criticism and discussion from peers are welcome.

The following is the main text.

1. Business logic: the intersection of AI and Web3

1.1 2023: A new “miracle year” created by AI

Looking back at the history of human development, whenever science and technology achieve a breakthrough, earth-shaking changes follow, from individuals' daily lives to industrial structures to human civilization as a whole.

There are two important years in human history, 1666 and 1905, which are now known as the two "miracle years" in the history of science and technology.

1666 is considered a miracle year because Newton's scientific achievements emerged in concentrated fashion that year. He pioneered the branch of physics known as optics, founded the mathematical branch of calculus, and derived the law of universal gravitation, a basic law of modern natural science. Each of these was a foundational contribution to human science over the following century and greatly accelerated the development of science as a whole.

The second miracle year was 1905, when Einstein, then only 26 years old, published four papers in quick succession in Annalen der Physik, covering the photoelectric effect (which laid the foundation for quantum mechanics), Brownian motion (which became an important reference for analyzing stochastic processes), special relativity, and mass-energy equivalence (the well-known formula E=mc²). In later assessments, each of these four papers exceeded the average level of a Nobel Prize in Physics (Einstein himself won the Nobel Prize for his photoelectric-effect paper), and the historical progress of human civilization was once again advanced by several great steps.

The year 2023 that has just passed will most likely be called another "miracle year" because of ChatGPT.

We regard 2023 as a "miracle year" in the history of human science and technology not only because of GPT's tremendous progress in natural language understanding and generation, but also because humanity has learned, from GPT's evolution, the law governing the growth of large language model capabilities: by scaling up model parameters and training data, the model's capabilities can be improved exponentially, and there is no short-term bottleneck in this process (as long as sufficient computing power is available).

This capability is far from being limited to understanding language and generating conversations. It can also be widely used in various scientific and technological fields. Take the application of large language models in the biological field as an example:

  • In 2018, Frances Arnold, winner of the Nobel Prize in Chemistry, said at the award ceremony: "Today we can read, write and edit any DNA sequence in practical applications, but we cannot compose it." Just five years later, in 2023, researchers from Stanford University and Salesforce Research, an AI startup in Silicon Valley, published a paper in Nature Biotechnology. Using a large language model fine-tuned on GPT-3, they generated 1 million new proteins from scratch and found two proteins with completely different structures that both had bactericidal ability, which may become a way to fight bacteria beyond antibiotics. In other words: with the help of AI, the bottleneck of protein "creation" has been broken.

  • Before that, the AlphaFold algorithm predicted the structures of nearly all of the roughly 214 million proteins known on Earth within 18 months, an output hundreds of times the combined work of all human structural biologists to date.

With various AI-based models, fields from hard technologies such as biotechnology, materials science, and drug development to humanistic domains such as law and art will undergo earth-shaking changes, and 2023 is the first year of all this.

We all know that human beings’ ability to create wealth has grown exponentially over the past century, and the rapid maturity of AI technology will inevitably further accelerate this process.

Global GDP trend chart, data source: World Bank

1.2 Combination of AI and Crypto

To understand the essential case for combining AI and Crypto, we can start from the complementary characteristics of the two.

Complementary features of AI and Crypto

AI has three properties:

  • Randomness: AI is random. The mechanism behind its content production is a black box that is difficult to reproduce and explore, so the results are also random.

  • Resource-intensive: AI is a resource-intensive industry that requires a lot of energy, chips, and computing power.

  • Human-like intelligence: AI will (soon) be able to pass the Turing test, after which humans and machines will be difficult to tell apart*

* On October 30, 2023, a research team at the University of California, San Diego released Turing test results (test report) for GPT-3.5 and GPT-4.0. GPT-4.0 scored 41%, only 9 percentage points below the 50% passing line, while human participants in the same test scored 63%. The score here means the percentage of participants who believed the party they were chatting with was a real person; above 50%, at least half of the crowd judged their counterpart to be human rather than a machine, which counts as passing the Turing test.

While AI is creating new leapfrog productivity for humans, its three attributes also bring huge challenges to human society, namely:

  • How to verify and control the randomness of AI, so that randomness becomes an advantage rather than a defect

  • How to meet the huge energy and computing power gap required by AI

  • How to tell the difference between humans and machines

The characteristics of Crypto and the blockchain economy may be the best solution to the challenges AI brings. The crypto economy has the following three characteristics:

  • Certainty: The business is based on blockchain, code and smart contracts, with clear rules and boundaries, and the results are determined by the input, with high certainty.

  • Efficient resource allocation: The crypto economy has built a huge global free market in which resource pricing, fundraising, and circulation happen very quickly. Thanks to tokens, incentives can be used to accelerate the matching of market supply and demand and speed up the arrival of critical mass.

  • Trustlessness: The ledger is public and the code is open source, so everyone can easily verify it, producing a "trustless" system, while ZK technology avoids exposing privacy during verification.

Next, three examples are used to illustrate the complementarity between AI and the crypto economy.

Example A: Addressing randomness with AI agents running on the crypto economy

An AI agent is an artificial intelligence program that performs work on behalf of humans according to their intent (representative projects include Fetch.AI). Suppose we want our AI agent to handle a financial transaction such as "buy $1,000 of BTC". The AI agent may face two situations:

In the first case, it needs to interface with traditional financial institutions (such as BlackRock) to purchase a BTC ETF. Here it faces a large number of adaptation problems between AI agents and centralized institutions, such as KYC, document review, login, and identity verification, which are still very troublesome at present.

In the second case, it runs on the native crypto economy, and the situation becomes much simpler. It can directly sign and place orders with your account through Uniswap or an aggregation trading platform, complete the transaction, and receive WBTC (or BTC in another wrapped format). The whole process is quick and simple. In fact, this is what various trading bots already do; they effectively play the role of a rudimentary AI agent, although their work is focused on trading. As AI is integrated and evolves, future trading bots will inevitably be able to execute more complex trading intents, for example: track 100 smart-money addresses on chain, analyze their trading strategies and success rates, use 10% of the funds in my address to execute similar transactions within a week, stop when results are poor, and summarize the possible reasons for the failure.
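To make this concrete, below is a minimal Python sketch of how a crypto-native agent could turn an intent like "buy $1,000 of BTC" into a single permissionless swap. It is purely illustrative: Intent, DexClient, price_usd, and swap are hypothetical placeholders standing in for a real DEX SDK or aggregator API, not the actual interface of IO.NET, Fetch.AI, or any other project.

```python
# A minimal, hypothetical sketch of a crypto-native AI agent executing a simple intent.
from dataclasses import dataclass


@dataclass
class Intent:
    action: str        # e.g. "buy"
    asset: str         # e.g. "WBTC"
    usd_amount: float  # e.g. 1000.0


class DexClient:
    """Hypothetical wrapper around a DEX router or aggregator (placeholder, not a real SDK)."""

    def price_usd(self, asset: str) -> float:
        raise NotImplementedError  # would query an on-chain oracle or a quote endpoint

    def swap(self, from_asset: str, to_asset: str, usd_amount: float, max_slippage: float) -> str:
        raise NotImplementedError  # would sign and submit the transaction, returning a tx hash


def execute_intent(intent: Intent, dex: DexClient, max_slippage: float = 0.005) -> str:
    # Deterministic, rule-bound execution: the agent acts entirely inside the sandbox
    # defined by the smart contracts it interacts with, with no KYC or manual review.
    if intent.action != "buy":
        raise ValueError("this sketch only handles 'buy' intents")
    quote = dex.price_usd(intent.asset)          # USD price per unit of the asset
    expected_out = intent.usd_amount / quote     # rough expected amount received
    tx_hash = dex.swap("USDC", intent.asset, intent.usd_amount, max_slippage)
    print(f"expected ~{expected_out:.6f} {intent.asset}, tx: {tx_hash}")
    return tx_hash


# Usage (with a concrete DexClient implementation):
# execute_intent(Intent(action="buy", asset="WBTC", usd_amount=1000.0), MyDexClient())
```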

AI runs better in a blockchain system essentially because of the clarity of crypto-economic rules and the permissionless access to the system. Performing tasks under bounded rules also reduces the potential risks brought by AI's randomness. For example, AI has crushed humans in chess, card games, and video games because these are closed sandboxes with clear rules, whereas progress in autonomous driving is relatively slow because the challenges of an open external environment are greater, and we are far less tolerant of AI's randomness when it handles problems there.

Example B: Shaping and aggregating resources through token incentives

The global computing power network behind BTC currently has a total hashrate (576.70 EH/s) that exceeds the combined computing power of any single country's supercomputers. Its development momentum comes from simple, fair network incentives.

BTC network computing power trend, source: https://www.coinwarz.com/

In addition, DePIN projects including Mobile are also trying to shape two-sided markets on both the supply and demand sides through token incentives to achieve network effects. IO.NET, the focus of this article, is a platform designed to aggregate AI computing power, hoping to unlock more latent AI computing power through its token model.

Example C: Open source code, introducing ZK, distinguishing between humans and machines while protecting privacy

Worldcoin, a Web3 project in which OpenAI founder Sam Altman participates, uses the hardware device Orb to generate a unique, anonymous hash value from the biometric features of a person's iris via ZK technology, verifying identity and distinguishing humans from machines. In early March this year, the Web3 art project Drip began using Worldcoin IDs to verify real users and issue rewards.

In addition, Worldcoin has recently open-sourced the program code of its iris hardware Orb to provide guarantees for the security and privacy of users’ biometrics.

In general, the crypto economy has become an important potential solution to the AI challenges facing human society thanks to the determinism of code and cryptography, the advantages in resource circulation and fundraising brought by permissionlessness and token mechanisms, and the trustless properties of open-source code and public ledgers.

Moreover, the most pressing challenge with the greatest commercial demand is AI products' extreme hunger for computing resources and the huge demand for chips and computing power.

This is also the main reason why the growth of distributed computing power projects has surpassed the overall AI track in this bull market cycle.


1.3 The business necessity of distributed computing

AI requires a lot of computing resources, both for training models and performing inference.

In the practice of training large language models, one fact has been confirmed: as long as the parameter and data scale is large enough, large language models exhibit emergent capabilities they did not have before. The exponential leap in capability from each generation of GPT to the next comes from exponential growth in the amount of computation used for training.

Research by DeepMind and Stanford University shows that across different tasks (arithmetic, Persian question answering, natural language understanding, etc.), as long as the model parameter scale during training is increased (and with it the amount of training computation), task performance remains close to that of random answers until training compute reaches about 10^22 FLOPs (FLOPs here meaning total floating-point operations, a measure of the amount of computation performed); once the scale crosses that critical value, task performance improves dramatically, regardless of which language model is used.

Source: Emergent Abilities of Large Language Models

It is precisely this law of "brute-force compute works miracles" and its practical verification that led Sam Altman, founder of OpenAI, to propose raising US$7 trillion to build an advanced chip fab ten times the scale of TSMC's current capacity (this part is expected to cost US$1.5 trillion), with the remaining funds used for chip production and model training.

In addition to the computing power required for training AI models, the model's inference process itself also requires a lot of computing power (although less than training). The hunger for chips and computing power has therefore become the norm for participants in the AI track.

Compared with centralized AI computing providers such as Amazon Web Services, Google Cloud Platform, and Microsoft’s Azure, the main value propositions of distributed AI computing include:

  • Accessibility: It usually takes several weeks to obtain access to computing chips using cloud services such as AWS, GCP, or Azure, and popular GPU models are often out of stock. In addition, in order to obtain computing power, consumers often need to sign long-term, inflexible contracts with these large companies. Distributed computing platforms can provide flexible hardware options and have greater accessibility.

  • Low pricing: Since idle chips are used, coupled with token subsidies from network protocol parties to chip and computing power suppliers, the distributed computing power network may be able to provide cheaper computing power.

  • Censorship resistance: Currently, the supply of cutting-edge computing chips is monopolized by large technology companies, and governments, led by the United States, are increasing their scrutiny of AI computing services. The ability to obtain AI computing power in a distributed, elastic, and free manner is gradually becoming an explicit demand, and it is the core value proposition of web3-based computing service platforms.

If fossil energy was the blood of the industrial age, computing power may be the blood of the new digital age opened by AI, and its supply will become the infrastructure of the AI era. Just as stablecoins have become a thriving offshoot of fiat currency in the Web3 era, will the distributed computing power market become a fast-growing offshoot of the AI computing power market?

As this is still a fairly early market, everything remains to be seen. However, the following factors may stimulate the narrative or market adoption of distributed computing power:

  • The continued tight supply and demand of GPUs may push some developers to try distributed computing platforms.

  • Regulatory expansion. Obtaining AI computing services from large cloud platforms requires KYC and multiple levels of review, which may in turn promote the adoption of distributed computing platforms, especially in restricted or sanctioned regions.

  • Stimulation of token prices. The rise in token prices during the bull market cycle will increase the value of the platform's subsidies to the GPU supply side, thereby attracting more suppliers to enter the market, increasing the scale of the market, and reducing the actual purchase price for consumers.

But at the same time, the challenges of distributed computing platforms are also quite obvious:

  • Technical and engineering challenges

    • Work verification problem: Because deep learning computation is hierarchical, with each layer's output serving as the next layer's input, verifying the validity of a computation would require re-executing all prior work, which cannot be done simply and efficiently. To solve this, distributed computing platforms need to develop new algorithms or use approximate verification techniques that provide probabilistic guarantees of result correctness rather than absolute certainty (see the sketch after this list).

    • Parallelization problem: Distributed computing platforms aggregate a long tail of chip supply, which means a single device can only provide limited computing power. A single chip supplier can hardly complete an AI model's training or inference task independently within a reasonable time, so tasks must be decomposed and distributed through parallelization to shorten total completion time. Parallelization inevitably raises a series of problems, such as how to decompose tasks (especially complex deep learning tasks), data dependencies, and the additional communication costs between devices.

    • Privacy protection issue: How to ensure that the purchaser’s data and model are not exposed to the recipient of the task?

  • Regulatory compliance challenges

    • The permissionless nature of the two-sided supply-and-procurement market can attract some customers as a selling point, but it may also make these platforms targets of government rectification as AI regulatory standards are tightened. In addition, some GPU suppliers worry about whether the computing resources they rent out are being provided to sanctioned businesses or individuals.
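As a concrete illustration of the work verification point above, here is a minimal Python sketch of probabilistic spot-checking: the buyer re-computes a random sample of layers from the supplier's reported intermediate outputs instead of re-running the whole job. The layer functions, scalar outputs, and tolerance are simplifying assumptions for illustration; real designs would pair this with commitments, deposits, and slashing.

```python
# A toy sketch of probabilistic (spot-check) verification of layered computation.
import random
from typing import Callable, List, Sequence


def spot_check(
    layers: Sequence[Callable],   # deterministic per-layer functions f_0 ... f_{n-1}
    reported_outputs: List,       # the supplier's reported output of each layer
    model_input,                  # the original input fed to layer 0
    sample_rate: float = 0.1,     # fraction of layers to re-compute
    tol: float = 1e-5,            # numerical tolerance for the comparison
) -> bool:
    """Return True if every sampled layer reproduces the supplier's reported output."""
    n = len(layers)
    sampled = random.sample(range(n), max(1, int(n * sample_rate)))
    for i in sampled:
        # Re-executing layer i only needs the reported output of layer i-1 (or the model
        # input), so the verifier avoids re-running the entire pipeline.
        layer_input = model_input if i == 0 else reported_outputs[i - 1]
        recomputed = layers[i](layer_input)
        if abs(recomputed - reported_outputs[i]) > tol:  # simplified scalar comparison
            return False                                  # mismatch caught: reject the result
    return True  # passed the probabilistic check (not an absolute guarantee)


# Toy usage: three "layers" that each double their input.
layers = [lambda x: 2 * x] * 3
honest_outputs = [2.0, 4.0, 8.0]
print(spot_check(layers, honest_outputs, model_input=1.0, sample_rate=0.5))  # True
```

Sampling a fraction p of layers catches a single falsified layer with probability of roughly p, so the guarantee is probabilistic rather than absolute, which is exactly the trade-off described above.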

In general, consumers of distributed computing platforms are mostly professional developers or small and medium-sized institutions. Unlike crypto investors who buy cryptocurrencies and NFTs, these users have higher requirements for the stability and sustainability of the services that the protocol can provide, and price may not be the main motivation for their decision-making. At present, distributed computing platforms still have a long way to go to gain recognition from such users.

Next, we will sort out and analyze the project information of IO.NET, a new distributed computing project in this cycle, and estimate its possible post-listing valuation based on the AI projects and distributed computing projects currently on the market in the same track.

2. Distributed AI computing platform: IO.NET

2.1 Project Positioning

IO.NET is a decentralized computing network that has built a two-sided market around chips. The supply side is chip computing power distributed around the world (mainly GPUs, but also CPUs, Apple's iGPUs, etc.), and the demand side is AI engineers who want to complete AI model training or inference tasks.

On the official website of IO.NET, it says:

Our Mission

Putting together one million GPUs in a DePIN – decentralized physical infrastructure network.

Its mission is to assemble one million GPUs into its DePIN network.

Compared with existing cloud AI computing service providers, its main selling points are:

  • Flexible combination: AI engineers can freely select and combine the chips they need to form a "cluster" to complete their computing tasks

  • Fast deployment: No weeks-long approval and waiting (currently the case with centralized vendors like AWS); a deployment can be completed within tens of seconds and the task can begin

  • Low service price: The service cost is 90% lower than that of mainstream manufacturers

In addition, IO.NET also plans to launch services such as AI model stores in the future.

2.2 Product Mechanism and Business Data

Product mechanism and deployment experience

Like Amazon Cloud, Google Cloud, and Alibaba Cloud, the computing service provided by IO.NET is called IO Cloud. IO Cloud is a distributed, decentralized chip network that can execute Python-based machine learning code and run AI and machine learning programs.

The basic business unit of IO Cloud is the Cluster: a group of GPUs that can coordinate with one another to complete computing tasks. AI engineers can customize the cluster they want according to their needs.

IO.NET's product interface is very user-friendly. If you want to deploy your own chip cluster to complete AI computing tasks, after entering its Clusters product page, you can start configuring the chip cluster you want on demand.

Page information: https://cloud.io.net/cloud/clusters/create-cluster, the same below

First, you need to choose your task scenario. There are currently three types to choose from:

  1. General: Provides a more general environment, suitable for early project stages when specific resource requirements are uncertain.

  2. Train: A cluster designed for training and fine-tuning machine learning models. This option can provide more GPU resources, higher memory capacity, and/or faster network connections to handle these intensive computational tasks.

  3. Inference: Clusters designed for low-latency inference and heavy-duty workloads. In the context of machine learning, inference refers to using a trained model to make predictions or analyze new data and provide feedback. Therefore, this option focuses on optimizing latency and throughput to support real-time or near-real-time data processing needs.

Then you need to choose the supplier of the chip cluster. Currently, IO.NET has partnerships with Render Network and Filecoin's miner network, so users can choose chips from IO.NET itself or from either of the other two networks as the supplier of their computing cluster; IO.NET effectively plays the role of an aggregator (although, as of the time of writing, the Filecoin service is temporarily offline). It is worth mentioning that, according to the page, the number of available GPUs online for IO.NET is currently 200,000+, while Render Network has 3,700+.

Next comes the chip hardware selection phase for the cluster. Currently, the only hardware types listed as available on IO.NET are GPUs, not CPUs or Apple's iGPUs (M1, M2, etc.), and GPUs are mainly NVIDIA products.

Among the officially listed and available GPU hardware options, according to the data on the day of the author's test, the total number of available GPUs online on the IO.NET network is 206,001. The GeForce RTX 4090 has the largest number of available GPUs (45,250), followed by the GeForce RTX 3090 Ti (30,779).

In addition, there are 7,965 A100-SXM4-80GB chips (market price $15,000+), which are more efficient in processing AI computing tasks such as machine learning, deep learning, and scientific computing.

NVIDIA's H100 80GB HBM3 graphics card (market price $40,000+), which was designed specifically for AI from the beginning, has a training performance 3.3 times that of the A100 and an inference performance 4.5 times that of the A100. The actual number of cards online is 86.

After selecting the hardware type of the cluster, users also need to select parameters such as the cluster region, communication speed, number of GPUs to rent and time.

Finally, IO.NET will provide you with a bill based on the comprehensive selection, taking the author's cluster configuration as an example:

  • Task scenario: General

  • Chips: 16 × A100-SXM4-80GB

  • Connection speed: Ultra High Speed

  • Location: United States

  • Rental period: 1 week

The total bill is $3,311.6, and the hourly rental price of a single card is $1.232.

The hourly rental price of a single A100-SXM4-80GB card on Amazon Web Services, Google Cloud, and Microsoft Azure is $5.12, $5.07, and $3.67 respectively (data source: https://cloud-gpus.com/; actual prices vary with contract details).
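As a sanity check on these figures, the short sketch below recomputes the quoted bill and the gap versus the centralized clouds; all prices are simply the numbers cited above and will vary in practice with contract terms.

```python
# Back-of-the-envelope check of the cluster bill and the centralized-cloud comparison.
GPUS = 16
HOURS = 7 * 24                     # 1-week rental
IO_NET_HOURLY_PER_CARD = 1.232     # USD, A100-SXM4-80GB on IO.NET (from the quoted bill)

io_net_total = GPUS * HOURS * IO_NET_HOURLY_PER_CARD
print(f"IO.NET total: ${io_net_total:,.1f}")   # -> $3,311.6, matching the quoted bill

# Hourly A100-SXM4-80GB prices cited above from cloud-gpus.com (contract terms affect actual prices)
centralized = {"AWS": 5.12, "GCP": 5.07, "Azure": 3.67}
for name, hourly in centralized.items():
    total = GPUS * HOURS * hourly
    print(f"{name}: ${total:,.1f} for the same cluster "
          f"({hourly / IO_NET_HOURLY_PER_CARD:.1f}x the IO.NET rate)")
```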

In terms of price alone, IO.NET's chip computing power is indeed much cheaper than that of mainstream providers; the supply is flexible to combine and procure, and the product is easy to operate.

Business conditions

Supply side situation

As of April 4 this year, according to official data, IO.NET's supply side totaled 371,027 GPUs and 42,321 CPUs. In addition, its partner Render Network contributed a further 9,997 GPUs and 776 CPUs to the network's supply.

Data source: https://cloud.io.net/explorer/home, the same below

At the time of writing, 214,387 of the total number of GPUs connected to IO.NET are online, with an online rate of 57.8%. The online rate of GPUs from Render Network is 45.1%.

What do the above supply-side data mean?

For comparison, we will introduce another established distributed computing project, Akash Network, which has been online for a longer time.

Akash Network launched its mainnet as early as 2020, initially focusing on distributed services for CPU and storage. In June 2023, it launched a testnet for GPU services, and in September of the same year, it launched the mainnet for GPU distributed computing power.

Data source: https://stats.akash.network/provider-graph/graphics-gpu

According to Akash official data, although the supply side has continued to grow since the launch of its GPU network, the total number of GPU connections so far is only 365.

In terms of GPU supply, IO.NET is several orders of magnitude higher than Akash Network and is already the largest supply network in the distributed GPU computing power sector.

Demand side situation

From the demand side, however, IO.NET is still in the early stage of market cultivation, and the amount of its computing power actually used to perform tasks is small. Most online GPUs have a task load of 0%, and only four chip models, the A100 PCIe 80GB K8S, RTX A6000 K8S, RTX A4000 K8S, and H100 80GB HBM3, are processing tasks. Apart from the A100 PCIe 80GB K8S, the load on the other three models is below 20%.

The official network pressure value disclosed on the same day was 0%, which means that most of the chip supply is in online standby mode.

In terms of network fees, IO.NET has generated cumulative service fees of $586,029, with $3,200 in the past day.

Data source: https://cloud.io.net/explorer/clusters

These network settlement fees, both in total and in daily volume, are of the same order of magnitude as Akash's, although most of Akash's network revenue comes from its CPU business, with a CPU supply of more than 20,000.

Data source: https://stats.akash.network/

In addition, IO.NET also discloses data on the AI inference tasks processed by the network: it has so far processed and verified more than 230,000 inference tasks, although most of this volume comes from BC8.AI, a project sponsored by IO.NET.

Data source: https://cloud.io.net/explorer/inferences

Judging from the current business data, IO.NET's supply-side expansion is going smoothly. Stimulated by airdrop expectations and the community campaign codenamed "Ignition", it has quickly gathered a large amount of AI chip computing power. Its demand-side expansion, however, is still at an early stage, and organic demand is currently insufficient. Whether this demand shortfall is because consumer-side expansion has not yet begun, or because the service experience is still too unstable for large-scale adoption, remains to be evaluated.

However, considering that the gap in AI computing power will be difficult to fill in the short term, many AI engineers and projects are looking for alternatives and may become interested in decentralized service providers. Given that IO.NET has not yet launched economic or activity-based incentives on the demand side, and that the product experience is gradually improving, the gradual matching of supply and demand is still worth looking forward to.

2.3 Team Background and Financing

Team situation

The core team of IO.NET was originally formed to do quantitative trading. Until June 2022, it focused on developing institutional-grade quantitative trading systems for stocks and crypto assets. Driven by the system back end's demand for computing power, the team began to explore the possibility of decentralized computing and eventually focused on the specific problem of reducing the cost of GPU computing services.

Founder & CEO: Ahmad Shadid

Ahmad Shadid worked in quantitative trading and financial engineering before IO.NET and is also a volunteer at the Ethereum Foundation.

CMO & Chief Strategy Officer: Garrison Yang

Garrison Yang officially joined IO.NET in March of this year. He was previously the VP of Strategy and Growth at Avalanche and graduated from the University of California, Santa Barbara.

COO: Tory Green

Tory Green is the COO of io.net. Previously, he was COO of Hum Capital and Director of Corporate Development and Strategy of Fox Mobile Group. He graduated from Stanford.

According to IO.NET's LinkedIn profile, the team is headquartered in New York, USA, with a branch office in San Francisco, and currently has more than 50 members.

Financing

IO.NET has only disclosed one round of financing so far, namely the Series A financing with a valuation of US$1 billion completed in March this year, which raised a total of US$30 million, led by Hack VC. Other investors include Multicoin Capital, Delphi Digital, Foresight Ventures, Animoca Brands, Continue Capital, Solana Ventures, Aptos, LongHash Ventures, OKX Ventures, Amber Group, SevenX Ventures and ArkStream Capital.

It is worth mentioning that, perhaps because of the investment from the Aptos Foundation, the BC8.AI project, which originally used Solana for settlement and bookkeeping, has switched to Aptos, likewise a high-performance L1.

2.4 Valuation Calculation

According to founder and CEO Ahmad Shadid, IO.NET will launch its token by the end of April.

IO.NET has two comparable projects that can serve as valuation references: Render Network and Akash Network, both representative distributed computing projects.

We can deduce IO.NET's market value range in two ways: (1) the price-to-sales ratio, i.e., market value divided by revenue; and (2) the ratio of market value to the number of chips in the network.

Let’s first look at the valuation deduction based on the price-to-sales ratio:

From the perspective of price-to-sales ratio, Akash can be used as the lower limit of IO.NET's valuation range, while Render can be used as a high-end pricing reference for valuation, with an FDV range of US$1.67 billion to US$5.93 billion.

But considering that the IO.NET project is newer, the narrative is hotter, the early circulating market value is smaller, and the current supply-side scale is larger, there is a high possibility that its FDV will exceed Render.

Now let's compare valuations from another angle, the "market-cap-to-chip ratio".

In a market where demand for AI computing power exceeds supply, the most important element of a distributed AI computing network is the scale of its GPU supply side. Therefore, we can make a horizontal comparison using the market-cap-to-chip ratio, the ratio of a project's total market value to the number of chips in its network, to deduce IO.NET's possible valuation range as a reference for readers.

If we estimate IO.NET's market value range with the market-cap-to-chip ratio, using Render Network's ratio as the upper limit and Akash Network's as the lower limit, its FDV range is US$20.6 billion to US$197.5 billion.
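For readers who want to reproduce the two comparable-based calculations, here is a minimal sketch of the arithmetic. The FDV and fee inputs in the example are hypothetical placeholders, not Render's or Akash's actual figures; only the chip counts (3,700+ for Render, roughly 206,000 for IO.NET) come from the data cited earlier. Plugging in current data for the comparables and for IO.NET yields the ranges quoted above.

```python
# A sketch of the two comparable-based valuation methods described above.
def fdv_from_price_to_sales(comparable_fdv: float, comparable_annual_fees: float,
                            target_annual_fees: float) -> float:
    """FDV estimate = (comparable FDV / comparable annual fees) * target annual fees."""
    return comparable_fdv / comparable_annual_fees * target_annual_fees


def fdv_from_market_to_chip(comparable_fdv: float, comparable_chips: int,
                            target_chips: int) -> float:
    """FDV estimate = (comparable FDV / comparable chip count) * target chip count."""
    return comparable_fdv / comparable_chips * target_chips


# Example with placeholder FDV/fee inputs (illustrative only, not real market data):
ps_based = fdv_from_price_to_sales(comparable_fdv=2.0e9, comparable_annual_fees=1.0e6,
                                   target_annual_fees=1.2e6)
chip_based = fdv_from_market_to_chip(comparable_fdv=4.0e9, comparable_chips=3_700,
                                     target_chips=206_000)
print(f"P/S-based estimate:  ${ps_based / 1e9:.2f}B")
print(f"chip-based estimate: ${chip_based / 1e9:.1f}B")
```

The chip-based upper bound comes out so large mainly because IO.NET's reported chip count is tens of times Render's, which is also why the price-to-sales method is treated as more relevant below.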

Even readers who are optimistic about the IO.NET project will probably consider this an extremely optimistic valuation estimate.

We also need to take into account that the current huge number of IO.NET chips online has been stimulated by airdrop expectations and incentive campaigns; the actual number of online chips on the supply side after the project formally launches remains to be observed.

Therefore, in general, valuation calculations based on the price-to-sales ratio may be more relevant.

As a project carrying the triple halo of AI + DePIN + the Solana ecosystem, let us wait and see how IO.NET's market value performs after launch.

3. Reference Information

  • Delphi Digital: The Real Merge

  • Galaxy: Understanding the Intersection of Crypto and AI

