As AIGC develops rapidly around the world, many projects are stuck with insufficient GPU capacity, slowdowns, task downtime, and outright crashes. For io.net, a rising star of the Solana ecosystem positioned on the AI and DePIN tracks, how to integrate project control with cloud computing services has become a key direction for capturing the market. The 11th TinTinAMA talks with you about io.net's GPU network deployment strategy.

On May 23 at 20:00, the 11th TinTinAMA event was successfully held. The live broadcast invited Asa, head of io.net Asia Pacific, and Bill, io.net's product manager, into the live broadcast room to analyze io.net's computing power solutions amid the wave of artificial intelligence and to untangle the current operating difficulties of GPU networks. The online event attracted more than 20,000 viewers, many of whom interacted closely with the guests in the live broadcast room.

Friends who missed the live broadcast can click below to watch the replay:

Twitter Space: https://twitter.com/i/spaces/1nAJEadPbZRJL

AI + DePIN + Solana, io.net’s computing power logic

Since 2023, the wave of AI-generated content (AIGC) has surged, and the entire AI developer community faces a GPU shortage. Market demand for GPUs is enormous, yet supply and demand remain badly mismatched. To address this, io.net aggregates GPUs from different data centers and service providers onto one platform, letting everyone use GPUs efficiently and easing the shortage.

As a project that combines AI, DePIN, and Solana, io.net is a decentralized AI computing platform built on Solana, committed to providing AI and ML companies with aggregated GPU resources spanning model training to inference, and delivering low-cost, high-efficiency cloud computing services. The platform can integrate GPUs from independent data centers, cryptocurrency miners, and crypto projects such as Filecoin or Render to obtain computing power, giving target companies more cost-effective cloud computing services.

According to the guests and the official website, the total number of io.net clusters has exceeded 40,000, the second season of the Ignition rewards program is underway, and the IO token contract address has been announced. In April this year, io.net announced that its network includes more than 20,000 A100 GPUs and more than 520,000 GPUs and CPUs in total, that the total value of its infrastructure has passed the US$2 billion mark, and that total profit stands at approximately US$1.02 million. io.net has also completed a US$30 million Series A financing round with participation from well-known investors such as Hack VC, Solana Labs, ArkStream, Animoca Brands, and OKX Ventures, giving the project a solid financial foundation for its early-stage growth.

Exploring the technical operating architecture of io.net

Looking further at io.net's product composition, on top of a decentralized underlying GPU network and a programmable compute layer, the IO engine, the platform connects users and computing power providers mainly through three products: IO Cloud, IO Worker, and IO Explorer:

IO Cloud: Deploys and manages decentralized GPU clusters, integrates seamlessly with the IO-SDK, and provides technical solutions for scaling AI and Python applications (see the sketch after this list)

IO Worker: Provides users with real-time compute tracking and a bird's-eye view of their connected network devices, allowing them to monitor devices and perform quick operations anytime, anywhere

IO Explorer: Provides users with comprehensive statistics and visualization of the GPU cloud, making it easy to monitor, analyze, and understand the details of the io.net network in real time, with full visibility into network activity, key statistics, rewards data, and more
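
To make the "scaling Python applications" point concrete, here is a minimal sketch of fanning a Python workload out across a GPU cluster with Ray-style remote tasks. It assumes the cluster exposes a Ray-compatible scheduler (the IO-SDK is described as Ray-based); the function, batch data, and address value are illustrative, not io.net's actual API.

```python
import ray

# Connect to the cluster scheduler; "auto" attaches to an existing Ray cluster.
# Calling ray.init() with no address starts a local instance for testing.
ray.init(address="auto")

@ray.remote(num_gpus=1)  # ask the scheduler for one GPU per task
def run_inference(batch):
    # Placeholder for real model inference on the assigned GPU.
    return [x * 2 for x in batch]

# Fan the batches out across whatever GPU nodes the cluster provides,
# then gather the results back on the driver.
batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
futures = [run_inference.remote(b) for b in batches]
print(ray.get(futures))
```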

IO Cloud and IO Worker Collaborate to Solve Challenges

"For IO Cloud and IO Worker, a big challenge is how to perfectly combine the correlation between GPU data in different network scenarios. It may be necessary to find the number of GPUs that meet development needs from a country's GPU pool. However, these GPUs are in different network scenarios, some of which have public IPs and some have private IPs. So using network technology and data centers to put these GPUs in the same network scenario to achieve stable data interaction and ensure stability is a challenging problem," said Bill.

To better address the technical challenges facing IO Cloud and IO Worker, Bill explained: "On the one hand, io.net uses Docker containers and other virtualization techniques to control the execution environment, keeping it separate from the host and ensuring that workloads cannot break out of their container environment; on the other hand, through non-technical means such as punitive measures, each node operator takes part in a certain degree of staking to deter malicious manipulation."
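
As a rough illustration of the container isolation Bill describes (a sketch, not io.net's actual worker code), the snippet below uses the Docker SDK for Python to run a workload in a container that stays separated from the host: no host directories mounted, a read-only filesystem, and capped resources. The image name, command, and limits are placeholder values.

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Run the workload inside a container so it stays separated from the host.
# "worker-image:latest" and the resource limits below are placeholders.
container = client.containers.run(
    image="worker-image:latest",
    command="python run_task.py",
    detach=True,
    read_only=True,           # container filesystem cannot be modified
    volumes={},               # no host directories are mounted in
    network_mode="bridge",    # isolated bridge network rather than host networking
    mem_limit="8g",           # cap memory so a task cannot exhaust the host
    nano_cpus=4_000_000_000,  # roughly 4 CPU cores
    device_requests=[         # expose host GPUs (assumes the NVIDIA runtime)
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
)

container.wait()                  # block until the workload finishes
print(container.logs().decode())  # collect its output
```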

Using the Ray technology stack to support large numbers of concurrent tasks

Beyond resolving these technical challenges, io.net also handles node interaction cleverly when processing concurrent tasks. The technology stack io.net currently relies on for large numbers of concurrent tasks is Ray. As long as running tasks do not number in the tens of millions, the platform can generate tasks stably and continuously dispatch them to nodes across the network. If the concurrent tasks involve a particularly high degree of interaction, such as the training and fine-tuning of AI models, network operation may be affected to some extent, whereas nodes running largely independent functions require little real-time data exchange.
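
A minimal Ray sketch of that distinction, using a placeholder workload: independent remote tasks only send a final result back over the network, while tasks that repeatedly exchange state through a shared actor generate far more real-time traffic between nodes. The ParameterStore actor and the numbers are purely illustrative.

```python
import ray

ray.init()  # local instance for illustration; a real deployment targets a cluster

# Case 1: independent tasks -- each runs to completion on some node and only
# its final result travels back over the network.
@ray.remote
def independent_task(x):
    return x * x

results = ray.get([independent_task.remote(i) for i in range(100)])

# Case 2: interactive tasks -- workers repeatedly read and write shared state
# held by an actor, so every step implies real-time data exchange between nodes.
@ray.remote
class ParameterStore:
    def __init__(self):
        self.value = 0.0

    def update(self, delta):
        self.value += delta
        return self.value

store = ParameterStore.remote()

@ray.remote
def interactive_task(store, steps):
    for _ in range(steps):
        # Each iteration round-trips through the actor, adding network chatter.
        ray.get(store.update.remote(0.1))
    return "done"

ray.get([interactive_task.remote(store, 10) for _ in range(10)])
```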

Friendly cooperation, io.net anchors computing power tokenization

Facing potential competition from other decentralized computing networks, Asa regards Render and Filecoin as friendly partners of io.net. "In fact, projects on the same DePIN track all hope to build close cooperative relationships with one another. These outstanding predecessors also help io.net find more room for exploration and growth along dimensions such as data stability and scalability, and truly empower a sustainable future for the AI + DePIN track." Asa added that io.net's relationship with Render and Filecoin is not purely competitive: the products differ, with Render focused mainly on rendering and Filecoin mainly on data storage, so the cooperation between io.net and them also has an "each takes what it needs" character.

On the long-term sustainability of the economic model, Asa believes io.net differs from other ecosystems that rely on block rewards. "We have a very strong BD team on the demand side driving network usage scenarios. When the cash coming in from the demand side exceeds the payouts on the supply side, there is no need to rely on tokens to make money." io.net also has a mid- to long-term economic roadmap called computing power tokenization, which expresses computing power in the form of tokens: users who buy in can capture the appreciation of that computing power, share the corresponding revenue, and hold it as a strategic asset for the future. To this end, io.net provides an official support channel for computing power tokenization, lowering the threshold for purchasing and owning computing power so that more retail investors can take part in transactions tied to the real world.

Opening up to ecosystem innovation and walking alongside developers

The io.net ecosystem is still in its early stages and is expected to hold a TGE (Token Generation Event) next month. After that, more ecosystem funds and grants will become available, paving the way for hackathons launched with more partners such as Solana to fully support io.net's development.

As the bottom layer of the AI technology stack, io.net needs more projects to run innovative experiments within its ecosystem. For example, if more computing power projects are willing to join, more middleware for storage, privacy, and other needs can be launched, along with innovative applications such as chatbots and AI agents, increasing usage of the io ecosystem and driving further product iteration and optimization in support of decentralized development. Looking at where Web3 technology is heading, io.net's most forward-looking move is clustered operation: as computing resources grow increasingly scarce, GPU nodes can be put to work more quickly and efficiently to complete strategic product deployment and planning and to compete in the AI computing power price war.

Working toward its ultimate goal of building the "GPU Internet", we have reason to believe that io.net will find a strong path through the computing shortage brought on by the AIGC wave and grow into the world's largest artificial intelligence computing platform in the near future!