Authors: Raghav Agarwal, Roy Lu, LongHash Ventures

Compiled by: Elvin, ChainCatcher

 


Humanity is at an AI Oppenheimer moment.

Elon Musk noted: “As our technology advances, it’s critical to ensure AI serves the interests of the people, not just the interests of the powerful. People-owned AI offers a path forward.”

At the intersection with cryptocurrency, AI can democratize itself: starting with open-source models, then becoming AI of the people, by the people, for the people. While the goals of Web3 x AI are noble, actual adoption depends on usability and compatibility with existing AI software stacks. This is where IO.NET’s unique approach and technology stack come into play.

IO.NET’s decentralized Ray framework is a Trojan horse, bringing a permissionless AI computing market to web3 and beyond.

IO.NET is at the forefront of making GPU compute abundant. Unlike other general-purpose compute aggregators, IO.NET bridges decentralized computing with the industry-leading AI stack by rewriting the Ray framework. This approach paves the way for broader adoption within and outside of web3.

The race for computing power in the context of AI nationalism

Competition for resources is intensifying in the AI stack. Over the past few years, there has been a proliferation of AI models. Within hours of the release of Llama 3, Mistral and OpenAI released new versions of their cutting-edge AI models.

The three layers of the AI stack where there is ongoing competition for resources are: 1) training data, 2) advanced algorithms, and 3) compute units. Compute power allows AI models to improve performance by scaling training data and model size. According to OpenAI’s empirical research on transformer-based language models, performance improves steadily as we increase the amount of compute used for training.

Compute usage has exploded over the past 20 years. An analysis of 140 models by Epoch.ai showed that training compute for landmark systems has increased 4.2x per year since 2010. The latest OpenAI model, GPT-4, requires 66 times as much compute as GPT-3 and about 1.2 million times as much as GPT.
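The growth figures above compound quickly. A back-of-envelope check in Python (only the 4.2x/year rate from Epoch.ai and the 66x GPT-3 to GPT-4 ratio come from the text; the rest is simple arithmetic):

```python
import math

# Back-of-envelope check of the compute-growth figures cited above.
ANNUAL_GROWTH = 4.2  # training-compute growth per year since 2010 (Epoch.ai)

def growth_factor(years: float) -> float:
    """Total growth in training compute over a span of years at the trend rate."""
    return ANNUAL_GROWTH ** years

# At 4.2x/year, a decade of growth multiplies compute by roughly 1.7 million:
decade = growth_factor(10)
print(f"10-year growth factor: {decade:.3g}")

# How many years of trend growth does the 66x GPT-3 -> GPT-4 jump represent?
years_for_66x = math.log(66) / math.log(ANNUAL_GROWTH)
print(f"66x corresponds to ~{years_for_66x:.1f} years of trend growth")
```

In other words, the GPT-3 to GPT-4 jump is consistent with roughly three years of the industry-wide trend, which matches the models’ release cadence.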

AI nationalism is evident

Huge investments from the United States, China, and other countries total about $40 billion. Most of the funds will be focused on building GPU and AI chip fabrication plants. OpenAI CEO Sam Altman plans to raise up to $7 trillion to enhance global AI chip manufacturing, emphasizing that "computing will become the currency of the future."

Aggregating long-tail computing resources could significantly disrupt the market. Challenges faced by centralized cloud service providers such as AWS, Azure and GCP include long wait times, limited GPU flexibility and cumbersome long-term contracts, especially for smaller entities and startups.

Underutilized hardware in data centers, cryptocurrency miners, and consumer-grade GPUs can meet the demand. A 2022 DeepMind study found that training smaller models on more data is often more efficient than using the latest and most powerful GPUs, suggesting a shift toward more efficient AI training using accessible GPUs.

IO.NET structurally disrupts the AI computing market

IO.NET's end-to-end platform for globally distributed AI training, inference, and fine-tuning aggregates long-tail GPUs to unlock inexpensive, high-performance training.

GPU Market:

IO.NET aggregates GPUs from data centers, miners, and consumers around the world. AI startups can deploy decentralized GPU clusters in minutes by simply specifying the cluster location, hardware type, and machine learning stack (TensorFlow, PyTorch, Kubernetes), and paying instantly on Solana.
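A cluster request of the kind described above might look like the following sketch. The field names are illustrative only, chosen to mirror the parameters the text lists; they are not IO.NET's actual API:

```python
# Hypothetical cluster request mirroring the parameters described above
# (location, hardware type, ML stack, payment on Solana).
# Field names are illustrative assumptions, NOT IO.NET's actual API.

cluster_request = {
    "location": "us-east",       # preferred geographic region
    "hardware": "NVIDIA A100",   # GPU type to aggregate
    "gpu_count": 8,
    "ml_stack": "pytorch",       # e.g. tensorflow, pytorch, kubernetes
    "payment": {
        "chain": "solana",       # the text notes payment settles on Solana
        "max_budget_usdc": 500,
    },
}

def validate(req: dict) -> bool:
    """Minimal sanity check before submitting a cluster request."""
    required = {"location", "hardware", "gpu_count", "ml_stack", "payment"}
    return required.issubset(req) and req["gpu_count"] > 0

assert validate(cluster_request)
```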

Cluster:

A GPU without an adapted parallel infrastructure is like a reactor without a power cord: it exists but cannot be used. As the OpenAI blog highlights, limitations in hardware and algorithm parallelism significantly impact the computational efficiency of each model, limiting model size and usefulness during training.
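One classic way to quantify how parallelism limits shape efficiency is Amdahl's law (not cited in the text; used here as a standard illustration). If only a fraction p of a workload parallelizes across n GPUs, the speedup is capped regardless of how many GPUs are added:

```python
# Amdahl's law: speedup of a workload where fraction p parallelizes
# across n workers. Standard illustration, not IO.NET-specific.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even at 95% parallelizable work, 1,000 GPUs give less than a 20x speedup,
# which is why the orchestration layer matters as much as raw GPU count.
print(round(amdahl_speedup(0.95, 1000), 1))
```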

IO.NET leverages the Ray framework to transform clusters of thousands of GPUs into a unified whole. This innovation enables IO.NET to build GPU clusters regardless of geographical dispersion, solving a major pain point in the computing market.

The Ray framework stands out

As an open source unified computing framework, Ray simplifies the expansion of AI and Python workloads. Ray has been adopted by industry leaders such as Uber, Spotify, LinkedIn, and Netflix, promoting the integration of AI into their products and services. Microsoft provides customers with the opportunity to deploy Ray on Azure, while Google Kubernetes Engine (GKE) simplifies the deployment of open source machine learning software by supporting Kubeflow and Ray.
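Ray's core abstraction is the remote task: decorate a function, fan its invocations out across the cluster, then gather the results. The same fan-out/fan-in pattern can be illustrated in-process with only the Python standard library (in real Ray, the scheduling would use `@ray.remote` and the gather step `ray.get()`):

```python
# Illustration of the fan-out/fan-in pattern that Ray scales across a
# cluster. This sketch uses only the standard library; it is an analogy,
# not Ray itself.
from concurrent.futures import ThreadPoolExecutor

def preprocess(shard: list) -> int:
    # Stand-in for a GPU-bound task (e.g. a training or inference step).
    return sum(x * x for x in shard)

shards = [[1, 2], [3, 4], [5, 6]]

with ThreadPoolExecutor() as pool:
    # Fan out: schedule one task per data shard, as Ray would schedule
    # remote tasks onto GPUs anywhere in the cluster.
    futures = [pool.submit(preprocess, s) for s in shards]
    # Fan in: gather results, analogous to ray.get(...) on object refs.
    results = [f.result() for f in futures]

print(results)  # [5, 25, 61]
```

IO.NET's contribution, per the text, is making this scheduling work when the "cluster" is a set of geographically dispersed, independently owned GPUs rather than machines in one data center.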

Ahmad presents his work on decentralized Ray framework at Ray Summit 2023

Decentralizing Ray - Extending Ray for global inference (video link: https://youtu.be/ie-EAlGfTHA?)

We initially met Tory when he was COO of a high-growth fintech startup, and we knew he was a seasoned operator with decades of experience scaling startups to successful outcomes. After speaking with Ahmad and Tory, we immediately realized this was the dream team to bring decentralized AI computing to web3 and beyond.

Ahmad’s brainchild, IO.NET, was born out of an aha moment in a real-world application. Developing Dark Tick, an algorithm for ultra-low-latency high-frequency trading, required substantial GPU resources. To control costs, Ahmad developed a decentralized version of the Ray framework to cluster GPUs from cryptocurrency miners, inadvertently creating resilient infrastructure that solves broader AI computing challenges.

Development momentum:

By leveraging token incentives, IO.NET has onboarded more than 100,000 GPUs and 20,000 cluster-ready GPUs as of mid-2024, including a large number of NVIDIA H100s and A100s. Krea.ai is already using IO.NET's decentralized cloud service, IO Cloud, to drive its AI model inference. IO.NET recently announced collaborations with NavyAI, Synesis One, RapidNode, Ultiverse, Aethir, Flock.io, LeonardoAI, Synthetic AI, and many more.

By relying on a globally distributed network of GPUs, IO.NET can:

  • Reduce customer inference time by moving inference closer to end users than centralized cloud providers can

  • Connect multiple data centers through a highly integrated network backbone, organizing resources into zones for greater resiliency

  • Reduce the cost of, and access time to, computing resources

  • Allow companies to dynamically scale leased resources up and down

  • Enable GPU providers to earn better returns on their hardware investments

IO.NET is at the forefront of innovation with its decentralized Ray framework. Using Ray Core and Ray Serve, its distributed GPU clusters efficiently orchestrate tasks across decentralized GPUs.

In conclusion

The push for open-source AI models is a nod to the collaborative spirit of the original internet, where anyone could use HTTP and SMTP without permission.

The emergence of a crowdsourced GPU network is a natural evolution of the permissionless spirit. By crowdsourcing the long-tail GPUs, IO.NET is opening the floodgates to valuable computing resources, creating a fair and transparent market and preventing the concentration of power in the hands of a few.

We believe IO.NET is realizing the vision of AI computing as currency through decentralized Ray cluster technology. In a world increasingly made up of “haves” and “have-nots,” IO.NET will ultimately “make the internet open again.”