Author: Paul Timofeev Source: Shoal Research Translation: Shan Ouba, Golden Finance

This piece explores the role of decentralized compute infrastructure in supporting the decentralized GPU market, with analysis and accompanying case studies.

Key Takeaways

  • With the rise of machine learning, and especially generative AI, which involves highly computationally intensive workloads, computing resources have become increasingly sought after. However, as large companies and governments hoard these resources, startups and independent developers now face a shortage of GPUs in the market, resulting in prohibitive costs or outright lack of access.

  • Compute DePINs enable a decentralized market for computing resources by allowing people around the world to offer idle computing resources (such as GPUs) in exchange for monetary rewards. This is designed to help underserved GPU consumers access new supply streams and obtain the development resources they need for their workloads at lower cost and overhead.

  • Today, compute DePINs still face many economic and technical challenges in competing with traditional centralized service providers. Some of these will resolve themselves over time, while others will require new solutions and optimizations.

Computing is the new oil

Since the Industrial Revolution, technology has propelled humanity forward at an unprecedented pace, impacting or completely transforming nearly every aspect of daily life. Computers emerged as the culmination of the collective efforts of researchers, academics, and computer engineers. Originally designed to solve large arithmetic tasks in support of advanced military operations, computers have evolved into a mainstay of modern life. As the impact of computers on humanity continues to grow, demand for these machines and the resources they require has outstripped the available supply. This, in turn, has created a market dynamic in which most developers and businesses cannot access critical resources, leaving the development of machine learning and generative AI, today's most transformative technologies, in the hands of a small number of well-funded players. At the same time, the vast amount of idle computing resources presents a lucrative opportunity to alleviate the imbalance between computing supply and demand, underscoring the need for adequate coordination mechanisms between participants on both sides of a transaction. As such, we believe that decentralized systems powered by blockchain technology and digital assets are essential for the development of a broader range of more democratic and accountable generative AI products and services.

Computing resources

Computing can be defined as any activity, application or workload where a computer produces a well-defined output based on a given input. Ultimately, it refers to the computational and processing capabilities of computers, which are the basis for the core utility of these machines in today's modern world, with computers alone generating a whopping $1.1 trillion in revenue last year.

Computing resources refer to the various hardware and software components that support computing and processing. As the number of applications and functions supported by these components continues to grow, they are becoming increasingly important in everyday life. This has led to a race among national powers and businesses to accumulate as many of these resources as possible as a means of survival. This is reflected in the market performance of companies that provide these resources (e.g., Nvidia, whose market value has increased by more than 3,000% in the past 5 years).

GPU

Graphics processing units (GPUs) are one of the most important resources in modern high-performance computing. Their core function is to serve as a specialized electronic circuit that accelerates computer graphics workloads through parallel processing. Originally serving the gaming and personal computer industries, GPUs have evolved to serve many emerging technologies that are shaping the future world (e.g., mainframe and personal computers, mobile devices, cloud computing, IoT). However, the rise of machine learning and artificial intelligence has particularly intensified the demand for these resources - GPUs accelerate machine learning and artificial intelligence operations by performing calculations in parallel, thereby enhancing the processing power and performance of the resulting technology.

The rise of artificial intelligence

At its core, artificial intelligence (AI) is a technology that enables computers and machines to simulate human intelligence and problem-solving abilities. An AI model operates as a neural network trained on many different pieces of data. The model requires processing power to identify and learn the relationships between these pieces of data, and then references those relationships when producing outputs for a given input.
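To illustrate where the processing burden comes from, the following is a minimal sketch of a single dense neural-network layer: each output is just a weighted sum of inputs passed through a nonlinearity. The weights and numbers are hypothetical; real models chain millions of these operations, which is why they demand so much compute.

```python
def relu(x):
    # Standard nonlinearity: negative sums are clipped to zero.
    return max(0.0, x)

def layer_forward(inputs, weights, biases):
    """One dense layer: each output neuron is a weighted sum of the
    inputs plus a bias, passed through a nonlinearity."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs))
        outputs.append(relu(total + bias))
    return outputs

# Two inputs feeding two output neurons (hypothetical weights).
print(layer_forward([1.0, 2.0], [[0.5, -0.25], [1.0, 1.0]], [0.1, -0.5]))
# [0.1, 2.5]
```

GPUs accelerate exactly this kind of workload because every neuron's weighted sum can be computed in parallel rather than one at a time.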

AI development and production is not new; in 1958, Frank Rosenblatt built the Mark I Perceptron, the first neural-network-based computer, which "learned" through trial and error. Additionally, much of the academic research that laid the foundation for modern AI was published in the late 1990s and early 2000s, and the industry has continued to develop since then.

In addition to R&D efforts, "narrow" AI models power a wide variety of powerful applications in use today. Examples include social media algorithms, Apple's Siri and Amazon's Alexa, customized product recommendations, and more. Notably, the rise of deep learning has transformed the development of generative AI. Deep learning algorithms use larger, or "deeper", neural networks than traditional machine learning applications, offering a more scalable alternative with a wider range of performance capabilities. Generative AI models "encode a simplified representation of their training data and emit new outputs that are similar, but not identical, to it."

Deep learning has enabled developers to scale generative AI models to images, speech, and other complex data types, and milestone applications like ChatGPT, which has set the record for the fastest growing user base in the modern era, are still just the early versions of what is possible with generative AI and deep learning.

With this in mind, it’s no surprise that generative AI development involves multiple computationally intensive workloads, requiring significant amounts of processing power and computing power.

According to the Triple Whammy of Deep Learning Application Demand report, AI application development is constrained by several key workloads:

  • Training - The model must process and analyze large data sets to learn how to respond to given inputs.

  • Tuning - The model goes through an iterative process where various hyperparameters are adjusted and optimized to improve performance and quality.

  • Simulation - Before deployment, some models, such as reinforcement learning algorithms, are run through a series of test simulations.
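The three workloads above can be made concrete with a toy sketch. This is illustrative only: a 1-D linear model stands in for a real neural network, "training" fits it by gradient descent, "tuning" sweeps a hyperparameter (the learning rate), and "simulation" checks predictions on unseen inputs before deployment.

```python
def train(data, lr, steps=200):
    """Training: fit y = w * x by gradient descent; returns the weight."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def tune(data, candidate_lrs):
    """Tuning: pick the learning rate whose trained model fits best."""
    def loss(w):
        return sum((w * x - y) ** 2 for x, y in data) / len(data)
    return min(candidate_lrs, key=lambda lr: loss(train(data, lr)))

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x
best_lr = tune(data, [0.001, 0.01, 0.05])
w = train(data, best_lr)

# Simulation: evaluate the tuned model on an unseen input before "deploying".
print(round(w * 10.0, 2))
# 20.0
```

Even this trivial example runs hundreds of arithmetic passes over the data; scaling the same loop to billions of parameters is what makes real AI workloads GPU-hungry.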

Computing is in short supply: Demand > Supply

Over the past few decades, various technological advances have driven an unprecedented surge in demand for computing and processing power. As a result, today the demand for computing resources (such as GPUs) far exceeds the available supply, creating a bottleneck in AI development that will only continue to worsen without effective solutions.

The broader constraints on supply are also driven by a large number of companies actively purchasing more GPUs than they actually need, both as a competitive advantage and as a means of survival in the modern global economy. Compute providers often use contract structures that require long-term capital commitments to provide customers with supply far in excess of their needs.

Epoch’s research shows that the overall number of computationally intensive AI models released has grown rapidly, indicating that demand for resources to power these technologies will continue to grow rapidly.

As AI models continue to grow in complexity, application developers are demanding more computing and processing power. In turn, the performance and availability of GPUs will play an increasingly important role. This trend is already evident in the surge in demand for high-end GPUs, such as those produced by Nvidia, which have been called the "rare earth metals" or "gold" of the AI industry.

The rapid commercialization of AI has the potential to hand control to a handful of tech giants, similar to today’s social media industry, raising concerns about the ethical foundations of these models. A notable example is the recent Google Gemini controversy. While its many bizarre responses to various prompts did not pose any actual danger at the time, the incident demonstrated the inherent risks of a handful of companies dominating and controlling AI development.

Today’s tech startups face increasing challenges in acquiring computing resources to power their AI models. These applications require a lot of computationally intensive processes to be performed before the models are deployed. For small businesses, amassing a large number of GPUs is an unsustainable endeavor, and while traditional cloud computing services like AWS or Google Cloud provide a seamless and convenient developer experience, their limited capacity ultimately leads to high costs, which discourages many developers. At the end of the day, not everyone can come up with a plan to raise $7 trillion for hardware costs.

So what to do?

Nvidia previously estimated that more than 40,000 companies use GPUs for AI and accelerated computing, with a global developer community of more than 4 million. Looking ahead, the global AI market is expected to grow from $515 billion in 2023 to $2.74 trillion in 2032, with an average annual growth rate of 20.4%. At the same time, the GPU market is expected to reach $400 billion by 2032, with an average annual growth rate of 25%.

However, the growing imbalance between supply and demand of computing resources in the wake of the AI revolution could create a rather dystopian future in which a small number of well-funded giants dominate the development of many transformative technologies. Therefore, we believe that all roads lead to decentralized alternative solutions to help bridge the gap between AI developer needs and available resources.

The role of DePINs

What are DePINs?

DePIN is a term coined by the Messari research team that stands for Decentralized Physical Infrastructure Network. Breaking it down, decentralization refers to the absence of a single entity extracting rents and restricting access. Meanwhile, physical infrastructure refers to the “real life” physical resources that are utilized. A network refers to a group of participants working in a coordinated manner to achieve a predetermined goal or set of goals. Today, the total market value of DePINs is approximately $28.3 billion.

At its core, a DePIN is a global network of nodes that connects physical infrastructure resources with a blockchain to enable a decentralized market connecting buyers and suppliers, where anyone can become a supplier and be compensated for their services and contributions to the network. In this model, central intermediaries that restrict access to the network through legal and regulatory means and service fees are replaced by decentralized protocols composed of smart contracts and code, governed by their respective token holders.

The value of DePINs is that they provide a decentralized, accessible, low-cost and scalable alternative to traditional resource networks and service providers. They implement a decentralized market designed to achieve a specific end goal; the cost of goods and services is determined by market dynamics, and anyone can participate at any time, naturally reducing unit costs as the number of suppliers increases and profit margins decrease.

Using blockchain enables DePINs to build crypto-economic incentive systems that help ensure that network participants are appropriately compensated for their services, making key value providers stakeholders. However, it is important to note that network effects are achieved by turning small individual networks into larger production systems, which is critical to achieving many of the benefits of DePINs. In addition, while token rewards have proven to be a powerful means of network bootstrapping mechanisms, building sustainable incentives to help user retention and long-term adoption remains a key challenge in the broader field of DePINs.

How do DePINs work?

To better understand the value that DePINs provide in supporting the decentralized computing market, it is important to recognize the different structural components and how they work together to form a decentralized resource network. Let us consider the structure and participants of a DePIN.

Protocol

A decentralized protocol is a set of smart contracts built on top of an underlying blockchain network that facilitates trusted interactions between network participants. Ideally, the protocol will be governed by a diverse set of stakeholders who are actively committed to the long-term success of the network. These stakeholders then vote on proposed changes and developments using the protocol tokens they hold. Given that successfully coordinating a distributed network is a huge challenge in itself, the core team typically retains the power to implement these changes in the early stages, and then transitions power to a decentralized autonomous organization (DAO).

Network Participants

The end users of a resource network are its most valuable participants and can be categorized according to their function.

Suppliers: Individuals or entities that provide resources to the network in exchange for monetary rewards paid in the DePIN's native token. Suppliers are "connected" to the network through a blockchain-native protocol, which may enforce a whitelisted onboarding process or a permissionless one. By receiving tokens, suppliers gain a stake in the network, similar to stakeholders in the context of equity ownership, enabling them to vote on proposals and network developments, such as those they believe will help drive demand and increase the value of the network, thereby creating higher token prices over time. Of course, some suppliers will also treat token rewards as a form of passive income and sell them as they are received.

Consumers: Individuals or entities that actively seek out the resources provided by DePINs, such as AI startups seeking GPUs; they represent the demand side of the economic equation. If there are real advantages to using DePINs over traditional alternatives (such as lower costs and overhead requirements), consumers will naturally be drawn to them, representing organic demand for the network. DePINs typically require consumers to pay for resources in the native token as a means of creating value and maintaining a stable cash flow.

Resources

DePINs can serve different markets and allocate resources using different business models. Blockworks provides a good framework for this: custom hardware DePINs, which provide suppliers with specialized proprietary hardware to allocate; and commodity hardware DePINs, which allocate existing idle resources, including but not limited to compute, storage, and bandwidth.

Economics

In an ideally functioning DePIN, value accrues from the revenue that consumers pay for supplier resources. Continued demand for the network means continued demand for the native token, which aligns the economic incentives of suppliers and token holders. Generating sustainable organic demand in the early stages is a challenge for most startups, which is why DePINs offer inflationary token incentives to attract early suppliers and bootstrap the network's supply, which in turn generates demand and, therefore, more organic supply. This is very similar to how VCs subsidized Uber's ride costs in the company's early days to bootstrap an initial customer base, further attracting drivers and strengthening its network effects.

DePINs need to manage token incentives as strategically as possible, as they play a key role in the overall success of the network. When demand and network revenue rise, token issuance should be reduced. Conversely, when demand and revenue fall, token issuance should be used to incentivize supply again.
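A minimal sketch of such an issuance policy might look like the following. The taper curve, the floor, and all numbers are hypothetical, not taken from any specific DePIN's tokenomics.

```python
def epoch_emission(base_emission, revenue, target_revenue):
    """Scale per-epoch token issuance down as network revenue approaches
    (or exceeds) a target, and back up when revenue falls short."""
    if revenue >= target_revenue:
        return base_emission * 0.25           # demand is strong: mint less
    shortfall = 1.0 - revenue / target_revenue
    return base_emission * (0.25 + 0.75 * shortfall)

# No revenue yet: full emissions bootstrap the supply side.
print(epoch_emission(1000.0, 0.0, 50_000.0))       # 1000.0
# Revenue at target: emissions taper to the floor.
print(epoch_emission(1000.0, 50_000.0, 50_000.0))  # 250.0
```

The design choice here mirrors the text: token incentives carry the network while organic demand is small, then recede as real revenue takes over.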

To further illustrate what a successful DePIN network looks like, consider the “DePIN flywheel,” a positive reflexive loop used to guide DePINs. To summarize:

  1. DePIN incentivizes providers to provide resources to the network by distributing inflationary token rewards and establishing a base supply level that can be consumed.

  2. Assuming the number of suppliers starts to grow, competitive dynamics start to form in the network, improving the overall quality of goods and services provided by the network to a level better than existing market solutions, thereby gaining a competitive advantage. This means that a decentralized system surpasses traditional centralized service providers, which is no easy feat.

  3. DePIN is beginning to form organic demand, providing legitimate cash flow for suppliers. This is a compelling opportunity for investors and suppliers, continuing to drive demand for the network and therefore the token price.

  4. Growth in token price increases revenue for suppliers, attracting more suppliers and restarting the flywheel.

The framework offers a compelling growth strategy, but it’s important to note that it’s largely theoretical and assumes that the network is providing competitive resources and remains relevant over a long period of time.

Compute DePINs

The decentralized computing market falls within the purview of a broader movement known as the "sharing economy," a peer-to-peer economic system built on consumers sharing goods and services directly with other consumers through online platforms. Pioneered by companies like eBay, this model is dominated today by companies like Airbnb and Uber, and is set to be disrupted as the next generation of transformative technologies sweeps across global markets. Valued at roughly $15 billion in 2023, the global sharing economy is expected to grow to nearly $80 billion by 2031, indicative of a broader trend in consumer behavior that we believe DePINs are well positioned to benefit from and play a key role in enabling.

Fundamentals

Compute DePINs are peer-to-peer networks that facilitate the allocation of computing resources by connecting suppliers and buyers through decentralized marketplaces. A key differentiator of these networks is that they focus on commodity hardware resources, which are already available to many people today. As we’ve discussed, the advent of deep learning and generative AI has created a surge in demand for processing power due to their resource-intensive workloads, creating bottlenecks in access to critical resources for AI development. In short, decentralized compute marketplaces aim to alleviate these bottlenecks by creating a new supply stream — one that spans the globe and that anyone can participate in.

In computing DePIN, any individual or entity can immediately lend out their idle resources and receive appropriate compensation for their services. At the same time, any individual or entity can obtain necessary resources from a global permissionless network with lower costs and greater flexibility than existing market products. Therefore, we can structure the participants involved in computing DePIN through a simple economic framework:

  • Suppliers: Individuals or entities that own computing resources and are willing to lend or sell them in exchange for compensation.

  • Demanders: Individuals or entities that need computing resources and are willing to pay for them.
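This two-sided framework can be sketched as a simple order-matching market: suppliers post asks, demanders post bids, and a trade clears whenever a bid meets the lowest ask. The prices below are arbitrary illustrative units per GPU-hour, not any real network's pricing.

```python
def match_orders(asks, bids):
    """Greedy matching: cheapest asks are filled by the highest bids first.
    A trade clears only while the best bid covers the best ask."""
    asks, bids = sorted(asks), sorted(bids, reverse=True)
    trades = []
    while asks and bids and bids[0] >= asks[0]:
        trades.append((asks.pop(0), bids.pop(0)))  # (ask, bid) pair clears
    return trades

# Three suppliers, three demanders: only one bid covers an ask.
print(match_orders([1.0, 2.0, 5.0], [3.0, 1.5, 0.5]))
# [(1.0, 3.0)]
```

In a real compute DePIN the protocol's smart contracts play the role of this matching function, with settlement in the native token.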

The main advantages of compute DePINs

Compute DePINs offer a number of advantages that make them an alternative to centralized service providers and markets. First, allowing permissionless, cross-border market participation unlocks a new supply stream, increasing the amount of critical resources needed for compute-intensive workloads. Compute DePINs focus on hardware resources that most people already own—anyone with a gaming PC already has a GPU that can be rented out. This expands the range of developers and teams that can participate in building the next generation of goods and services, benefiting more people around the world.

Looking deeper, the blockchain infrastructure that supports DePINs provides an efficient and scalable settlement channel for facilitating peer-to-peer transactions. Crypto-native financial assets (tokens) provide a shared unit of value that demand-side participants use to pay suppliers, leveraging a distribution mechanism consistent with today's increasingly globalized economy. Referring to the DePIN flywheel structure mentioned earlier, strategically managing economic incentives is very beneficial to increasing the network effects of DePINs (supply and demand sides), thereby increasing competition among suppliers. This dynamic reduces unit costs while improving service quality, creating a sustainable competitive advantage for DePINs, from which suppliers can benefit as token holders and key value providers.

DePINs function similarly to cloud computing service providers, aiming to provide a flexible user experience where resources can be accessed and paid for on demand. According to Grandview Research, the global cloud computing market is expected to grow at an average annual rate of 21.2% to exceed $2.4 trillion by 2030, proving the viability of this business model given future demand forecasts for computing resources.

Modern cloud computing platforms utilize central servers to handle all communications between client devices and servers, creating a single point of failure in their operations. Built on blockchain, DePINs can provide greater censorship resistance and resilience than traditional service providers. While an attack on a single organization or entity (such as a central cloud service provider) could compromise an entire network of underlying resources, DePINs are designed to resist such events through their distributed nature. First, the blockchain itself is a globally distributed network of dedicated nodes designed to resist centralized network authorities. In addition, compute DePINs allow for permissionless network participation, bypassing legal and regulatory barriers. Due to the nature of the token distribution, DePINs can adopt a fair voting process for proposed changes and developments to the protocol, eliminating the possibility of a single entity suddenly shutting down the entire network.

Current status of compute DePINs

Render Network

Render Network is a compute DePIN that connects GPU buyers and sellers through a decentralized marketplace, with transactions conducted in its native token. Render's GPU marketplace involves two key parties - creators looking for processing power and node operators who rent out idle GPUs in exchange for compensation in native Render tokens. Node operators are ranked by a reputation-based system, and creators can choose GPUs from a multi-tiered pricing system. The Proof-of-Render (POR) consensus algorithm coordinates operations: node operators commit their computing resources (GPUs) to process tasks, i.e. graphics rendering work. Once a task is completed, the POR algorithm updates the node operator's status, including changes to the reputation score based on the quality of the task. Render's blockchain infrastructure facilitates task payments, providing a transparent and efficient settlement channel for suppliers and buyers to transact through network tokens.
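A reputation-based ranking of the kind described might be sketched as follows. The scoring scale and weighting here are hypothetical illustrations, not Render's actual POR mechanics.

```python
def update_reputation(score, frames_ok, frames_failed):
    """Nudge an operator's score (0-100) toward its recent success rate
    using an exponential moving average, so one bad batch does not
    erase a long track record."""
    total = frames_ok + frames_failed
    if total == 0:
        return score                      # no work this round: unchanged
    success_rate = 100.0 * frames_ok / total
    return 0.9 * score + 0.1 * success_rate

score = 50.0
score = update_reputation(score, frames_ok=95, frames_failed=5)
print(round(score, 1))  # 54.5
```

Creators could then be routed preferentially to high-score operators, which is the competitive pressure that pushes service quality up over time.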

Conceived by Jules Urbach in 2009, the Render Network went live on Ethereum in September 2020 (RNDR), migrating to Solana (RENDER) about three years later to improve network performance and reduce operating costs.

As of this writing, the Render Network has processed up to 33 million tasks (measured in rendered frames) and has grown to 5600 nodes since its inception. Just under 60k RENDER was burned, a process that occurs when work credits are distributed to node operators.

IO Net

Io Net is launching a decentralized GPU network on Solana to serve as a coordination layer between the vast amount of idle computing resources and the growing number of individuals and entities that need the processing power these resources provide. Io Net's unique selling point is that it does not compete directly with other DePINs on the market, but rather aggregates GPUs from a variety of sources (data centers, miners, and other DePINs, including Render Network and Filecoin), while leveraging a proprietary DePIN, the Internet-of-GPUs (IoG), to coordinate operations and align incentives between market participants. Io Net customers can customize a cluster on IO Cloud for their workloads by selecting processor type, location, communication speed, compliance requirements, and service term. Conversely, anyone with a supported GPU model (12 GB RAM, 256 GB SSD) can participate as an IO Worker, earning rewards by lending idle computing resources to the network. While service payments are currently settled in fiat currencies and USDC, the network will soon also support payments in the native $IO token. Resource prices are determined by supply and demand as well as GPU specifications and cluster configuration. The ultimate goal of Io Net is to become the GPU marketplace of choice by offering lower costs and better quality of service than modern cloud service providers.
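The cluster-customization step described above can be sketched as a simple filter over the worker pool. The field names, GPU models, and thresholds here are hypothetical, not Io Net's actual API.

```python
import dataclasses

@dataclasses.dataclass
class Worker:
    gpu_model: str
    location: str
    bandwidth_mbps: int

def build_cluster(pool, gpu_model, location, min_bandwidth, size):
    """Pick up to `size` workers matching the customer's requirements."""
    matches = [w for w in pool
               if w.gpu_model == gpu_model
               and w.location == location
               and w.bandwidth_mbps >= min_bandwidth]
    return matches[:size]

pool = [
    Worker("rtx-4090", "us-east", 1000),
    Worker("rtx-4090", "us-east", 250),   # too slow for this request
    Worker("a100", "eu-west", 1000),      # wrong model and region
]
cluster = build_cluster(pool, "rtx-4090", "us-east", min_bandwidth=500, size=2)
print(len(cluster))  # 1
```

The real system layers scheduling, billing, and monitoring on top of this selection step, per the architecture described below.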

The multi-layer IO architecture can be mapped as follows:

  • UI layer - consists of the public website, client area, and workspace.

  • Security Layer - This layer consists of firewalls for network protection, authentication services for user verification, and logging services for tracking activities.

  • API Layer - This layer acts as a communication layer and consists of public APIs, private APIs, and internal APIs for cluster management, analytics, and monitoring and reporting.

  • Backend Layer - The backend layer manages workspaces, cluster/GPU operations, customer interactions, billing and usage monitoring, analytics, and auto-scaling.

  • Database Tier - This tier is the data repository for the system, using primary storage for structured data and cache for frequently accessed temporary data.

  • Message Broker and Task Layer - This layer facilitates asynchronous communication and task management.

  • Infrastructure layer - This layer contains GPU pools, orchestration tools, and manages task deployment.

Current Stats/Roadmap:

As of this writing:

  • Total network revenue: $1.08 million

  • Total calculated hours: 837.6k hours

  • Total number of GPUs in the prepared cluster: 20.4k

  • Total number of CPUs in the prepared cluster: 5.6k

  • Total on-chain transactions: 1.67 million

  • Total inference count: 335.7k

  • Total number of clusters created: 15.1k

Data from Io Net Explorer.

Aethir

Aethir is a cloud computing DePIN that facilitates the sharing of high-performance computing resources in compute-intensive domains and applications. It leverages resource pooling to enable global GPU allocation at significantly reduced costs and enables distributed ownership of resources. Aethir designed a distributed GPU framework specifically targeted at high-performance workloads such as gaming and AI model training and inference. By unifying GPU clusters into a single network, Aethir is designed to increase cluster size, thereby improving the overall performance and reliability of services provided on its network.

Aethir Network is a decentralized economy comprised of miners, developers, users, token holders, and the Aethir DAO. Three key roles that ensure the successful operation of the network are Containers, Indexers, and Inspectors. Containers are the power nodes of the network, performing critical operations as dedicated nodes to keep the network active, including validating transactions and rendering digital content in real time. Inspectors are quality assurance workers that continuously monitor the performance and quality of service of containers to ensure reliable and efficient operation that meets the needs of GPU consumers. Indexers act as matchmakers between users and the best available containers. Underpinning this structure is the Arbitrum Layer 2 blockchain, which provides a decentralized settlement layer to facilitate payments for goods and services on the Aethir network, using the native $ATH token.

Proofs of Rendering

Nodes in the Aethir network perform two key proofs - Proof of Rendering Capacity, under which a group of worker nodes is randomly selected every 15 minutes to validate transactions, and Proof of Rendering Work, which closely monitors network performance to ensure users are optimally served, adjusting resources based on demand and geography. Mining rewards are distributed in the form of native $ATH tokens to participants who run Aethir network nodes as compensation for the computing resources they provide.
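The periodic random selection of workers can be sketched as follows. The sample size, node names, and seeding are illustrative; Aethir's actual selection mechanism may differ.

```python
import random

def select_validators(nodes, sample_size, seed=None):
    """Randomly choose `sample_size` distinct nodes for one validation
    round (the text describes a round every 15 minutes)."""
    rng = random.Random(seed)  # seeded here only for reproducibility
    return rng.sample(nodes, sample_size)

nodes = [f"container-{i}" for i in range(100)]
round_validators = select_validators(nodes, sample_size=5, seed=42)
print(len(round_validators))  # 5
```

Random sampling matters here: because no node knows in advance whether it will be checked, every node has an incentive to stay honest and performant.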

Nosana

Nosana is a decentralized GPU network built on Solana. Nosana allows anyone to contribute idle computing resources and be rewarded for doing so in the form of $NOS tokens. The DePIN facilitates the allocation of cost-effective GPUs that can be used to run complex AI workloads without the overhead of traditional cloud solutions. Anyone can run a Nosana node by lending out an idle GPU, earning token rewards proportional to the GPU power provided to the network.

The network connects two parties that allocate computing resources: users seeking access to computing resources and node operators who provide computing resources. Important protocol decisions and upgrades are voted on by NOS token holders and governed by the Nosana DAO.

Nosana has laid out a detailed roadmap for its future plans:

  • Galactica (v1.0 - H1/H2 2024) will launch the mainnet, release the CLI and SDK, and focus on network expansion through container nodes for consumer GPUs.

  • Triangulum (v1.X - H2 2024) will integrate major machine learning protocols and connectors for PyTorch, HuggingFace, and TensorFlow.

  • Whirlpool (v1.X - H1 2025) will expand support for GPUs from AMD, Intel, and Apple Silicon.

  • Sombrero (v1.X - H2 2025) will add support for medium and large enterprises, fiat currency exchange, billing, and team features.

Akash

Akash Network is an open-source proof-of-stake network built on top of the Cosmos SDK that allows anyone to join and contribute to a decentralized cloud computing marketplace. The $AKT token is used to secure the network, facilitate resource payments, and coordinate economic alignment between network participants. Akash Network consists of several key components:

  • The blockchain layer uses Tendermint Core and Cosmos SDK to provide consensus.

  • The application layer manages deployment and resource allocation.

  • The provider layer manages resources, bids, and user application deployment.

  • The user layer allows users to interact with the Akash Network, manage resources, and monitor application status through the CLI, console, and dashboard.

The network, which initially focused on storage and CPU rental services, has since expanded to GPU rental and allocation through its AkashML platform in response to the growth of AI training and inference workloads and their demand for processing power. AkashML uses a "reverse auction" system where customers (called tenants) submit the price they want to pay for a GPU and compute providers (called providers) compete to supply the requested GPU.
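The reverse-auction flow described above can be sketched as follows: the tenant posts the maximum price they will pay, providers bid at or below it, and the lowest bid wins. Provider names and prices are illustrative only.

```python
def run_reverse_auction(tenant_max_price, provider_bids):
    """Return (provider, bid) for the cheapest qualifying bid, or None
    if no provider bids at or below the tenant's maximum price."""
    qualifying = {p: b for p, b in provider_bids.items()
                  if b <= tenant_max_price}
    if not qualifying:
        return None
    winner = min(qualifying, key=qualifying.get)
    return winner, qualifying[winner]

bids = {"provider-a": 0.9, "provider-b": 0.7, "provider-c": 1.4}
print(run_reverse_auction(1.0, bids))  # ('provider-b', 0.7)
```

Compared with posted-price cloud rates, the reverse auction pushes providers to compete on price, which is the cost advantage the text attributes to this model.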

As of this writing, the Akash blockchain has seen over 12.9 million total transactions, over $535k has been spent to access computing resources, and over 189k unique deployments have been leased.

Other projects worth mentioning

The computational DePIN space is still evolving, with many teams racing to bring innovative and efficient solutions to market. Other examples worth further investigation include: Hyperbolic is building a collaborative open access platform for AI development resource pools, Exabits is building a distributed computing power network supported by computational miners, and Shaga is building a network on Solana that allows PC rental and monetization for server-side gaming.

Important considerations and future prospects

Now that we have covered the fundamentals of compute DePINs and reviewed several case studies of networks currently in operation, it is important to consider the implications of these decentralized networks, including their pros and cons.

Challenges

Building distributed networks at scale often requires trade-offs among performance, security, and resilience. For example, training an AI model on a globally distributed network of commodity hardware can be far less cost- and time-efficient than training it in a centralized cluster. As mentioned earlier, AI models and their workloads are becoming more complex, which shifts demand toward high-performance GPUs rather than commodity hardware.

This is why large companies hoard high-performance GPUs in bulk, and it is an inherent challenge for compute DePINs, which attempt to solve the GPU shortage by establishing a permissionless market where anyone can lend out idle supply. Protocols can mitigate the problem in two main ways: by setting baseline requirements for GPU providers who want to contribute to the network, and by pooling contributed resources into a larger aggregate. This model is nonetheless harder to scale than that of centralized service providers, which can allocate more capital to deal directly with hardware vendors such as Nvidia. DePINs should plan for this: if a decentralized protocol accumulates a large enough treasury, its DAO could vote to allocate part of the funds to purchasing high-performance GPUs, which could be managed in a decentralized manner and rented out at higher rates than commodity GPUs.

Another challenge specific to compute DePINs is managing the right level of resource utilization. In their early stages, most compute DePINs face a structural lack of demand, much as many startups do. The general challenge for a DePIN is to build enough supply early on to reach minimum viable product quality: without supply, the network cannot generate sustainable demand or serve its customers during demand peaks. The other side of the equation is excess supply. Beyond a certain threshold, additional supply only helps when network utilization is near or at full capacity; otherwise, the DePIN risks overpaying for supply, which leads to underutilization and lower revenue per provider unless the protocol increases token issuance to retain suppliers.

Just as a telecommunications network without broad geographic coverage is useless, and a taxi network is useless if passengers must wait too long for rides, a DePIN is useless if it must pay indefinitely for resources nobody uses. While centralized service providers can forecast resource demand and manage supply efficiently, compute DePINs have no central authority managing utilization, so they must build toward healthy utilization especially strategically.
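The oversupply dynamic above can be made concrete with a toy model. This is a simplified illustration under assumed numbers, not any protocol's actual token economics; `provider_revenue` and its parameters are hypothetical:

```python
def provider_revenue(demand_gpu_hours: float, price: float,
                     num_providers: int, capacity_per_provider: float,
                     token_emissions: float = 0.0) -> float:
    """Average revenue per provider: paid demand is capped by what the
    network can serve, then split (with any emissions) across providers."""
    capacity = num_providers * capacity_per_provider
    served = min(demand_gpu_hours, capacity)  # can't serve more than capacity
    fee_revenue = served * price
    return (fee_revenue + token_emissions) / num_providers

# With demand fixed at 1,000 GPU-hours at $1/hour:
print(provider_revenue(1000, 1.0, 10, 100))  # 100.0 (100% utilization)
print(provider_revenue(1000, 1.0, 20, 100))  # 50.0  (50% utilization)
```

Doubling supply past full utilization halves per-provider revenue, which is exactly why a protocol would need to raise `token_emissions` to keep marginal suppliers from leaving.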

A bigger-picture issue for the decentralized GPU market is that the GPU shortage may be coming to an end. Mark Zuckerberg recently said in an interview that he believes the future bottleneck will be energy rather than computing resources, as companies will race to build data centers at scale rather than hoarding compute as they do today. This would imply falling GPU costs as demand slows, but it also raises the question of how AI startups will compete with large companies on performance and service quality if proprietary data centers push the AI model performance bar to unprecedented levels.

The case for compute DePINs

To reiterate, there is a growing gap between the complexity of AI models and their subsequent processing and computational requirements, and the number of high-performance GPUs and other computing resources available.

Computational DePINs have the potential to be innovative and disruptive in the computing market sector, which today is dominated by major hardware manufacturers and cloud computing service providers, based on several key capabilities:

  1. Lower costs for goods and services.

  2. Stronger censorship resistance and network resilience guarantees.

  3. The potential to benefit from regulatory guidelines requiring AI models to be as open as possible for fine-tuning and training, and easily accessible to anyone, anywhere.

The proportion of U.S. households with computers and Internet access has increased exponentially, approaching 100%. There has also been significant growth in many parts of the world. This suggests that potential providers of computing resources (GPU owners) may be willing to lend out idle supply given sufficient monetary incentives and a seamless transaction process. Of course, this is a very rough estimate, but it suggests that the foundations for a sustainable computing resource sharing economy may already exist.

Beyond AI, future demand for computing will also come from many other industries, such as quantum computing. The quantum computing market is projected to grow from $928.8 million in 2023 to $6,528.8 million in 2030, a compound annual growth rate of 32.1%. Production in this industry will require different kinds of resources, but it will be interesting to see whether any quantum computing DePINs launch and what form they take.
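The growth-rate figure above can be verified with a one-line compound annual growth rate (CAGR) calculation:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth rate
    that takes `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# Quantum computing market: $928.8M (2023) -> $6,528.8M (2030), 7 years.
growth = cagr(928.8, 6528.8, 2030 - 2023)
print(f"{growth:.1%}")  # 32.1%
```

This confirms the cited projection is internally consistent: roughly 7x growth over seven years compounds to about 32% per year.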

"A strong ecosystem of open-source models running on consumer hardware is an important hedge against a future where the value captured by AI is excessively concentrated, and such models are much lower-risk than corporate giants and militaries." - Vitalik Buterin

Large enterprises are not, and likely never will be, the target audience for DePINs. Compute DePINs re-empower individual developers, small entrepreneurs, and resource-constrained startups, turning idle supply into the abundance of compute that innovative ideas and solutions require. AI will undoubtedly change the lives of billions of people; instead of worrying about it replacing everyone's job, we should encourage the idea that AI can empower individuals, self-employed entrepreneurs, startups, and the broader public.