Participants in Web3 should focus on niche scenarios and fully leverage their unique advantages in censorship resistance, transparency, and social verifiability.

Author: David & Goliath

Compiled by: Deep Tide TechFlow

Currently, the compute and training segments of the AI industry are dominated by centralized Web2 giants, thanks to their deep capital reserves, cutting-edge hardware, and vast data resources. While this situation may persist for the most powerful general-purpose machine learning (ML) models, Web3 networks may gradually become a more economical and accessible source of compute for mid-range or customized models.

Similarly, when inference demand exceeds the capabilities of personal edge devices, some consumers may opt for Web3 networks to obtain less censored and more diverse outputs. Rather than attempting to disrupt the entire AI technology stack, participants in Web3 should focus on these niche scenarios and fully leverage their unique advantages in censorship resistance, transparency, and social verifiability.

The hardware required to train the next generation of foundation models (such as GPT or BERT) is scarce and expensive, and demand for the most powerful chips will continue to outstrip supply. This scarcity concentrates hardware in the hands of a few well-funded top-tier companies, which use it to train and commercialize the most performant and complex foundation models.

However, hardware iterates extremely quickly. So how will yesterday's mid-range or low-performance hardware be put to use?

This hardware is likely to be used to train simpler or more targeted models. By matching different categories of models to hardware of varying performance, resources can be allocated optimally. Here, Web3 protocols can play a key role by coordinating access to diverse, low-cost computing resources. For instance, consumers could use simple mid-range models trained on their personal datasets and only turn to high-end models trained and hosted by centralized companies for more complex tasks, all while keeping their identities hidden and their prompt data encrypted.
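The tiered setup described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any protocol's actual API: `estimate_complexity` is a toy heuristic, the provider names are invented, and the identity hashing is a simplified stand-in for a real privacy scheme such as blinded credentials.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Route:
    provider: str   # which tier will serve the request
    user_id: str    # pseudonymous identifier, never the raw identity

def pseudonymize(identity: str) -> str:
    # Hide the user's identity behind a one-way hash (a simplified
    # placeholder for a real anonymization scheme).
    return hashlib.sha256(identity.encode()).hexdigest()[:16]

def estimate_complexity(prompt: str) -> int:
    # Toy heuristic: treat longer prompts as more complex tasks.
    return len(prompt.split())

def route(prompt: str, identity: str, threshold: int = 50) -> Route:
    # Cheap mid-range Web3 model by default; escalate to a centralized
    # high-end model only when the task looks complex enough.
    provider = ("highend-centralized"
                if estimate_complexity(prompt) > threshold
                else "midrange-web3")
    return Route(provider=provider, user_id=pseudonymize(identity))
```

In practice the complexity estimate might come from a small classifier rather than a word count, and the prompt itself would be encrypted in transit; the sketch only shows the routing decision and identity hiding.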

Beyond efficiency, concerns about bias and potential censorship in centralized models are also growing. The Web3 environment, known for its transparency and verifiability, can support the training of models that Web2 overlooks or deems too sensitive. Although these models may not be competitive in performance or innovation, they still hold significant value for certain groups in society. Web3 protocols can therefore carve out a unique market here by offering more open, trustworthy, and censorship-resistant model training services.

Initially, centralized and decentralized approaches can coexist, serving different use cases. However, as Web3 continues to enhance developer experience and platform compatibility, along with the gradual emergence of network effects for open-source AI, Web3 may eventually compete in the core domains of centralized enterprises. Particularly as consumers become increasingly aware of the limitations of centralized models, the advantages of Web3 will become more pronounced.

In addition to training mid-range or domain-specific models, participants in Web3 are also well placed to provide more transparent and flexible inference solutions. Decentralized inference services can bring multiple benefits, such as zero downtime, modular composition of models, public model performance evaluations, and more diverse, uncensored outputs. These services can also help consumers avoid the vendor lock-in that comes from relying on a few centralized providers. As with model training, the competitive advantage of decentralized inference layers lies not in raw computational power but in addressing long-standing issues: the opacity of closed-source tuning parameters, the lack of verifiability, and high costs.

Dan Olshansky has proposed a promising idea: using POKT's AI inference routing network to create more opportunities for AI researchers and engineers, allowing them to put their research into production and earn additional income from customized machine learning (ML) or artificial intelligence (AI) models. More importantly, such a network can foster fairer competition in the inference services market by integrating inference results from many sources, both decentralized and centralized providers.
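The fan-out-and-aggregate pattern behind such a routing network can be sketched as follows. This is an illustrative simplification, not POKT's actual protocol: providers are modeled as plain callables, and a majority vote stands in for whatever scoring or consensus mechanism a real network would use.

```python
from collections import Counter
from typing import Callable

# A provider is anything that maps a prompt to an answer: a decentralized
# node or a centralized API behind the same interface.
Provider = Callable[[str], str]

def aggregate_inference(prompt: str, providers: list[Provider]) -> str:
    """Query every provider, tolerate failures, return the majority answer."""
    answers = []
    for provider in providers:
        try:
            answers.append(provider(prompt))
        except Exception:
            continue  # one provider going down must not take the service down
    if not answers:
        raise RuntimeError("no provider responded")
    # Majority vote across whoever responded; a single vendor can neither
    # censor the output nor become a point of failure.
    return Counter(answers).most_common(1)[0][0]
```

Because every provider sits behind the same interface, swapping vendors or mixing decentralized and centralized backends requires no changes to the caller, which is precisely the lock-in-avoidance property described above.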

Although optimistic predictions suggest the entire AI technology stack may eventually migrate fully on-chain, this goal still faces a major obstacle: data and computational resources remain centralized, and they give the incumbent giants a substantial competitive advantage. Nevertheless, decentralized coordination and compute networks demonstrate unique value in providing AI services that are more personalized, cost-effective, openly competitive, and censorship-resistant. By focusing on these value-driven niche markets, Web3 can build its own competitive moat, ensuring that the most influential technologies of this era evolve in multiple directions and benefit a wider range of stakeholders, rather than being monopolized by a few traditional giants.

Finally, I would like to especially thank all members of the Placeholder Investment team, as well as Kyle Samani from Multicoin Capital, Anand Iyer from Canonical VC, Keccak Wong from Nectar AI, Alpin Yukseloglu from Osmosis Labs, and Cameron Dennis from NEAR Foundation, who provided valuable feedback and reviews during the writing of this article.