Decentralised AI: The Power of Permissionless Intelligence

At the panel titled "Decentralised AI: The Power of Permissionless Intelligence", held on 18 September 2024 at TOKEN2049 Singapore, moderator Santiago R Santos, Managing Partner of SRS, was joined by Emad Mostaque, Founder of Schelling AI; Sean Ren, Co-Founder and CEO of Sahara AI; Tarun Chitra, Founder and CEO of Gauntlet; and Alex Skidanov, Co-Founder of NEAR AI. The panel delved into whether decentralised AI is needed, whether it can compete with centralised solutions, its practical applications, and the risks and promise of permissionless intelligence.

Do We Need Decentralised AI? Privacy Concerns, Ownership, & Users' Data

Santos opened the panel by pointing out that virtually every major technology created in the past decade is now controlled by large corporations.

In every case, those corporations capture users' data, monetise it, and sell it.

Users are effectively not customers; they are products.

And artificial intelligence (AI) is moving in the same direction.

The big differentiator is what the panel calls user-owned AI: users should control their data and how they use AI-powered applications.

According to Ren, the current AI ecosystem contains distinct groups that require attention, including AI developers or model creators and AI users, among others.

Prototyping a model idea today is both costly and time-consuming.

Developers must source data, find compute vendors, and raise capital, often from venture capitalists (VCs).

This process can take anywhere from three to six months before even beginning to test ideas.

The developers face significant challenges in accessing the necessary resources and capital, presenting a clear opportunity for blockchain-backed platforms and decentralised finance (DeFi) markets to streamline this process and accelerate innovation.

Also, AI users often unknowingly share a wealth of personal and private information—preferences, personal data, financial details—in exchange for free services, like those provided by ChatGPT.

Over time, companies like OpenAI accumulate vast amounts of this user data, improving their models by leveraging this collective knowledge.

While this drives innovation, it also raises serious concerns about privacy, ownership, and how users' data is being used, potentially even to automate or replace their jobs.

Ren explained:

“And then over time, they're going to create a better version, aggregating all of these individual domain experts that might be able to automate part of your jobs and even placing your jobs in a monetization opportunity. And that's where we're going back to this whole problem of privacy protections and sort of user ownership about what they have given to the centralized provider. And I think these are the two major big problems we have to tackle.”

Addressing these challenges requires a shift toward stronger privacy protections and user ownership of data.

The blockchain and crypto ecosystems offer a promising solution to tackle these issues by providing decentralised alternatives and empowering both developers and users.

Can Decentralised AI Compete with Centralised Solutions?

The demands of the AI industry are immense, and while decentralisation and user privacy are often praised, the reality is that most users prioritise efficiency and cost over these ideals.

This raises an important question: Can decentralised AI realistically compete with centralised solutions in terms of features and pricing?

Mostaque recalled that building Stability AI required the power of 10,000 NVIDIA A100s, a cluster that at the time ranked as the 10th fastest supercomputer globally, with eight times the computing power of NASA.

This setup, necessary for building cutting-edge models in image, video, and audio, came with a $400 million price tag.

By comparison, Elon Musk's latest supercomputer, designed to train state-of-the-art models, uses 300,000 A100s and costs $3 billion.

These staggering figures reflect the financial stakes involved in training advanced AI models.
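As a rough sense of scale, the implied cost per GPU can be worked out from the figures quoted on the panel; the short sketch below uses those figures as given and ignores networking, power, and facilities, so it is illustrative arithmetic rather than real cluster economics.

```python
# Back-of-the-envelope arithmetic using the figures quoted on the panel
# (illustrative only; real cluster costs also cover networking, power and facilities).
clusters = {
    "Stability AI build": {"gpus": 10_000, "cost_usd": 400e6},
    "Musk supercomputer": {"gpus": 300_000, "cost_usd": 3e9},
}

for name, c in clusters.items():
    per_gpu = c["cost_usd"] / c["gpus"]
    print(f"{name}: ~${per_gpu:,.0f} per installed GPU")
```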

Interestingly, recent advancements, such as OpenAI’s o1 model, which achieved an IQ score of 120, show that computing needs are evolving.

Decentralised, parallel computing is becoming more feasible, with examples like distributed rendering already leveraging a million GPUs.

However, despite these developments, decentralised AI still faces significant challenges, particularly when it comes to competing at scale with centralised solutions.

Bitcoin, for instance, consumes 160 terawatt hours of energy annually using specialised infrastructure—nearly half the energy consumption of all global data centers combined.

This illustrates that while decentralisation can work in some areas, achieving the same level of performance in AI remains complex and costly.

He questioned:

“So I think there are some elements there where we can compete, but right now it's very difficult because you need billions of dollars of investment to build the state-of-the-art models. And the question is, will that change so we can use distributed billions of dollars, as it were? Or can we create mechanisms that we can create these clusters and then have an open alternative to these fully centralized solutions that typically only reflect one point of view? So that's like the Google image generation where you type in sumo wrestler and you get Indian female sumo wrestlers.”

Chitra raised several points worth considering.

First, edge applications—those that may not have immediate financial returns, such as in scientific research—often face significant barriers due to the high costs of model training relative to potential outcomes.

Take AlphaFold as an example.

When it was first launched, there was internal debate at DeepMind about whether it was worth allocating even a fraction of the necessary resources.

It took time for people to recognise the value of developing AI models for areas outside of the usual text, video, or commercially viable applications.

As a result, these edge projects often get sidelined in discussions focused on cost efficiency, where attention is directed toward more commercially obvious models.

Second, there is a shift happening in AI architecture.

Rather than building one massive model based on a large initial dataset, there is growing interest in models that use reinforcement learning and game theory, which could result in different trade-offs between latency and bandwidth.

In this approach, a model might tolerate longer inference times if it reduces complexity.

This shift will likely transform how clusters are developed and impact how inference is handled over time.

The debate over these architectural changes is far from over and will continue to evolve.

He said:

“So with AlphaFold, I was one of the authors of OpenFold, where we did the open-source replication of AlphaFold, and we provided all the compute for that as well. There were a lot of errors in it, which we didn't understand because it was a black box. So I think open-source is very interesting and decentralized because you have so many more people looking, and you can figure out errors in these models that are going to have an increasing part of our lives and be an increasing part of our lives as well.”
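On Chitra's second point, the latency and bandwidth trade-off can be made concrete with a minimal back-of-the-envelope model; all of the numbers below (compute time per step, bytes exchanged, link speeds) are assumptions chosen for illustration, not measurements from any real cluster.

```python
# A minimal back-of-the-envelope model of the latency/bandwidth trade-off:
# dense training needs constant high-bandwidth gradient exchange, while
# inference-heavy approaches (e.g. reinforcement-learning rollouts or test-time
# search) exchange far less data and can tolerate a loosely coupled network.
# All numbers are illustrative assumptions, not measurements.

def step_time(compute_s: float, bytes_exchanged: float,
              bandwidth_bps: float, latency_s: float) -> float:
    """Seconds per step: local compute plus communication cost."""
    return compute_s + latency_s + bytes_exchanged / bandwidth_bps

# Hypothetical workloads.
dense_training = dict(compute_s=0.5, bytes_exchanged=2e9)   # full gradient sync each step
rollout_search = dict(compute_s=5.0, bytes_exchanged=2e6)   # only results/policies exchanged

# Hypothetical networks.
datacenter = dict(bandwidth_bps=400e9, latency_s=0.0001)    # tightly coupled cluster
decentralised = dict(bandwidth_bps=100e6, latency_s=0.05)   # GPUs over the open internet

for name, wl in [("dense training", dense_training), ("rollout/search", rollout_search)]:
    dc = step_time(**wl, **datacenter)
    de = step_time(**wl, **decentralised)
    print(f"{name}: datacenter {dc:.2f}s/step vs decentralised {de:.2f}s/step "
          f"({de / dc:.1f}x slowdown)")
```

Under these assumed numbers, the communication-heavy workload slows down dramatically on a loosely coupled network, while the inference-heavy workload barely notices, which is the kind of trade-off that could reshape how clusters are built.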

Practical Applications of AI

On the practical applications of AI, there has been remarkable progress in marketplaces for compute and the monetisation of models.

This raises an important question: what are the use cases that will scale and achieve widespread adoption in the near future?

Much of the success in AI model development over the past few years has stemmed from centralisation: bringing GPUs together, pooling machines into data centers, and assembling teams of top AI researchers.

Centralised resources have played a pivotal role in fostering innovation and driving AI forward.

This brings up an important point: not all aspects of AI development should be immediately decentralised.

Disrupting this proven approach too early could undermine the progress already made.

However, a GPU marketplace could be a valuable solution, allowing for a better distribution of global GPU resources without necessarily requiring blockchain integration at this stage.
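To make the marketplace idea concrete, here is a minimal sketch of how GPU supply could be matched with demand; the data model, pricing, and greedy matching rule are all assumptions for illustration, not a description of any existing platform.

```python
# Minimal sketch of a GPU marketplace matcher (hypothetical data model; not any real platform).
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    gpu_type: str
    hours_available: float
    price_per_hour: float

@dataclass
class Request:
    buyer: str
    gpu_type: str
    hours_needed: float
    max_price_per_hour: float

def match(requests: list[Request], offers: list[Offer]) -> list[tuple[str, str, float]]:
    """Greedily fill each request from the cheapest compatible offers."""
    fills = []
    for req in requests:
        remaining = req.hours_needed
        for offer in sorted(offers, key=lambda o: o.price_per_hour):
            if offer.gpu_type != req.gpu_type or offer.price_per_hour > req.max_price_per_hour:
                continue
            take = min(remaining, offer.hours_available)
            if take > 0:
                fills.append((req.buyer, offer.provider, take))
                offer.hours_available -= take
                remaining -= take
            if remaining == 0:
                break
    return fills

offers = [Offer("lab-a", "A100", 500, 1.80), Offer("idle-dc", "A100", 200, 1.20)]
requests = [Request("model-team", "A100", 300, 2.00)]
print(match(requests, offers))  # fills 200h from idle-dc, then 100h from lab-a
```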

Where decentralisation becomes more compelling is in the ownership and governance of AI assets, such as datasets and models.

If multiple parties contribute resources to build a model, why not distribute ownership accordingly?

For example, if someone provides 100 hours of A100 GPU time because the project lacks the cash to purchase it outright, they could receive a percentage of the model in return.

The same principle applies to data providers, whose contributions can significantly enhance a model's uniqueness.

This approach calls for new governance and economic frameworks to enable the crowd-sourcing of AI models, with ownership distributed among contributors.

When the model begins generating revenue, the proceeds should be distributed in a transparent, trustless, and permissionless way, ensuring that all stakeholders are fairly compensated.

This is precisely where blockchain technology could play a crucial role, facilitating a decentralised system of trust and attribution.
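A minimal sketch of what contribution-weighted ownership and revenue sharing could look like is shown below; it assumes every contribution (GPU hours, data, cash) can be priced in a common unit, and how to price those contributions fairly is precisely the kind of question such a framework would have to answer.

```python
# Minimal sketch of contribution-weighted model ownership and revenue sharing.
# Assumption: every contribution (GPU hours, data, cash) is priced in a common unit;
# the values and rates below are placeholders for illustration.

contributions = {
    "gpu_provider": 100 * 2.0,   # 100 A100-hours at an assumed $2/hour
    "data_provider": 150.0,      # agreed value of a contributed dataset
    "cash_investor": 250.0,      # direct funding
}

total = sum(contributions.values())
ownership = {who: amount / total for who, amount in contributions.items()}

def distribute(revenue: float) -> dict[str, float]:
    """Split model revenue pro rata to recorded ownership shares."""
    return {who: revenue * share for who, share in ownership.items()}

print(ownership)           # gpu_provider ~33%, data_provider 25%, cash_investor ~42%
print(distribute(1_000.0)) # payout when the model earns 1,000 units of revenue
```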

Skidanov noted that one of the most intriguing current use cases in AI is decentralised inference, which has seen practical implementation in several companies.

For example, his team works with Hyperbolic, a company that rivals Fireworks on both price and speed.

What sets Hyperbolic apart is that its infrastructure is built on a large decentralised network of GPUs sourced from various participants.

Inference, traditionally a high-demand service in the Web2 space, continues to be one of the most commercially viable areas in AI, and the decentralised approach is proving to be highly competitive.
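As a rough illustration of what routing over such a network might involve, the sketch below picks the cheapest healthy provider that meets a latency budget; the fields and selection policy are assumptions for illustration, not Hyperbolic's actual design.

```python
# Illustrative sketch of routing inference requests over a pool of independent GPU providers.
# The fields and selection policy are assumptions, not any company's actual design.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price_per_1m_tokens: float
    p95_latency_ms: float
    healthy: bool

def pick_provider(pool: list[Provider], latency_budget_ms: float) -> Provider:
    """Cheapest healthy provider that satisfies the latency budget."""
    eligible = [p for p in pool if p.healthy and p.p95_latency_ms <= latency_budget_ms]
    if not eligible:
        raise RuntimeError("no provider meets the latency budget")
    return min(eligible, key=lambda p: p.price_per_1m_tokens)

pool = [
    Provider("gpu-node-eu", 0.40, 900, True),
    Provider("gpu-node-us", 0.55, 300, True),
    Provider("gpu-node-sg", 0.35, 1200, False),
]
print(pick_provider(pool, latency_budget_ms=1000).name)  # -> gpu-node-eu
```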

Another fascinating development is NEAR AI's ongoing work to create what Skidanov calls an "AI developer", aimed at the end-to-end development of smart contracts or applications.

This initiative, which began in 2021, relies heavily on large-scale data annotation.

The team has a network of developers and annotators contributing to this effort, many of whom work in regions where traditional payment methods are challenging.

Here, blockchain plays a pivotal role.

By leveraging blockchain, workers can receive immediate compensation for their efforts, making the process seamless and efficient.

He explained:

“Without blockchain it would be practically impossible to build in the same way.”

This illustrates how decentralised technology is not only reshaping AI infrastructure but also solving practical issues in global workforce engagement.
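The sketch below illustrates the basic idea of paying annotators per accepted task; the in-memory ledger stands in for on-chain token transfers, and the wallets and rate are made-up placeholders.

```python
# Minimal sketch of per-task annotator payouts. The "ledger" here is an in-memory
# stand-in for on-chain token transfers; wallets and the rate are made-up placeholders.

RATE_PER_TASK = 0.50  # assumed payout in a stable token per accepted annotation

ledger: list[dict] = []  # would be token-transfer transactions on a real chain

def settle_task(annotator_wallet: str, task_id: str, accepted: bool) -> None:
    """Pay the annotator immediately once a task passes review."""
    if accepted:
        ledger.append({"to": annotator_wallet, "amount": RATE_PER_TASK, "task": task_id})

settle_task("0xAnnotatorOne", "task-0421", accepted=True)
settle_task("0xAnnotatorTwo", "task-0422", accepted=False)
print(ledger)  # one immediate payout recorded for the accepted task
```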

When considering practical use cases, the impact of technology on media production is particularly striking, according to Chitra.

Advancements in AI could enable the creation of full-length Hollywood movies by next year, produced asynchronously.

Given that films consist of shots averaging two and a half seconds, real-time generation would be impractical as it would require significant waiting periods to review progress.

Media transformation is one of the most significant applications of inference computing, but the challenges of intellectual property (IP) attribution and knowledge flow remixing remain.

Moreover, in the realm of healthcare, the potential of AI is immense.

Models focused on conditions like cancer can integrate up-to-date medical knowledge and exhibit higher empathy levels than human doctors, ensuring that patients feel supported throughout their journeys.

The application of distributed ledger technology adds another layer of protection for healthcare data, allowing for secure AI analysis at the edge.

For instance, Apple’s on-device AI can analyse personal health information while integrating various knowledge features.

As infrastructure continues to evolve, particularly with initiatives from innovators in the field, we stand on the brink of breakthroughs in holistic healthcare and personalised education.

The recent introduction of Google’s NotebookLM, which can generate a podcast discussion based on the Bitcoin whitepaper in moments, exemplifies these advancements.

However, to maximise the effectiveness of these AI agents, it is crucial to establish systems for trackability, verifiability, and coordination as they tackle complex challenges.

Will Decentralised AI Destroy Humans or Does it Not Even Matter?

The concept of permissionless innovation in AI presents both exciting possibilities and significant risks.

While some individuals express concerns that AI could ultimately threaten humanity, others maintain a more optimistic outlook.

The question arises: how does the permissionless nature of AI development impact its potential for harm?

Open-source AI embodies permissionless innovation, allowing users to access and modify substantial datasets freely.

However, the real concern may lie in governance and standardisation.

For instance, a hypothetical scenario could involve a large-scale deployment of robots that becomes catastrophic due to a flawed firmware update—an issue rooted in centralised control.

Furthermore, research from Anthropic on "Sleeper Agents" highlights how malicious modifications can corrupt large language models, rendering them uncontrollable.

While the permissionless model does introduce new vulnerabilities, it can also foster resilience.

The history of centralised systems reveals numerous attack points and failures, suggesting that open infrastructure might offer greater robustness.

Consequently, the dialogue around AI governance should focus on enhancing the safety and reliability of decentralised systems, rather than solely addressing the risks inherent in a permissionless approach.

Santos pointed out that Skidanov used to be part of OpenAI.

Skidanov remarked that he was with OpenAI in 2016, a time before the organisation had even established a logo.

After a brief tenure there, he left to co-found NEAR.

In 2022, following NEAR's launch, he returned to OpenAI, intrigued by the company's silence over the previous three years, suspecting that significant advancements might have been made.

He claimed:

“I was briefly at OpenAI again primarily because by 2022 they have been silent for 3 years and I suspected they might have reached the singularity and I just wanted to be part of it and that was the case. And yeah one of the things I was doing is I was walking the corridors of the company and I was looking into the eyes of the people and I was trying to judge are they evil or not and it's very hard to do also evil people are usually good at not showing that they're evil. So at the end of the day I wasn't able to convince myself that this group of people is there's 100% certainty that that group of people will save us from annihilation if the AI will start turning away.”

He argued that society stands to benefit significantly when top-tier research is collectively owned rather than monopolised by a small group of entities.

He also expressed concern over current efforts in the US to introduce regulations that could severely limit the ability of others to train large models.

Permissionless AI Not Ready for High-Stakes Applications Yet

Ren offered his subjective view that, if asked whether they would be comfortable deploying AI in a permissionless manner, most AI researchers at conferences would express hesitation.

This reluctance stems from two primary concerns.

First, there is a fundamental lack of understanding regarding how these models operate and make decisions.

Unlike traditional programmes, which follow explicit instructions, AI models function as probabilistic systems, generating stochastic predictions based on their training data and the specific contexts in which they are deployed.

As a result, they can produce unexpected errors, sometimes simply due to minor variations in input data that fall outside their training parameters.

Second, many crucial human values and common-sense constraints have not been fully integrated into these models.

This gap becomes particularly problematic in high-stakes applications, such as filtering job resumes or evaluating credit applications.

Despite being discussed in AI literature for decades, these challenges remain difficult to address effectively in real-world scenarios.

He advised:

“Before that problem has been understood in a much more comprehensive way, I don't think we should be thinking ready to deploy AI in a permissionless manner at high-stake applications, but for low-stake applications I think it's good to take some risks and let it fly.”
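Ren's first concern, that models generate stochastic predictions rather than following explicit instructions like a traditional programme, can be illustrated with a toy example; the tokens and scores below are invented purely for the illustration.

```python
# Toy illustration of why model outputs are stochastic: the model produces a probability
# distribution over next tokens and the answer is *sampled* from it, so identical inputs
# can yield different outputs. The tokens and logits below are invented for illustration.
import math
import random

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["approve", "reject", "escalate"]
logits = [2.1, 1.9, 0.3]          # hypothetical scores for a borderline input

probs = softmax(logits, temperature=1.0)
for _ in range(3):
    print(random.choices(tokens, weights=probs)[0])  # may differ from run to run
```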

Intelligence Will Be the Reserve Currency in the Future

In terms of the various product categories within the intersection of crypto and AI, one category includes crypto applications enhanced with AI features, such as trading bots or risk mitigation solutions.

Another involves AI systems utilising crypto properties, perhaps with AI agents operating through stablecoins.

The most intriguing category, however, is the synergy of crypto and AI, where their combined capabilities create unique opportunities.

The original vision of crypto, epitomised by Bitcoin, was to create a permissionless, distributed monetary system.

The focus now shifts to envisioning a post-labour economic framework.

Intelligent money, augmented by AI and intelligent market makers, could redefine how economies function.

Building open systems in healthcare, education, governance, and finance will be crucial to this evolution.

Moreover, while crypto serves as an effective coordination layer, the question arises: what role will intelligence play in future monetary systems?

The traditional model of human labour may soon be disrupted; as computational resources expand, so too will the capabilities of AI agents, allowing them to outperform human counterparts.

Mostaque concluded:

“Again, opening the eyes on the first things whereby you're bounded by the amount of compute that you have to do things and you can out compete people if you have more compute to run these agents as they become mature. So we think that will be the foundation of a new economic system given again, why would you hire any graduate programmers or most content writers when the AI is putting that more? You have to have a realignment of the social structure. And that's coming incredibly fast.”

That's a wrap Token2049 🇸🇬

What an incredible experience! From insightful discussions to connecting with AI & Web3 innovators and enthusiasts.

Together we can create an equitable, transparent, & collaborative AI future.

Huge shoutout to the #Token2049 event staff for all the… pic.twitter.com/mD1agsmqKj

— Sahara AI (@SaharaLabsAI) September 20, 2024