Author: AYLO; Translated by: Tao Zhu, Golden Finance

Last week, Aethir announced that they had $36 million in ARR (Annual Recurring Revenue), which would put them among the top 20 crypto protocols by revenue this year.

Did I get your attention? Good, because this article is worth reading.

As AI rapidly advances toward AGI, the demand for computing resources is skyrocketing, leading to a growing gap between those who have access to powerful GPU chips and those who don’t. Aethir is an innovative Decentralized Physical Infrastructure Network (DePIN) that aims to democratize access to cloud computing resources.

Founded with the vision of making on-demand computing more accessible and affordable, Aethir has built a distributed network that aggregates enterprise-grade GPU chips from a variety of sources. The network is designed to support the growing demands of artificial intelligence, cloud gaming, and other compute-intensive applications.

In this interview, I sat down with Aethir’s co-founder, Mark Rydon, to discuss Aethir’s unique approach.

Given the recent increase in competition in the decentralized GPU space, how does Aethir differentiate itself from the other players?

That's a really good question. I'm going to answer it in two parts. First, I'm going to explain the problem we're trying to solve because that's key to understanding. As I'm sure you and your audience know, there's a global computing scarcity problem. Tech giants are competing for critical GPU resources. It's a massive race to create smarter and smarter AI until we reach AGI or ASI, and then everything in the world will change.

What's interesting about this race is that it's driven by a simple principle: if you add more GPUs and more data to the ecosystem, the AI gets smarter. Plot it and it's a line from the bottom left to the top right as AI capability increases. The type of GPU required for this is critical. You can't do it on a consumer GPU or a low-power graphics card. All the big companies in the AI race, on both the training side and the application side, use enterprise-class GPUs. The specific model they've been focused on for the last year and a half to two years is Nvidia's H100.
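
For readers who want that "line from the bottom left to the top right" in formula form, the scaling laws from general AI research (for example, the Chinchilla paper by Hoffmann et al., 2022) capture the intuition; this is industry-wide research, not anything specific to Aethir:

$$
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
$$

Here $L$ is the model's training loss (lower loss roughly means a smarter model), $N$ is the number of parameters, $D$ is the amount of training data, and $E$, $A$, $B$, $\alpha$, $\beta$ are fitted constants. More GPUs let you push up both $N$ and $D$, which drives the loss down; plotting capability against compute gives exactly the upward-sloping line described above.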

The key point is that it's enterprises that have the huge demand for AI compute. They are building their businesses on this compute infrastructure. So we have to ask them what kind of compute they want: what types of GPUs, and what quality, performance, and uptime requirements they need to meet their internal metrics. It's like Netflix needing servers with high uptime guarantees to avoid service interruptions. The same is true for GPUs and any compute provider; they must meet strict service, quality, uptime, and performance requirements.

Unfortunately, most compute networks in the Web3 space aggregate consumer GPUs. This is the simplest way to build a network - offer tokens to a community of people contributing their idle gaming GPUs. That quickly attracts a large number of GPUs and builds a strong community excited about the token rewards, which is why so many of the compute networks that exist today aggregate consumer GPUs.

The challenge is that to have a real business you need to sell aggregated compute. However, consumer GPU networks quickly hit a low ceiling because 99.9% of companies don’t want to buy compute from consumer decentralized networks. They can’t guarantee that the GPUs won’t be turned off at night, or that bandwidth won’t be limited by home activities like streaming Netflix. This leads to a pretty big disconnect between what enterprises need and what consumer GPU networks can offer.

Source: Layer.gg

From day one, we decided not to aggregate any consumer-grade GPUs. Every GPU connected to Aethir is enterprise-grade, integrated through enterprise network infrastructure, and located in data centers suitable for enterprise workloads. The largest AI companies, telcos, and tech companies can use our network to do whatever they need without sacrificing performance or quality. In fact, they get higher performance and a better overall experience.

For example, IO.net needed to overcome a lot of FUD about their massive network of consumer-grade GPUs. When they wanted to prove that their network could handle real business, they rented enterprise GPUs from Aethir. Therefore, all enterprise GPUs on IO.net are provided by Aethir. This is public knowledge within the ecosystem.

Aethir's commitment to serving enterprise clients from day one has been critical.

One more thing that used to confuse people in the AI space when I explained what we do: being a distributed GPU network means, by definition, that we don't own any GPUs. We have about 43,000 GPUs in data centers connected to our network, but we don't own any of them. Of those, more than 3,000 are H100s, which is by far the largest collection of H100s in Web3, almost 10x more than our closest competitor. That's why so many large AI companies use our infrastructure: we can actually serve them.

One thing that some AI companies are confused about is the importance of what we call "co-located machines." If you're doing large-scale training, like OpenAI or similar projects, and you need 500 or more H100s, those GPUs have to be in the same data center. You can't have one H100 in Japan, one in the US, and 200 in India. AI can't be trained efficiently on distributed hardware. This is a big technical challenge that other DePIN companies have been working on, and it's still an unsolved problem. I think it's a huge opportunity, but it's very complex.

Since Aethir has been focused on the enterprise from the beginning, we understand that serving enterprise customers means more than just having a bunch of disconnected enterprise-class GPUs. We also need to think about the colocation of machines in our network. As a result, Aethir has a number of large colocated high-performance GPU clusters. This means that we are not just a distributed network with enterprise-class GPUs around the world; our network has large colocated clusters that enable us to handle those large AI jobs for companies that need colocated machines.
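
As a rough illustration of the colocation constraint described above, here is a minimal Python sketch (hypothetical, not Aethir's scheduler); the cluster names and sizes are made up. The idea is simply that a large training job can only be placed on a single data center that holds enough of the right GPUs, no matter how many GPUs the network has in total.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    data_center: str   # physical location of the colocated machines
    gpu_model: str     # e.g. "H100"
    gpu_count: int     # GPUs available in this one data center

def placeable_clusters(clusters, gpu_model, gpus_needed):
    # Only clusters that can host the entire job in one location qualify;
    # GPUs scattered across data centers cannot be combined for training.
    return [c for c in clusters
            if c.gpu_model == gpu_model and c.gpu_count >= gpus_needed]

if __name__ == "__main__":
    network = [
        Cluster("tokyo-1", "H100", 64),
        Cluster("virginia-2", "H100", 1024),
        Cluster("mumbai-1", "H100", 200),
        Cluster("frankfurt-3", "A100", 512),
    ]
    # A 500-GPU training run cannot be stitched together from Tokyo plus Mumbai;
    # only the single 1,024-GPU cluster qualifies.
    print(placeable_clusters(network, "H100", 500))
```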

What inspired the creation of Aethir?

So actually, it was cloud gaming that got us excited about distributed GPU computing. The team met while I was living in Beijing, where I spent about seven years. I moved there to start my first company and eventually ended up working on scaling cloud gaming networks.

Long story short, we had an idea that we could solve the performance and scalability challenges of cloud gaming networks by distributing the hardware in a decentralized manner. At a high level, the premise is that latency is the killer of these networks. The further the user is from the compute, the worse the user experience becomes due to increased latency. The idea is that if you remove the incentive to centralize compute, you can have a more distributed network. As the network gets larger and more decentralized, the likelihood of users being closer to the compute increases, thereby reducing latency and improving performance.

Centralized solutions focus on bringing all resources into one location to achieve economies of scale, but this doesn’t add value from a user perspective. It actually limits network performance. If you were to build a network that optimizes for user experience, you’d distribute compute everywhere so that users are always close to it. We think if we can solve the distribution and unit economics challenges, we can solve the problems that prevent services like Google Stadia from being deployed wherever they need to be.
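
To make the latency argument concrete, here is a minimal Python sketch (purely illustrative, not Aethir's actual routing code) under simple assumptions: users and nodes are random points on a unit square, each user is routed to the nearest node, and latency is proportional to distance. As the number of distributed nodes grows, the average distance to the nearest node, and with it the latency, falls.

```python
import math
import random

def nearest_node_distance(user, nodes):
    # Distance from a user to the closest compute node (simple Euclidean metric).
    return min(math.dist(user, node) for node in nodes)

def average_latency_ms(num_nodes, num_users=2000, ms_per_unit=100.0, seed=42):
    # Assumed model: random placement on a unit square, latency proportional
    # to distance (ms_per_unit is an arbitrary illustrative conversion factor).
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(num_nodes)]
    users = [(rng.random(), rng.random()) for _ in range(num_users)]
    avg_distance = sum(nearest_node_distance(u, nodes) for u in users) / num_users
    return avg_distance * ms_per_unit

if __name__ == "__main__":
    # Average latency drops as compute spreads out and users get closer to it.
    for n in (1, 10, 100, 1000):
        print(f"{n:>5} nodes -> ~{average_latency_ms(n):.1f} ms average latency")
```

The point of the toy model is the trend, not the numbers: a single centralized site leaves most users far from the compute, while a widely distributed network keeps nearly everyone close to some node.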

That’s where we started, and we quickly realized our relevance to the AI space and started building products there.

Another point worth noting is that the global gaming population is about 3.3 billion. Most of them (about 2.8 billion) play games on low-end devices, which means they cannot play mainstream AAA games, and likewise, AAA developers cannot reach these players.

The most viable way to unlock all of that value is to use cloud gaming to remove the hardware requirement from the user: take the technology we already have and make it cost-effective to scale. Now you have a technology that can reach those 2.8 billion players, no matter where they are. That's exactly what building the network in a decentralized way makes possible.

Current Aethir Game Metrics

We are bringing hardware-decoupled gaming to billions of gamers around the world in a way that was fundamentally impossible before. That's why I'm so bullish on gaming; that's our original vision.

Do you think the demand for decentralized computing will increase significantly, or do you think we are already there but it will take more time for customers to adopt the technology?

I think, to be honest, whether we're talking about decentralized cloud solutions in general or just Aethir, it's mostly about education. You don't need to know that Aethir is a decentralized cloud provider to work with us. We work very well with Web2 companies; 90% of our customers are Web2 companies, and they are very happy with the service we provide.

Looking at the broader AI ecosystem, where is the inflection point? There are some crazy statistics.

A few weeks ago I read a research paper that said that based on the projected growth in computing demand, there will not be enough electricity on Earth to support the computing needs of AI by 2030. That’s crazy.

These macro numbers show a huge amount of capital being deployed into the ecosystem; they are almost incomprehensible. But if you zoom in a little, you'll see that there are two types of compute demand: training and inference. Training is the process of making an AI smarter, such as upgrading from GPT-4 to GPT-5. Inference is the AI doing its job, such as answering questions.

People like you and me mostly use ChatGPT or other large models, right? For example, ChatGPT through the Microsoft ecosystem or Gemini through Google. Most of our interactions are with general-purpose large language models from very few companies. But if we look ahead a year, given the exponential growth of the industry, my guess is that you will interact with AI in more places than you do now.

Soon we will be interacting with AI in more meaningful, agentic ways. AI will do more for us, like booking flights, providing assistance, and handling customer service calls. It will be much more than it is today.

If you look upstream at compute, unless a company is just using the ChatGPT API to create an application, they are most likely building their own AI product, which means they have their own compute infrastructure needs. So as the inference space grows, it will become more fragmented. Currently, most of the infrastructure used for training comes from a small number of large companies. While some companies are developing new competitors to these large language models, the explosion of AI applications on the inference side will lead to a more fragmented compute market.

This means we will see a lot of demand flow to providers that offer competitive pricing and contracts friendly to startups and small businesses. That is probably the most imminent turning point I see.

Another product in the Aethir ecosystem is the A-Phone. Can you tell us more about this product and who its target audience is?

A-Phone is built and scaled directly on our infrastructure. It uses our cloud gaming technology to stream real-time rendering to the device with low latency. This is very cool because it's all about access. For example, you can have a $150 smartphone, download the Aethir app, open it, and get access to the equivalent of a $1,500 device. All of the hardware limitations of your local device are gone because the cloud provides the power to run the apps.

You can open virtually unlimited apps on your Aethir phone without draining your battery. All computing, processing, and storage is done in the cloud, essentially giving you a super phone that you can call upon at any time to run any app you want.

Whether it's a game or an educational platform with video conferencing, it's pretty cool. It removes the hardware barrier for people to access content, tools, or utilities, especially for mobile users, who make up the vast majority of Internet users.

What do you think has been the key to Aethir’s success so far: your technology solutions, or your business development efforts?

I think there were two areas we focused on as a company. The first was the enterprise element that I mentioned earlier. That meant making some very hard decisions early on. As I said, it was much easier to aggregate consumer-grade GPUs; it was much harder to aggregate the enterprise-grade GPUs that we already knew were hard to find and access. We took the harder path early on, and that put us at risk early in our operations. But because we did the hard work then, we're better off now. Not many companies have the resolve to do something so risky early on, and that was huge for us.

Secondly, we have always been very focused on real business - real utilization, real contracts, real revenue. This focus has been very important to us since the beginning. This is why we chose the enterprise path. We want to fully leverage Web3 technology and provide industry-changing solutions, not just Web3 solutions, but best-in-class industry solutions in the fields of AI and gaming.

Our business development team played a crucial role in convincing partners to join our ecosystem, especially in the early days. On the technical side, we made the process of connecting compute resources to the network seamless. Currently, there is more supply looking to enter our ecosystem than we can accommodate. In the future, our goal is to be a truly permissionless, fully decentralized ecosystem, and we will get there. But in the beginning, we had to be pragmatic: opening the floodgates and letting a ton of idle GPUs drain your token incentives is not a good business move.

We think of ourselves as a supply-led organization. We always try to have more supply than demand. We don't want to deny demand, but we also don't want a huge gap between supply and demand. We want to grow supply and demand sensibly and steadily. We don't just throw in unlimited GPUs to brag about our numbers; that's not the right approach.

We have some big announcements planned in the coming weeks that demonstrate our commitment to transparency. This will be really interesting for people to watch and shows that Aethir is a company that people want to be involved with.

Can you tell us a little bit about the Aethir token? How does it fit into the ecosystem and how does it generate value?

This is actually a topic for a much bigger conversation, as you will see shortly. I can't say any more on the topic right now, but what I can say is that it has historically been difficult for a lot of projects to work with large Web2 entities because of the need to deal with tokens.

This is an ongoing challenge in the space, and we have a very exciting and novel solution. I think people will be very optimistic when they see it, and it will enable us to bring a lot of volume to the token.

Our largest customers are Web2 customers, and I don't think that's going to change. We need to make sure we're in that business and allow that value to accrue to the Aethir token and the ecosystem that it supports. That's our commitment, and I think you're going to see some very interesting stuff next week about how we do that.