Original Article Title: What's Next For Solana | Anatoly Yakovenko
Original Translation: Ismay, BlockBeats
Editor's Note: Last week, SOL broke through $248, its highest price since the FTX collapse and the second-highest in its history, just 4% short of its November 6, 2021 all-time high of $259. The Lightspeed podcast invited Solana Labs co-founder Anatoly Yakovenko and Mert, CEO of the Solana RPC company Helius, to dig into Solana transaction fees, how to stay competitive in crypto, SOL inflation, competition with Apple and Google, whether Solana has a moat, and other topics.
Questions Covered
1. Why does Solana still have so many front-running transactions?
2. Can L2 architecture really solve congestion issues?
3. Can a chain focused on single-node optimization really challenge Solana's global consensus advantage?
4. Is shared block space a "tragedy of the commons" or the key to DeFi capital efficiency?
5. What is Solana's core competitive advantage?
6. Will Solana's inflation rate decrease?
7. Is FireDancer's development cost too high?
8. How does Solana compete with Ethereum?
1. Why does Solana still have so many front-running transactions?
Mert: So let's start from the beginning, Anatoly. One of the reasons you founded Solana is that you were tired of being front-run in traditional markets. You wanted Solana to synchronize information globally at the speed of light, maximizing competition and minimizing arbitrage, but none of that has been achieved so far: almost everyone is constantly being front-run. Not only has MEV skyrocketed, but Jito tips have often exceeded Solana's priority fees. How do you see this issue? Why is this happening?
Anatoly: You can set up your own validator node and submit your transactions without interference from anyone, right? In traditional markets you don't have that choice at all, and this is exactly where Solana's decentralization comes into play. It does work.
The current challenge is that setting up a validator node is not easy. Getting a node staked enough to hold a meaningful position is not simple, and finding other nodes willing to order transactions the way you expect is even harder. All of this is achievable; it just takes time and effort. The market is not yet mature enough to have sufficient competition. For example, the competition between Jito and its rivals is not strong enough for users to easily choose, "I only submit to order-flow provider Y, not to order-flow provider K."
At a fundamental level, as an enthusiast, I can launch my own validator node, stake some amount, run my algorithm, and directly submit my transactions. No one can stop me from doing this; it is all possible. The real question now is whether we have matured to a point where users can always choose the best way to submit transactions. I believe we are far from reaching that stage.
In my opinion, the way to get there is conceptually simple but incredibly challenging to execute: increase bandwidth, reduce latency, optimize the network as much as possible, and eliminate the bottlenecks that make the system unfair, including by adding multiple parallel block proposers. If there is one leader per slot and you have 1% of stake, you get roughly one opportunity every 100 slots. With two leaders per slot and 1% of stake, you get roughly one opportunity every 50 slots. So the more leaders we can add, the less stake you need to operate your algorithm at the quality of service you require.
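To make the arithmetic concrete, here is a minimal Rust sketch (illustrative only, not Solana's actual leader-schedule code) of the relationship Anatoly describes: under stake-weighted selection with k leaders per slot, a validator holding stake share s expects one leader opportunity roughly every 1/(s·k) slots.

```rust
// Minimal sketch (not Solana's real leader schedule): expected spacing
// between leader opportunities for a validator with a given stake share,
// under stake-weighted selection with k leaders per slot.

/// Expected number of slots between leader opportunities.
fn slots_between_opportunities(stake_share: f64, leaders_per_slot: u32) -> f64 {
    // Each of the k leader positions in a slot is won with probability
    // equal to the stake share, so expected opportunities per slot is
    // stake_share * k, and the expected gap is its reciprocal.
    1.0 / (stake_share * leaders_per_slot as f64)
}

fn main() {
    let stake = 0.01; // 1% of total stake
    for k in [1u32, 2, 4] {
        println!(
            "{} leader(s) per slot -> one opportunity every ~{:.0} slots",
            k,
            slots_between_opportunities(stake, k)
        );
    }
    // Prints 100, 50, 25 slots, matching the one-leader and two-leader
    // examples in the conversation above.
}
```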
(Editor's note: someone did create a website called Solana Roadmap; when you open it, it simply reads "Increase Bandwidth, Reduce Latency." Anatoly asked who made it.)
Mert: Currently, the situation is that you need to amass a certain amount of stake to get your transactions prioritized, and even then it isn't guaranteed. Having more stake in the system not only helps you get your own block space; there is a dynamic where the richer you are, the greater your advantage. Is that acceptable?
Anatoly: Performance improvements lower the barrier for honest participants to change the market's dynamics. If we have two leaders per second, the stake required to provide the same service is halved, reducing the economic barrier to entry and letting more people compete. Someone can say, "Hey, I'm the best validator; submit all your Jupiter transactions to me and I'll order them the way you want." That way I can run a business, offer it to users, and competition forces the market toward the fairest equilibrium. That is the ultimate goal.
A significant difference between Solana and Ethereum, though, is that I see this as purely an engineering problem. We just need to optimize the network and increase bandwidth: more leaders per second, bigger blocks, everything scaling up until competition forces the market into an optimal state.
2. Can L2 Architecture Really Solve Congestion Issues?
Mert: Speaking of engineering issues, the reason Jito tips exceed priority fees is not only MEV but also the transaction-landing process; more precisely, the local fee market does not always behave deterministically, and sometimes it is simply unstable. What is the reason for this?
Anatoly: The current transaction-processing implementation is far from optimal under very high load. When load is low, it runs smoothly. During the mini bear market of the past six months, I saw end-to-end confirmation times under a second, and everything ran very smoothly because so few transactions were being submitted to the leader. The queues, fast-access lists, and other resources weren't full, and there was no backlog caused by performance bottlenecks.
When those queues back up, transactions can't be prioritized before they reach the scheduler, and that effectively breaks the local fee market. So in my opinion this, too, is an engineering issue, and extreme optimization of these processing pipelines is perhaps where the ecosystem most needs engineering effort right now.
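For intuition, here is a toy Rust sketch of a fee-ordered scheduler queue. It is not the real Agave scheduler, just an illustration of the point above: fee ordering only produces a working local fee market if pending transactions actually reach the scheduler's queue before it drains.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Toy model of a fee-ordered scheduler queue (not the actual Agave
// scheduler): transactions are drained highest priority fee first.
#[derive(Eq, PartialEq)]
struct PendingTx {
    priority_fee: u64, // e.g. micro-lamports per compute unit
    id: u64,
}

impl Ord for PendingTx {
    fn cmp(&self, other: &Self) -> Ordering {
        // Max-heap on fee; tie-break on id for a total order.
        self.priority_fee
            .cmp(&other.priority_fee)
            .then_with(|| self.id.cmp(&other.id))
    }
}
impl PartialOrd for PendingTx {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    for (id, fee) in [(1u64, 5u64), (2, 500), (3, 50)] {
        queue.push(PendingTx { priority_fee: fee, id });
    }
    // Drains in fee order: tx 2, then 3, then 1. If upstream queues
    // drop or delay transactions before the scheduler ever sees them,
    // fee ordering is broken no matter what this heap does.
    while let Some(tx) = queue.pop() {
        println!("schedule tx {} (fee {})", tx.id, tx.priority_fee);
    }
}
```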
Mert: Given these issues, your answer seems to be that the problems do exist, but they are engineering problems, so they can be solved and future iterations will address them. Some might say these issues don't exist on L2 because of its architecture, right? Because you can achieve first-come, first-served through a centralized sequencer.
Anatoly: First-come, first-served leads to the same problems; even Arbitrum has priority channels. Implementing first-come, first-served encourages spam transactions, which is the same issue. A general-purpose L2 supporting multiple applications will eventually hit the same problem as well.
Some may argue that because an L2 doesn't carry consensus and a vertically integrated ecosystem the way Solana does, it can iterate faster, like a Web2 company shipping a new version every 48 hours and quickly fixing issues through its centralized sequencer. But it still faces the same problems Solana does.
You could say Jito does have a chance to address these issues, because their relayer can be updated every 24 hours or release continuously. What they aren't doing yet is enough scheduling and filtering to limit the traffic coming out of those relayers to what the validator's scheduler can handle, but you can achieve a similar effect.
So I don't think L2 by itself addresses these issues. An L2 only looks effective at launch, with one popular app and no other apps competing for the space. And it doesn't even hold within a single app: if your app has multiple markets, congestion in market A affects all the other markets.
3. Can a chain focused on single-node optimization truly challenge Solana's global consensus advantage?
Mert: Let's look at it from a different angle. Suppose this is not a general-purpose L2 but a chain like Atlas that focuses on DeFi and runs the SVM. How would Solana compete with such a chain? Atlas doesn't have to worry about consensus overhead or shared block space; it can focus on DeFi optimization and even achieve a fee-less market through the SVM.
Anatoly: What you are describing is really Solana with a smaller validator set. In that case there is only one node, which is easier to optimize because you can use bigger hardware. This is the core question: does synchronous composability matter at scale? That smaller network can only cover the region where its hardware sits, so information still has to propagate globally, and in Solana's end state multiple validators synchronize transaction submission globally in a permissionless, open way.
If you solve that problem, the end result is Solana. Whether or not data is submitted to an L2, the key issue is how to synchronize information globally and reach consensus quickly. Once you start tackling that, it is no longer something a single machine in New York or Singapore can solve; you need some form of consensus, consistency, and linearization. Even if you later rely on an L2 for stricter settlement guarantees, you still face the same issues Solana faces today. So in my opinion these single-node SVMs are basically no different from Binance.
How to compete with Binance is the more interesting question. You may choose to use the SVM, but users will ultimately prefer Binance because of its better user experience. So we need to become the best version of a centralized exchange, and the only way to achieve that is to embrace a decentralized multi-proposer architecture.
Mert: Another argument is that Solana has to solve these issues anyway, and through an L2 they can be solved more quickly; it's easier to fix things on a single box than across 1,500 boxes. That way the L2 garners more attention and builds network effects from the start. Solana has to address these issues regardless, and because the L2 uses the same architecture, it can learn from Solana and potentially ship faster.
Anatoly: The business-competition question is whether those single boxes can survive under real load. Building a single box doesn't immediately solve everything; you still face nearly identical engineering challenges, especially once the discussion is no longer about Solana's consensus overhead but about the transaction submission pipeline.
The transaction submission pipeline can be just as centralized on Solana as on some L2s. In fact, Solana already has single-box relayers that receive large volumes of transactions and then attempt to submit them to validators. The data rate between the relayer and the validators can be capped at a level the validators can always process smoothly.
Furthermore, this design allows components like Jito to iterate at a faster pace. So I believe the advantage L2s get from this design is actually smaller than people imagine.
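As a sketch of the rate-capping idea described above, here is a hypothetical token-bucket limiter in Rust. The numbers and structure are assumptions for illustration, not Jito's actual relayer code; the point is only that a relayer can shed load instead of flooding a validator's queues.

```rust
use std::time::{Duration, Instant};

// Hypothetical token-bucket limiter illustrating how relayer -> validator
// traffic could be capped at a rate the scheduler can absorb. Not any
// real relayer's code, just the general mechanism.
struct TokenBucket {
    capacity: f64,       // maximum burst, in transactions
    tokens: f64,         // current balance
    refill_per_sec: f64, // sustained rate
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last_refill: Instant::now() }
    }

    /// Returns true if one transaction may be forwarded right now.
    fn try_forward(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false // shed load rather than flood the validator's queues
        }
    }
}

fn main() {
    // Cap forwarding at 10k tx/s with a burst of 1k (made-up numbers).
    let mut bucket = TokenBucket::new(1_000.0, 10_000.0);
    let forwarded = (0..5_000).filter(|_| bucket.try_forward()).count();
    println!("forwarded {forwarded} of 5000 back-to-back transactions");
    std::thread::sleep(Duration::from_millis(100)); // tokens refill over time
    assert!(bucket.try_forward());
}
```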
4. Is Shared Block Space a "Tragedy of the Commons" or the Key to DeFi Capital Efficiency?
Mert: Broadening the discussion: Solana, as an L1, has shared block space, which can lead to a "tragedy of the commons," like the overuse of a common pool resource. On an L2, though, even one that isn't strictly an app chain, developers can have their own block space without sharing it with anyone.
Anatoly: That independence may be appealing to app developers, but the environment has to be permissioned. Once you adopt permissionless validators or sequencers, you lose that control as soon as multiple apps run concurrently.
Even in a single-app environment like Uniswap, if the platform has multiple markets, those markets can interfere with each other; an obscure meme token can affect ordering priority in the mainstream markets. Look at it from a product perspective and imagine a future where all assets are tokenized: as the CEO of a newly minted unicorn deciding where to IPO, if I see heavy SHIB volume on Uniswap causing congestion so severe that mainstream assets can't trade properly, that is unquestionably a failure state for an app-focused L2.
So the challenges these application-focused L2s face are similar to Solana's: they all need to isolate states so that one application doesn't impact the others. Even within a single application like Uniswap, if congestion in one market drags down all the other markets, that environment is unacceptable to a CEO like me. I don't want my primary market sitting in the same lane as everything everyone else is trading. I want each trading pair to operate independently.
Mert: What if it's permissioned, though? Since there's an exit mechanism, wouldn't that work?
Anatoly: Even in a permissioned environment you still have to solve local isolation, and solving it there is not fundamentally different from solving it in a forwarder or a scheduler.
Mert: Do you think this market analogy can map to any type of application?
Anatoly: Some applications don't have these characteristics, such as simple peer-to-peer payments, where there is very little contention and scheduling is straightforward. The reason isolation design and all this seemingly complex machinery matter is that if you can't guarantee a single market or application won't cause global congestion, then companies like Visa will launch their own dedicated payment L2, because their transactions never compete with each other. They don't care about priority; they care about TPS. It doesn't matter whether my card swipe is the first or last transaction in the block; what matters is that I can walk away within two and a half seconds of swiping. So in a payments scenario the priority mechanism isn't crucial, but payments are a very important real-world use case.
My view is that if we can't implement isolation properly, the idea of a large-scale composable state machine becomes meaningless, because you will see dedicated payment chains and single-market L2s emerge. Imagine I'm the CEO of a company about to IPO: why, over the next 20 years, would I launch on Uniswap's chain rather than my own L2 that supports only my trading pairs and guarantees good performance?
That is a possible future, but from an engineering perspective there is no reason it has to go that way. If we can solve the engineering problems, then composability in a single environment has a huge advantage: the friction of moving capital between all states and liquidity drops dramatically, which to me is a critical feature. I believe Solana's DeFi survived the bear market and withstood a bigger hit than anyone else precisely because its composability makes its capital efficiency higher.
Mert: Vitalik recently said that, in his view, composability is overrated. I think he probably reached that conclusion from empirical data, believing there aren't many on-chain examples actually exploiting it. What are your thoughts?
Anatoly: Isn't Jupiter the epitome of composability? I think he's only looking at Ethereum. Jupiter has a huge market share on Solana and a significant share of the entire crypto space, and Jupiter is impossible without composability; without it, Jupiter cannot function. Look at 1inch, its competitor on Ethereum: it can't scale, because even moving between an L2 and its own L1 is extremely costly and slow.
I think he's wrong. Financial systems can be asynchronous; that's how most financial systems operate today, and it's not as if they fail or crash because of it. But if Solana succeeds and the ecosystem keeps solving these problems at its current pace, even just maintaining the current level of execution each year, you will see significant improvements. Ultimately, I think composability wins.
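To illustrate what synchronous composability buys an aggregator, here is a toy Rust sketch (hypothetical names and simplified constant-product math, not Jupiter's actual routing): a multi-leg route executes atomically and reverts as a unit, which has no direct equivalent when the legs settle asynchronously across separate domains.

```rust
// Toy illustration of why an aggregator depends on synchronous
// composability: a multi-leg route must execute atomically within one
// state machine. Pool logic is a simplified constant-product AMM.
#[derive(Clone)]
struct Pool { reserve_in: f64, reserve_out: f64 }

impl Pool {
    // Constant-product swap (x * y = k), ignoring fees for simplicity.
    fn swap(&mut self, amount_in: f64) -> f64 {
        let k = self.reserve_in * self.reserve_out;
        self.reserve_in += amount_in;
        let new_out = k / self.reserve_in;
        let out = self.reserve_out - new_out;
        self.reserve_out = new_out;
        out
    }
}

/// Execute a route across several pools as one atomic unit: either the
/// whole route meets the minimum output, or all state changes are undone.
fn route(pools: &mut [Pool], amount_in: f64, min_out: f64) -> Option<f64> {
    let snapshot = pools.to_vec(); // cheap stand-in for transaction rollback
    let mut amount = amount_in;
    for pool in pools.iter_mut() {
        amount = pool.swap(amount);
    }
    if amount >= min_out {
        Some(amount)
    } else {
        pools.clone_from_slice(&snapshot); // revert, as an atomic tx would
        None
    }
}

fn main() {
    let mut pools = vec![
        Pool { reserve_in: 1_000.0, reserve_out: 2_000.0 },
        Pool { reserve_in: 5_000.0, reserve_out: 4_000.0 },
    ];
    match route(&mut pools, 10.0, 15.0) {
        Some(out) => println!("route filled: {out:.2}"),
        None => println!("route reverted: slippage too high"),
    }
    // Across asynchronous domains (say, separate L2s), there is no such
    // single-shot revert: each leg settles independently, which is the
    // cost an aggregator pays when liquidity is split across chains.
}
```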
5. What is Solana's core competitive advantage?
Mert: Let's set engineering aside for a moment and assume engineering is not a moat, that other chains can achieve the same results. A chain like Sui could also achieve composability with a smaller validator set, and suppose the L2s facing the issues you mentioned could solve them too. I've asked you before: when engineering is no longer the moat, what is? You said content and functionality.
Anatoly: Right, Solana never set a specific validator-count target. The testnet has around 3,500 validators, and the mainnet is large because I wanted it as large as possible to prepare the network for the future. If you want as many block producers worldwide as possible, you need a large validator set where anyone can join and participate in every part of the network without permission.
You should test as aggressively as possible now, because the cost of solving these problems is currently low. Solana isn't handling trillions of dollars of user funds yet; that's what Wall Street does. Solana deals with cryptocurrencies, which gives us an opportunity to have the smartest people in the world work on these puzzles and forces them to face these challenges.
So my point is that, rather than Solana shrinking its validator set for performance, Sui and Aptos are more likely to need to grow theirs. If you find PMF, everyone will want to run their own node, because running a node provides assurance. And as the validator set grows, if you start limiting participants, you limit the network's scalability.
Mert: Alright, you've touched on something I want to discuss. Although that is the goal, the data shows the number of validators decreasing over time. It sounds like you think this is due to a lack of product-market fit, so people aren't motivated to run nodes themselves, right? Or what is the reason?
Anatoly: Yes, and part of it is the Solana Foundation's stake delegation. But I am genuinely curious how many validators are self-sustaining, and whether that number is growing.
Mert: Hold on, we have about 420 validator nodes that are self-sustaining.
Anatoly: But what was it like two years ago?
Mert: We may not have that data. But we do know that the Solana Foundation's total staked amount has decreased significantly over the past two years.
Anatoly: And fees have been rising. So my guess is that the number of self-sustaining nodes two years ago was much lower, even though the total node count was higher then. My point is that the network needs to scale to support everyone who wants to run a node. That is also one of the main purposes of the delegation program: to attract more people to run nodes and put some stress on them on the testnet.
But a testnet can never fully reproduce mainnet behavior, right? No matter how many validators they test on testnet, mainnet will still behave quite differently. So as long as the number of self-sustaining nodes is growing, I see that as a positive trend. The network must physically be able to scale to that size, otherwise it limits growth.
Mert: So basically you're saying the delegation mechanism helps stress-test the network at different validator counts, but fundamentally the only important, or most important, number is the count of self-sustaining validators.
Anatoly: Exactly. You can construct theoretical counterarguments, like the claim that in some extreme case even a node that isn't self-sustaining matters: in a catastrophic failure, if it's the only surviving node, it does help. But that belongs to the end-game "decentralization for nuclear war" class of problems.
Fundamentally, what truly matters is whether the network is growing and succeeding, and that relies on self-sustaining validators who can pay their own bills and are invested enough in the network to commit commercial resources to keep improving, dig into the data, and do the job properly.
Mert: In a world where anyone can run a fast, low-cost, permissionless system, why would people still choose Solana in the end state?
Anatoly: I think the future winner is probably Solana, because this ecosystem has demonstrated outstanding execution and is already ahead in addressing all these issues. Or the winner could be a project identical to Solana in every respect except that it isn't Solana, simply because it executed fast enough to overcome Solana's existing network effects.
So I think execution is the only moat. If you execute poorly, you get overtaken, but the overtaker has to perform exceptionally well to become a killer product. PMF, product-market fit, here means something strong enough to cause a behavioral shift in users.
For example, if transaction fees were ten times cheaper, would users switch away from Solana? If they're only paying half a cent today, probably not. But if switching elsewhere significantly reduced slippage, that might be enough to pull users or traders across.
You really do have to look at overall user behavior and ask whether some fundamental improvement is significant enough to make users choose another product. That kind of difference does exist between Solana and Ethereum: when a user signs a transaction and sees they have to pay $30 to receive an ERC-20 token, even for a very basic state change, the price is outrageous, it exceeds their expectations, and they go looking for a cheaper alternative.
The other factor is time; you can't wait two minutes for a confirmation, that's too long. Solana's average confirmation time today is around two seconds, sometimes spiking to 8 seconds, but it is marching toward 400 milliseconds. That is a behavioral-change incentive significant enough to make users willing to switch to a new product.
Whether that happens is still unknown. But nothing in Solana's technology prevents the network from continuing to optimize latency and throughput. So when people ask why Solana is growing faster than Ethereum, some assume the next project will overtake Solana the same way. In reality, the marginal difference between Solana and the next competitor is very small, which makes it much harder to create a difference large enough to change user behavior. That is a significant challenge.
Mert: If execution speed is the key factor, then fundamentally this becomes an organizational or coordination issue. The difference between Solana's vision and what might be called modularity (not a formal term) is that if you're an application developer like Drip building on Solana, you have to wait for the L1 to make changes, such as fixing congestion or bugs.
On an L2 or an app chain, however, you can fix these issues yourself. From that perspective, on such a chain you might be able to execute faster than you could on shared infrastructure, and if that's true, overall execution speed would be higher.
Anatoly: Over time that difference narrows. Ethereum, for example, used to be very slow. If you were running Drip on Ethereum and fees spiked to $50, you'd go ask Vitalik (Ethereum's founder) when it would be fixed, and he might answer, "We have a six-year roadmap, bro, this will take some time." Whereas if you ask the FireDancer or Agave teams, they'd say, "A team is already on it and aims to fix it in the next release, as soon as reasonably possible."
That's a cultural difference. The core L1 teams, and the whole infrastructure layer including you guys, across the entire transaction submission pipeline, all understand that when the network slows down or there is global congestion, it's a P0 issue, the most critical level, and everyone drops everything to address it immediately. Of course, unexpected issues sometimes arise, such as adjustments to the fee-market design.
Those issues become less and less common as the network's usage scales. I don't think there is currently any challenge that demands an urgent design change taking six months to a year to deploy; I don't see anything like that on the road ahead.
That said, there will certainly be bugs and other surprises at release time, requiring people to work weekends; that's part of the job. If you have your own dedicated L2 app chain, don't share resources, and fully control that layer of infrastructure, you may move faster, but at a higher cost that not everyone can afford.
So for the vast majority of use cases, a shared, composable infrastructure layer is likely cheaper and faster: an infrastructure layer offered as shared software-as-a-service that everyone can use. With bug fixes and continuous improvement, the gap keeps shrinking.
6. Will Solana's Inflation Rate Decrease?
Mert: A related criticism is SOL's inflation mechanism. Many believe it exists to support more validators by increasing rewards, but the cost may fall on pure investors. When people say Solana's inflation rate is too high, what's your first reaction? How do you see it?
Anatoly: This is an endless debate; changing the numbers inside the black box doesn't really change anything. You can tweak them in ways that affect particular people, or badly enough that the black box stops functioning, but the change itself neither creates nor destroys value. It's a bookkeeping operation.
The inflation schedule is what it is because it directly copied Cosmos's, since many of the initial validators were Cosmos validators. But does inflation affect the network as a whole? It may affect individuals under a particular tax regime, but for the network overall it is a cost to non-stakers and an exactly offsetting gain to stakers, which mathematically sums to zero. So from an accounting perspective, inflation does not affect the network as a black box.
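A small worked example (with made-up supply and rate numbers, not Solana's actual parameters) makes the accounting argument concrete: issuance shifts ownership share from non-stakers to stakers, and the two legs cancel exactly.

```rust
// Illustrative arithmetic for the zero-sum claim (hypothetical numbers,
// not Solana's actual supply or rates): inflation moves ownership share
// from non-stakers to stakers, but total value is just relabeled.
fn main() {
    let supply = 500_000_000.0_f64; // hypothetical token supply
    let staked_frac = 0.65;         // fraction of supply staked
    let inflation = 0.05;           // 5% annual issuance, all paid to stakers

    let staked = supply * staked_frac;
    let new_tokens = supply * inflation;
    let new_supply = supply + new_tokens;

    // Ownership share of each group before and after one year:
    let stakers_before = staked / supply;
    let stakers_after = (staked + new_tokens) / new_supply;
    let nonstakers_before = 1.0 - stakers_before;
    let nonstakers_after = 1.0 - stakers_after;

    println!("stakers:     {:.4} -> {:.4}", stakers_before, stakers_after);
    println!("non-stakers: {:.4} -> {:.4}", nonstakers_before, nonstakers_after);

    // The stakers' gain in share equals the non-stakers' loss:
    let gain = stakers_after - stakers_before;
    let loss = nonstakers_before - nonstakers_after;
    assert!((gain - loss).abs() < 1e-12); // nets to zero across the network
}
```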
Mert: I've seen people say that since it's arbitrary, why not just lower it?
Anatoly: Go ahead, propose the change; I personally don't mind. I've said countless times: change it to whatever value you want and persuade the validators to accept it. When the numbers were originally set, the main consideration was not causing a complete disaster, and since Cosmos hadn't had a problem with those settings, they were reasonable enough.
7. Is FireDancer's Development Cost Too High?
Mert: Let's go back to the coordination challenge. We've been promoting FireDancer a lot lately, and Jerry recently mentioned that some people are starting to feel FireDancer is a bit overhyped. He also said FireDancer has slowed people down somewhat, because Anza engineers and FireDancer engineers obviously need to align on certain things before moving forward, so there was some delay initially. Your view seems to be that once the spec and interfaces are ironed out, iteration speed will increase, right?
Anatoly: Yes, it essentially breaks down into three steps. First is the design phase, where you need to build consensus on what should be done; next is the implementation phase, where both teams can work in parallel; then comes the testing and validation phase, to ensure there are no security or liveness issues. The design phase may take more time, but implementation is parallel, so both teams can progress simultaneously, and testing and auditing go faster because the probability of two independent teams introducing the same bug is low.
I think the biggest difference is that Ethereum usually operates like this: we'll ship a major version containing all the features targeted for it, focusing on a feature set rather than a release date. Solana operates almost exactly the opposite way: set a release date, and if your feature isn't ready it gets cut, which yields a much faster release cadence.
In theory, with two teams that both want to iterate quickly, the cycle can be accelerated further. But that requires core engineers to have a sense of urgency, the feeling that "we need to ship this as soon as reasonably possible." Then you can lean on redundancy in implementation. Culturally, I think both teams have a similar background: they're not academic teams; they grew up in a tech pressure cooker.
Mert: That leads to my third point, about FireDancer. One could argue you've taken your own execution capacity off the table, because you're working on a phone rather than helping with L1 development or coordinating these client teams. Is that truly the optimal allocation of you as an individual?
Anatoly: The last major FireDancer change I was involved in was moving the accounts DB index out of memory. I could write a design proposal and a small proof-of-concept implementation to show feasibility, but finishing the work requires a full-time engineer dedicated to the task. I could hand it to Jash to get it done, but with the testing and release cycle the whole process would take a year.
For me, it would be great to join Anza or FireDancer as a plain individual contributor (IC), focused solely on Grafana (a performance monitoring tool) and building things. But the reality is that my energy is spread across countless projects. So I find the place I can have the most impact is defining the problem: growth problems, concurrent-leader problems, censorship problems, MEV competition problems, and so on. I can propose solutions, discuss them with everyone, get everyone to agree that my analysis of the problem is correct, and collect their candidate solutions. We iterate on the design together until it takes shape and solidifies.
Then, when the sense of urgency I anticipated builds, people already have the design in hand. The hardest part, alignment between the two teams, is done, and all that's left is implementation and testing. So my role is almost like a principal engineer at a large company: I don't write the code; I talk to multiple teams and say, "I've noticed you're stuck on this, and so are other teams; solve it this way so we stay aligned." That's probably where I can have the most impact in the core domain.
Mert: That is indeed the job of that role, but it's not easy. So are you saying, "Jack Dorsey can do it, Elon Musk can do it, so I can also build a phone while doing all this"?
Anatoly: It's not like that, actually. An outstanding engineer runs the mobile side; he's been a close friend of mine for over ten years and helped build the BlackBerry, the iPhone, and almost every phone you can think of. There's also a very good general manager. The two of them run the team together, and I'm responsible for setting the vision.
I don't think people fully understand this vision, but if you look at Android or iOS, each is really a cryptographically signed firmware spec that defines the entire platform. Everyone has such a device, and its security rests on trusted boot: when you receive a firmware update, the device verifies the firmware's signature and rewrites the entire phone system.
The most critical piece is that cryptographic signature, because it could just as well be produced by a DAO that signs the firmware and is responsible for releases. Imagine Apple's own code-signing certificate being controlled by a DAO; the whole concept of a software platform gets upended. It's an extremely cool, somewhat quirky, hacker-like idea.
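As a rough illustration of the trusted-boot idea, here is a hedged Rust sketch of firmware signature verification against a DAO-held release key, using the ed25519-dalek crate (v2). The key handling and data are placeholders, and a real DAO would presumably use a threshold or multisig scheme rather than a single key.

```rust
// Sketch of trusted-boot firmware verification with a DAO-held key,
// using ed25519-dalek v2. The key and data here are placeholders; this
// only illustrates the "DAO signs the firmware" idea from the text.
use ed25519_dalek::{Signature, Verifier, VerifyingKey};

/// Accept a firmware image only if it carries a valid signature from
/// the DAO's release key (in practice, likely a threshold key).
fn firmware_is_authentic(
    dao_release_key: &VerifyingKey,
    firmware_image: &[u8],
    signature_bytes: &[u8; 64],
) -> bool {
    let sig = Signature::from_bytes(signature_bytes);
    dao_release_key.verify(firmware_image, &sig).is_ok()
}

fn main() {
    // On a real device the key is burned into the boot ROM; here we
    // just parse a placeholder 32-byte public key.
    let key_bytes = [0u8; 32];
    if let Ok(key) = VerifyingKey::from_bytes(&key_bytes) {
        let ok = firmware_is_authentic(&key, b"firmware-image", &[0u8; 64]);
        println!("firmware accepted: {ok}"); // false for placeholder data
    }
}
```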
In addition, my main job is to set such a vision, drive the team to sell more phones, make it a truly meaningful product, and ultimately achieve the milestone where the entire ecosystem can control its own firmware. I do not get involved in day-to-day execution work.
As for Elon Musk, I think his way of working might be: he has a grand idea, then finds an engineer who can convincingly tell him, "I can implement this entire project from start to finish." If you can find that person, the only thing left to do is provide funding to speed things up. Given the money, that person will complete the whole project on their own and then hire people to accelerate their progress.
I try to operate the same way; I'm not sure it's identical to Elon's approach, but I believe it's a way to handle multiple projects at once: have a grand vision and a very specific goal, then find someone truly capable of achieving it, someone who, given unlimited time, could build every part themselves. Give them funding and they accelerate toward all of it.
Mert: You've said the vision is clear, but the ideal outcome seems to be: suppose you succeed, sell a large number of these phones, and have a groundbreaking impact on Crypto Twitter and on Apple. Then Apple might lower its fees. In other words, what you're doing changes the world.
Anatoly: Indeed, that would be a change. Software companies in the Midwest would no longer pay Apple's 30% ransom, and more efficient software and games could be built. That is genuinely good.
Mert: But this seems more like an altruistic effort than a business move, right?
Anatoly: The altruistic act only happens if it also succeeds as a business. For Apple to lower its fees, it must feel competitive pressure from a growing, commercially viable ecosystem. Otherwise Apple simply stalls until that ecosystem dies from lack of commercial viability. So the ecosystem must find product-market fit and be able to sustain itself.
But that doesn't mean it won't change the world. If it shrinks Apple's revenue share, that is the essence of capitalism: when you see a group extracting 30% rent and you provide the same service at 15%, you change the market economy and everyone benefits, including consumers.
Mert: So you have to believe you can, in some sense, actually beat two of the world's largest companies, Apple and Google. Why do you think you can compete with them?
Anatoly: Clearly a 30% revenue share is too high; people like Tim Sweeney are suing them everywhere, and it has become a pain point for companies that depend on Apple's and Google's distribution channels. Apple and Google extract rent this way, and consumers don't care because the costs are hidden from them: the consumer pays a fixed amount to the app, and Apple takes 30% of it.
Solving this is a network-building challenge, and I think crypto has an advantage here: crypto can finance digital assets and scarcity in ways Web2 cannot. Even so, it may still fail. If it fails, it won't be because app developers don't want lower fees; obviously they do. It will be because we haven't yet found a way to use the incentives crypto provides to scale the network.
That is the genuinely tricky problem. It's not a product problem or a business-model problem; it's the question of how to get users to change their behavior and switch to another network.
8. How Does Solana Compete with Ethereum?
Mert: Shifting gears, I'd like to talk about ZK. One ultimate vision for blockchains seems to be that everything is ZK-driven: you don't need to execute every operation on a full node, you just verify proofs. Solana doesn't seem to have a plan like that.
Anatoly: If you've read my article on APE (Asynchronous Program Execution), you'll see it has a significant impact on how validators operate. By sharing a common prover, validators can verify state. So you can have multiple validators sharing a trusted execution environment (TEE) or some other trust model, or even a ZK solution. Once APE completes fully asynchronous execution and computes a full snapshot hash, you can actually realize this idea, a rollup verified entirely by ZK. That doesn't mean you need a rollup, or that rollups are somehow incompatible with Solana.
The view that they're incompatible is absurd. Asynchronous execution lets you compute a snapshot hash under whatever trust model you choose, whatever environment you're using: running your own full node, a shared TEE, or something else. None of that affects my full node. If I run my own full node, you can use any environment to do what you want.
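A toy Rust sketch of that idea, under heavy simplification: any executor, whatever its trust model, replays the same transactions from the same starting state and commits to the result with a snapshot hash that others can compare. This is illustrative only, not the APE design; a real system would use a cryptographic hash over a Merkleized state rather than std's DefaultHasher.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Toy illustration of asynchronous execution (not the APE spec): any
// executor (your own full node, a TEE, a ZK prover) replays the ledger
// and commits to the resulting state with a single snapshot hash.
type Accounts = BTreeMap<String, u64>; // account -> balance (toy state)

fn apply_transfer(state: &mut Accounts, from: &str, to: &str, amount: u64) {
    let from_bal = state.get(from).copied().unwrap_or(0);
    if from_bal >= amount {
        state.insert(from.to_string(), from_bal - amount);
        *state.entry(to.to_string()).or_insert(0) += amount;
    }
}

/// Hash the full state. BTreeMap gives deterministic iteration order,
/// so two honest executors produce identical hashes. A real system
/// would use a cryptographic hash (e.g. SHA-256 over a Merkle tree).
fn snapshot_hash(state: &Accounts) -> u64 {
    let mut h = DefaultHasher::new();
    for (account, balance) in state {
        account.hash(&mut h);
        balance.hash(&mut h);
    }
    h.finish()
}

fn main() {
    let mut state = Accounts::new();
    state.insert("alice".into(), 100);
    state.insert("bob".into(), 0);
    apply_transfer(&mut state, "alice", "bob", 40);
    // Any executor replaying the same transactions from the same
    // starting state must arrive at this same snapshot hash.
    println!("snapshot hash: {:x}", snapshot_hash(&state));
}
```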
The core question is what sets Solana apart from Ethereum and its ZK stack. For the network to survive it must be commercially viable, meaning profitable. In my view, the only business model for an L1 is priority fees, which are essentially the same thing as MEV. Rollups, on the other hand, create their own MEV, do their own sequencing, and compete with the L1, creating a parasitic competitive environment for the L1.
All that competition is fine, but it doesn't belong to the Solana ecosystem; those rollups are EVM-based, leveraging open source to accelerate development globally, while Solana's ecosystem is built on the SVM.
That, in my opinion, is the fundamental difference between how Solana and Ethereum apply ZK. Light Protocol is great precisely because, on Solana mainnet, sequencing is done by Solana's validators.
Mert: Let's take a very theoretical example, going completely the other way. Suppose bandwidth has been maximized, latency minimized, Moore's Law fully exploited, the pipe is saturated, and only a little more hardware would solve the problem. If we achieved all of that and it still wasn't enough, what then? And assume crypto really has become more widespread (though I personally don't think that's guaranteed), what happens?
Anatoly: Well, at that point you can't just boot up another network, because Solana full nodes have already saturated the ISPs' bandwidth; every ISP is out of capacity, we've eaten all the available bandwidth.
Mert: I guess before reaching full saturation, all engineering challenges need to be addressed.
Anatoly: Realize that 1 Gbps network speeds are available almost everywhere in the world now, and almost every mobile device supports them. Even at the Turbine protocol's current efficiency, that equates to roughly 250,000 transactions per second (TPS). It's an astronomical number, a ridiculous amount of capacity. Let's saturate that first before we discuss other issues, like the limits of Moore's Law.
As of now, Solana is still about 250x away from that point in terms of load. We need a 250x improvement before we even start thinking about those other issues. And this 1 Gbps figure is a 25-year-old, very mature technology standard.
We haven't come close to saturating that capacity yet. Full saturation of 1 Gbps, with Turbine fully saturated, is the scenario the FireDancer team has already demonstrated in a lab environment; distributed, yes, but fundamentally a lab environment, though it shows this is achievable.
To make it commercially viable, though, many problems remain, and applications need to be able to actually use that capacity. Today most of Solana's load comes from market activity: real demand fills blocks first, then arbitrage fills the remaining block space. And that still isn't what I'd call "economic saturation."
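A quick back-of-the-envelope check of the 1 Gbps figure, assuming an average transaction size of about 500 bytes (an assumption for the arithmetic, not a protocol constant):

```rust
// Back-of-the-envelope check of the 1 Gbps -> ~250k TPS claim above.
// The ~500-byte average transaction size is an assumed value for the
// arithmetic, not a protocol constant.
fn main() {
    let link_bits_per_sec: f64 = 1e9;            // 1 Gbps
    let bytes_per_sec = link_bits_per_sec / 8.0; // 125 MB/s
    let avg_tx_bytes: f64 = 500.0;               // assumed average size
    let tps = bytes_per_sec / avg_tx_bytes;
    println!("~{:.0} TPS at {:.0}-byte transactions", tps, avg_tx_bytes);
    // ~250,000 TPS. The "250x improvement" figure in the conversation
    // then implies a current load on the order of 1,000 TPS.
}
```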
Mert: Given existing liquidity effects, Ethereum has higher-quality assets and higher transaction volume in those assets. How does Solana compete? Assuming these assets, even stablecoins, haven't reached Ethereum's level, what needs to change?
Anatoly: We can start calling Ethereum's assets "legacy assets," and launch everything new on Solana. The meme needs to change: (in the new version) Ethereum is the platform for legacy assets, while Solana is the birthplace of new things.
"Original Article Link"