Speaker: Vitalik Buterin, founder of Ethereum
Translated by: 0xxz, Golden Finance

EthCC7 was recently held in Brussels, where the organizers invited Ethereum founder Vitalik Buterin to deliver a keynote speech. Notably, 2024 marks the 10th anniversary of the Ethereum ICO. After the speech, three of Ethereum's co-founders, Vitalik Buterin, Joseph Lubin, and Gavin Wood, took a group photo together to commemorate the occasion.

This article is Vitalik's keynote speech at EthCC7, translated by Golden Finance's 0xxz.

Speech topic: Strengthening L1: Optimizing Ethereum to make it a highly reliable, trustworthy, and permissionless Layer 2 base layer

Ethereum Vision Spectrum
I think there is a spectrum of possible roles that the Ethereum base layer might play in the ecosystem over the next five to ten years. You can think of it as a spectrum from left to right.

On the left end of the spectrum, the base layer tries to be very minimalistic: it basically just acts as a proof verifier for all the L2s, and perhaps also provides the ability to transfer ETH between different L2s, but beyond that, not much else.

On the right end of the spectrum, the focus returns to dApps running primarily on L1, with L2s used only for some very specific, high-performance transactions.

In the middle of the spectrum there are some interesting options. I would put "Ethereum as a base layer for L2s" second from the left. At the far left I would put an extreme version, where we completely abandon the execution-client part of Ethereum, keep only the consensus part, add some zero-knowledge proof verifiers, and effectively turn the entire execution layer into a rollup as well.

So the very extreme options sit on the left, while moving right the base layer can still be a base layer but also try to provide more functionality for L2s. One idea in this direction is to further reduce Ethereum's slot time, currently 12 seconds, perhaps down to 2-4 seconds. The purpose is to make based rollups viable as the primary way L2s operate. Right now, if you want an L2 to have a top-notch user experience, you need your own pre-confirmations, which means either a centralized sequencer or your own decentralized sequencer. If Ethereum's consensus becomes fast enough, L2s would no longer need to do that. And if you really want to enhance the scalability of L1, then the need for L2s also decreases.

So it's a spectrum. Right now I'm mainly focused on the second-from-left version, but the things I'm suggesting here also apply to the other visions, and these suggestions don't actually hinder them. That is something I think is very important.

Ethereum's robustness advantage
One of Ethereum’s big advantages is that it has a large and relatively decentralized staking ecosystem.

On the left side of the picture above is a chart of the hashrate of all Bitcoin mining pools, and on the right side is a chart of Ethereum stakers.

Bitcoin's hashrate distribution is not very good at the moment: two mining pools together account for more than 50% of the hashrate, and four mining pools together account for more than 75%.

Ethereum's situation is actually better than the chart shows, because the second-largest, gray portion is unidentified stake, which means it could be a combination of many people, and there may even be a lot of solo stakers in it. And the blue part, Lido, is actually a weird, loosely coordinated structure consisting of 37 different node operators. So Ethereum actually has a relatively decentralized staking ecosystem that performs quite well.

There are a lot of improvements we can make here, but I think there is still value in recognizing this. It is one of the unique advantages we can really build on.

The robustness advantages of Ethereum also include:

● A multi-client ecosystem: there are Geth and non-Geth execution clients, and the share of non-Geth execution clients now even exceeds that of Geth. A similar situation exists in the consensus-client ecosystem.
● An international community: people are in many different countries, across projects, L2s, teams, and so on.
● A multi-center knowledge ecosystem: there is the Ethereum Foundation, there are the client teams, and even teams like Paradigm's Reth team have recently been stepping up their leadership in open source.
● A culture that values these attributes.

So the Ethereum ecosystem as a base layer already has these very powerful advantages. I think this is something very valuable that shouldn't be given up easily. I would go so far as to say that there are clear steps that can be taken to further these strengths and even address our weaknesses.

Where does Ethereum L1 fail to meet high standards? How can it be improved?
Here's a poll I did on Farcaster about half a year ago: if you're not solo staking, what's stopping you from solo staking?

I can repeat this question in this room: who here is solo staking? And of those not solo staking, who thinks the 32 ETH threshold is the biggest barrier? Who thinks the biggest barrier is that running a node is too difficult? Who thinks the biggest barrier is not being able to put your ETH into DeFi protocols at the same time? Who thinks the biggest barrier is the worry that you have to put your private key on a running node, where it is more vulnerable to theft?

As you can see, the two most agreed-upon barriers are the 32 ETH minimum and the difficulty of running a node. It's always important to recognize this. A lot of times when we start digging into how to let people double-use their collateral in DeFi protocols, we find that a large number of people are not even using DeFi protocols at all. So let's focus on the main issues and what we can do to try to solve them.

Start with running a validating node, or, in other words, with the 32 ETH threshold. These two issues are actually related, because both are a function of the number of validators in Ethereum's proof of stake. Today we have about 1 million validator entities, each with 32 ETH deposited, so if the minimum requirement were changed to 4 ETH, we would have 8 million, or maybe more, perhaps 9 or 10 million validators. If we wanted to get down to 100,000 validators, the minimum requirement would probably have to go up to about 300 ETH.

So it's a tradeoff, and Ethereum has historically tried to sit in the middle of it. But if we can find any way to improve the situation, we gain extra room that we can spend either on lowering the minimum requirement or on making it easier to run a node. In fact, I now think that aggregating signatures is not even the main difficulty of running a node. In the beginning we may focus more on reducing the minimum requirement, but eventually we will work on both.

So there are two techniques that can improve both aspects.

One is to allow staking or finality without requiring every validator to sign. Basically, you need some kind of random sampling of enough nodes to achieve significant economic security. Right now, I think we have more than enough economic security. The cost of conducting a 51% attack, in terms of ETH slashed, is one third of roughly 32 million ETH, which is about 11 million ETH. Who would spend 11 million ETH to break the Ethereum blockchain? Nobody wants to, not even the US government.

These sampling techniques are a bit like a house whose front door is protected by four layers of steel while the window is a piece of low-quality glass that someone can easily break with a baseball bat. Ethereum is like that to some extent: to do a 51% attack you have to lose 11 million ETH, but in reality there are many other ways to attack the protocol, and we really should be strengthening those defenses more. So if instead you have a subset of validators carrying out finality, the protocol is still secure enough, and you can really increase the level of decentralization.

The second technique is better signature aggregation. You could do something advanced like STARKs, so that instead of supporting 30,000 signatures per slot, we might eventually be able to support many more. That is the first part.
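To make this tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python of the figures quoted above. The 32 million ETH total stake and the one-third slashing cost are rough assumptions taken from the talk, not exact protocol data.

```python
# Back-of-the-envelope numbers from the talk (illustrative, not exact protocol data).
TOTAL_STAKED_ETH = 32_000_000  # ~1 million validators x 32 ETH each

def validator_count(min_deposit_eth: float) -> float:
    """Approximate validator count if every validator stakes only the minimum."""
    return TOTAL_STAKED_ETH / min_deposit_eth

def finality_attack_cost_eth() -> float:
    """ETH that would be slashed to revert finality: one third of the total stake."""
    return TOTAL_STAKED_ETH / 3

for deposit in (32, 4, 300):
    print(f"minimum deposit {deposit:>3} ETH  ->  ~{validator_count(deposit):,.0f} validators")
print(f"cost of reverting finality: ~{finality_attack_cost_eth():,.0f} ETH")
```

Lowering the minimum from 32 ETH to 4 ETH multiplies the validator count by eight, which is exactly why lighter signature aggregation or sampled finality is needed before the minimum can drop.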
The second part is making it easier to run a node.

The first step is history expiry; there has already been a lot of progress here with EIP-4444. The second step is the stateless client. Verkle has been in the works for a long time; another possible option is a binary hash tree using Poseidon, a STARK-friendly hash function. Once you have that, you no longer need a hard drive to verify Ethereum blocks. Later you can also add a Type 1 ZKVM that can STARK-verify entire Ethereum blocks, so you can verify arbitrarily large blocks by downloading the data, or even just data-availability-sampling it, and then verifying a single proof (a rough sketch of what that reduces to appears at the end of this section).

If we do this, running a node becomes much easier. One very annoying thing today is that if you want to change your hardware or software setup, you usually either have to start syncing from scratch and lose a day, or do something very dangerous and have your keys live in two places at once, which can get you slashed. With a stateless client you no longer need to do this: you simply start a new client, shut down the old one, move the keys over, and start the new one, losing only one epoch. And once you have the ZKVM, the hardware requirements basically drop to almost zero.

So both the 32 ETH threshold and the difficulty of running a node can be solved technically. I think there are a lot of other benefits to doing this: it will really improve people's ability to stake individually, give us a better solo-staking ecosystem, and avoid the risks of staking centralization.

There are other challenges with proof of stake, such as the risks associated with liquid staking and the risks associated with MEV. These are also important issues that need continued thought, and our researchers are thinking about them.
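As a rough illustration of the ZKVM point above, here is a minimal sketch of what block verification could reduce to for a stateless node. The data structures, function names, and the `verify_zk_proof` callback are hypothetical placeholders, not APIs of any real Ethereum client or proving system.

```python
# Hypothetical sketch: block verification once history expiry, statelessness and
# a Type 1 ZKVM are in place. Nothing here is a real client API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class BlockHeader:
    number: int
    parent_state_root: bytes
    state_root: bytes

def verify_block(header: BlockHeader,
                 execution_proof: bytes,
                 verify_zk_proof: Callable[[tuple, bytes], bool]) -> bool:
    """Instead of re-executing every transaction against a local state database,
    a stateless node checks one succinct proof that executing the block maps the
    parent state root to the claimed post-state root."""
    public_inputs = (header.parent_state_root, header.state_root, header.number)
    return verify_zk_proof(public_inputs, execution_proof)
```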
Recovering from a 51% attack

I really started thinking about this seriously and rigorously. It's amazing how many people don't think about the topic at all and just treat it as a black box. What would happen if there actually were a 51% attack?

Ethereum could suffer a 51% attack, Bitcoin could suffer a 51% attack, and a government could suffer a "51% attack" too, for example by someone buying off 51% of the politicians. One problem is that you don't want to rely solely on prevention; you also want a recovery plan.

A common misconception is that people think 51% attacks are about reversing finality. People focus on that because it's what Satoshi Nakamoto emphasized in the white paper: you can double spend, so after I buy my private jet, I 51% attack, get my bitcoin back, and keep flying around in my private jet. More realistic attacks might actually involve deposits on exchanges and things like breaking DeFi protocols.

But reversal is actually not the worst thing. The biggest risk we should worry about is censorship: 51% of the nodes stop accepting blocks from the other 49%, or from any node that tries to include a certain type of transaction.

Why is this the biggest risk? Because a finality reversal comes with slashing; there is immediate, on-chain verifiable evidence that at least a third of the nodes did something very, very wrong, and they get punished. In a censorship attack, by contrast, nothing is programmatically attributable; there is no immediate programmatic evidence showing who did something bad. As an online node you can see that a certain transaction hasn't been included for 100 blocks, but we haven't even written software to perform that kind of check.

Another challenge with censorship is that an attacker can escalate gradually: they start by delaying transactions and blocks they don't like by 30 seconds, then by a minute, then by two minutes, and you don't even have consensus on when to respond. So I would say censorship is actually the bigger risk.

There's an argument in blockchain culture that if there's an attack, the community will unite, obviously do a minority soft fork, and slash the attacker. That may be true today, but it relies on a lot of assumptions about coordination, ideology, and all kinds of other things, and it's not clear how true it will still be in 10 years. So what a lot of other blockchain communities are starting to say is: we have things like censorship, we have these inherently less attributable failures, so we have to rely on social consensus; let's just rely on social consensus and proudly admit we'll use it to solve our problems.

I'm actually advocating going in the opposite direction. We know it's mathematically impossible to fully coordinate an automatic response and automatically fork away a majority attacker who is censoring, but we can get as close to that as possible. You can design a fork that, under some assumptions about network conditions, actually brings at least a majority of honest nodes along with it. The argument I'm trying to make is that we want the response to a 51% attack to be as automated as possible.

If you're a validator, your node should be running software that automatically forks away from the majority chain if it detects that transactions or certain validators are being censored, and all the honest nodes will automatically coordinate on the same minority soft fork because of the code they are running.
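As the talk notes, we don't even have software that checks whether a transaction has been excluded for, say, 100 blocks. Below is a minimal sketch of such a check using web3.py; the RPC URL, the 100-block window, and the example usage are assumptions for illustration only.

```python
# Minimal censorship-check sketch: flag transactions seen in the mempool that
# still have no receipt after a chosen number of blocks.
from web3 import Web3

RPC_URL = "http://localhost:8545"   # assumption: any Ethereum JSON-RPC endpoint
INCLUSION_WINDOW = 100              # blocks to wait before flagging

w3 = Web3(Web3.HTTPProvider(RPC_URL))

def inclusion_status(tx_hash: str, first_seen_block: int) -> str:
    """Report whether a transaction we observed has been included on-chain."""
    try:
        receipt = w3.eth.get_transaction_receipt(tx_hash)
        return f"included in block {receipt['blockNumber']}"
    except Exception:  # web3.py raises TransactionNotFound when there is no receipt yet
        waited = w3.eth.block_number - first_seen_block
        if waited > INCLUSION_WINDOW:
            return f"not included after {waited} blocks -- possible censorship"
        return f"still pending ({waited} blocks so far)"

# Hypothetical usage:
# print(inclusion_status("0x...", first_seen_block=20_000_000))
```

A real detector would of course need to watch the public mempool itself and reach consensus on what counts as censorship, which is exactly the harder, unsolved part described above.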
Of course, again, there's the mathematical impossibility result: anyone who is offline at the time won't be able to tell which side is right. There are a lot of limits, but the closer you get to that goal, the less work social consensus needs to do.

If you imagine a 51% attack actually happening, it's not going to be the case that, all of a sudden at some point in time, Lido, Coinbase, and Kraken publish a blog post at 5:46 that basically says, hey guys, we're censoring now. What will actually happen is that you'll see a social media war and all kinds of other attacks at the same time. And by the way, if a 51% attack really does happen, we shouldn't assume that Lido, Coinbase, and Kraken will be the ones in power in 10 years. The Ethereum ecosystem will become more and more mainstream, and it needs to be very resilient to that.

We want the social layer to carry as light a burden as possible, which means the technical layer needs to at least present a clear winning candidate, so that if people want to fork away from a chain that is censoring, they can rally around a minority soft fork. I'm advocating that we do more research here and come up with a very specific proposal.

Proposal: Raise the quorum threshold to 75% or 80%
I think we could raise the quorum threshold from today's two-thirds to something like 75% or 80%.

The basic argument is that if a malicious chain, such as a censoring chain, attacks, recovery becomes very, very difficult. On the other hand, if you raise the quorum, what is the risk? With a quorum of 80%, instead of 34% of nodes being offline stopping finality, 21% of nodes being offline stops finality.

That is a risk. So what happens in practice? From what I've seen, I think we've only had one incident where finality was halted for about an hour because more than a third of nodes were offline. Have there been any incidents involving 20% to 33% of nodes being offline? At most once, possibly never. Because in practice very few validators are offline, I actually think the risk of doing this is pretty low. The benefit is that the threshold an attacker needs to reach is greatly increased, and the range of scenarios where the chain goes into a safe mode in the event of a client vulnerability is greatly expanded, so people can actually collaborate to figure out what went wrong.

If the quorum threshold goes from 67% to 80%, the value that a minority client can provide really starts to increase, because the share a single client needs in order to finalize on its own rises from 67% to 80%.

Other censorship concerns

The other censorship work is either inclusion lists or some kind of alternative to inclusion lists. The whole multiple-parallel-proposers idea, if it works, might even become an alternative to inclusion lists. You also need account abstraction, some kind of in-protocol account abstraction. The reason you need it is that, right now, smart contract wallets don't really benefit from inclusion lists, or from any kind of protocol-level censorship-resistance guarantee. With in-protocol account abstraction, they would. So there are a lot of things here, and many of them have value in both the L2-centric vision and the L1-centric vision. Of the various ideas I've discussed, about half are probably specific to Ethereum as an L2 base layer, but the other half basically apply both to L2s built on Ethereum and to directly user-facing L1 applications.

Use light clients everywhere

In a lot of ways, it's a little sad how we interact with this industry. We say we're decentralized and trustless, yet who in this room runs a light client on their computer that verifies consensus? Very few. Who uses Ethereum through a browser wallet that trusts Infura? In five years, I'd like to see the number of raised hands reversed. I'd like to see wallets that don't trust Infura for anything. We need to integrate light clients.

Infura can continue to provide data. In fact, if you don't have to trust Infura, that's good for Infura, because it makes it easier for them to build and deploy infrastructure. And we have the tools to remove the trust requirement.

What we can do is have a system where the end user runs something like the Helios light client. It should run directly in the browser, directly verifying Ethereum consensus. If the user wants to verify something on-chain, or interact with the chain, they just verify a Merkle proof directly. If you do that, you actually get a degree of trustlessness in your interaction with Ethereum. That's for L1.
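To illustrate the "verify the Merkle proof directly" step, here is a simplified sketch of checking a value against a trusted root using a binary hash tree. Ethereum's actual state proofs today are Merkle-Patricia proofs (with Verkle or binary trees planned), so this shows only the shape of the check, not the real format.

```python
# Simplified light-client check: given a root obtained from verified consensus,
# confirm that a leaf value belongs to the tree by hashing up its Merkle branch.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    """At depth i, bit i of `index` says whether our node is the right (1) or left (0) child."""
    node = h(leaf)
    for i, sibling in enumerate(branch):
        node = h(sibling + node) if (index >> i) & 1 else h(node + sibling)
    return node == root

# Tiny usage example with a two-leaf tree:
left, right = h(b"account A: balance 100"), h(b"account B: balance 7")
root = h(left + right)
assert verify_merkle_branch(b"account A: balance 100", [right], index=0, root=root)
```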
And we need an equivalent for L2. On the L1 chain you have headers, you have state, you have a sync committee, you have consensus. If you verify the consensus and you know what the header is, you can walk a Merkle branch and see what the state is. So how do we provide light-client security guarantees for L2s? The L2's state root is there: if it's a rollup, there's a smart contract on L1 that stores the L2's headers. Or, if you have preconfirmations, there's a smart contract that stores who the preconfirmers are, so you determine who the preconfirmers are and then listen for signatures from a two-thirds subset of them.

So once you have the Ethereum header, there's a fairly simple chain of trust of hashes, Merkle branches, and signatures that you can verify, and you get light-client verification. The same holds for any L2.

I've brought this up with people in the past, and a lot of the time the response is: gosh, that's interesting, but what's the point? A lot of L2s are multisigs. Why wouldn't we trust the multisig to verify the multisig?

Fortunately, as of last year, that's no longer true. Optimism and Arbitrum have reached Rollup Stage 1, which means they actually have proof systems running on-chain, with a security council that can override them in case of vulnerabilities, but the security council has to clear a very high voting threshold, like 75% of 8 people, and Arbitrum will scale up to 15 people. So in the case of Optimism and Arbitrum, they're not just multisigs: they have actual proof systems, and those proof systems have real power, at least in deciding which chain is right or wrong.

Some L2s are even further along; I believe at least one doesn't even have a security council, so it's completely trustless. We're really starting to move forward here, and I know a lot of other L2s are moving forward as well. So L2s are more than just multisigs, and the concept of light clients for L2s is actually starting to make sense.

Today we can already verify Merkle branches just by writing code. Tomorrow we'll also be able to verify ZKVM proofs, so you could fully verify Ethereum and L2s in your browser wallet.

Who wants to be a trustless Ethereum user in a browser wallet? Great. Who would rather be a trustless Ethereum user on their phone? On a Raspberry Pi? On a smartwatch? From the space station? We'll solve that too. So what we need is the equivalent of an RPC configuration that contains not only which servers you talk to, but also the actual light-client verification instructions. That's something we can work towards.
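To sketch the L2 chain of trust described above: once a light client has a verified view of L1, it reads the L2's latest state root from the rollup's contract on L1, then checks L2 data against that root with Merkle proofs as in the earlier sketch. The contract address, ABI, and function name below are hypothetical placeholders; real rollups expose their confirmed state roots through different interfaces, and a real light client would verify the contract's storage with a proof rather than trusting an RPC response.

```python
# Hypothetical sketch: extending light-client verification to an L2.
from web3 import Web3

ROLLUP_CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder address
ROLLUP_ABI = [{
    "name": "latestConfirmedStateRoot",  # hypothetical function name
    "inputs": [],
    "outputs": [{"name": "", "type": "bytes32"}],
    "stateMutability": "view",
    "type": "function",
}]

def get_l2_state_root(w3: Web3) -> bytes:
    """Read the L2's latest confirmed state root from its contract on L1."""
    rollup = w3.eth.contract(address=ROLLUP_CONTRACT, abi=ROLLUP_ABI)
    return rollup.functions.latestConfirmedStateRoot().call()

# Hypothetical usage, against an L1 endpoint whose responses the light client
# would ideally verify with proofs rather than trust:
# w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
# l2_root = get_l2_state_root(w3)
# ...then verify L2 account or storage values against l2_root with Merkle branches.
```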
Quantum-resistant strategies

The timeline for quantum computing keeps shortening. Metaculus expects quantum computers to arrive in the early 2030s, and some think it will be sooner. So we need a quantum-resistant strategy, and we do have one. There are four parts of Ethereum that are vulnerable to quantum computing, and each has a natural replacement:

● Verkle trees can be replaced with STARKed binary hash trees using Poseidon or, if we want to be more conservative, Blake.
● Consensus signatures currently use BLS aggregate signatures, which can be replaced with STARK aggregate signatures.
● Blobs currently use KZG commitments, which can be replaced with separately encoded Merkle trees plus STARK proofs.
● User accounts currently use ECDSA over secp256k1, which can be replaced with hash-based signatures plus account abstraction and aggregation, smart contract wallets, ERC-4337, and so on.

Once we have these, users can set their own signing algorithms, basically using hash-based signatures. I think we really need to start actually building hash-based signatures so that user wallets can easily upgrade to them.

Protocol simplification

If you want a robust base layer, the protocol needs to be simple. It shouldn't have 73 random hooks and bits of backward compatibility that exist because of some random stupid idea that some random guy named Vitalik came up with in 2014.

So there is value in trying to really simplify and start eliminating technical debt. Logs are currently based on bloom filters; they don't work very well and they aren't fast enough, so logging needs to be improved. We also need stronger limits on state access: we're already doing this on the statelessness side, basically limiting the amount of state accessed per block. Ethereum is currently a weird collection of serialization formats: there's RLP, there's SSZ, there are the APIs. Ideally we should use only SSZ, but at the very least we should get rid of RLP. For state, move to binary Merkle trees; once you have binary Merkle trees, all of Ethereum sits on binary Merkle trees. Then there's fast finality, Single Slot Finality (SSF), and cleaning up little-used precompiles, like the MODEXP precompile, which has often caused consensus bugs; it would be great if we could remove it and replace it with performant Solidity code.

Summary

As a robust base layer, Ethereum has very unique advantages, including some that Bitcoin does not have, such as consensus decentralization and significant research on recovering from 51% attacks. I think there is a need to really strengthen those strengths, and also to recognize and correct our shortcomings so that we meet very high standards. These ideas are fully compatible with an aggressive L1 roadmap.

One of the things I'm most pleased with about Ethereum, especially the core development process, is that our ability to work in parallel has improved significantly. That's a strong point: we can actually work on a lot of things in parallel. So caring about these topics doesn't impair our ability to improve the L1 and L2 ecosystems. For example, improving the L1 EVM to make cryptography easier: verifying Poseidon hashes in the EVM is currently too expensive, and 384-bit cryptography is also too expensive. So there are ideas on top of EOF, such as SIMD opcodes and EVM-MAX, and there is an opportunity to attach a high-performance coprocessor to the EVM. This is better for Layer 2, because proofs become cheaper to verify, and better for Layer 1 applications, because privacy protocols like zk-SNARKs become cheaper.

Who has used a privacy protocol? Who would rather pay a $40 fee than an $80 fee?
More people will choose the former. A second group of users can transact on Layer 2, while Layer 1's costs come down significantly.