You may have noticed that almost every developer is participating in and retweeting the KZG Ceremony, so what is the KZG Ceremony?

In simple terms, the KZG Ceremony is the trusted setup for the KZG commitment used in EIP-4844, and EIP-4844 is the precursor to Ethereum full sharding.

1. Sharding: A long-term solution for Ethereum scaling

  • While rollups scale Ethereum at the execution layer, sharding improves Ethereum’s scalability and capacity from the perspective of data availability.

  • The trend chart below shows that the average block size has fluctuated around 90 kB despite the rapid iteration of Ethereum in recent years. Although rollups relieve network congestion notably, overall performance is still restricted by the Layer 1 data-storage capacity.

  • In consideration of security and the complexity of implementation, sharding is divided into multiple phases, which include proto-danksharding and danksharding. The whole process could take several years.

  • Given the current storage schema, only a few high-performance machines are able to participate as nodes. After sharding is implemented, nodes will no longer be required to store the full content of historical data, which strengthens the security of Ethereum by lowering the threshold for becoming a node (lower data-storage cost, higher degree of decentralization).

2. EIP-4844: Remarkable short-term return, a precursor to Ethereum full sharding

EIP-4844 = Proto-Danksharding

Since the complete implementation of sharding is still too complex and could take years, proto-danksharding is the best intermediate plan to reduce Ethereum congestion in the short term.

2.1 Proto-danksharding Summary

Proto-danksharding introduces a new transaction type called the blob-carrying transaction. Thanks to this update, rollups can use blobs to transfer data to L1 and store it provisionally at a relatively low cost. The size of a blob is much larger than current calldata allows.

About blob

  • Each transaction can carry at most 2 blobs;

  • Each block normally carries 8 blobs (about 1 MB of capacity), and at most 16 blobs, which leads to a 2 MB block size;

  • A blob is not permanently stored in the history log the way calldata is;

  • In the design of proto-danksharding, nodes still need to download the full data content and verify data availability.

2.2 Blob-carrying transaction in depth

Functionality

The functionality of a data blob is similar to calldata, which allows the rollup to transfer transaction data and proofs to L1.

Cost

The original intention of the blob is to support high TPS for rollups. Unlike calldata, which uses permanent on-chain storage, blob data is only downloaded and stored for a period of time. Therefore, the gas rollups spend to ensure data availability will be predictably lower.

Capacity

The size of each blob is about 125 kB.

2.3 The value and challenge of blob-carrying transaction

Value

The emergence of blobs turns transaction data into a kind of cache, which further lowers the storage-hardware requirement for nodes and reduces gas fees by providing Ethereum with extra data storage.

Challenge: Let’s calculate the hardware requirement

The current average block size is around 90 kB, but a single blob can reach about 125 kB.

According to the design of EIP-4844, each slot normally carries about 1 MB of blob data, which means the total data growth can be calculated as follows:

1 MB/block × 5 blocks/min × 43,200 min/month × 12 months/year ≈ 2.47 TB per year

The annual data increment alone would exceed the total size of Ethereum’s current data, which implies that this naive data-storage plan is not efficient.
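The back-of-the-envelope figure above can be checked with a few lines of arithmetic (mirroring the article's assumptions: 12-second slots, 30-day months, ~1 MB of blob data per block):

```python
# Check the yearly data-growth estimate above, using the same assumptions:
# 1 MB of blob data per block, one block every 12 seconds, 30-day months.

MB_PER_BLOCK = 1            # ~8 blobs of ~125 kB each
BLOCKS_PER_MIN = 5          # one block every 12 seconds
MIN_PER_MONTH = 43_200      # 30 days * 24 h * 60 min
MONTHS_PER_YEAR = 12

mb_per_year = MB_PER_BLOCK * BLOCKS_PER_MIN * MIN_PER_MONTH * MONTHS_PER_YEAR
tb_per_year = mb_per_year / 1024**2   # binary terabytes (TiB)

print(f"{tb_per_year:.2f} TB per year")  # -> 2.47 TB per year
```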

What can be optimized?

In the short term, each node still needs to store the full content of historical data, but the consensus layer implements a scheme in which blob data is deleted after a certain period of time (30 days or 1 year, TBD).

For the long-term benefit, EIP-4444 needs to be implemented, under which nodes are no longer required to store full data. Instead, a new mechanism, the so-called history expiry scheme, allows nodes to store only parts of the data for a certain time.

2.4 KZG Commitment

KZG commitment is the polynomial commitment scheme adopted by EIP-4844 proto-danksharding.

The KZG Ceremony is the trusted-setup process for the KZG commitment, which has attracted more than 30,000 participants.

2.4.1 What is KZG commitment

KZG stands for Aniket Kate, Gregory M. Zaverucha, and Ian Goldberg, who published the polynomial commitment paper “Constant-Size Commitments to Polynomials and Their Applications” in 2010. KZG commitments are widely applied in PLONK-style zk-SNARK protocols.

Referring to the diagram from Dankrad’s presentation, the KZG root is similar to a Merkle root, except that the KZG root commits to a polynomial: every position lies on this polynomial. In the proto-danksharding scenario, the KZG root commits to a data set, where every single data point can be verified as part of the entire set.

A quick view of how KZG commitment works internally

  • Prover: responsible for calculating the commitment. For security, a prover cannot modify the given polynomial, and the commitment is only valid for the current polynomial;

  • Verifier: responsible for verifying the commitment sent by the prover.
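The prover's side of this can be sketched in a few lines. Real KZG works over a pairing-friendly elliptic curve; the toy version below uses a multiplicative group mod a prime only to show the structure: the prover combines the public setup powers g^(s^i) with the polynomial's coefficients and never learns the secret s.

```python
# Toy sketch of a KZG-style commitment C = g^f(s), using a multiplicative
# group mod p instead of an elliptic curve (real KZG uses a pairing-friendly
# curve such as BLS12-381; this only illustrates the structure).

p = 2**127 - 1          # toy prime modulus (NOT a secure, curve-based group)
g = 3                   # toy generator

# Trusted-setup output: powers g^(s^0), g^(s^1), ..., g^(s^d) for secret s.
s = 123456789           # known only inside the (simulated) ceremony
degree = 4
powers = [pow(g, pow(s, i, p - 1), p) for i in range(degree + 1)]

def commit(coeffs, powers, p):
    """Commit to f(x) = sum a_i x^i: C = prod (g^(s^i))^(a_i) = g^f(s)."""
    c = 1
    for a_i, pw in zip(coeffs, powers):
        c = c * pow(pw, a_i, p) % p
    return c

f = [5, 0, 2, 0, 1]     # f(x) = 5 + 2x^2 + x^4
C = commit(f, powers, p)

# Sanity check: C equals g^f(s) computed directly with the secret.
f_at_s = sum(a * pow(s, i, p - 1) for i, a in enumerate(f)) % (p - 1)
assert C == pow(g, f_at_s, p)
```

Note how the commitment is binding: changing even one coefficient of f yields a different C, which is why the prover cannot swap in another polynomial after committing.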

2.4.2 KZG Ceremony (trusted setup)

The process of the KZG Ceremony

Anyone can join the KZG Ceremony as a participant and contribute a secret. Each newly added secret is mixed with the previous output to form a new result, finally generating an SRS (structured reference string) for the KZG commitment trusted setup. (Check the diagram provided by Vitalik for a better understanding.)

Trusted setup

  • The KZG Ceremony is a widely used kind of multi-participant trusted setup known as powers-of-tau;

  • This setup follows the 1-of-N trust model: no matter how many participants contribute to generating the final setup, as long as one person keeps his/her secret, the validity of the setup is guaranteed.
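The chaining of contributions can be sketched as follows. As with the commitment sketch above, this uses a toy multiplicative group mod p instead of the elliptic curve a real powers-of-tau ceremony runs on; the point is the mechanism: each participant folds a private secret t into the running powers, and the result is equivalent to a setup built from the product of all secrets.

```python
# Minimal sketch of how a powers-of-tau ceremony chains contributions,
# in a toy multiplicative group mod p (a real ceremony uses elliptic-curve
# points, but the update rule has the same shape). Each participant raises
# the i-th power to t^i, turning g^(s^i) into g^((s*t)^i), without ever
# revealing t.

p = 2**127 - 1
g = 3
degree = 3

def contribute(powers, t, p):
    """Fold a participant's secret t into the running powers of tau."""
    return [pow(pw, pow(t, i, p - 1), p) for i, pw in enumerate(powers)]

# Start from the trivial setup g^(1^i) and chain three participants.
powers = [g] * (degree + 1)
secrets = [11, 222, 3333]          # each kept private in a real ceremony
for t in secrets:
    powers = contribute(powers, t, p)

# The result equals a setup built from the product of all secrets, so it
# stays sound as long as ONE participant discards their secret (1-of-N).
tau = 11 * 222 * 3333
expected = [pow(g, pow(tau, i, p - 1), p) for i in range(degree + 1)]
assert powers == expected
```

Recovering the final secret requires knowing every participant's contribution, which is exactly why a single honest participant who deletes their secret is enough.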

Significance of the KZG Ceremony

  • The value of the trusted setup of the KZG commitment can be interpreted as follows: it generates a parameter that is necessary for every single execution of the cryptographic protocol.

  • When the prover calculates the commitment, the KZG commitment is C = f(s)·g₁, where f is the committed polynomial, s is the secret produced by the trusted setup, and g₁ is a generator of the elliptic-curve group. The final secret generated by the current KZG Ceremony is therefore crucial to the subsequent implementation of sharding.

2.4.3 Advantage of the KZG Commitment

Cost

  • KZG commitments have lower complexity and can be verified efficiently.

  • No extra proof is needed, which leads to a lower cost and reduces bandwidth requirements.

  • The cost drops even further with the point evaluation precompile.

Security

  • If a failure occurs, only the blob corresponding to the current commitment is affected; there is no further chain effect.

Compatibility

  • KZG commitments are friendlier to DAS (data availability sampling), which avoids redundancy in development.

2.5 The benefit of EIP-4844

Rollup

As shown in the picture below, a rollup needs to submit the state delta and the versioned hash of the KZG commitment through calldata (a zk-rollup still needs to upload the ZK proof).

After the implementation of EIP-4844, the expensive calldata only carries small data such as state deltas and commitments, while large data such as transaction batches goes into the blob. This will:

  • reduce the cost;

  • reduce block-storage usage.

Improvement of security

  • Data availability: blobs are stored on the beacon chain, which shares the same security as Ethereum L1.

  • Historical data: nodes only store blobs for a certain amount of time, and the Layer 2 rollup is responsible for permanent data storage, which means the security of historical data relies on the rollup.

Cost

The low cost of the blob-carrying transaction can optimize the overall cost by 10x to 50x.

Meanwhile, EIP-4844 introduces a blob fee:

  • Execution gas and blob gas will have separate, adjustable prices and limits;

  • The price unit of a blob is gas, and the amount floats according to network traffic, aiming to keep the number of blobs each block carries near the target (8 on average).
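The fee-adjustment mechanism sets the blob base fee as an exponential of the running "excess" blob gas (usage above the per-block target), using the integer `fake_exponential` helper from the EIP-4844 spec. The sketch below reproduces that helper; the update-fraction constant here is illustrative, not necessarily the final mainnet value.

```python
# Sketch of the blob fee market: the blob base fee is an exponential
# function of accumulated excess blob gas, computed with the integer
# fake_exponential helper from the EIP-4844 spec. The UPDATE_FRACTION
# constant below is illustrative only.

MIN_BLOB_GASPRICE = 1
UPDATE_FRACTION = 3_338_477   # controls how fast the fee reacts

def fake_exponential(factor, numerator, denominator):
    """Integer approximation of factor * e^(numerator / denominator)."""
    i, output = 1, 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas):
    return fake_exponential(MIN_BLOB_GASPRICE, excess_blob_gas, UPDATE_FRACTION)

# With no excess blob gas the fee sits at the floor (1); sustained demand
# above the target makes it climb exponentially, pushing usage back down.
print(blob_base_fee(0))            # -> 1
print(blob_base_fee(10_000_000))   # grows once usage stays above target
```

Because the fee responds exponentially to sustained over-target usage, blob supply self-regulates around the per-block target without a hard cap on short-term bursts.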

Implementation of precompile

EVM execution can only see the commitment to a blob generated by the prover and cannot access blob data directly. Therefore, rollups need to use precompiles to verify the validity of commitments.

There are two precompiles mentioned in EIP-4844:

Point evaluation precompile

  • Proves that multiple commitments commit to the same set of data.

  • The point evaluation precompile is mainly adopted by zk-rollups: the rollup needs to provide two commitments, the KZG commitment and the zk-rollup’s own commitment.

  • As for optimistic rollups, most have adopted multi-round fraud proofs, and the final round of a fraud proof carries a smaller amount of data, which means they can also use the point evaluation precompile for a lower cost.

Blob verification precompile

  • Proves that the versioned hash is valid for the corresponding blob.

  • Optimistic rollups need access to the full data when submitting fraud proofs, so it is rational to verify the validity of the versioned hash first and then perform the fraud-proof verification.

3. Danksharding: A crucial step towards full sharding

Scaling

Thanks to the new transaction type introduced by proto-danksharding, the data blob, each block now has an extra ~1 MB of cache. This number will grow 16 to 32 times after the implementation of danksharding.

Data availability: High-performance data storage and verification

Compared to proto-danksharding, where nodes are required to store the full content of the data, danksharding allows nodes to store only sampled data.

DAS

Taking advantage of erasure coding, the danksharding proposal makes it easier for nodes to discover data loss (each node only needs to download part of the data).

Security: Almost the same

Since nodes are no longer required to store the full content of historical data, security is no longer backed by a single node; it depends on multiple nodes, each storing parts of the data, which together can reconstruct the full data.

Although a single-point dependency is more secure than a multi-point dependency, the number of nodes in the Ethereum network is far more than enough to achieve the goal of ensuring data availability.

New challenge: the higher requirement for block builders

While validators only download and store parts of the data, the block builder still needs to upload the full content of the data, i.e., the blobs containing all transaction data.

According to the diagram from Dankrad’s slides, PBS (proposer/builder separation), originally designed for MEV mitigation, also helps reduce the bandwidth requirement during block building.

4. Another sharding scheme: dynamic state sharding from Shardeum

Shardeum is an EVM-compatible L1 blockchain that uses dynamic state sharding to improve scalability and security. Meanwhile, the Shardeum network is able to ensure a higher level of decentralization.

Dynamic state sharding

Advantages

The most intuitive benefit of dynamic state sharding is linear scaling. Each node holds a different address range, with significant overlap between the ranges covered by nodes. The sharding algorithm groups nodes dynamically, which means newly added nodes in the Shardeum network immediately work to increase TPS.

Implementation

Implementing dynamic state sharding is more difficult than static sharding. Shardeum’s technical team has researched sharding technologies in depth, and the team’s previous R&D achievements (previously Shardus technology) also make significant contributions, allowing them to showcase the linear scaling of dynamic state sharding at an early development stage.

Summary

Product

Following the idea of divide and conquer, Shardeum’s dynamic state sharding splits the workload of computation and storage, which allows a higher level of parallelization. The network can therefore accommodate more nodes, which further improves throughput and the level of decentralization.

Team

The Shardeum team has strong marketing experience and narrative ability. They also have a deep understanding of the technical details, especially dynamic state sharding.

Technology

The tech team is able to design an appropriate sharding scheme and an efficient consensus algorithm (Proof of Stake + Proof of Quorum) based on their understanding of the scenario, putting scaling and throughput first while ensuring security and the level of decentralization as far as possible.

Progress

Betanet launched on 2023-02-02.

5. The outlook

  • Sharding is a long-term scaling solution for Ethereum with huge value and profound significance for the whole network. It is worth paying close attention to, as the implementation of sharding is an iterative process: all current proposals, including proto-danksharding and danksharding, can be upgraded or altered.

  • While understanding the general method of implementing sharding is important, the technical proposals that emerge along the way, such as PBS, DAS, and the multidimensional fee market, are also worth paying attention to. There could be many outstanding projects built around those schemes.

  • It is important to know that sharding is a general term that describes a set of scaling technologies, and there are different application schemes depending on specific scenarios. For example, the design of danksharding might only fit Ethereum, and could likely lead to a negative effect if applied in other L1s, as the security needs to be guaranteed by a huge amount of nodes in the network.

  • A rational combination of sharding and other scaling solutions can achieve a multiplying effect. The current danksharding proposal will not work alone. Instead, rollups and danksharding supplement each other to better improve Ethereum’s scalability and capacity.

Reference

https://notes.ethereum.org/@dankrad/kzg_commitments_in_proofs

https://notes.ethereum.org/@dankrad/new_sharding

https://vitalik.ca/general/2022/03/14/trustedsetup.html

https://notes.ethereum.org/@vbuterin/proto_danksharding_faq#Why-use-the-hash-of-the-KZG-instead-of-the-KZG-directly

https://ethresear.ch/t/easy-proof-of-equivalence-between-multiple-polynomial-commitment-schemes-to-the-same-data/8188

https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html

https://eips.ethereum.org/EIPS/eip-4844

https://www.eip4844.com/

https://biquanlibai.notion.site/Data-Availability-caa896aae59d489b98f2448f17b01640

https://ethresear.ch/t/a-design-of-decentralized-zk-rollups-based-on-eip-4844/12434

About Foresight Ventures

Foresight Ventures is dedicated to backing the disruptive innovation of blockchain for the next few decades. We manage multiple funds: a VC fund, an actively-managed secondary fund, a multi-strategy FOF, and a private market secondary fund, with AUM exceeding $400 million. Foresight Ventures adheres to the belief of “Unique, Independent, Aggressive, Long-Term mindset” and provides extensive support for portfolio companies within a growing ecosystem. Our team is composed of veterans from top financial and technology companies like Sequoia Capital, CICC, Google, Bitmain, and many others.

Website: https://www.foresightventures.com/

Twitter: https://twitter.com/ForesightVen

Medium: https://foresightventures.medium.com

Substack: https://foresightventures.substack.com

Discord: https://discord.com/invite/maEG3hRdE3

Linktree: https://linktr.ee/foresightventures