Original author: Kairos Research

Original translation: Block unicorn

Preface

Today, EigenDA is the largest AVS (Actively Validated Service) by both re-staked capital and number of independent operators, with over 3.64M ETH and 70M EIGEN currently re-staked, totaling approximately $9.1B USD, across 245 operators and 127K independent staking wallets. As more alternative data availability platforms launch, it becomes increasingly difficult to distinguish between them, their unique value propositions, and how each accrues protocol value. In this post, we take a deep dive into EigenDA and explore the unique mechanisms that make up its design, while also examining the competitive landscape and how this market may evolve.

What is data availability?

Before we dive into EigenDA, let's first understand the concept of data availability (DA) and why it matters. Data availability refers to the guarantee that all participants (nodes) in the network can access the data needed to validate transactions and maintain the blockchain. DA is a core function of the traditional monolithic architecture; in short, execution, consensus, and settlement all depend on it. Without DA, the integrity of the blockchain would be seriously compromised.

Because every other part of the stack relies on DA, it becomes a bottleneck for scaling, which is why the Layer 2 roadmap emerged. The L2 future took shape after optimistic rollups were introduced in 2019: L2 execution happens off-chain, but still relies on Ethereum for DA in order to inherit Ethereum's security guarantees. With this paradigm shift, many realized that the advantages provided by L2s could be pushed further by building dedicated blockchains or services that address the limitations of the monolithic DA layer.

While dedicated data availability layers have emerged that can reduce fees through competition and further experimentation, the DA problem is also being addressed on the Ethereum mainnet itself through a process called Danksharding. The first part of Danksharding was implemented through EIP-4844, which introduced transactions that carry additional "blobs" of data, each roughly 128 KB in size. These blobs are committed to using KZG commitments (a type of cryptographic commitment), ensuring the integrity of the data and compatibility with future data availability sampling. Prior to EIP-4844, rollups used calldata to submit their batched transaction data to Ethereum.

Since proto-danksharding went live in the Dencun upgrade in mid-March, roughly 2.4 million blobs totaling 294 GB have been posted, with more than 1,700 ETH in fees paid to L1. It is important to note that blob data cannot be accessed by the EVM and is automatically pruned after roughly 18 days. Currently each block can hold up to 6 blobs, or roughly 768 KB. For non-technical readers: fill the blob space for three consecutive blocks and you have about a GameCube memory card's worth of data, which is genuinely nostalgic.
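For a rough sense of the ceiling this implies, the back-of-the-envelope sketch below (in Python) uses the commonly cited post-Dencun parameters: up to 6 blobs per block, a target of 3, roughly 128 KiB per blob, and 12-second slots.

```python
# Back-of-the-envelope Ethereum blob capacity with Dencun parameters.
BLOB_SIZE_BYTES = 128 * 1024       # ~128 KiB per blob
MAX_BLOBS_PER_BLOCK = 6            # protocol maximum
TARGET_BLOBS_PER_BLOCK = 3         # fee-market target
SLOT_SECONDS = 12                  # one block per 12-second slot

max_rate = MAX_BLOBS_PER_BLOCK * BLOB_SIZE_BYTES / SLOT_SECONDS
target_rate = TARGET_BLOBS_PER_BLOCK * BLOB_SIZE_BYTES / SLOT_SECONDS

print(f"max blob throughput:    {max_rate / 1024:.0f} KiB/s")             # ~64 KiB/s
print(f"target blob throughput: {target_rate / 1024:.0f} KiB/s")          # ~32 KiB/s
print(f"max blob data per day:  {max_rate * 86400 / 2**30:.2f} GiB/day")  # ~5.27 GiB
```

Even at the protocol maximum, base-layer blob space tops out around 64 KiB/s; that ceiling is what alternative DA layers aim to lift.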

This per-block limit is indeed reached several times per day, indicating strong demand for blob space on Ethereum. While the base fee works out to around $5 at the time of writing, we should remember that this fee is denominated in ETH, as is most DeFi activity. During periods of rising ETH prices, activity increases, which in turn drives up demand for block space. So to absorb increased DeFi activity, or to open the network to entirely new use cases, the cost of data availability must fall further; there remains a strong incentive to keep reducing these costs and encourage continued growth in user activity.

How does EigenDA work?

EigenDA is built on the simple principle that data availability does not require its own consensus to be solved, so EigenDA is structurally designed to scale linearly: the main role of operators is simply to store data. In more detail, the EigenDA architecture has three main components:

  • Operators

  • Disperser

  • Retriever

EigenDA operators are the parties or entities that run the EigenDA node software; they are registered on EigenLayer and have had stake delegated to them. You can think of them as node operators in a traditional proof-of-stake network. However, rather than participating in consensus, their role is primarily to store the blobs associated with valid storage requests. In this context, a valid storage request is one for which a fee has been paid and whose blobs match the KZG commitments and proofs provided.

In short, a KZG commitment lets you associate a piece of data with a short, unique value (the commitment) and later use a special proof to show that given data really is the committed data. This ensures the data has not been altered or tampered with, preserving the integrity of the blob.
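The algebra behind a KZG opening can be sketched in a few lines. The toy below is purely illustrative: real KZG evaluates these polynomials "in the exponent" of an elliptic-curve group (only curve points encoding powers of the secret s are ever published) and checks the same identity with a pairing, so nothing here is cryptographically secure, and all values are made up.

```python
# Toy illustration of the algebra behind a KZG opening proof.
# Real KZG hides the secret point s inside curve points and checks the final
# identity with a pairing; doing it openly over a prime field, as here, is NOT
# a secure commitment scheme -- it only shows the underlying math.
import secrets

P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients low-degree first) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def quotient(coeffs, a):
    """Return q(X) with p(X) - p(a) = q(X) * (X - a), via synthetic division."""
    n = len(coeffs) - 1          # degree of p
    q = [0] * n
    q[n - 1] = coeffs[n] % P
    for k in range(n - 2, -1, -1):
        q[k] = (coeffs[k + 1] + a * q[k + 1]) % P
    return q

# "Trusted setup": a secret evaluation point s, never revealed in real KZG.
s = secrets.randbelow(P)

blob = [7, 42, 1337, 2024]        # hypothetical data, read as poly coefficients
commitment = poly_eval(blob, s)   # in real KZG this would be a curve point

# Prover: open the polynomial at point a, producing value y and proof q(s).
a = 5
y = poly_eval(blob, a)
proof = poly_eval(quotient(blob, a), s)

# Verifier: check p(s) - y == q(s) * (s - a); a pairing verifies this same
# relation without anyone ever learning s.
assert (commitment - y) % P == (proof * (s - a)) % P
print("opening verified")
```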

The Disperser is an "untrusted" service described in the EigenDA documentation, currently hosted by Eigen Labs. Its main responsibility is to act as the interface between EigenDA clients, operators, and contracts. EigenDA clients send dispersal requests to the Disperser, which erasure-codes the data with Reed-Solomon encoding (making it recoverable from a subset of chunks), computes a KZG commitment for the encoded blob, and generates a KZG proof for each chunk. The Disperser then sends the chunks, the KZG commitment, and the KZG proofs to EigenDA operators, who return signatures attesting to storage. The Disperser's final step is to aggregate these signatures and post them to Ethereum as calldata sent to the EigenDA contracts. Notably, this step is a necessary prerequisite for slashing operators who misbehave.

The last core component of EigenDA is the Retriever, which queries EigenDA operators for chunks, verifies that they are correct, and reconstructs the original blob for the user. While EigenDA hosts a retriever service, rollup clients can also choose to run their own retriever alongside their sequencer, as in the sketch below.
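To make erasure coding and reconstruction concrete, here is a minimal Reed-Solomon-style sketch over a prime field: conceptually what the Disperser does when encoding a blob into chunks, and what a retriever does when interpolating it back. This is not EigenDA's actual codec; the field, chunk layout, and parameters are all illustrative.

```python
# Minimal Reed-Solomon-style erasure coding over a prime field (illustrative).
P = 2**61 - 1  # prime field modulus

def encode(data, n):
    """Treat k data symbols as coefficients of a degree-(k-1) polynomial and
    evaluate it at n distinct points; any k of the n chunks recover the data."""
    def eval_at(x):
        acc = 0
        for c in reversed(data):
            acc = (acc * x + c) % P
        return acc
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(chunks, k):
    """Lagrange-interpolate from any k chunks, returning the coefficient list
    (i.e. the original data symbols)."""
    chunks = chunks[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(chunks):
        basis = [1]   # running product of (X - xj) for j != i, low-degree first
        denom = 1
        for j, (xj, _) in enumerate(chunks):
            if j == i:
                continue
            basis = [(-xj * basis[0]) % P] + [
                (basis[t - 1] - xj * basis[t]) % P for t in range(1, len(basis))
            ] + [basis[-1]]
            denom = (denom * (xi - xj)) % P
        scale = yi * pow(denom, -1, P) % P
        for t in range(k):
            coeffs[t] = (coeffs[t] + scale * basis[t]) % P
    return coeffs

# Example: 4 data symbols spread across 8 operators; any 4 chunks suffice.
data = [11, 22, 33, 44]
chunks = encode(data, n=8)
assert reconstruct(chunks[3:7], k=4) == data   # simulate 4 surviving operators
```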

Here is how the flow works in practice:

  • The rollup sequencer sends a batch of transactions as a blob to the EigenDA disperser sidecar (a sidecar design pattern).

  • The EigenDA disperser sidecar erasure-codes the blob, splitting it into multiple chunks, generates KZG commitments and multireveal proofs for each chunk, and distributes the chunks to EigenDA operators, who return signatures attesting to storage.

  • After aggregating the received signatures, the disperser registers the blob on-chain by sending a transaction containing the aggregated signature and the blob metadata to the EigenDA Manager contract.

  • The EigenDA manager contract verifies the aggregate signature with the help of the EigenDA registry contract and stores the result on-chain.

  • Once the blob is stored off-chain and registered on-chain, the sequencer posts the EigenDA blob ID in a transaction to its inbox contract. Blob IDs are limited to 100 bytes in length.

  • Before accepting a blob ID into the rollup's inbox, the inbox contract consults the EigenDA Manager contract to confirm that the blob has been certified as available. If so, the blob ID is allowed into the inbox contract; otherwise it is discarded.

In simple terms, the sequencer sends data to EigenDA, which splits it up, stores it across operators, and attests that it is available. If everything checks out, the data reference is accepted and processing continues; if not, it is discarded.
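As a mental model of the steps above, the hypothetical sketch below wires together a disperser, operators, a manager contract, and a rollup inbox. Every class and method name is invented for illustration and does not correspond to EigenDA's real contracts or APIs.

```python
# A highly simplified, hypothetical model of the dispersal flow described above.
from dataclasses import dataclass
from hashlib import sha256

@dataclass
class BlobCertificate:
    blob_id: bytes            # identifier the sequencer later posts to its inbox
    aggregate_signature: bytes
    signer_stake_pct: float   # fraction of delegated stake that signed

class Operator:
    def __init__(self):
        self.storage = {}
    def store_and_sign(self, chunk: bytes) -> bytes:
        # Store the chunk and return a stand-in "signature" attesting to storage.
        self.storage[sha256(chunk).digest()] = chunk
        return sha256(b"signed:" + chunk).digest()

class DisperserSidecar:
    def disperse(self, batch: bytes, operators) -> BlobCertificate:
        # Split the batch into chunks, send to operators, collect signatures.
        chunks = [batch[i::len(operators)] for i in range(len(operators))]
        sigs = [op.store_and_sign(c) for op, c in zip(operators, chunks)]
        # Aggregate signatures (here just hashed together) into a certificate.
        return BlobCertificate(blob_id=sha256(batch).digest(),
                               aggregate_signature=sha256(b"".join(sigs)).digest(),
                               signer_stake_pct=1.0)

class EigenDAManager:
    """Stands in for the manager contract that verifies aggregate signatures."""
    def __init__(self, quorum_pct: float = 0.67):
        self.quorum_pct = quorum_pct
        self.confirmed = set()
    def confirm(self, cert: BlobCertificate) -> None:
        # Record the blob only if enough stake signed for it.
        if cert.signer_stake_pct >= self.quorum_pct:
            self.confirmed.add(cert.blob_id)

class RollupInbox:
    """Stands in for the rollup's inbox contract."""
    def __init__(self, manager: EigenDAManager):
        self.manager = manager
        self.accepted = []
    def submit(self, blob_id: bytes) -> bool:
        # Only accept blob IDs the manager contract has confirmed as available.
        if blob_id in self.manager.confirmed:
            self.accepted.append(blob_id)
            return True
        return False

# Wiring it together for one batch of transactions.
operators = [Operator() for _ in range(4)]
manager = EigenDAManager()
inbox = RollupInbox(manager)

cert = DisperserSidecar().disperse(b"rollup transaction batch", operators)
manager.confirm(cert)
assert inbox.submit(cert.blob_id)   # accepted because availability was attested
```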

Competitive Landscape

Looking at the competitive landscape of data availability services more broadly, EigenDA clearly leads in throughput, and the scaling headroom grows as more operators join the network. Furthermore, when considering which alternative DA service is most "Ethereum-aligned", EigenDA is arguably the obvious choice.

While Celestia offers genuine innovation with data availability sampling (DAS), it is hard to see it as a service fully aligned with Ethereum. That alignment is not mandatory, but it certainly matters to the clients (such as rollups) deciding which service to use. Celestia also pursues interesting strategies around its light-node architecture, which may allow larger blocks and therefore more data per block, subject to certain conditions.

Operationally, Celestia appears to have been very successful at reducing rollup costs, savings that have been passed on to end users. However, despite this meaningful innovation, it has made little real progress on fee accrual, even with a fully diluted valuation in the billions (~$5.5 billion at the time of writing). Celestia launched on Halloween last year, and since then 20 independent rollups have integrated its DA service. Across those 20 rollups, a total of 54.94 GB of blob-space data has been published, earning the protocol 4,091 TIA, worth ~$21,000 at current prices. To be fair, these fees are paid to stakers and validators, and the price of TIA has fluctuated over time (reaching as high as $19.87), so the actual dollar amount varies; using secondary data, we estimate total fees in USD are more likely around $35,000.

The current rollup landscape and EigenDA's positioning

EigenDA's pricing was recently announced and includes an "on-demand" option plus three pricing tiers. The on-demand option is priced at 0.015 ETH/GB with variable throughput, while "Tier 1" is priced at 70 ETH per year for 256 KiB/s of throughput. Looking at the DA landscape on Ethereum mainnet today, we can make some assumptions about potential demand for EigenDA and how much revenue it might generate for re-stakers.
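Using only the figures quoted above, a quick calculation shows roughly where Tier 1 starts to beat on-demand pricing and how much data Tier 1's rated throughput could carry in a year. Whether "GB" here means decimal GB or GiB is our assumption; the numbers are order-of-magnitude only.

```python
# Rough comparison of EigenDA's on-demand and Tier 1 pricing (figures as quoted).
ON_DEMAND_ETH_PER_GB = 0.015
TIER1_ETH_PER_YEAR = 70
TIER1_RATE_KIB_S = 256
SECONDS_PER_YEAR = 365 * 24 * 3600

# Annual data volume above which Tier 1 is cheaper than paying on demand.
breakeven_gb = TIER1_ETH_PER_YEAR / ON_DEMAND_ETH_PER_GB
# Maximum data Tier 1 could carry at its rated throughput.
tier1_ceiling_gb = TIER1_RATE_KIB_S * 1024 * SECONDS_PER_YEAR / 1e9

print(f"break-even volume: ~{breakeven_gb:,.0f} GB/year")      # ~4,667 GB
print(f"Tier 1 ceiling:    ~{tier1_ceiling_gb:,.0f} GB/year")  # ~8,267 GB
```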

As of now, roughly 27 rollups publish blobs to Ethereum L1, identified through our queries. Each blob published to Ethereum post-EIP-4844 is 128 KB in size. Across these 27 rollups, about 2.4 million blobs totaling 295 GB have been published. If all of these rollups instead paid EigenDA's on-demand price of 0.015 ETH/GB, the total fee would be 4.425 ETH.
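The sketch below re-derives that figure and compares it with the ~1,700 ETH in blob fees mentioned earlier. The inputs are this article's own estimates, and the aggregate saving is in the same ballpark as the per-rollup average cited further down.

```python
# ~295 GB of post-Dencun blob data repriced at EigenDA's on-demand rate,
# compared with the ~1,700 ETH actually paid in blob fees on L1.
TOTAL_BLOB_DATA_GB = 295
ON_DEMAND_ETH_PER_GB = 0.015
L1_BLOB_FEES_ETH = 1_700

eigenda_cost_eth = TOTAL_BLOB_DATA_GB * ON_DEMAND_ETH_PER_GB   # 4.425 ETH
aggregate_saving = 1 - eigenda_cost_eth / L1_BLOB_FEES_ETH

print(f"on-demand cost:   {eigenda_cost_eth:.3f} ETH")
print(f"aggregate saving: ~{aggregate_saving:.1%}")            # ~99.7%
```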

At first glance, this may look like a problem. It is important to remember, however, that rollups vary widely in their offerings and architectures; design differences and distinct user bases mean the number of blobs published and the fees paid to L1 vary enormously from rollup to rollup.

For the rollups analyzed in this study, here is how many blobs (count and GB) each one published and how much it paid in fees:

From this analysis alone, 6 rollups have paid more in L1 fees than EigenDA's Tier 1 pricing threshold, though from a pure data-throughput perspective Tier 1 does not appear to make sense for them. In fact, EigenDA's on-demand pricing would still cut their costs by an average of roughly 98.91%.

This leaves re-stakers and other ecosystem stakeholders in something of a dilemma. The cost reductions EigenDA provides benefit L2s and their users, translating into better margins and revenues, but they do little to reassure re-stakers who had hoped EigenDA would become the leader in re-staking rewards.

Another reading, however, is that EigenDA's cost reductions foster innovation. Historically, cost reductions have often been a major catalyst for growth. The Bessemer process, for example, dramatically reduced the cost and time required to produce steel, enabling mass production of stronger, higher-quality steel at roughly 82% lower cost. One could argue similar principles apply to DA services: introducing multiple DA providers not only drives costs down through competition but also spurs innovation in high-throughput rollups, expanding previously unexplored design boundaries.

For example, Eclipse is an SVM rollup that only began publishing blobs 28 days ago yet already accounts for 86% of all data posted to Celestia. Its mainnet is not yet open to the public, and while this usage may primarily be stress-testing the technology, it offers a glimpse of what high-throughput rollups could look like and suggests they will be significantly larger DA consumers than most rollups we see today.

Summary and Conclusion

So where does this leave us? Based on the goals the team set out in its blog, reaching EigenDA's $160k/month revenue target would require 11 rollups as paying customers at the Tier 1 price of 70 ETH/year, assuming an average ETH price of ~$2,500. From our analysis, about 6 rollups have paid more than 70 ETH in L1 fees since EIP-4844 went live in mid-March. As discussed, on-demand pricing would still cut costs for these rollups by ~99%, but ultimately their throughput needs will determine whether they choose EigenDA.
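For completeness, the arithmetic behind that customer count, using the assumptions stated above (Tier 1 at 70 ETH/year and an average ETH price of ~$2,500):

```python
# Arithmetic behind the ~11-customer figure, using the stated assumptions.
TIER1_ETH_PER_YEAR = 70
ETH_PRICE_USD = 2_500
MONTHLY_TARGET_USD = 160_000

monthly_per_rollup = TIER1_ETH_PER_YEAR * ETH_PRICE_USD / 12    # ~$14,583/month
rollups_needed = MONTHLY_TARGET_USD / monthly_per_rollup        # ~11

print(f"revenue per Tier 1 customer: ${monthly_per_rollup:,.0f}/month")
print(f"customers needed:            ~{rollups_needed:.0f}")
```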

Beyond that, we are likely to see costs driven down further by the emergence of multiple high-throughput rollups (e.g. MegaETH), which should in turn stimulate demand. In the future, such high-performance rollups may also be deployed through Rollup-as-a-Service (RaaS) providers like AltLayer and Conduit. In the short term, however, there is still work to do to reach the $160,000 per month revenue target, which is roughly the break-even cost assuming only 400 operators support EigenDA. Overall, EigenDA opens up new design possibilities with real value-accrual potential, but it is not yet clear how much of that value EigenDA will capture and return to re-stakers. Nonetheless, we believe EigenDA is well positioned to win market share as a data availability provider, and we look forward to continued attention on one of the most prominent AVSs.

Original link