
Kernel Ventures: The Upsurge of Bitcoin Ecosystem — A Panoramic View of its Application Layer

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
- Along with the rise of inscriptions, the existing application layer of the Bitcoin network can no longer sustain market activity, and building it out is the main focus of current Bitcoin ecosystem development.
- There are three mainstream Layer2 approaches for Bitcoin: the Lightning network, sidechains, and Rollups.
- The Lightning network enables peer-to-peer payments by establishing off-chain payment channels, which are settled on the main network once a channel is closed.
- Sidechains lock BTC on the mainnet in specific or multisig addresses while minting equivalent BTC assets on the sidechain. Merlin Chain supports bridging multiple types of inscription assets, is backed by the Bitmap ecosystem, and its TVL has reached nearly 4 billion dollars.
- BTC Rollups rely on Taproot circuits, which can simulate smart contracts on-chain, while packing and computation are performed outside the main Bitcoin network. The B2 Network is at the forefront of this implementation, with over $200 million in on-chain TVL.
- Cross-chain bridges built specifically for Bitcoin are not yet common; more often, multi-chain and full-chain bridges integrate with mainstream blockchains. One of them, Meson.Fi, has established partnerships with a number of Bitcoin Layer2s.
- Stablecoin protocols on the Bitcoin network are mostly implemented through over-collateralization and support other DeFi protocols to bring more yield to users.
- DeFi projects in the Bitcoin ecosystem range from those that migrated from other chains, to those built on the native Bitcoin network during the current development boom, to those built during the last bull market and deployed as sidechains. Overall, Alex provides the widest variety of trading products and the smoothest trading experience, but Orders Exchange has a higher growth ceiling.
- Bitcoin will be an important narrative in this bull cycle, and the top-tier projects in each vertical of the Bitcoin ecosystem deserve close attention.
1. Background
With the flood of inscription assets brought about by the Ordinals protocol, the Bitcoin network, once characterized by its lack of smart contracts, developer-unfriendliness, and dearth of infrastructure and scaling capabilities, is experiencing an on-chain data boom (refer to Kernel's previous research article, Can RGB Replicate The Ordinals Hype, for more details). Much like the early days of the Ethereum network, formatted text, images, and even videos are being crammed into 4MB Tapscript scripts that will never be executed. While this surge in on-chain activity has contributed to the growth of the Bitcoin ecosystem and its infrastructure, it has also driven up transaction volumes and imposed a huge storage burden on the network. In addition, with such a wide variety of inscriptions, simple transfers can no longer satisfy users' trading needs, and users are looking forward to a broad range of derivatives trading services on Bitcoin. The development of the Bitcoin application layer has therefore become urgent.

Source: CryptoQuant
2. Bitcoin Layer2
Unlike Layer2 on Ethereum, which is dominated by Rollups, the Layer2 landscape for Bitcoin remains unsettled. Bitcoin's native scripting language cannot express smart contracts, and contract deployment must rely on third-party protocols, so applying an Ethereum-style solution to Bitcoin cannot guarantee the same level of security as an Ethereum Rollup. As a result, a variety of Layer2 solutions exist for Bitcoin, including the Lightning network, sidechains, and Rollups based on TapScript.
2.1 Lightning network
The Lightning network is the earliest Bitcoin Layer2 solution, first proposed by Joseph Poon and Thaddeus Dryja in their 2015 white paper. The Lightning network specification, known as BOLT, was released in early 2017 and has undergone upgrades and improvements since. The Lightning network allows users to make peer-to-peer transfers of any size and number through off-chain payment channels, with no on-chain fees until the channel is closed; at that point, all previous transfers are settled in a single on-chain transaction. Thanks to its off-chain channels, the Lightning network could in principle reach up to 10 million TPS (transactions per second). However, off-chain channels carry a risk of centralization: to transact between two addresses, a channel must be established either directly or through a third party, and both parties must be online during the transaction for it to execute securely.

Source: Kernel Ventures
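To make the channel mechanics above concrete, here is a minimal, illustrative Python sketch of off-chain balance updates followed by a single settlement. The class and method names are hypothetical, and real Lightning channels also involve commitment transactions, signatures, and penalty mechanisms that are omitted here.

```python
# Minimal sketch of a two-party payment channel (illustrative only).
# Real Lightning channels use signed commitment transactions and
# revocation keys; here we only model the balance bookkeeping.

class PaymentChannel:
    def __init__(self, balance_a: int, balance_b: int):
        # Funding amounts locked on-chain when the channel opens (sats).
        self.balances = {"A": balance_a, "B": balance_b}
        self.updates = 0  # off-chain state updates, no on-chain footprint

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        # Each payment is just a mutually agreed re-balance off-chain.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self) -> dict:
        # Only the final state is broadcast and settled on-chain.
        return dict(self.balances)

channel = PaymentChannel(balance_a=100_000, balance_b=50_000)
channel.pay("A", "B", 30_000)
channel.pay("B", "A", 10_000)
print(channel.close(), f"after {channel.updates} off-chain updates")
# {'A': 80000, 'B': 70000} after 2 off-chain updates
```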
2.2 Side Chain
The sidechain solution on Bitcoin is similar to Ethereum's: a new token pegged 1:1 to Bitcoin is issued on a new chain. The new chain is not limited by the transaction speed and development bottlenecks of the Bitcoin network, allowing Bitcoin-pegged tokens to be transferred much faster and at lower cost. A sidechain inherits the asset value of the mainnet, but not its security; all transactions are recorded and confirmed on the sidechain itself.
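The lock-and-mint flow shared by the sidechains below can be sketched as simple ledger bookkeeping. This is a hypothetical Python model; the multisig custody and verification details, which each project handles differently, are omitted.

```python
# Illustrative two-way peg: lock BTC on the mainnet, mint pegged
# tokens on the sidechain, and burn them to unlock the original BTC.

class TwoWayPeg:
    def __init__(self):
        self.locked_btc = {}      # mainnet address -> BTC locked
        self.sidechain_bal = {}   # sidechain address -> pegged BTC

    def peg_in(self, btc_addr: str, side_addr: str, amount: float):
        # BTC is sent to a designated (e.g. multisig) address and locked.
        self.locked_btc[btc_addr] = self.locked_btc.get(btc_addr, 0) + amount
        # An equivalent pegged asset is minted on the sidechain.
        self.sidechain_bal[side_addr] = self.sidechain_bal.get(side_addr, 0) + amount

    def peg_out(self, side_addr: str, btc_addr: str, amount: float):
        # Burning the pegged asset releases the locked BTC on the mainnet.
        if self.sidechain_bal.get(side_addr, 0) < amount:
            raise ValueError("insufficient pegged balance")
        self.sidechain_bal[side_addr] -= amount
        self.locked_btc[btc_addr] -= amount

peg = TwoWayPeg()
peg.peg_in("bc1-example-addr", "0x-example-addr", 1.5)
peg.peg_out("0x-example-addr", "bc1-example-addr", 0.5)
print(peg.locked_btc, peg.sidechain_bal)  # 1.0 locked, 1.0 pegged
```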
2.2.1 Stacks
Stacks 2.0 was released in 2021. Users can lock BTC on the Bitcoin mainnet and receive SBTC assets of equivalent value on Stacks, but transactions on the sidechain require STX, Stacks' native token, as gas. Unlike Ethereum, the Bitcoin network does not allow a smart contract address to manage the locked BTC directly, so locked BTC is sent to a designated multisig address. The release process is relatively simple: since the Stacks network supports smart contract development in the Clarity language, a request to the Burn-Unlock contract on Stacks destroys the SBTC and sends the locked BTC back to the original address. Block production on the Stacks network uses the PoX (Proof of Transfer) consensus mechanism: Bitcoin miners send BTC bids for the opportunity to produce blocks, and the higher the bid, the higher the miner's weight. The winner is then selected by a verifiable random function to package blocks on the Stacks network and is rewarded in STX. Meanwhile, the bid BTC is distributed, in the form of SBTC, to STX holders as a reward.

Source: Kernel Ventures
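A toy sketch of this PoX-style leader election follows; bid-weighted random selection stands in for the actual VRF, and the function names are hypothetical.

```python
# Illustrative PoX-style leader selection: miners bid BTC, and the
# probability of winning block production is proportional to the bid.
# (The real VRF-based selection in Stacks is more involved.)
import random

def select_block_producer(bids: dict, seed: int) -> str:
    # bids: miner -> BTC committed for this block opportunity
    rng = random.Random(seed)       # stand-in for the verifiable random function
    total = sum(bids.values())
    point = rng.uniform(0, total)   # pick a point on the cumulative bid line
    cumulative = 0.0
    for miner, bid in bids.items():
        cumulative += bid
        if point <= cumulative:
            return miner
    return miner  # fallback for floating-point edge cases

bids = {"minerA": 0.02, "minerB": 0.05, "minerC": 0.01}
print(select_block_producer(bids, seed=42))  # higher bids win more often
```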
In addition, Stacks is expected to ship its Nakamoto upgrade in April, which includes optimizations to its development language, Clarity, lowering the barrier for developers. The upgrade also raises the network's security level: transactions on Stacks will be settled on the Bitcoin mainnet, upgrading Stacks from a sidechain to a Layer2 whose security matches that of the Bitcoin mainnet. Finally, Stacks has made significant improvements to its block time, reaching 5 seconds per block in testing (compared with 10-30 minutes per block today). If the Nakamoto upgrade completes successfully, Stacks could narrow, perhaps even eliminate, the gap with Ethereum Layer2s, which should attract considerable attention and stimulate the development of its ecosystem.

2.2.2 RSK
RSK (Rootstock) is a Bitcoin sidechain without a native token; transaction fees on the sidechain are currently paid in pegged BTC. Users can exchange BTC from the mainnet for RBTC at a 1:1 ratio on RSK through the built-in PowPeg protocol. RSK is also a PoW chain, but thanks to its merged mining mechanism, the infrastructure and setup of Bitcoin miners can be applied directly to RSK mining, which lowers the cost for Bitcoin miners to participate. At present, transactions on RSK are three times faster than on the mainnet and cost about 1/20 as much.

Source: RSK White Paper

2.2.3 BEVM
BEVM is an EVM-compatible PoS sidechain that has not yet issued its own native token. It uses the Schnorr multi-signature algorithm on the Bitcoin network to store incoming assets in a multisig script address controlled by 1,000 addresses, corresponding to the 1,000 PoS validators on BEVM. Automated control of assets is achieved by writing MAST (Merkelized Abstract Syntax Tree) scripts in the TapScript area: the program is described as a number of independent chunks, each corresponding to a portion of the code logic, and only the hash of each chunk, rather than the full logic, needs to be stored in the script. This greatly reduces the amount of code stored on the blockchain. When a user transfers BTC to BEVM, that BTC is locked by the script, and it can only be unlocked and sent back to the corresponding address when signed by more than 2/3 of the validators. Because BEVM is EVM compatible, dApps originally built on Ethereum can be migrated cheaply, trading in the BTC-pegged assets described above while using them for gas.

Source: BTCStudy
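To make the MAST idea concrete, here is a minimal Python sketch that commits to several hypothetical script chunks under a single Merkle root. Real MAST trees use Bitcoin's tagged hashes and script serialization, which are simplified away here.

```python
# Illustrative MAST construction: commit to many script chunks with a
# single Merkle root, so only one hash goes on-chain. To spend, a user
# reveals just the chunk being executed plus its Merkle proof.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each chunk is one independent piece of spending logic (hypothetical).
script_chunks = [
    b"IF signed_by_2_of_3_validators THEN unlock",
    b"IF timeout > 6_months THEN refund_to_owner",
    b"IF emergency_key THEN freeze",
]
root = merkle_root(script_chunks)
print(root.hex())  # only this 32-byte commitment is stored on-chain
```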
2.2.4 Merlin Chain
Merlin Chain is an EVM-compatible Bitcoin sidechain. Users can connect directly with a Bitcoin address through Particle Network, which generates a unique corresponding Ethereum address, or connect to an RPC node directly with an Ethereum account. Merlin Chain currently supports bridging BTC, Bitmap, BRC-420, and BRC-20 assets. Like Merlin Chain itself, the BRC-420 protocol was developed by the Bitmap asset community on the basis of recursive inscriptions, and the community has also launched projects such as RCSV's recursive inscription matrix and the Bitmap Game metaverse platform built on recursive inscriptions.

Source: Merlin Docs
Merlin Chain went live on February 5th, followed by a round of IDOs and staking rewards allocating 21% of the governance token MERL. The direct and massive airdrop attracted a large number of participants: Merlin Chain's TVL has since surpassed $3 billion, pushing Bitcoin's on-chain TVL past Polygon's to #6 among all blockchains.

Source: DeFiLlama
During People's Launchpad's IDO, users could stake Ally or more than 0.00025 BTC to earn bonus points redeemable for MERL, with a cumulative bonus stake limit of 0.02 BTC, corresponding to 460 MERL tokens. This round's allocation was relatively small, accounting for only 1% of the total MERL supply; however, at today's OTC price of $2.90 per MERL, it produced a return of over 100%. In the second staking incentive round, Merlin allocated 20% of its total tokens, allowing users to stake BTC, Bitmap, USDT, USDC, and some BRC-20 and BRC-420 assets on Merlin Chain through Merlin's Seal. Merlin takes an hourly snapshot of the USD value of each user's assets, and the final daily average multiplied by 10,000 is the number of points the user receives. The second staking round borrows Blast's team model: users can choose to be a leader or a team member, and leaders receive an invitation code to share with their team.
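As a rough illustration of that points formula, the sketch below averages hourly USD snapshots and multiplies by 10,000; the exact snapshot and rounding rules are Merlin's own and are assumed here.

```python
# Hypothetical reconstruction of Merlin's daily points formula:
# daily points = (average of 24 hourly USD snapshots) * 10,000.

def daily_points(hourly_usd_values: list) -> float:
    daily_average = sum(hourly_usd_values) / len(hourly_usd_values)
    return daily_average * 10_000

# e.g. a position hovering around $1,500 over a day:
snapshots = [1_500.0] * 20 + [1_450.0] * 4
print(daily_points(snapshots))  # ~14,916,667 points
```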
Merlin is relatively mature within the current Bitcoin Layer2 ecosystem, freeing up the liquidity of Layer1 assets and allowing Bitcoin transfers on Layer2 at lower cost. The Bitmap ecosystem behind Merlin is very large and its technology relatively sound, so it stands a good chance of developing well over the long run. Staking on Merlin has offered a high rate of return: in addition to the expected MERL rewards, there are opportunities to receive meme or other tokens airdropped by projects, such as the officially airdropped Voya tokens. Staking more than 0.01 BTC earned an airdrop of 90 Voya tokens, whose price has risen since the launch of the program, peaking at 514% of the issuance price. At Voya's current quote of US$5.89, the yield is as high as 106%, calculated against an average Bitcoin price of US$50,000 at staking time.

Source: CoinGecko
2.3 Rollup
2.3.1 BitVM
BitVM is an Optimistic-Rollup-style Layer2 design for Bitcoin. As with Optimistic Rollups on Ethereum, traders first send transactions to the Layer2, where they are computed and packed; the results are then sent back to Layer1 for verification, with a window during which a verifier can challenge the prover's claim. However, Bitcoin does not support native smart contracts, so the implementation is not as simple as Ethereum's Optimistic Rollup. The whole process involves Bit Value Commitments, Logic Gate Commitments, and Binary Circuit Commitments, abbreviated below as BVC, LGC, and BCC.
BVC (Bit Value Commitment): A BVC is essentially a one-bit result with only two possible values, 0 and 1, similar to a Bool variable in other programming languages. Bitcoin Script is a stack-based language with no Bool type, so BitVM emulates one with a combination of opcodes:

<Input Preimage of HASH>
OP_IF
OP_HASH160 // Hash the user's input
<HASH1>
OP_EQUALVERIFY // Outputs 1 if Hash(input) == HASH1
<1>
OP_ELSE
OP_HASH160 // Hash the user's input
<HASH2>
OP_EQUALVERIFY // Outputs 0 if Hash(input) == HASH2
<0>
OP_ENDIF

In a BVC, the user first submits an input; the Bitcoin network then hashes it and unlocks the script only if the hash equals HASH1 or HASH2, with HASH1 yielding an output of 1 and HASH2 an output of 0. In the following sections, we summarize this entire snippet as an OP_BITCOMMITMENT opcode to simplify the description.
LGC (Logic Gate Commitment): Every function in a computer is ultimately a combination of Boolean gates, which can all be reduced to NAND gates. That is to say, if we can simulate a NAND gate on the Bitcoin network through bytecode, we can in principle realize any function. Although Bitcoin Script has no NAND opcode, it does have an AND gate, OP_BOOLAND, and a NOT gate, OP_NOT, which can be composed to reproduce NAND. For the two output bits obtained from OP_BITCOMMITMENT, we can thus form a NAND output circuit with the OP_BOOLAND and OP_NOT opcodes.
BCC (Binary Circuit Commitment): Based on LGC circuits, we can construct specific gate relationships between inputs and outputs. In a BCC circuit, each input comes from the corresponding hash preimage in a TapScript script, and each Taproot address corresponds to a different gate, called a TapLeaf; the many TapLeafs make up a Taptree, which serves as the input to the BCC circuit.
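To make the construction concrete, here is a minimal Python sketch of the same idea, a hash-based bit commitment plus a NAND gate built from AND and NOT (the preimage values and helper names are our own illustration, not part of BitVM; plain SHA-256 stands in for Bitcoin's HASH160 for portability):

import hashlib

def h(preimage: bytes) -> bytes:
    # stand-in for Bitcoin's OP_HASH160, which is RIPEMD160(SHA256(x))
    return hashlib.sha256(preimage).digest()

# The prover commits to a bit by publishing two hashes in advance
PREIMAGE0, PREIMAGE1 = b"secret-for-0", b"secret-for-1"
HASH0, HASH1 = h(PREIMAGE0), h(PREIMAGE1)

def op_bitcommitment(preimage: bytes) -> int:
    # Revealing a preimage fixes the bit's value, like the script above
    digest = h(preimage)
    if digest == HASH1:
        return 1
    if digest == HASH0:
        return 0
    raise ValueError("invalid preimage: script cannot be unlocked")

def nand(a: int, b: int) -> int:
    # OP_BOOLAND followed by OP_NOT
    return int(not (a and b))

print(nand(op_bitcommitment(PREIMAGE1), op_bitcommitment(PREIMAGE1)))  # 0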

Source: BitVM White Paper
Ideally, a BitVM prover compiles and computes the circuits off-chain and returns the results to the Bitcoin network for execution. However, since the off-chain process is not enforced by a smart contract, BitVM lets verifiers on the network raise challenges to prevent provers from committing fraud. The verifier first reproduces the output of a certain TapLeaf, then feeds it, together with the other TapLeaf results provided by the prover, as inputs to drive the circuit. If the output is false, the challenge succeeds, meaning the prover provided a fraudulent result, and vice versa. However, to carry out this process, the Taproot circuit must be shared between the challenger and the prover in advance, and only the interaction between a single prover and a single verifier can be realized.
2.3.2 SatoshiVM
SatoshiVM is an EVM-compatible zkRollup Layer2 for Bitcoin. Smart contracts are implemented on SatoshiVM the same way as on BitVM, using Taproot circuits to simulate complex functions. SatoshiVM is divided into three layers: the Settlement Layer, the Sequencing Layer, and the Proving Layer. The Settlement Layer, which is the Bitcoin mainnet itself, serves as the DA layer, storing the Merkle roots and zero-knowledge proofs of transactions and settling them by verifying the correctness of Layer2's packaged transactions through the Taproot circuit. The Sequencing Layer packages and processes transactions and returns the results to the mainnet along with the zero-knowledge proofs, while the Proving Layer generates zero-knowledge proofs for the tasks received from the Sequencing Layer and passes them back.

Source: SatoshiVM Docs
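A minimal sketch of how these three layers hand work to one another (the function and variable names are our own illustration of the described flow, not SatoshiVM's actual interfaces):

from hashlib import sha256

def sequencing_layer(txs):
    # Package transactions into a batch and derive a commitment to it
    batch = "|".join(txs)
    return batch, sha256(batch.encode()).hexdigest()

def proving_layer(batch):
    # Stand-in for generating a ZK proof over the batch's execution
    return "zkproof:" + sha256(("exec:" + batch).encode()).hexdigest()

def settlement_layer(commitment, proof):
    # On Bitcoin, verification happens inside a Taproot circuit;
    # here we just record that the (commitment, proof) pair was accepted
    return {"merkle_root": commitment, "proof": proof, "settled": True}

batch, root = sequencing_layer(["tx1", "tx2", "tx3"])
print(settlement_layer(root, proving_layer(batch)))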
2.3.3 BL2
BL2 is a zkRollup Bitcoin Layer2 based on the VM Common Protocol (an official, preconfigured VM protocol claimed to be compatible with all major VMs). Similar to other zkRollups, its Rollup Layer mainly packages transactions and generates the corresponding zero-knowledge proofs through a zkEVM. BL2's DA layer introduces Celestia to store the bulk transaction data, uses the BL2 network itself only to store the zero-knowledge proofs, and finally returns the proof verification plus a small amount of validation data, including BVCs, to the Bitcoin mainnet for settlement.

Source: BL2.io
BL2's official X account is updated daily, and the team has announced its development plan and token program, which allocates 20% of its tokens to OG Mining, as well as a testnet launch in the near future. At this stage the project is relatively new compared to other Bitcoin Layer2s and still early, with only 33,000 followers on X. It is worth watching because it combines some of the newer concepts, such as Celestia and Bitcoin Layer2. However, there are no actual technical details on the website, only a demo of what to expect, and no whitepaper. At the same time, its goals are ambitious, such as account abstraction on Bitcoin and compatibility with the VM protocols of mainstream virtual machines. Whether the team can achieve them is still an open question, so we take a more reserved view.

Source: BL2's X Account
2.3.4 B2 Network
The B2 Network is a zkRollup Layer2 with Bitcoin as both the settlement layer and the DA layer, structured into a Rollup Layer and a DA Layer. User transactions are first submitted and processed in the Rollup Layer, which uses a zkEVM scheme to execute them, output the associated proofs, and store user state. The batched transactions and generated zero-knowledge proofs are then forwarded to the DA Layer for storage and validation. The DA Layer can be subdivided into three parts: decentralized storage nodes, B2 Nodes, and the Bitcoin mainnet. The decentralized storage nodes receive the Rollup data and periodically generate zero-knowledge proofs of space-time over it, sending the generated proofs to the B2 Nodes, which validate the data off-chain and then record the transaction data and corresponding zero-knowledge proofs in TapScript on the Bitcoin mainnet once validation is complete. The B2 Nodes are responsible for confirming the authenticity of the ZKPs and finalizing settlement.

Source: B2 Network White Paper
B2 Network has strong influence among the major BTC Layer2 projects, with 300,000 followers on X, surpassing BEVM's 140,000 and fellow zkRollup SatoshiVM's 166,000. The project has also received seed funding from OKX and HashKey, attracting considerable attention, and its on-chain TVL has exceeded $600 million.

Source: bsquared.network
B2 Network has launched B2 Buzz, and an invitation link is required to use it. B2 Network adopts the same referral model as Blast, which binds newcomers and existing members with strong two-way benefits, giving both sufficient motivation to promote the project. After completing simple tasks such as following the official X account, you can enter the staking interface, which supports assets from four chains: Bitcoin, Ethereum, BSC, and Polygon. On the Bitcoin network, besides BTC itself, inscription assets such as ORDI and SATS can also be staked. Staking BTC is a direct transfer, whereas staking an inscription requires inscribing and then transferring; note that since there are no smart contracts on the Bitcoin network, the assets are essentially multisig-locked to a specific BTC address. Assets staked on the B2 network will not be released until at least April this year, and the points earned from staking during this period can be exchanged for components used for virtual mining, where a BASIC miner requires only 10 components to activate while an ADVANCED miner requires more than 80.
The team has announced part of its token program: 5% of the total supply will reward virtual mining, and another 5% will be allocated to ecosystem projects on the B2 network for airdrops. At a time when much attention is paid to tokenomics fairness, 10% of the total supply may be insufficient to fully mobilize the community's enthusiasm, so B2 Network is expected to roll out further staking incentives or LaunchPad plans in the future.
2.4 Comprehensive Comparison
Among the three types of BTC Layer2, the Lightning Network has the fastest transactions and lowest costs, and finds most of its applications in real-time payments and offline purchases. However, in terms of stability and security it is difficult to build the full range of DeFi or cross-chain protocols on the Lightning Network, so competition for the application layer is mainly between the sidechain and Rollup camps. Sidechain solutions do not need to confirm transactions on the mainnet and have more mature technology with lower implementation difficulty, and thus hold the highest TVL of the three. Because the Bitcoin mainnet lacks smart contracts, the confirmation scheme for Rollup data is still under development, and it may take a while before actual usage.

Source: Kernel Ventures
3. Bitcoin Cross-chain Bridge
3.1 Multibit
Multibit is a cross-chain bridge designed specifically for BRC20 assets on the Bitcoin network, currently supporting the migration of BRC20 assets to Ethereum, BSC, Solana, and Polygon. To bridge, users first send their assets to a BRC20 address designated by Multibit and wait for Multibit to confirm the transfer on the mainnet; the user then gains the right to mint the corresponding assets on the other chain, paying gas there to complete the process. Among cross-chain bridges, Multibit has the best interoperability and the largest coverage of BRC20 assets, including more than ten such as ORDI. In addition, Multibit actively expands beyond BRC20: it currently supports farming and cross-chain bridging of the governance token and stablecoin of BitStable, a native BTC stablecoin protocol. Multibit is at the forefront of cross-chain bridges for BTC-derived assets.

The Cross Chain Assets that Multibit supports, Source: Multibit's X Account
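The flow described above is a classic lock-and-mint bridge. Here is a minimal sketch of that pattern under our own simplifying assumptions (a single honest watcher, in-memory ledgers, illustrative names; this is not Multibit's actual implementation):

locked_on_btc = {}   # BRC20 deposits observed at the designated address
minted = {}          # wrapped balances on the destination chain

def deposit_brc20(user: str, ticker: str, amount: int):
    # Step 1: user sends BRC20 assets to the bridge's designated address
    locked_on_btc[(user, ticker)] = locked_on_btc.get((user, ticker), 0) + amount

def confirm_and_mint(user: str, ticker: str, dest_chain: str):
    # Step 2: after the mainnet transfer is confirmed, the user may mint
    amount = locked_on_btc.pop((user, ticker), 0)
    assert amount > 0, "no confirmed deposit to mint against"
    key = (user, ticker, dest_chain)
    minted[key] = minted.get(key, 0) + amount

deposit_brc20("alice", "ORDI", 5)
confirm_and_mint("alice", "ORDI", "Ethereum")  # alice pays gas on Ethereum here
print(minted)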
3.2 Sobit
Sobit is a cross-chain protocol between Solana and the Bitcoin network, focusing on BRC20 tokens and Sobit's native token. Users deposit BRC20 assets on the Bitcoin mainnet to a designated Sobit address; once Sobit's validation network verifies the deposit, the user can mint the mapped assets at the designated address on Solana. At the heart of Sobit's validation network is a validator-based framework that requires multiple trusted validators to approve cross-chain transactions, providing additional security against unauthorized transfers. Sobit's native token, SOBB, can be used to pay cross-chain fees on the Sobit bridge and has a total supply of 1 billion, 74% of which was distributed in a Fair Launch. Unlike other DeFi and cross-chain tokens on Bitcoin that have trended upward recently, SOBB's price entered a downtrend after a brief rally, dropping more than 90 percent and failing to pick up momentum even alongside BTC's rise, which may stem from Sobit's chosen vertical. Sobit's market positioning is very similar to Multibit's, but at this stage Sobit only supports bridging to Solana, with just three BRC20 assets available. Compared with Multibit, which also bridges BRC20 assets, Sobit lags far behind in ecosystem and asset coverage, and can hardly gain an advantage in that competition.

The Price of Sobb, Source: Coinmarketcap
3.3 Meson Fi
Meson Fi is a cross-chain bridge based on HTLC (Hash Time Locked Contracts), supporting cross-chain interactions between 17 mainstream chains including BTC, ETH, and SOL. In the cross-chain process, the user signs the transaction off-chain, submits it to the Meson Contract for confirmation, and locks the corresponding assets on the source chain. After confirming the message, the Meson Contract broadcasts it to the target chain through a Relayer. There are three types of Relayer: P2P nodes, centralized nodes, and no node. P2P nodes offer better security, centralized nodes offer higher efficiency and availability, and the no-node mode requires the user to hold assets on both chains; users can choose according to their situation. After checking the transaction via the Meson Contract's postSwap, an LP on the target chain calls the Lock method on the Meson Contract to lock the corresponding asset and exposes its address to Meson Fi. Then comes the HTLC step: the user specifies the LP's address on the source chain and creates a hash lock, reveals the preimage on the target chain to withdraw the asset, and the LP then uses that preimage to retrieve the user-locked asset on the source chain.

Source: Kernel Ventures
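A minimal Python sketch of the HTLC mechanics behind this flow (simplified to a single in-memory contract; the class name and timeout behavior are our own illustrative assumptions):

import hashlib, time

class HTLC:
    def __init__(self, payee: str, hashlock: bytes, timeout_s: float, amount: int):
        self.payee, self.hashlock, self.amount = payee, hashlock, amount
        self.deadline = time.time() + timeout_s

    def claim(self, preimage: bytes) -> int:
        # The payee unlocks the funds by revealing the hashlock's preimage
        assert time.time() < self.deadline, "expired: sender can refund"
        assert hashlib.sha256(preimage).digest() == self.hashlock, "wrong preimage"
        return self.amount

secret = b"user-chosen-secret"
lock = hashlib.sha256(secret).digest()

# The user locks funds for the LP on the source chain; the LP mirrors a lock
# with the same hashlock on the target chain. Revealing `secret` on the target
# chain lets the user withdraw, and hands the LP the preimage it needs to
# collect the user's funds on the source chain.
source_leg = HTLC(payee="LP", hashlock=lock, timeout_s=3600, amount=100)
print(source_leg.claim(secret))  # 100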
Meson Fi is not a cross-chain bridge built specifically for Bitcoin assets, but a full-chain bridge like LayerZero. However, major BTC Layer2s such as B2 Network, Merlin Chain, and BEVM have all established partnerships with Meson Fi and recommend it for bridging assets during their staking processes. According to official reports, Meson Fi processed more than 200,000 transactions during the three-day Merlin Chain staking event, along with about 2,000 cross-chain stakes of BTC assets, covering transfers from all major chains to Bitcoin. As Layer2s on Bitcoin continue to launch and introduce staking incentives, Meson Fi is well placed to attract more cross-chain assets and see an increase in protocol revenue.
3.4 Comprehensive Comparison
Overall, Meson Fi belongs to a different category from the other two bridges. It is essentially a full-chain bridge that happens to work with many Bitcoin Layer2s to help bridge assets in from other networks. Sobit and Multibit, by contrast, are bridges designed for Bitcoin-native assets, serving BRC20 assets and the assets of other DeFi and stablecoin protocols on Bitcoin. Comparatively, Multibit offers a wider variety of BRC20 assets, including ORDI and SATS among more than a dozen others, while Sobit supports only three BRC20 assets so far. In addition, Multibit has partnered with some Bitcoin stablecoin protocols to provide cross-chain services and staking-revenue activities, offering a more comprehensive service. Finally, Multibit provides better cross-chain liquidity, covering five major chains including Ethereum, Solana, and Polygon.
4. Bitcoin Stablecoin
4.1 BitSmiley
BitSmiley is a suite of protocols based on the Fintegra framework on the Bitcoin network, including a stablecoin protocol, a lending protocol, and a derivatives protocol. Users can mint bitUSD by over-collateralizing BTC in BitSmiley's stablecoin protocol; to withdraw the collateralized BTC, they send the bitUSD back to the Vault Wallet for destruction and pay a fee. When the value of the collateral falls below a certain threshold, BitSmiley automatically begins liquidating the collateralized assets, with the liquidation price calculated as follows:
$$\text{Liquidation Price} = \frac{\text{bitUSD Generated} \times \text{Liquidation Ratio}}{\text{Quantity of Collateral}}$$
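As a quick worked example under hypothetical numbers (the 150% liquidation ratio and position size below are our own illustration, not BitSmiley parameters):

def liquidation_price(bitusd_generated: float, liquidation_ratio: float, collateral_qty: float) -> float:
    return bitusd_generated * liquidation_ratio / collateral_qty

# mint 30,000 bitUSD against 1 BTC at a 150% liquidation ratio
print(liquidation_price(30_000, 1.5, 1))  # 45000.0 -> liquidated if BTC falls below $45,000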
The exact liquidation price depends on the real-time value of the user's collateral and the amount of bitUSD minted, with the Liquidation Ratio a fixed constant. To prevent price fluctuations from hurting the liquidated party, BitSmiley designs a Liquidation Penalty as compensation, and the longer the liquidation takes, the larger this compensation grows. Liquidation is performed via Dutch auction so that assets are liquidated in the shortest possible time. Meanwhile, the protocol's surplus is stored in a designated account and auctioned at regular intervals via English auction with BTC bids, maximizing the value of the surplus assets. BitSmiley uses 90% of the surplus to subsidize on-chain collateral, while the remaining 10% goes to the team for daily maintenance costs. BitSmiley's lending protocol also brings innovations to the settlement mechanism for the Bitcoin network. Because of the Bitcoin mainnet's roughly 10-minute block interval, it cannot rely on an oracle to track price fluctuations in real time the way Ethereum does, so BitSmiley introduces third-party insurance against a counterparty failing to deliver on time: both parties pay a certain amount of BTC in advance to a third party to insure the transaction, and if one party fails to complete it on time, the guarantor compensates the other party for the loss.

Source: BitSmliey WhitePaper
BitSmiley offers a wide range of DeFi and stablecoin features, as well as a number of innovations in its settlement mechanism to better protect users and improve its compatibility with the Bitcoin network. BitSmiley is an excellent stablecoin and DeFi model in terms of both settlement and collateralization mechanisms, and with the Bitcoin ecosystem still in its infancy, BitSmiley should be able to capture a significant share of the stablecoin competition.
4.2 BitStable
BitStable is a Bitcoin stablecoin protocol based on over-collateralization, currently supporting ORDI and MUBI assets from the Bitcoin mainnet as collateral, as well as USDT from Ethereum. Reflecting the volatility of the three assets, BitStable sets different over-collateralization ratios: 0% for USDT, 70% for ORDI, and 90% for MUBI.

Source: Bitstable.finance
BitStable has also deployed corresponding smart contracts on Ethereum, and the DALL stablecoin obtained by staking can be exchanged 1:1 on Ethereum for USDT and USDC. Meanwhile, BitStable adopts a dual-token mechanism: alongside the stablecoin DALL, BSSB serves as the governance token, through which holders participate in community governance and share network revenue. The total supply of BSSB is 21 million, distributed in two ways. The first is staking DALL tokens on the Bitcoin network to earn BSSB, through which the project distributes 50 percent of the BSSB supply. The second was the two rounds of LaunchPad on Bounce Finance at the end of November last year, in which 30% and 20% of the BSSB supply were distributed through a Staking Auction and a Fixed Price Auction respectively. However, a hacking attack during the Staking Auction led to the destruction of more than 3 million BSSB tokens.

Source: coinmarketcap
The team responded to the attack in a timely manner: the remaining 25% of the tokens unaffected by the hack were still issued, albeit at a higher cost, a measure that restored the community's confidence and ultimately prevented the price from crashing.

5. Bitcoin DeFi
5.1 Bounce Finance
Bounce Finance consists of a series of DeFi ecosystem projects, including BounceBit, BounceBox, and Bounce Auction. Notably, Bounce Finance did not start out serving the BTC ecosystem; it was an auction protocol built for Ethereum and BSC that shifted gears last May to ride the Bitcoin development boom. BounceBit is an EVM-compatible PoS sidechain for Bitcoin that selects validators from among those staking Bitcoin from the mainnet. BounceBit also introduces a hybrid yield mechanism: users can stake BTC assets on BounceBit to earn on-chain yield through PoS validation and the associated DeFi protocols, and can also securely move assets to and from CEXs by mirroring them on-chain and earning CEX yield. BounceBox resembles an app store in Web2: a publisher can custom-design a dApp, that is, a box, distribute it through BounceBox, and users can then pick their favorite boxes to join DeFi activities. Bounce Auction, the main part of the project on Ethereum, auctions various assets and offers multiple auction formats, including fixed-price, English, and Dutch auctions.
Bounce's native token, AUCTION, was released in 2021 and has served as the designated staking token for earning points in several rounds of token LaunchPads on Bounce Finance, which has fueled the recent rise in its price. More noteworthy is that BounceBit, the new staking chain Bounce built after pivoting to Bitcoin, is now open for on-chain staking points and testnet interaction points, and the project's X account clearly states that points can be exchanged for tokens, with token issuance scheduled for May this year.

Source: Coinmarketcap
5.2 Orders Exchange
Orders Exchange is a DeFi project built entirely on the Bitcoin network, currently supporting limit and market pending orders for dozens of BRC20 assets, with a roadmap to introduce swaps between BRC20 assets. Its underlying technology consists of the Ordinals protocol, PSBT, and the Nostr protocol. For more on the Ordinals protocol, please refer to Kernel's previous research article, Kernel Ventures: Can RGB Replicate The Ordinals Hype. PSBT (Partially Signed Bitcoin Transaction) is a Bitcoin signature technology that lets a user sign a transaction consisting of an Input and an Output using SIGHASH_SINGLE | ANYONECANPAY, where the Input contains the transaction the user wants to execute and the Output contains the prerequisite for it; another user must fulfill the Output's content and sign with SIGHASH_ALL before the Input's content finally takes effect. In Orders Exchange's pending-order trades, the user places an order by means of such a PSBT signature and waits for a counterparty to complete the trade.

Source: orders-exchange.gitbook.io
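A minimal sketch of this maker/taker pattern (plain Python objects standing in for real PSBTs; the field and flag names below merely mirror the description above and are not a Bitcoin library API):

from dataclasses import dataclass

@dataclass
class PendingOrder:
    maker: str
    give: str            # e.g. a BRC20 inscription the maker sells
    want: str            # e.g. the BTC the maker asks for
    maker_flags: str = "SIGHASH_SINGLE|ANYONECANPAY"  # maker signs only their own input/output pair
    taker_sig: str | None = None

    def fill(self, taker: str) -> str:
        # The taker supplies the wanted output and signs the whole transaction
        self.taker_sig = f"{taker}:SIGHASH_ALL"
        return f"broadcast: {self.maker} gives {self.give}, {taker} gives {self.want}"

order = PendingOrder(maker="alice", give="1000 ORDI", want="0.01 BTC")
print(order.fill("bob"))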
Nostr here is an asset transfer protocol built on NIP-100 that improves asset interoperability between different DEXs. All of Orders Exchange's 100 million tokens have been fully released, and although the whitepaper stresses that the tokens are only experimental and carry no value, the project's elaborate airdrop plan still shows a clear token-economy intention. The initial distribution had three main directions: 45% of the tokens went to traders on Orders Exchange, 40% were airdropped to early users and promoters, and 10% went to developers. However, the 40% airdrop was not described in detail on the official website or in official tweets, and there was no discussion on X or in the Orders Discord community after the announcement, so its actual distribution remains questionable. Overall, Orders Exchange's order page is intuitive and clear, with the prices of all buy and sell orders explicitly visible, making it high quality among platforms offering BRC20 trading. The upcoming BRC20 token swap service should also help the protocol's value capture.
5.3 Alex
Alex is a DeFi protocol built on the Bitcoin sidechain Stacks, currently supporting Swap, Lending, Borrowing, and other transaction types, and it introduces several innovations to the traditional DeFi model. The first is Swap: traditional swap pricing follows either x*y=k for ordinary pairs or x+y=k for stablecoin pairs, but on Alex a pair's trading rule can be set to a linear combination of the two curves in a chosen ratio. Alex has also introduced an OrderBook, a hybrid on-chain and off-chain order-matching model that lets users quickly cancel pending orders at zero cost. Finally, Alex offers fixed-rate lending and has established a diversified collateral pool, consisting of both risky and risk-free assets, instead of the traditional single-asset collateral, reducing lending risk.

Source: Alexgo Docs
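To illustrate the blended pricing curve, here is a minimal sketch of a swap against the invariant alpha*(x*y) + (1-alpha)*(x+y) = k (this exact parametrization is our own assumption for illustration; Alex's concrete formula may differ):

def hybrid_swap_out(x: float, y: float, dx: float, alpha: float) -> float:
    """Output amount for an input dx, on a curve mixing x*y=k and x+y=k by weight alpha."""
    k = alpha * x * y + (1 - alpha) * (x + y)
    x_new = x + dx
    # solve alpha*x_new*y_new + (1-alpha)*(x_new + y_new) = k for y_new
    y_new = (k - (1 - alpha) * x_new) / (alpha * x_new + (1 - alpha))
    return y - y_new

# alpha=1 reduces to the constant-product curve, alpha=0 to the constant-sum curve
print(hybrid_swap_out(1000, 1000, 10, 1.0))  # ~9.90
print(hybrid_swap_out(1000, 1000, 10, 0.0))  # 10.0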
Unlike other DeFi projects in the BTC ecosystem that entered the market after the Ordinals protocol ignited it, Alex began working on BTC DeFi as early as the last bull market and has raised a seed round. Alex also performs excellently across its transaction types; even many DeFi projects on Ethereum do not hold much of an edge over Alex's trading experience. Alex's native token, ALEX, has a total supply of 1 billion, 60% of which has already been released; it can still be earned by staking or by acting as a liquidity provider on Alex, though returns will hardly match the early-launch level. As one of the most established DeFi projects on Bitcoin, Alex's market cap remains relatively modest, while the Bitcoin ecosystem may be an important engine in this bull market. In addition, Stacks, the sidechain on which Alex is deployed, will undergo the important Nakamoto upgrade, which will greatly optimize transaction speed and cost and back its security with the Bitcoin mainnet, making it a true Layer2. The upgrade will also substantially reduce Alex's operating costs, improve its trading experience and security, and bring Alex a larger market and more trading demand, generating more revenue for the protocol.
6. Conclusion
The Ordinals protocol changed the Bitcoin network's inability to implement complex logic and issue assets, and various asset protocols improving on Ordinals' ideas have since been introduced on Bitcoin one after another. The application layer, however, was not prepared to provide services, and amid the surge of inscription assets the functions Bitcoin applications can actually offer look outdated, so application development on the Bitcoin network has become a hotspot that all parties are scrambling to seize. Layer2 has the highest priority among these applications, because no matter how developed other DeFi protocols become, if transaction speed is not improved and mainnet transaction costs are not reduced, liquidity will be hard to release and the chain will simply flood with speculative transactions. Once speed and cost are addressed, the next step is to improve the experience and diversity of trading: various DeFi and stablecoin protocols provide traders with a broad range of financial derivatives. Finally, cross-chain protocols let assets flow between the Bitcoin mainnet and other networks. Cross-chain protocols serving Bitcoin are relatively mature, though not purely products of the Bitcoin mainnet's recent development, since many multi-chain and mainstream cross-chain bridges were already designed to serve the Bitcoin network. As for dApps like SocialFi and GameFi, the high gas and latency of the Bitcoin mainnet have so far prevented excellent projects from appearing, but as Layer2 networks speed up and scale, such projects are likely to emerge on Bitcoin's Layer2. What is certain is that the Bitcoin ecosystem will be at least one of the hot topics of this bull market. With abundant enthusiasm and a huge market, and although the various ecosystems on Bitcoin are still early in their development, we are likely to see excellent projects emerge across its verticals in this bull market.

Source: Kernel Ventures
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
BEVM White Paper: https://github.com/btclayer2/BEVM-white-paper
What is a Bitcoin Merkelized Abstract Syntax Tree: https://www.btcstudy.org/2021/09/07/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast/#MAST-%E7%9A%84%E4%B8%80%E4%B8%AA%E4%BE%8B%E5%AD%90
BitVM White Paper: https://bitvm.org/bitvm.pdf
Bitcoin Scripting Principles: https://happypeter.github.io/binfo/bitcoin-scripts
SatoshiVM Official Website: https://www.satoshivm.io/
Multibit Docs: https://docs.multibit.exchange/multibit/protocol/cross-chain-process
Alex White Paper: https://docs.alexgo.io/
Merlin Technical Docs: https://docs.merlinchain.io/merlin-docs/
Sobit White Paper: https://sobit.gitbook.io/sobit/

Kernel Ventures: Rollup Summer — The Flywheel Momentum Kicked Off by ZK Fair

Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR :
In just a few days, ZK Fair has achieved a Total Value Locked (TVL) of $120 million, currently stabilizing at $80 million, making it one of the fastest-growing Rollups. This "three-no" public chain, with no financing, no market makers, and no institutions, has managed such growth. This article will delve into the development of ZK Fair and provide a fundamental analysis of the momentum in the current Rollup market.
Rollup Background
Introduction
Rollup is a Layer 2 solution that moves transaction computation and storage from the Ethereum mainnet (Layer 1) to Layer 2 for processing and compression, then posts the compressed data back to the Ethereum mainnet, enhancing Ethereum's performance. Rollups have significantly reduced gas fees relative to the mainnet, delivering gas savings, higher throughput (TPS), and smoother transaction interactions. Mainstream Rollup chains already live include Arbitrum, Optimism, and Base, along with ZK Rollup solutions such as Starknet and zkSync, all widely used in the market.
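A minimal sketch of the core mechanism, batching many L2 transactions and posting one compressed commitment to L1 (the in-memory stand-ins and names are our own illustration):

import json, zlib
from hashlib import sha256

def build_batch(l2_txs: list) -> bytes:
    # Compress the raw transaction data before it goes to L1
    return zlib.compress(json.dumps(l2_txs).encode())

def post_to_l1(batch: bytes) -> dict:
    # L1 stores the compressed blob; verification differs by rollup type
    # (fraud proofs for optimistic rollups, validity proofs for ZK rollups)
    return {"calldata_bytes": len(batch), "commitment": sha256(batch).hexdigest()}

txs = [{"from": f"user{i}", "to": "dex", "value": i} for i in range(100)]
print(post_to_l1(build_batch(txs)))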
Data Overview

Rollup Chain Data Comparison, Image Source: Kernel Ventures
From the data, it is evident that currently, OP and ARB still dominate among the Rollup chains. However, newcomers such as Manta and ZK Fair have managed to accumulate a significant TVL in a short period. Nevertheless, in terms of the number of protocols, they may need some time to catch up. The protocols of mainstream Rollups are well-developed, and their infrastructure is robust. Meanwhile, emerging chains still have room for development in terms of protocol expansion and infrastructure enhancement.
Rollup Analysis
We will categorize and introduce some recently popular Rollup chains, as well as well-established Rollup chains.
Existing Rollup Chains
ARB
Arbitrum is an Ethereum Layer 2 scaling solution created by Offchain Labs and based on Optimistic Rollup. While Arbitrum settlements still occur on the Ethereum mainnet, execution and contract storage take place off-chain, with only the essential transaction data submitted to Ethereum. As a result, Arbitrum incurs significantly lower gas fees than the mainnet.
OP
Optimism is built on the Optimistic Rollup, utilizing a single-round interactive fraud proof mechanism to ensure that the data synchronized to Layer 1 is valid.
Polygon zkEVM
Polygon zkEVM is an Ethereum Layer 2 scaling solution built on ZK Rollup. This zkEVM expansion solution utilizes ZK proofs to reduce transaction costs, increase throughput, and concurrently maintain the security of the Ethereum Layer 1.
Emerging Rollup Chains
ZK Fair
ZK Fair, as a Rollup, has several key features (a sketch of its launch arithmetic follows this list):
Built on the Polygon CDK, with the Data Availability (DA) layer utilizing Celestia (currently maintained by a self-operated data committee), and EVM-compatible.
Uses USDC for gas fees.
The Rollup token, ZKF, is 100% distributed to the community: 75% of the tokens were distributed in four phases, completing distribution to participants in gas-consumption activities within 48 hours. Essentially, participants took part in the token's primary market sale by paying gas fees to the official sequencer, at an implied primary-market financing valuation of only $4 million.
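A minimal sketch of the pro-rata, gas-based distribution this launch model implies (the phase size, total supply, and addresses below are hypothetical; ZK Fair's exact accounting may differ):

def distribute_phase(phase_allocation: float, gas_spent: dict) -> dict:
    # Each participant receives ZKF in proportion to gas paid to the sequencer
    total_gas = sum(gas_spent.values())
    return {user: phase_allocation * g / total_gas for user, g in gas_spent.items()}

phase1 = 0.75 / 4 * 10_000_000_000  # one of four phases of the 75% community share (hypothetical supply)
print(distribute_phase(phase1, {"alice": 30.0, "bob": 10.0}))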

ZK Fair TVL Growth Trends, Image Source: Kernel Ventures
ZK Fair has experienced rapid growth in TVL in the short term, partly owing to its decentralized nature. As per community insights, the listing on mainstream exchanges like Bitget, Kucoin, and Gate resulted from the community and users establishing contact with the exchanges. Subsequently, the official team was invited for technical integration, all initiated by the community. Projects like Izumi Finance on-chain also follow a community-driven approach, with the community taking the lead and the project team providing support, showcasing a strong community cohesion.
According to Lumoz (formerly Opside), the development team behind ZK Fair, there are plans to introduce differently themed Rollup chains in the future, including chains based on current hot topics like Bitcoin as well as ones focused on social features and financial derivatives. The upcoming chains may be launched in collaboration with project teams, resembling the current Layer 3 trend where each dApp has its own chain. As revealed by the team, these upcoming chains will also adopt the Fair model, distributing a portion of the original tokens to participants on the chain.
Blast
Blast is a Layer2 network based on Optimistic Rollup and compatible with Ethereum. In just 6 days, its on-chain TVL surpassed $500 million, approaching $600 million. The surge also notably doubled the price of the $BLUR token.
Blast originated from the founder Pacman's observation that over a billion dollars in funds within the Blur bid pool were essentially dormant, not generating any returns. This situation is prevalent across applications on almost every chain, indicating that these funds are subjected to passive depreciation caused by inflation. Specifically, when users deposit funds into Blast, the corresponding ETH locked on the Layer 1 network is utilized for native network staking. The earned ETH staking rewards are then automatically returned to users on the Blast platform. In essence, if a user holds 1 ETH in their Blast account, it may grow automatically over time.
Manta
Manta Network serves as the gateway for modular ZK applications, establishing a new paradigm for L2 smart contract platforms by leveraging modular blockchain and zkEVM. It aims to build a modular ecosystem for the next generation of decentralized applications (dApps). Currently, Manta Network provides two networks.
The focus here is on Manta Pacific, a modular L2 ecosystem built on Ethereum. It addresses usability concerns through modular infrastructure design, enabling seamless integration of modular Data Availability (DA) and zkEVM. As the first Ethereum L2 to integrate Celestia, Manta Pacific has helped users save over $750,000 in gas fees.
Metis
Metis has been operational for over 2 years, but its recent introduction of a decentralized sequencer has brought it back into the spotlight. Metis is a Layer 2 solution built on the Ethereum blockchain. It is the first to innovate by using a decentralized sequencing pool (PoS Sequencer Pool) and a hybrid of Optimistic Rollup (OP) and Zero-Knowledge Rollup (ZK) to enhance network security, sustainability, and decentralization.
In Metis' design, the initial sequencer nodes are created by whitelisted users, complemented by a parallel staking mechanism. Users can become new sequencer nodes by staking the native token $METIS, enabling network participants to supervise the sequencer nodes. This enhances the transparency and credibility of the entire system.
Tech Stack Analysis
Polygon CDK
Polygon Chain Development Kit (CDK) is a modular open-source software toolkit designed for blockchain developers to launch new Layer 2 (L2) chains on Ethereum.
Polygon CDK utilizes zero-knowledge proofs to compress transactions and enhance scalability. It prioritizes modularity, facilitating the flexible design of application-specific chains. This enables developers to choose the virtual machine, sequencer type, Gas token, and data availability solution based on their specific needs. It features:
High Modularity
Polygon CDK allows developers to customize L2 chains according to specific requirements, catering to the unique needs of various applications.
Data Availability
Chains built using CDK will have a dedicated Data Availability Committee (DAC) to ensure reliable off-chain data access.
Celestia DA
Celestia pioneered the concept of modular blockchains by decoupling blockchain into three layers: data, consensus, and execution. In a monolithic blockchain, these three layers are typically handled by a single network. Celestia focuses on the data and consensus layers, allowing L2 to delegate the data availability layer (DA) to reduce transaction gas fees. For instance, Manta Pacific has already adopted Celestia as its data availability layer, and according to official statements from Manta Pacific, after migrating DA from Ethereum to Celestia, costs have decreased by 99.81%.
For specific technical details, refer to Kernel Ventures' previous article: Exploring Data Availability — In Relation to Historical Data Layer Design.
Comparison between OP and ARB
Optimism is not the sole existing rollup solution. Arbitrum also provides a similar solution, and in terms of functionality and popularity, Arbitrum is the closest alternative to Optimism. Arbitrum allows developers to run unmodified EVM contracts and Ethereum transactions on Layer 2 protocols while still benefiting from the security of Ethereum's Layer 1 network. In these aspects, it offers features very similar to Optimism.
The main difference between Optimism and Arbitrum lies in the type of fraud proof they use: Optimism utilizes single-round fraud proofs, while Arbitrum uses multi-round fraud proofs. Optimism's single-round fraud proofs rely on Layer 1 to re-execute the disputed Layer 2 transactions, so fraud-proof verification completes in a single step.
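To make the multi-round idea concrete, here is a toy sketch of a bisection-style dispute, conceptually similar to Arbitrum's interactive protocol. The trace model, function names, and example values are hypothetical simplifications, not the real protocol:

# Toy illustration of a multi-round (bisection-style) fraud proof.
# Both parties commit to intermediate states; each round halves the
# disputed interval, so L1 only ever re-executes a single step.

def bisect_dispute(honest_trace, claimed_trace):
    """Narrow a disagreement over an execution trace to one step."""
    lo, hi = 0, len(honest_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_trace[mid] == claimed_trace[mid]:
            lo = mid   # agreement up to mid: dispute lies in (mid, hi]
        else:
            hi = mid   # disagreement already at mid: dispute lies in (lo, mid]
    return lo          # L1 re-executes just the step lo -> lo + 1

# Example: states after each instruction; the cheater diverges at step 5.
honest  = [0, 1, 2, 3, 4, 5, 6, 7]
claimed = [0, 1, 2, 3, 4, 9, 9, 9]
print(bisect_dispute(honest, claimed))  # 4: only the step from state 4 is replayed

In a single-round design the whole disputed computation is replayed on Layer 1 at once, which is simpler but heavier per dispute; bisection trades more rounds for minimal on-chain execution.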
Since its launch, Arbitrum has consistently shown better performance in various data on Layer 2 compared to Optimism. However, this trend began to change gradually after Optimism started promoting the OP stack. OP stack is an open-source Layer 2 technology stack, meaning that other projects wishing to run Layer 2 can use it for free to quickly deploy their own Layer 2, significantly reducing development and testing costs. L2 projects adopting the OP stack can achieve security and efficiency due to technical consistency in architecture. After the launch of the OP stack, it gained initial adoption by Coinbase, and with the demonstration effect of Coinbase, OP stack has been adopted by more projects, including Binance's opBNB, NFT project Zora, and others.
Future Prospects
Fair Launch
The Fair launch model of the current Inscription vertical has a broad audience, allowing retail investors to directly acquire original tokens. This is also the reason why Inscription remains popular to this day. ZK Fair follows the essence of this model, namely, a public launch. In the future, more chains may adopt this model, leading to a rapid increase in TVL.
Rollup Absorbing L1 Market Share
From a user experience perspective, Rollup and L1 have little substantive difference. Efficient transactions and low fees often attract users, as most users make decisions based on experience rather than technical details. Some rapidly growing Rollup networks offer an excellent user experience with fast transaction speeds, providing substantial incentives for both users and developers. With the precedent set by ZK Fair, future chains may continue to adopt this approach, further absorbing market share from L1.
Clear Plans & Healthy Ecosystem
In the current Rollup wave, projects like ZK Fair and Blast provide significant incentives, contributing to a healthier ecosystem with less idle TVL and fewer meaningless activities. By contrast, zkSync has been live for years without distributing a token; although it boasts a high TVL thanks to substantial fundraising and the continuous engagement of technical enthusiasts, few new projects, especially those with new narratives and themes, run on the chain.
Public Goods
In the latest Rollup wave, many chains have introduced the concept of fee sharing. In the case of ZK Fair, 75% of the fees are distributed to all ZKF token stakers, and 25% is allocated to dApp deployers. Blast also allocates fees to dApp deployers. This allows many developers to go beyond project income and ecosystem-fund grants, leveraging gas revenue to build more free public goods.
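As a rough illustration of the 75/25 split described above, the sketch below distributes one epoch's fees; the pro-rata rules (by stake for stakers, by gas contribution for dApps) and all account names and figures are assumptions for illustration, since the article does not specify the exact allocation formula:

# A minimal sketch of the fee-sharing idea: 75% of collected fees
# to ZKF stakers, 25% to dApp deployers. Numbers are illustrative.

def split_fees(total_fees, staker_balances, dapp_gas_used):
    staker_pool, dapp_pool = total_fees * 0.75, total_fees * 0.25
    total_stake = sum(staker_balances.values())
    total_gas = sum(dapp_gas_used.values())
    staker_rewards = {a: staker_pool * s / total_stake
                      for a, s in staker_balances.items()}
    dapp_rewards = {d: dapp_pool * g / total_gas
                    for d, g in dapp_gas_used.items()}
    return staker_rewards, dapp_rewards

stakers, dapps = split_fees(
    total_fees=10_000,                        # USDC collected in an epoch
    staker_balances={"alice": 600, "bob": 400},
    dapp_gas_used={"dex": 3, "nft_market": 1},
)
print(stakers)  # {'alice': 4500.0, 'bob': 3000.0}
print(dapps)    # {'dex': 1875.0, 'nft_market': 625.0}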
Decentralized Sequencers
Fee collection on Layer 2 and fee payment to Layer 1 are both executed by the L2 sequencer, and the profits accrue to the sequencer. Currently, both the OP and ARB sequencers are operated by the respective official entities, with profits going to the official treasuries.
The mechanism for decentralized sequencers is likely to operate on a Proof-of-Stake (POS) basis. In this system, decentralized sequencers need to stake the native tokens of L2, such as ARB or OP, as collateral. If they fail to fulfill their duties, the collateral may be slashed. Regular users can either stake themselves as sequencers or use services similar to Lido's staking service. In the latter case, users provide staking tokens, and professional, decentralized sequencer operators execute sequencing and uploading services. Stakers receive a significant portion of the sequencers' L2 fees and MEV rewards (in Lido's mechanism, this is 90%). This model aims to make Rollup more transparent, decentralized, and trustworthy.
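The sketch below models this PoS-plus-delegation idea. The 90% staker share comes from the Lido analogy in the text, while the slash rate, class design, and all names and amounts are assumptions, not any chain's actual parameters:

# A minimal sketch of a PoS-style decentralized sequencer with
# Lido-like delegation: stakers delegate tokens to an operator, most
# revenue flows back to stakers, and the bond is slashed on failure.

STAKER_SHARE = 0.90   # portion of fees + MEV passed to stakers (Lido analogy)
SLASH_RATE   = 0.50   # assumed portion of stake burned on a missed duty

class Sequencer:
    def __init__(self, operator_stake, delegations):
        self.stake = operator_stake
        self.delegations = dict(delegations)   # staker -> delegated amount

    def distribute(self, revenue):
        """Split one epoch's L2 fees and MEV between stakers and operator."""
        to_stakers = revenue * STAKER_SHARE
        total = sum(self.delegations.values())
        payouts = {s: to_stakers * amt / total
                   for s, amt in self.delegations.items()}
        payouts["operator"] = revenue - to_stakers
        return payouts

    def slash(self):
        """Penalize the operator's bond if sequencing duties are not met."""
        penalty = self.stake * SLASH_RATE
        self.stake -= penalty
        return penalty

seq = Sequencer(operator_stake=1_000, delegations={"alice": 300, "bob": 700})
print(seq.distribute(revenue=100))  # {'alice': 27.0, 'bob': 63.0, 'operator': 10.0}
print(seq.slash())                  # 500.0 burned from the operator's bond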
Disruptive Business Model
Almost all Layer2 solutions profit from a "subletting" model. In this context, "subletting" refers to renting a property directly from the landlord and then subleasing it to other tenants. Similarly, in the blockchain world, Layer2 chains generate revenue by collecting gas fees from users (tenants) and subsequently paying fees to Layer1 (the landlord). In theory, economies of scale are crucial: as long as a sufficient number of users adopt Layer2, the costs paid to Layer1 do not change significantly (unless the volume is enormous, as with OP and ARB). Therefore, if a chain's transaction volume cannot meet expectations within a certain period, it may run at a loss for a long time. This is also why chains like zkSync, as mentioned earlier, actively attract and engage users; with a substantial TVL, they don't worry about a lack of user transactions.
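A back-of-the-envelope model makes the break-even logic visible. All numbers below are illustrative assumptions, not measured fees of any chain:

# "Subletting" economics sketch: the L2 collects a fee per user
# transaction and pays a roughly fixed cost to post each batch to L1,
# so profitability hinges on filling batches with enough transactions.

FEE_PER_TX        = 0.05    # USD charged to each L2 user (assumed)
L1_COST_PER_BATCH = 40.0    # USD to post one compressed batch (assumed)
TXS_PER_BATCH     = 1_000   # batch capacity (assumed)

def daily_profit(daily_txs):
    batches = max(1, -(-daily_txs // TXS_PER_BATCH))  # ceil division, >= 1 batch
    return daily_txs * FEE_PER_TX - batches * L1_COST_PER_BATCH

for txs in (100, 800, 10_000, 100_000):
    print(txs, round(daily_profit(txs), 2))
# 100 -35.0    -> low-volume chains sublet at a loss
# 800 0.0      -> break-even at L1_COST_PER_BATCH / FEE_PER_TX transactions
# 10000 100.0
# 100000 1000.0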
However, this business model is not sustainable in the long run. Chains like zkSync enjoy excellent financing conditions, but for smaller chains, relying solely on actively courting and retaining users may not be as effective. The rise of "grassroots" projects like ZK Fair therefore offers valuable lessons for other chains: in the pursuit of TVL, it is essential to consider its long-term sustainability, not just blindly focus on acquiring it.
Summary
The article starts with ZK Fair achieving a TVL of $120 million in a short period, using it as a focal point to explore the Rollup landscape. It covers established players like Arbitrum and Optimism, as well as newer entrants such as ZK Fair, Blast, Manta, and Metis.
On the technical front, it delves into the modular toolkit of Polygon CDK and the modular concept of Celestia DA, compares the differences between Optimism and Arbitrum, and highlights the potential adoption of a POS mechanism for decentralized sequencers, aiming to make Rollup more transparent and decentralized.
In the future outlook, the article emphasizes the widespread appeal of the fair launch model and the potential for Rollup to absorb market share from L1. It points out the negligible difference in user experience between Rollup and L1, with efficient transactions and low fees attracting users. The significance of public goods and the fee-sharing concept introduced by chains in the latest Rollup wave is emphasized. The article concludes by addressing the need to focus not only on acquiring TVL but also on its long-term sustainability.
In essence, this new wave of Rollup is characterized by new projects with tokens, modular design, and generous incentives, accelerating the early flywheel between business growth and token price.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
Reference
Rollup Summer Reflection: https://www.chaincatcher.com/article/2110635
ZK Fair Official Docs: https://docs.zkfair.io/
Kernel Ventures: Cancun Upgrade — And Its Impact on the Broader Ethereum Ecosystem
Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
Ethereum has completed the first three upgrade phases, which addressed the problems of development thresholds, DoS attacks, and the POS transition, respectively; the main goal of the current upgrade phase is to reduce transaction fees and optimize the user experience.
EIP-1153, EIP-4788, EIP-5656, and EIP-6780 respectively reduce the cost of inter-contract interactions, improve the efficiency of beacon-chain access, reduce the cost of data replication, and limit the authority of the SELFDESTRUCT bytecode.
By introducing blob data that lives outside the block, EIP-4844 can greatly increase Ethereum's TPS and reduce data storage costs.
The Cancun upgrade will bring additional benefits to Ethereum-specific DAs, while the Ethereum Foundation is not open to DA solutions that do not utilize Ethereum at all in their data stores.
The Cancun upgrade is likely to be relatively more favorable for Op Layer2, due to its more mature development environment and the increased demand for the Ethereum DA layer.
The Cancun upgrade will raise the performance ceiling of DApps, allowing functionality closer to that of Web2 apps. On-chain games, which remain popular yet require a lot of storage space on Ethereum, are worth watching.
Ethereum is undervalued at this stage, and the Cancun upgrade could be the signal for Ethereum to start soaring.
1. Ethereum's Upgrade
From October 16th of last year, when Cointelegraph published fake news about the approval of the Bitcoin ETF, to January 11th this year, when the ETF was finally approved, the crypto market experienced a surge in prices. As Bitcoin is more directly impacted by the ETF, Ethereum's and Bitcoin's prices diverged during this period: Bitcoin peaked at nearly $49,000, recovering 2/3 of its previous bull-market peak, while Ethereum peaked at around $2,700, just over half of its previous peak. But since the Bitcoin ETF landed, the ETH/BTC ratio has rebounded significantly. Besides the expectation of an upcoming Ethereum ETF, another important reason is that the delayed Cancun upgrade recently announced public testing on the Goerli test network, signaling that it is imminent. As things stand, the Cancun upgrade will not take place until the first quarter of 2024 at the earliest. The Cancun upgrade is part of Ethereum's Serenity phase, designed to address Ethereum's low TPS and high transaction costs at this stage. Prior to Serenity, Ethereum went through the Frontier, Homestead, and Metropolis phases, which respectively addressed the problems of development thresholds, DoS attacks, and the POS transition. The Ethereum roadmap clearly states that the main goal of the current phase is to realize cheaper transactions and a better user experience.

Source: TradingView
2. Content of the Cancun Upgrade
As a decentralized community, Ethereum's upgrades are based on proposals made by the developer community and ultimately supported by the majority of the Ethereum community. Proposals that have been adopted, along with those still under discussion or soon to be implemented on the mainnet, are collectively referred to as EIPs. In the Cancun upgrade, five EIPs are expected to be adopted: EIP-1153, EIP-4788, EIP-5656, EIP-6780, and EIP-4844.
2.1 Essential Mission EIP-4844
Blob: EIP-4844 introduces a new transaction type for Ethereum carrying blobs, data blocks of roughly 125 KB each. Blobs hold compressed, encoded transaction data and, unlike CALLDATA, are not permanently stored on Ethereum, which greatly reduces gas consumption; the tradeoff is that blob contents cannot be accessed directly in the EVM. Under EIP-4844, each transaction can carry up to two blobs and each block up to 16. The Ethereum community recommends that each block carry eight blobs; beyond 8, blocks can continue to accept blobs but face steadily rising gas costs until the maximum of 16 is reached.
In addition, two other core technologies utilized in EIP-4844 are KZG polynomial commitments and temporary storage, which were analyzed in detail in our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. In summary, EIP-4844's changes to the capacity of individual Ethereum blocks and to where transaction data is stored significantly increase the network's TPS while reducing gas costs.
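To make the rising gas-cost dynamic above concrete, here is a simplified sketch of the blob fee idea: the fee grows exponentially with "excess blob gas" accumulated above the per-block target. The target/max values follow the draft parameters cited above (8 and 16); the real EIP computes an integer "fake exponential" rather than math.exp, so treat this as an illustration of the mechanism, not the exact spec:

# Simplified EIP-4844 blob fee sketch: the blob base fee rises
# exponentially when blocks carry more blobs than the target and
# decays when they carry fewer.
import math

BLOB_GAS_PER_BLOB = 2**17        # 131072 blob gas per blob
TARGET_BLOBS      = 8            # per-block target cited above
UPDATE_FRACTION   = 3_338_477    # controls how fast the fee reacts

def next_excess(excess_blob_gas, blobs_in_block):
    used = blobs_in_block * BLOB_GAS_PER_BLOB
    target = TARGET_BLOBS * BLOB_GAS_PER_BLOB
    return max(0, excess_blob_gas + used - target)

def blob_base_fee(excess_blob_gas, min_fee=1):
    return min_fee * math.exp(excess_blob_gas / UPDATE_FRACTION)

# Sustained full blocks (16 blobs) push the fee up block by block:
excess = 0
for _ in range(5):
    excess = next_excess(excess, 16)
    print(round(blob_base_fee(excess), 3))  # ~1.37, 1.87, 2.57, 3.51, 4.81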
2.2 Side Missions EIP-1153
EIP-1153: This proposal reduces storage costs during contract interactions. A transaction on Ethereum can be broken down into multiple frames created by the CALL instruction set; these frames may belong to different contracts, so a transaction may involve transferring information across multiple contracts. There are two ways of transferring state between contracts: one is in the form of input/output, and the other is calling the SSTORE/SLOAD bytecodes for permanent on-chain storage. Passing data as memory input/output has lower cost, but if the transmission path passes through any untrustworthy third-party contract, there is a serious security risk. Using the SSTORE/SLOAD bytecodes instead brings considerable storage overhead and increases the burden of on-chain storage. EIP-1153 solves this problem by introducing the transient storage opcodes TSTORE and TLOAD. Variables stored by these two bytecodes have the same properties as those stored via SSTORE/SLOAD and cannot be modified during transmission. The difference is that transiently stored data does not remain on the chain after the transaction ends; like temporary variables, it is destroyed, achieving both a secure state-transmission process and relatively low storage cost.

Source: Kernel Ventures
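As a conceptual illustration of the difference, the sketch below models persistent versus transient slots. It captures semantics only (a keyed store wiped at transaction end), not real EVM behavior, and the re-entrancy-lock key is a hypothetical example of a typical transient-storage use:

# Persistent storage (SSTORE/SLOAD) vs EIP-1153 transient storage
# (TSTORE/TLOAD): both are keyed stores visible across call frames,
# but transient slots are wiped when the transaction ends.

class ContractStorage:
    def __init__(self):
        self.persistent = {}   # survives across transactions (SSTORE/SLOAD)
        self.transient = {}    # shared by all frames of one tx (TSTORE/TLOAD)

    def sstore(self, key, value): self.persistent[key] = value
    def sload(self, key):         return self.persistent.get(key, 0)
    def tstore(self, key, value): self.transient[key] = value
    def tload(self, key):         return self.transient.get(key, 0)

    def end_transaction(self):
        self.transient.clear()  # transient state never reaches the state trie

s = ContractStorage()
s.sstore("balance", 100)
s.tstore("reentrancy_lock", 1)   # e.g. a cheap re-entrancy guard
print(s.sload("balance"), s.tload("reentrancy_lock"))  # 100 1
s.end_transaction()
print(s.sload("balance"), s.tload("reentrancy_lock"))  # 100 0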
EIP-4788: In the beacon chain after Ethereum's POS upgrade, each new execution block contains the root of its parent beacon block, and even if some older roots are missing, only some of the latest roots need to be kept while creating a new block, thanks to the reliability of the roots already stored by the consensus layer. However, during block creation, frequently requesting data from the EVM to the consensus layer causes inefficiency and creates opportunities for MEV. Therefore, EIP-4788 proposes a specialized Beacon Root Contract to store the latest roots, which exposes the parent beacon roots to the EVM and greatly improves the efficiency of data access.

Source: Kernel Ventures
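The Beacon Root Contract is essentially a fixed-size ring buffer keyed by timestamp. The sketch below models that idea; the buffer length follows the EIP, while the class and method names are illustrative rather than the contract's actual interface:

# Ring-buffer sketch of EIP-4788's Beacon Root Contract: each new
# execution block writes its parent beacon root, and contracts can
# read recent roots without querying the consensus layer.

HISTORY_BUFFER_LENGTH = 8191   # buffer size from the EIP

class BeaconRootsContract:
    def __init__(self):
        self.timestamps = {}   # slot index -> timestamp
        self.roots = {}        # slot index -> parent beacon root

    def set(self, timestamp, parent_beacon_root):
        idx = timestamp % HISTORY_BUFFER_LENGTH
        self.timestamps[idx] = timestamp   # old entries get overwritten
        self.roots[idx] = parent_beacon_root

    def get(self, timestamp):
        idx = timestamp % HISTORY_BUFFER_LENGTH
        if self.timestamps.get(idx) != timestamp:
            raise KeyError("root evicted or never stored")
        return self.roots[idx]

c = BeaconRootsContract()
c.set(1_700_000_012, "0xabc...")   # written at the start of each block
print(c.get(1_700_000_012))        # now readable on-chain by any contract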
EIP-5656: Copying data in memory is a very high-frequency basic operation on Ethereum, but performing it on the EVM incurs significant overhead. To solve this problem, the Ethereum community proposed the MCOPY opcode in EIP-5656, which enables efficient copying within EVM memory, including efficient slice access and in-memory object replication. Having a dedicated MCOPY instruction also provides forward-looking protection against changes in the gas cost of CALL instructions in future Ethereum upgrades.

Source: Kernel Ventures
EIP-6780: In Ethereum, SELFDESTRUCT can destroy a contract and clear all the code and state associated with it. However, this poses a huge problem for the Verkle Tree structure that Ethereum will adopt in the future. In an Ethereum that uses Verkle Trees to store state, emptied storage is marked as previously written but empty; this produces no observable difference in EVM execution, but it causes created-then-deleted contracts to yield different Verkle commitments than if the operations had never taken place, creating data-validation issues under the Verkle Tree structure. As a result, under EIP-6780 SELFDESTRUCT retains only the ability to return ETH from a contract to a specified address, leaving the contract's code and storage state on Ethereum.
3. Prospect of Different Verticals Post Cancun Upgrade
3.1 DA
3.1.1 Profit Model
For an introduction to the principles of DA and the various DA types, see our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. For DA projects, revenue comes from the fees users pay to store data, and expenses come from the fees paid to keep the storage network running and the stored data persistent and secure. The remaining value is what the network accumulates. The main way for DA projects to increase value is to improve the utilization of network storage space, attracting as many users as possible. Meanwhile, improvements in storage technology, such as data compression or sharded storage, can both reduce network expenses and enable higher value accumulation.
3.1.2 Detachment of DA
There are three main types of DA services today: DA for the main chain, modular DA, and storage-chain DA, which are described and differentiated in Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design.
3.1.3 Impact of Cancun Upgrade on DA
User requirements: After the Cancun upgrade, Ethereum's historical transaction data will grow by tens of times, bringing greater storage needs. Since post-Cancun Ethereum does not improve storage performance, the main chain simply prunes this history on a regular schedule, and this part of the data-storage market will naturally fall to the various DA projects, bringing them greater user demand.
Direction of development: The growth of Ethereum's historical data after the Cancun upgrade will prompt major DA projects to improve the efficiency and interoperability of their data interaction with Ethereum in order to better capture this market. Foreseeably, cross-chain storage-bridge technologies will become the development focus of storage-chain DAs and modular DAs, while Ethereum's main-chain DAs must consider how to further enhance compatibility with the mainnet and minimize transmission costs and risks.
3.1.4 Cancun Upgrade and Various DA Verticals
The Cancun upgrade brings faster data growth to Ethereum while leaving the network-wide data storage method unchanged, which forces the main chain to regularly prune a large amount of historical data and delegate the long-term storage of transaction data. However, this historical data is still in demand, for example when project teams conduct airdrops or when on-chain analytics organizations trace activity. The value behind this data will attract competition among DA projects, and the keys to market share are the data security and storage costs each DA project offers.
DA for main chain: At the current stage, the storage market of main-chain DA projects such as EthStorage mainly comes from large-memory data, such as images and music, of NFT projects on Ethereum. Due to the high compatibility between their node clusters and Ethereum, main-chain DAs can interact securely with the Ethereum mainnet at low cost. At the same time, they store storage-index data in Ethereum smart contracts and do not completely detach the DA layer from Ethereum, which has earned strong support from the Ethereum Foundation. For the storage market brought by Ethereum, main-chain-specific DAs have a natural advantage over other DAs.
Modular DA and storage-chain DA: Compared with main-chain DAs, these projects will find it difficult to gain a competitive advantage in historical-data storage performance around the Cancun upgrade. However, main-chain DAs are still in the testing stage and not fully implemented, while the Cancun upgrade is imminent; if the dedicated DA projects fail to ship a working storage solution before the upgrade, this round of data-value mining may still be dominated by modular DAs.
3.1.5 Opportunities for DA Post Cancun Upgrade
EthStorage: DA for main chain, like EthStorage, will be the biggest beneficiary of the Cancun upgrade, which deserves attention. In addition, after the recent news that the Cancun upgrade may take place in February this year, EthStorage's official X account has also been very active, releasing its latest official website and annual report, and the marketing seems to be very successful.
"Let's celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design!" The new site's sections include: Meet the Frontier of Scalability; Real-time Cost Comparison with Ethereum; How EthStorage Works; Core Features of EthStorage; Applications Enabled by EthStorage.
However, comparing the latest official website with the 2022 version, apart from a slicker front end and a more detailed introduction, it has not introduced many new service functions; the main offerings promoted are still storage and the Web3Q domain-name service. If interested, you can use the link below to get the test token W3Q and try the EthStorage service on the Galileo Chain network. To get the token, you need a W3Q domain name or an account with a balance of more than 0.1 ETH on the mainnet. Judging from the faucet's recent output, participation has not been very large at this stage despite some publicity. However, considering that EthStorage received a $7 million seed round in July this year with no obvious visible use of these funds, the project may be quietly preparing some infrastructure advancement, waiting for the run-up to the Cancun upgrade to attract maximum attention.

EthStorage's Faucet, Source: Web3q.io
Celestia: Celestia is currently the leading modular DA project. Compared with main-chain DA projects still in development, Celestia has been making its mark since the last bull market, when it received its first round of funding. After more than two years of groundwork, Celestia refined its rollup model and token model and, after a long testing period, completed its mainnet launch and first airdrop on October 31st. The token's price has been rising since trading opened and recently exceeded US$20. With a current circulation of 150 million TIA, the project's market capitalization has already reached 3 billion US dollars. However, considering the limited customer base of the blockchain historical-storage track, TIA's market capitalization has far exceeded that of Arweave, a traditional storage chain with a richer profit model, and is closing in on Filecoin's; although there is still some room for growth relative to the bull market, TIA looks somewhat overvalued at this stage. Still, with its star-project status and undissipated airdrop enthusiasm, if the Cancun upgrade proceeds in the first quarter of this year as expected, Celestia remains one to watch. One risk is worth noting: the Ethereum Foundation has repeatedly emphasized in discussions involving Celestia that any project that departs from Ethereum's DA layer is not Layer2, signaling rejection of third-party storage projects such as Celestia. Possible statements from the Ethereum Foundation around the Cancun upgrade add further uncertainty to Celestia's pricing.

Source: CoinmarketCap
3.2 Layer2
3.2.1 Profit Model
Due to the growing number of users and projects on Ethereum, its low TPS has become a huge obstacle to the further development of its ecosystem, and high transaction fees make it difficult to promote projects involving complex interactions at scale. Yet many projects have already landed on Ethereum, migration carries huge costs and risks, and, except for the payment-focused Bitcoin chain, it is hard to find a public chain as secure as Ethereum. Layer2 emerged to solve these problems: it places transaction processing and computation on another chain (Layer2), verifies the packaged data through smart contracts bridged with Layer1, and updates state on the mainnet. Layer2 focuses on transaction processing and validation, using Ethereum as the DA layer to store compressed transaction data, resulting in faster speeds and lower computational costs. Users who wish to execute transactions on Layer2 must acquire its gas token and pay the network operator in advance. The Layer2 operator in turn pays Ethereum for the security of the data stored there, so Layer2's revenue is what users pay for Layer2 data security minus what Layer2 pays for data security on Layer1. For an Ethereum Layer2, then, two kinds of improvement bring more revenue. On the revenue side, the more active the Ethereum ecosystem and the more projects there are, the more users and projects need cheaper gas and faster transactions, bringing a larger user base to Layer2; with profit per transaction unchanged, more transactions mean more revenue for the operator. On the cost side, if Ethereum's storage costs fall, the DA-layer fees paid by the Layer2 project decrease, and with transaction counts unchanged the operator also earns more.
3.2.2 Detachment of Layer2
Around 2018, Ethereum Layer2 schemes were blossoming, with four kinds: Sidechain, Rollup, State Channel, and Plasma. However, due to the risk of data unavailability during off-chain transmission and numerous griefing attacks, State Channels have been gradually marginalized among Layer2 schemes, and Plasma is relatively niche and does not rank in the top 10 of Layer2 TVL, so it will not be discussed here. Finally, sidechain-style solutions that do not use Ethereum as a DA layer at all have been gradually excluded from the definition of Layer2. This article therefore discusses only the mainstream Layer2 scheme, Rollup, analyzing its two sub-tracks: ZK Rollup and Op Rollup.
Optimistic Rollup
Implementation Principle: To begin with, an Optimistic Rollup chain needs to deploy a bridge contract on the Ethereum mainnet, through which it interacts with Ethereum. Op Layer2 batches users' transaction data and sends it to Ethereum, including the latest state root of Layer2 accounts, the batch root, and the compressed transaction data. At this stage, these data are stored as Calldata in the bridge contract; although this consumes far less gas than permanent storage in the MPT, it is still a considerable data overhead and creates obstacles to future performance improvements for Op Layer2 (Optimistic Rollup Layer2).

Source: Kernel Ventures
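A toy sketch of this batch-posting flow is below. The compression codec, hashing, and state-transition rule are stand-ins (zlib, SHA-256, and hash-folding) rather than any rollup's real implementation:

# Toy Optimistic Rollup batch: compress a batch of transactions and
# prepare the payload submitted to the L1 bridge contract as calldata
# (new state root + batch root + compressed transaction data).
import hashlib, json, zlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_batch(txs, prev_state_root):
    compressed = zlib.compress(json.dumps(txs).encode())
    batch_root = h(compressed)
    # Toy state transition: fold each tx into the running state root.
    state_root = prev_state_root
    for tx in txs:
        state_root = h((state_root + json.dumps(tx)).encode())
    return {"state_root": state_root, "batch_root": batch_root,
            "calldata": compressed}

batch = build_batch(
    txs=[{"from": "alice", "to": "bob", "value": 1}],
    prev_state_root="00" * 32,
)
print(len(batch["calldata"]), batch["state_root"][:16])

Because the full compressed data lands on L1, anyone can re-execute the batch and challenge a wrong state root during the fraud-proof window.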
Current status: Today, Op Layer2 is the leading Layer2 ecosystem, with the top five Layer2s by TVL all coming from the Optimistic Rollup camp. The combined TVL of Optimism and Arbitrum alone has exceeded 16 billion dollars.

Source: L2BEAT
One of the main reasons the Op Rollup ecosystem occupies the leading position is its friendly development environment. It completed its first round of Layer2 releases and mainnet launches before ZK Rollup, attracting a large number of DApp developers suffering from Ethereum's fees and low TPS and shifting DApp development from Layer1 to Layer2. At the same time, Op Layer2's underlying layer is highly compatible with the EVM, clearing obstacles for project migration from the Ethereum mainnet: DApps such as Uniswap, Sushiswap, and Curve were deployed to Layer2 in the shortest possible time, and projects such as Worldcoin even migrated over from Polygon. Today, Op Layer2 hosts not only Uniswap V3, a leading Ethereum DeFi protocol, and GMX, a native DeFi project with a TVL of more than 100 million dollars, but also Friend.tech, a SocialFi project with transaction fees exceeding 20 million dollars; beyond accumulating projects, these high-quality projects in each track have driven qualitative breakthroughs for the whole ecosystem. In the long run, however, ZK Layer2 (ZK Rollup Layer2) has a higher TPS ceiling and lower gas consumption per transaction, and Op Layer2 will face fierce competition from ZK Layer2 as ZK Rollup technology gradually matures.

Source: Dune
ZK Rollup (Zero-knowledge Rollup)
Implementation Principle: Transaction data in ZK Layer2 is processed much as in Op Layer2: it is packaged and processed on Layer2, with results returned to the smart contract on Layer1 and stored as Calldata. The difference is that Layer2 additionally generates a ZKp (zero-knowledge proof), and it does not need to return the compressed transaction data to the network; it only returns the transaction root and batch root together with the ZKp used to verify the legitimacy of the corresponding transactions. Data returned to Layer1 via ZK Rollup requires no challenge window and can be finalized on the mainnet in real time after validation.

Source: Kernel Ventures
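A conceptual sketch of this flow follows: the prover sends only the roots plus a validity proof, and the L1 contract accepts the state update once the proof verifies. The "proof" here is a toy hash binding, not a real zero-knowledge proof system, and all names are illustrative:

# Toy ZK Rollup flow: submit (old root, new root, batch root, proof);
# L1 verifies the proof and finalizes the state update immediately.
import hashlib

def h(*parts):
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def prove(old_root, new_root, batch_root, proving_key="toy-key"):
    # Real systems run the batch through a circuit; we just bind the roots.
    return h(proving_key, old_root, new_root, batch_root)

def verify_on_l1(old_root, new_root, batch_root, proof):
    return proof == h("toy-key", old_root, new_root, batch_root)

old, new, batch = "r0", "r1", "b0"
pi = prove(old, new, batch)
assert verify_on_l1(old, new, batch, pi)
print("state update accepted with no challenge window")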
Current status: ZK Layer2 has become the second-largest Layer2 ecosystem after Op Layer2, with 4 of the top 10 Layer2s by TVL being ZK Layer2. But no single ZK Layer2 is as strong as the leading Op Layer2s: while ZK Layer2 is widely thought to have good prospects, its development has lagged. The first reason is that Op Layer2's early launch attracted many developers to build on it, and unless migration offers sufficient benefits, they are unlikely to move projects that already generate stable income on Op Layer2. Secondly, many ZK Layer2 projects are still struggling with low-level compatibility with Ethereum. For example, Linea, a star ZK project, is currently incompatible with many EVM opcodes, creating development obstacles for those adapting EVM code; and another star project, zkSync, cannot yet achieve compatibility at the EVM's underlying layer and is only compatible with some Ethereum development tools.

Source: Kernel Ventures
Imperfect compatibility with Ethereum also makes it difficult to migrate native projects. Since the bytecode is not fully interoperable, projects must modify their underlying contracts to fit the zkEVM, a process involving many difficulties and risks, which slows the migration of Ethereum-native projects. At this stage, most projects on ZK Layer2 are native ones, mainly DeFi protocols such as ZigZag and SyncSwap that are relatively easy to build, and the total number and diversity of ZK Layer2 projects await further development. However, ZK Layer2's advantage lies in its technological advancement: if zkEVM-EVM compatibility can be achieved and ZKp generation algorithms perfected, ZK Layer2's performance ceiling will exceed Op Layer2's. This is also why ZK Layer2 projects keep emerging in an Op Layer2-dominated market; with the Op Layer2 track already carved up, the best way for latecomers to attract users away from their existing networks is to offer an expected better solution. Yet even if ZK Layer2 is one day technically perfected, once Op Layer2 has formed a comprehensive ecosystem with enough deployed projects, whether users and developers will take the huge risk of migrating to a better-performing Layer2 remains unknown. In addition, Op Layer2 is making improvements of its own to consolidate its position, including Optimism's open-source OP Stack to help other Op Layer2 developers build quickly, and improvements to the challenge mechanism such as bisection-style challenges. While ZK Layer2 improves, Op Layer2 is not slowing down, so ZK Layer2's key task at this stage is to push forward its cryptographic algorithms and EVM compatibility before users become locked into the Op Layer2 ecosystem.
3.2.3 Impact of Cancun Upgrade on Layer2
Transaction speed: After the Cancun upgrade, a block can carry up to 20 times more data via blobs while keeping block production speed unchanged. Theoretically, Layer2s that use Layer1 as their DA and settlement layer can therefore gain up to a 20x TPS increase. Even at a 10x increase, any of the major Layer2 stars would exceed the highest transaction speed in the mainnet's history.

Source: L2BEAT
Transaction fee: One of the most important factors preventing Layer2 fees from falling is the cost of data security paid to Layer1; storing 1 KB of Calldata in an Ethereum smart contract currently costs close to $3. After the Cancun upgrade, Layer2's packaged transaction data is stored only as blobs in Ethereum's consensus layer, where 1 GB of data storage costs only about $0.1 a month, greatly reducing Layer2's operating costs. To attract more users, Layer2 operators will surely pass a portion of these savings on to users, reducing Layer2 transaction costs.
Scalability: The Cancun upgrade's impact on Layer2 comes mainly from its temporary storage scheme and the new blob data type. Temporary storage periodically removes old mainnet state that is not needed for current validation, reducing the storage pressure on nodes and thus speeding up network synchronization and node access between Layer1 and Layer2. Blobs, with their large external capacity and a flexible adjustment mechanism based on gas prices, can better adapt to changes in network transaction volume: the number of blobs carried per block rises when volume is high and falls when volume drops.
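A quick back-of-the-envelope comparison using the figures quoted above (~$3 per KB of calldata versus ~$0.1 per GB-month of blob storage) shows the scale of the saving; both prices are the article's snapshot estimates, not live numbers:

# Calldata vs blob cost for one hypothetical 100 KB compressed batch.

CALLDATA_USD_PER_KB   = 3.0    # article's quoted snapshot price
BLOB_USD_PER_GB_MONTH = 0.1    # article's quoted snapshot price

def calldata_cost(kb):
    return kb * CALLDATA_USD_PER_KB

def blob_cost(kb, months=1):
    return kb / 1_048_576 * BLOB_USD_PER_GB_MONTH * months  # 1 GB = 1,048,576 KB

batch_kb = 100
print(calldata_cost(batch_kb))        # 300.0 USD as calldata
print(round(blob_cost(batch_kb), 6))  # ~0.00001 USD as a blob for a month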
3.2.4 Cancun Upgrade and Various Layer2 Verticals
The Cancun upgrade will be positive for the entire Layer2 ecosystem. Since its core change is to reduce the cost of data storage and enlarge individual blocks on Ethereum, Layer2s that use Ethereum as their DA layer will naturally see a corresponding increase in TPS and a reduction in the storage fees they pay to Layer1. However, because the two Rollup families use the Ethereum DA layer to different degrees, Op Layer2 and ZK Layer2 will benefit to different degrees.
Op Layer2: Since Op Layer2 must leave compressed transaction data on Ethereum for record-keeping, it pays more transaction fees to Ethereum than ZK Layer2 does. By cutting gas consumption, EIP-4844 therefore gives Op Layer2 a larger fee reduction, narrowing ZK Layer2's advantage on fees. At the same time, this round of Ethereum gas reduction is bound to attract more participants and developers; compared with ZK Layer2, which has not issued tokens and whose underlying layer is hard to make EVM-compatible, more projects and capital will tend to flock to Op Layer2, especially Arbitrum, which has performed strongly recently. This may lead to a new round of Layer2 ecosystem development led by Op Layer2, especially for SocialFi and GameFi projects, which are hurt by high fees and struggle to deliver a quality user experience; this phase of Layer2 may well see many quality projects approaching the Web2 user experience emerge. If Op takes this round of development again, it will further widen the gap with the ZK Layer2 ecosystem, making it hard for ZK Layer2 to catch up.
ZK Layer2: Compared with Op Layer2, ZK Layer2 benefits less from falling gas costs because it does not store transaction details on-chain. Although ZK Layer2 is still developing and lacks Op Layer2's large ecosystem, Op Layer2's facilities are already well established and competition to build there is fiercer, so it may not be wise for new entrants attracted by the Cancun upgrade to compete head-on with mature Op Layer2 developers. If ZK Layer2 can improve its developer tooling at this stage and offer a better development environment, then, given ZK Layer2's better expectations and the fierce competition elsewhere, new developers may flock to the ZK Layer2 track, accelerating its catch-up and letting it close the gap before Op Layer2 completely dominates the market.
3.2.5 Opportunities for Layer2 Post Cancun Upgrade
DYDX: Although DYDX is a DEX deployed on Ethereum, its functions and principles differ greatly from traditional Ethereum DEXs such as Uniswap. First, it uses an order book rather than the AMM trading model of mainstream DEXs, giving users a smoother trading experience and creating good conditions for leveraged trading. In addition, it uses Layer2 solutions such as StarkEx for scalability and transaction processing, packaging transactions off-chain and transmitting them back on-chain. Thanks to these Layer2 underpinnings, DYDX offers transaction costs far below traditional DEXs, at only about $0.005 per transaction. With the Cancun upgrade approaching and volatility in Ethereum and related tokens, a surge in high-risk strategies such as leveraged trading is almost certain. After the Cancun upgrade, DYDX's fees should undercut those of CEXs even for small transactions while providing greater fairness and security, an excellent trading environment for high-risk investors and leverage enthusiasts. From this perspective, the Cancun upgrade will bring DYDX a very good opportunity.
Rollup Node: The data regularly pruned after the Cancun upgrade is no longer relevant for validating new blocks, but that does not mean the pruned data has no value. For example, projects preparing airdrops often need complete historical data to assess the fund security of prospective recipients, and on-chain analytics organizations often need complete history to trace fund flows. One option is to query historical data from a Layer2's Rollup operator, and the operator can charge for data retrieval. Therefore, in the context of the Cancun upgrade, effectively improving data storage and retrieval mechanisms on Rollups, and building related projects in advance, will greatly increase a project's chances of survival and further development.
3.3 DApp
3.3.1 Profit Model
Similar to Web2 applications, DApps exist to provide services to users on Ethereum. For example, Uniswap provides real-time exchange between different ERC20 tokens; Aave provides over-collateralized lending and flash loans; and Mirror gives creators decentralized publishing opportunities. The difference is that in Web2 the main way to profit is to attract users with low-cost, high-quality services and then monetize the resulting traffic through third-party advertising. A DApp, by contrast, makes zero claim on users' attention, offers no recommendations, and simply collects a commission on each service it provides. Thus a DApp's value comes mainly from how often users use its services and the depth of each interaction; to increase its value, a DApp must provide better services than similar DApps so that more users choose it over the alternatives.
3.3.2 Detachment of DApps
At this stage, Ethereum DApps are dominated by DeFi, GameFi, and SocialFi. There were some gambling projects in the early days, but due to Ethereum's limited transaction speed and the release of EOS, a chain better suited to them, gambling projects have gradually declined on Ethereum. The three remaining DApp types provide financial, gaming, and social services respectively, and capture value from them.
DeFi
Implementation Principle: DeFi is essentially one or a series of smart contracts on Ethereum. In the release phase, the relevant contracts (such as token contracts and exchange contracts) are deployed on the Ethereum mainnet, and these contracts connect DeFi's functional modules to Ethereum through their interfaces. When users interact with DeFi, they call the contract interfaces to deposit, withdraw, swap tokens, and so on; the DeFi smart contract packages the transaction data, interacts with Ethereum through the contract's script interface, and records state changes on the Ethereum chain. In this process, the DeFi contract charges a fee to reward upstream and downstream liquidity providers and to earn its own profit.
Current status: DeFi holds absolute dominance among DApps; apart from cross-chain and Layer2 projects, DeFi occupies every other place in the top 10 DApps by contract assets on Ethereum. To date, the cumulative number of DeFi users on Ethereum has exceeded 40 million. Although monthly active users have declined from the peak of nearly 8 million in November 2021 under the bear market, they have recovered to about half of the peak along with the market and await another surge in the next bull market. Meanwhile, DeFi is becoming more diverse and versatile: from early cryptocurrency trading and collateralized lending to today's leveraged trading, forward purchases, NFT financing, and flash loans, financial instruments available in Web2 have been gradually reproduced in DeFi, and some, such as flash loans, are possible only in DeFi.

Source: DAppRadar
SocialFi
Implementation Principle: Similar to traditional content platforms, SocialFi lets individuals create and publish content through the platform to spread it and attract followers, while users access the content and services they need through the platform. The difference is that users' published content, the interaction records between publishers and fans, and the account information itself are all decentralized through blockchain smart contracts, returning ownership of the information to each individual account. For a SocialFi platform, the more people willing to create and share content through it, the more revenue it can generate from providing these services. Users' interaction fees minus the cost of storing account and transaction data is the SocialFi project's profit.
Current status: Although the UAW (Unique Active Wallets) of SocialFi's head projects seems comparable to DeFi's, their volume often comes from airdrop expectations around certain projects, which is unsustainable. After the initial boom, Friend.tech now has fewer than 1,000 UAW, and comparing projects outside the top 5 with DeFi supports the same conclusion. The root cause is that SocialFi's high service fees and inefficiency prevent it from carrying the social attributes it is supposed to have, reducing it to a purely speculative platform.

Source: DAppRadar
GameFi
Implementation Principle: GameFi's setup is similar to SocialFi's, except that the application object becomes a game. At this stage, GameFi's mainstream profit method is selling in-game items.
Current status: For the project owner to earn more, more people essentially need to play the game. At this stage, only two things attract users: one is the fun of the game, which drives users to buy items for the right to play or for a better experience; the other is profit expectation, where users believe they can resell items at a higher price later. The first model resembles Steam: the developer earns real money and users enjoy the game. In the second model, users' and the project's profits come from the constant influx of new users; once new funds can no longer absorb the items the project issues, the project quickly falls into a vicious cycle of selling, declining market expectations, and further selling, making revenue hard to sustain, a Ponzi-like dynamic. Due to the limitations of blockchain fees and transaction speed, GameFi at this stage basically cannot deliver the user experience the first model requires and is mostly built on the second.
3.3.3 Impact of Cancun Upgrade on DApps
Performance optimization: After the Cancun upgrade, a block can carry more transaction data, so DApps can realize more state changes. Calculated at the average expansion of 8 blobs per block, post-Cancun DApp processing speed can reach ten times the original.
Reduced costs: Data storage is a fixed expense for DApps, and DApps on both Layer1 and Layer2 directly or indirectly use Ethereum to record account state. With the Cancun upgrade, a DApp's transactions can be stored as blob data, significantly reducing running costs.
Functionality expansion: Because storage on Ethereum is expensive, project teams have deliberately limited how much data their DApps upload. This has kept many Web2 experiences out of DApps: SocialFi cannot support Twitter-style video creation (or, even if it could, the underlying data would not enjoy Ethereum-grade security), and GameFi's gameplay is often simplistic because every state change must be recorded on-chain. After the Cancun upgrade, project teams will have more room to experiment in these areas.
3.3.4 Cancun Upgrade and Various DApp Verticals
DeFi: The Cancun upgrade's impact on DeFi is relatively small, because DeFi only needs to record the current state of a user's assets in the contract, whether pledged, borrowed, or otherwise, and the data volume required is far smaller than for the other two DApp types. However, the TPS increase brought by the upgrade greatly helps DeFi's high-frequency arbitrage business and its leverage business, which must open and close positions quickly. Likewise, the storage-cost reduction, barely noticeable in single token swaps, adds up to significant fee savings in leveraged and arbitrage trading.
SocialFi: The Cancun upgrade has the most immediate impact on SocialFi's performance. It improves the ability of SocialFi smart contracts to process and store large amounts of data, enabling a user experience closer to Web2's. At the same time, basic interactions on SocialFi such as creating, commenting, and liking can be done at lower cost, attracting genuinely social long-term participants.
GameFi: For the asset-on-chain games of the last bull market, the effect resembles DeFi's, with a relatively small drop in storage costs, though the TPS increase still benefits high-frequency and time-sensitive interactions and supports features that improve playability. Fully on-chain games are affected more directly: since all game logic, state, and data live on-chain, the Cancun upgrade will significantly reduce their operating and interaction costs. Initial deployment costs will also fall sharply, lowering the threshold for game development and encouraging more fully on-chain games in the future.
3.3.5 Opportunities for DApps Post Cancun Upgrade
Dark Forest: Since the third quarter of 2023, perhaps because traditional asset-on-chain games were questioned as insufficiently decentralized, or simply because the traditional GameFi narrative felt lukewarm, capital began looking for new growth points and fully on-chain games exploded into view. But for fully on-chain games on Ethereum, a transaction speed of 15 TPS and a CALLDATA cost of 16 gas per byte severely limit the ceiling. The Cancun upgrade improves both problems, and combined with the steady progress of related projects in the second half of 2023, it is a sizable positive for this track. Considering the head effect, Dark Forest is one of the few fully on-chain games surviving from the last bull market, has a relatively well-established community, and has not yet issued a token; it should have good prospects if the team acts around the time of the Cancun upgrade.
4. Conclusion
The landing of the Cancun upgrade will bring Ethereum not only higher TPS and lower storage costs but also a surge in storage pressure, so DA and Layer2 are the tracks most heavily affected by the upgrade. By contrast, DA projects that do not use Ethereum at all in their underlying data storage are not supported by the Ethereum development community; there are opportunities, but specific projects deserve extra caution. Since most ZK-series Layer2 projects have not yet issued tokens, and Arbitrum has strengthened significantly of late in anticipation of the Cancun upgrade, if the ARB price can stabilize through the pullback phase, Arbitrum and the projects in its ecosystem should see a good rise along with the landing of Cancun. Given the influx of speculators, the DYDX project may also find an opportunity around the Cancun upgrade. Finally, Rollup operators have a natural advantage in storing Layer2 transaction history, so when it comes to providing historical-data access services, Rollups on Layer2 will also be a good choice.
Taking a longer-term perspective, the Cancun upgrade creates the conditions for the functionality and performance of all types of DApps to develop. We will inevitably see Web3 projects gradually approach Web2 in interactive functionality and real-time performance, bringing Ethereum closer to its goal of becoming a world computer, and pragmatic, building-focused projects are worth holding for the long term. Ethereum has been weak relative to Bitcoin in the recent market rally: while Bitcoin has recovered to nearly 2/3 of its previous bull-market high, Ethereum has not yet recovered 1/2 of its own. The arrival of the Cancun upgrade may change this trend and bring Ethereum a round of catch-up gains; after all, as a rare public chain that stays profitable while its token is deflationary, it is indeed undervalued at this stage.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, DApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto, such as Account Abstraction, Data Availability, and Scalability. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
Ethereum Core EIPs: https://eips.Ethereum.org/core
EthStorage official website: https://eth-store.w3eth.io/#/
EIP-1153: Transient storage opcodes: https://eips.Ethereum.org/EIPS/eip-1153
EIP-4788: Beacon block root in the EVM: https://eips.Ethereum.org/EIPS/eip-4788
EIP-5656: MCOPY - Memory copying instruction: https://eips.Ethereum.org/EIPS/eip-5656
EIP-6780: SELFDESTRUCT only in same transaction: https://eips.Ethereum.org/EIPS/eip-6780
How do zero-knowledge rollups work: https://Ethereum.org/zh/developers/docs/scaling/ZK-rollups#how-do-ZK-rollups-work
Optimistic Rollups: https://Ethereum.org/developers/docs/scaling/optimistic-rollups
ZK, ZKVM, ZKEVM and their future: https://foresightnews.pro/article/detail/11802
Rebuilding and breaking through: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html
Kernel Ventures: Cancun Upgrade — And Its Impact on the Broader Ethereum Ecosystem

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
Ethereum has completed the first three upgrade phases, which addressed the problems of development thresholds, DoS attacks, and POS transition, respectively, and the main goal of the current upgrade phase is to reduce transaction fees and optimize the user experience.EIP-1553, EIP-4788, EIP-5656 and EIP-6780, have been realized to reduce the cost of inter-contractual interactions, to improve the efficiency of beacon chain access, to reduce the cost of data replication, and to limit the role authority of the SELFDESTRUCT byte code, respectively.By introducing blob data that is external to the block, EIP-4844 can greatly increase Ethereum's TPS and reduce data storage costs.The Cancun upgrade will have additional benefits for Ethereum-specific DAs while Ethereum Foundation is not open to DA solutions that do not utilize Ethereum at all in their data stores.The Cancun upgrade is likely to be relatively more favorable for Op Layer2 due to its more mature development environment as well as the increased demand for the Ethereum DA layer.The Cancun upgrade will raise the performance limit of the DApp, allowing it to have functionality closer to that of an app in Web2. On-chain games that haven't lost their popularity while need a lot of storage space on Ethereum are worth watching.The Ethereum is undervalued at this stage, and the Cancun upgrade could be the signal that Ethereum starts to soar up.
1. Ethereum's Upgrade
From October 16th of last year, when Cointelegraph published fake news about the pass of the Bitcoin ETF, to January 11th this year, when the ETF was finally passed, crypto market has experienced a surge in price. As bitcoin is more directly impacted by ETF, Ethereum and bitcoin's price diverged during this period. With bitcoin peaking at nearly $49,000, having recovered 2/3 of its previous bull market peak, Ethereum peaked at around $2,700, just over half of its previous bull market peak. But since the Bitcoin ETF landed, the ETH/BTC trend has rebounded significantly, in addition to the expectation of an upcoming Ethereum ETF, another important reason is that the delayed Cancun upgrade recently announced public testing on the Goerli test network, signaling that it is on the edge. As things stand, the Cancun upgrade will not take place until the first quarter of 2024 at the earliest. The Cancun upgrade is part of Ethereum's Serenity phase, designed to address Ethereum's low TPS and high transaction costs at this stage, and follows the Frontier, Homestead, and Metropolis phases of Ethereum. Prior to Serenity, Ethereum had gone through Frontier, Homestead, and Metropolis phases, which seperately addressed problems of developing thresholds, Dos attacks, and POS transition on Ethereum. The Ethereum roadmap clearly states that the main goal of the current phase is to realize cheaper transactions and a better user experience.

Source: TradingView
2. Content of the Cancun Upgrade
As a decentralized community, Ethereum's upgrades are based on proposals made by the developer community that are ultimately supported by the majority of the Ethereum community, including the ERC proposals that have been adopted and those that are still under discussion or will be implemented on the mainnet soon, collectively referred to as EIP proposals. At the Cancun upgrade, five EIP proposals are expected to be adopted: EIP-1153, EIP-4788, EIP-5656, EIP-6780 and EIP-4844.
2.1 Essential Mission EIP-4844
Blob: EIP-4844 introduced a new transaction type for Ethereum, the blob, a 125kb data block. Blobs compress and encode transaction data and are not permanently stored on Ethereum as CALLDATA bytecodes, which greatly reduces gas consumption, but cannot be accessed directly in EVMs.The EIP-4844 implementation allows for up to two blobs per transaction and up to 16 blobs per block. After the implementation of EIP-4844, each transaction can carry up to two blobs, and each block can carry up to 16 blobs. However, the Ethereum community recommends that each block carry eight blobs, and when the number exceeded 8, it can continue to be carried, but will face a relatively constant increase in gas cost until it reaches the maximum of 16 blobs.
In addition, two other core technologies utilized in EIP-4844 are KZG polynomial commitments and temporary (expiring) storage, which we analyzed in detail in our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. In summary, EIP-4844's changes to the effective capacity of individual Ethereum blocks and to where transaction data is stored significantly increase the network's TPS while reducing gas costs (a pricing sketch follows below).
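To make the blob pricing mechanism concrete, below is a minimal Python sketch of the blob base-fee update rule, adapted from the pseudocode in the EIP-4844 specification; the constants follow the draft parameters at the time of writing and should be treated as illustrative.

```python
# Sketch of EIP-4844's blob base-fee rule (adapted from the EIP's pseudocode).
# Constants are draft-spec values at the time of writing; treat as illustrative.
MIN_BLOB_GASPRICE = 1                   # wei
BLOB_GASPRICE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_blob_gasprice(excess_blob_gas: int) -> int:
    # The fee grows exponentially while blocks use more blob gas than the
    # target, and decays back toward the minimum when usage falls below it.
    return fake_exponential(MIN_BLOB_GASPRICE, excess_blob_gas,
                            BLOB_GASPRICE_UPDATE_FRACTION)

print(get_blob_gasprice(0))           # at or below target: minimum price
print(get_blob_gasprice(10_000_000))  # sustained above-target usage: higher price
```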
2.2 Side Missions: EIP-1153, EIP-4788, EIP-5656, EIP-6780
EIP-1153: This proposal reduces storage costs during contract interactions. A transaction on Ethereum can be broken down into multiple frames created by the CALL instruction family; these frames may belong to different contracts, so a transaction may involve passing information across multiple contracts. There are two ways of transferring state between contracts: one is through input/output in memory, the other is to call the SSTORE/SLOAD bytecodes for permanent on-chain storage. Passing data through memory is cheaper, but if the transmission path passes through any untrustworthy third-party contract there is a serious security risk. Using SSTORE/SLOAD, on the other hand, incurs considerable storage overhead and increases the burden of on-chain storage. EIP-1153 solves this dilemma by introducing the transient storage opcodes TSTORE and TLOAD. Variables stored by these two opcodes have the same properties as those stored via SSTORE/SLOAD and cannot be tampered with during transmission. The difference is that transiently stored data does not remain on chain after the transaction ends, but is discarded like a temporary variable, achieving both a secure state-transmission process and a relatively low storage cost (see the toy model after the figure below).

Source: Kernel Ventures
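The lifetime difference between the two storage kinds can be modeled in a few lines of Python. This is a toy model of the semantics only; the class and names here are ours for illustration, not EVM code.

```python
# Toy model contrasting persistent storage (SSTORE/SLOAD) with the
# transient storage (TSTORE/TLOAD) introduced by EIP-1153.

class ToyEVM:
    def __init__(self):
        self.storage = {}    # survives across transactions (SSTORE/SLOAD)
        self.transient = {}  # lives only within one transaction (TSTORE/TLOAD)

    def sstore(self, key, value): self.storage[key] = value
    def sload(self, key): return self.storage.get(key, 0)
    def tstore(self, key, value): self.transient[key] = value
    def tload(self, key): return self.transient.get(key, 0)

    def end_transaction(self):
        # Transient slots are discarded when the transaction ends,
        # so they never add to long-term state growth on chain.
        self.transient.clear()

evm = ToyEVM()
evm.sstore("balance", 100)
evm.tstore("reentrancy_lock", 1)          # e.g. a lock shared across call frames
evm.end_transaction()
assert evm.sload("balance") == 100        # persistent value survives
assert evm.tload("reentrancy_lock") == 0  # transient value is gone
```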
EIP-4788: In the beacon chain after Ethereum's POS upgrade, each new execution block contains the Root of its parent beacon block. Because the consensus layer has already stored these Roots reliably, only a window of recent Roots needs to be kept available when creating new blocks, even if some older Roots are missing. However, frequently requesting this data from the consensus layer during block creation is inefficient for the EVM and creates opportunities for MEV. EIP-4788 therefore proposes a dedicated Beacon Roots contract that stores the most recent Roots, exposing parent beacon Roots directly to the EVM and greatly improving the efficiency of these lookups (a ring-buffer sketch follows the figure below).

Source: Kernel Ventures
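The Beacon Roots contract behaves like a ring buffer keyed by timestamp, which can be sketched as follows. HISTORY_BUFFER_LENGTH follows the value in the EIP at the time of writing, and the Python here illustrates the lookup logic rather than the contract's actual bytecode-level behavior.

```python
# Illustrative sketch of EIP-4788's ring-buffer lookup: roots are keyed by
# timestamp modulo a fixed history length, so only recent roots are retained.
HISTORY_BUFFER_LENGTH = 8191  # value from the EIP at the time of writing

buffer_timestamps: dict[int, int] = {}    # slot -> timestamp
buffer_roots: dict[int, bytes] = {}       # slot -> beacon block root

def store_root(timestamp: int, root: bytes) -> None:
    slot = timestamp % HISTORY_BUFFER_LENGTH
    buffer_timestamps[slot] = timestamp
    buffer_roots[slot] = root

def get_root(timestamp: int) -> bytes:
    # The stored timestamp must match exactly; otherwise the slot has been
    # overwritten by a newer root and the query fails.
    slot = timestamp % HISTORY_BUFFER_LENGTH
    if buffer_timestamps.get(slot) != timestamp:
        raise KeyError("root evicted or never stored")
    return buffer_roots[slot]

store_root(1_700_000_000, b"\x01" * 32)
assert get_root(1_700_000_000) == b"\x01" * 32
```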
EIP-5656: Copying data in memory is a very high-frequency basic operation on Ethereum, but performing it in the EVM has historically incurred significant overhead. To solve this problem, the Ethereum community proposed the MCOPY opcode in EIP-5656, which enables efficient memory-to-memory copying in the EVM, including efficient slice access and in-memory object replication. Having a dedicated MCOPY instruction also provides forward-looking protection against changes to the gas cost of CALL-family instructions in future Ethereum upgrades.

Source: Kernel Ventures
EIP-6780: In Ethereum, SELFDESTRUCT can destroy a contract and clear all the code and state associated with it. However, this poses a serious problem for the Verkle Tree structure that Ethereum plans to adopt. In an Ethereum that uses a Verkle Trie to store state, the emptied storage slots would be marked as previously written but empty. This causes no observable difference in EVM execution, but it makes the Verkle commitments of a contract that was created and then deleted differ from those of a contract that never existed, creating data-validation problems under the Verkle Tree structure. As a result, EIP-6780 restricts SELFDESTRUCT to only returning a contract's ETH to a specified address, leaving the contract's code and storage state on Ethereum; the one exception is a contract destroyed within the same transaction in which it was created, where the old full-deletion behavior still applies (a toy sketch follows).
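A toy state-transition sketch of the narrowed SELFDESTRUCT semantics; the function and state layout here are ours, purely for illustration.

```python
# Toy model of EIP-6780: full deletion only if the contract was created in
# the same transaction; otherwise only the ETH balance is transferred.

def selfdestruct(state: dict, addr: str, beneficiary: str,
                 created_this_tx: bool) -> None:
    """`state` maps address -> {'balance', 'code', 'storage'} (toy layout)."""
    acct = state[addr]
    state[beneficiary]["balance"] += acct["balance"]
    acct["balance"] = 0
    if created_this_tx:
        # Pre-EIP-6780 behaviour, still allowed within the creating tx.
        del state[addr]
    # Otherwise code and storage stay on chain, avoiding the
    # "written-then-emptied" slots that would break Verkle commitments.

state = {
    "0xA": {"balance": 5, "code": b"...", "storage": {1: 2}},
    "0xB": {"balance": 0, "code": b"", "storage": {}},
}
selfdestruct(state, "0xA", "0xB", created_this_tx=False)
assert state["0xB"]["balance"] == 5 and "0xA" in state  # contract survives
```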
3. Prospect of Different Verticals Post Cancun Upgrade
3.1 DA
3.1.1 Profit Model
For an introduction to the principles of DA and the various DA types, see our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. For DA projects, revenue comes from the fees users pay to store data, while expenses go to keeping the storage network running and keeping the stored data durable and secure. The difference between the two is the value accumulated by the network. The main way for DA projects to increase that value is to raise the utilization of network storage space, attracting as many users as possible to store data on the network. At the same time, improvements in storage technology, such as data compression or sharded storage, can reduce network expenses on one side and allow higher value accumulation on the other.
3.1.2 Subdivisions of DA
There are three main types of DA services today: DA for main chain, modularization DA, and storage chain DA. They are described and compared in Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design.
3.1.3 Impact of Cancun Upgrade on DA
- User demand: After the Cancun upgrade, Ethereum's historical transaction data will grow by tens of times, and this history brings correspondingly larger storage needs. Since post-Cancun Ethereum does not improve storage performance itself, the main chain simply prunes this history on a regular schedule, so this part of the data storage market naturally falls to the various DA projects, bringing them greater user demand.
- Direction of development: The growth of Ethereum's historical data after the Cancun upgrade will push major DA projects to improve the efficiency and interoperability of their data exchange with Ethereum in order to capture this market. Foreseeably, cross-chain storage bridge technologies will become a development focus for storage chain DAs and modularization DAs, while Ethereum's own main-chain DAs will need to consider how to further enhance compatibility with mainnet and minimize transmission costs and risks.
3.1.4 Cancun Upgrade and Various DA Verticals
The Cancun upgrade brings faster data growth to Ethereum while leaving the network-wide synchronized storage model unchanged, which forces the main chain to regularly prune large amounts of historical data and delegate the long-term storage of transaction data. Yet this historical data is still in demand, for example when project teams run airdrops or when on-chain analytics organizations analyze fund flows. The value behind this data will attract competition among DA projects, and the keys to market share are each project's data security and storage cost.
- DA for main chain: At the current stage, the storage market of main-chain DA projects such as EthStorage mainly comes from large-memory data of NFT projects on Ethereum, such as images and music. Because their node clusters are highly compatible with Ethereum, main-chain DAs can exchange data securely with Ethereum mainnet at low cost. At the same time, they store their storage-index data in smart contracts on Ethereum rather than detaching the DA layer from Ethereum entirely, which has earned them strong support from the Ethereum Foundation. For the storage market created by Ethereum, main-chain-specific DA has a natural advantage over other DAs.
- Modularization DA and Storage Chain DA: Compared with main-chain DA, these projects will find it difficult to gain a competitive edge in historical-data storage after the Cancun upgrade. However, main-chain DA is still in the testing stage and has not fully launched, while the Cancun upgrade is imminent; if the dedicated main-chain DA projects fail to deliver a production-ready storage solution before the upgrade, this round of data-value mining may still be dominated by modular DAs.
3.1.5 Opportunities for DA Post Cancun Upgrade
EthStorage: Main-chain DAs like EthStorage will be the biggest beneficiaries of the Cancun upgrade and deserve attention. In addition, after recent news that the Cancun upgrade may take place in February this year, EthStorage's official X account has been very active, releasing its new official website and annual report, and the marketing push appears to be working.
Let's celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design!
- Meet the Frontier of Scalability
- Real-time Cost Comparison with Ethereum
- How EthStorage Works
- Core Features of EthStorage
- Applications Enabled by EthStorage
However, comparing the latest official website with the 2022 version, apart from flashier front-end effects and a more detailed introduction, it has not added many new service functions; the main offerings promoted are still storage and the Web3Q domain name service. If interested, you can obtain the test token W3Q to try the EthStorage service on the Galileo Chain network; to claim it you need to own a W3Q domain name or hold an account with a balance of more than 0.1 ETH on mainnet. Judging from recent faucet outflows, participation has not been very large at this stage despite the publicity. However, considering that EthStorage raised a $7 million seed round in July this year with no obvious deployment of those funds yet, the project may be quietly building out infrastructure, waiting for the run-up to the Cancun upgrade to capture the greatest attention.

EthStorage's Faucet, Source: Web3q.io
Celestia: Celestia is currently the leading modular DA project. Compared with main-chain DA projects still in development, Celestia has been building since the last bull market, when it received its first round of funding. After more than two years of groundwork, it refined its rollup model and token model and, after a long testing period, completed its mainnet launch and first airdrop on October 31st. The token's price has risen since trading opened and recently exceeded US$20; at the current circulating supply of 150 million TIA, the project's market capitalization has already reached 3 billion dollars. However, considering the limited customer base of the blockchain historical-storage track, TIA's market capitalization has far exceeded that of Arweave, a traditional storage chain with a richer profit model, and is pressing toward Filecoin's; even granting some room for growth in a bull market, TIA looks somewhat overvalued at this stage. Still, with its star-project halo and undissipated airdrop enthusiasm, if the Cancun upgrade proceeds in the first quarter of this year as expected, Celestia remains one to watch. One risk is worth noting: the Ethereum Foundation has repeatedly emphasized in discussions involving Celestia that any project departing from Ethereum's DA layer will not be Layer2, signaling a rejection of third-party storage projects such as Celestia. The Foundation's possible posture around the Cancun upgrade adds uncertainty to Celestia's pricing.

Source: CoinmarketCap
3.2 Layer2
3.2.1 Profit Model
Due to the growing number of users and projects on Ethereum, its low TPS has become a huge obstacle to further ecosystem development, and high transaction fees make it difficult to scale projects that involve complex interactions. Yet many projects have already landed on Ethereum, migration carries huge costs and risks, and, apart from the payments-focused Bitcoin chain, it is hard to find a public chain with security comparable to Ethereum's. Layer2 emerged to solve these problems: it moves transaction processing and computation onto another chain (Layer2), verifies the packaged data through smart contracts bridged to Layer1, and updates state on mainnet. Layer2 focuses on transaction execution and validation and uses Ethereum as the DA layer to store compressed transaction data, yielding faster speeds and lower computation costs. Users who wish to execute transactions on Layer2 must purchase Layer2 tokens and pay the network operator in advance. The operator, in turn, pays Ethereum to secure the data stored there, so Layer2's revenue is what users pay for Layer2 data security minus what Layer2 pays Layer1 for data security. Two kinds of improvement can therefore raise revenue for an Ethereum Layer2. On the revenue side, the more active the Ethereum ecosystem and the more projects it hosts, the more users and projects need cheaper gas and faster transactions, bringing a larger user base to Layer2; with per-transaction profit unchanged, more transactions mean more operator revenue. On the cost side, if Ethereum's storage cost falls, the DA fees paid by the Layer2 team fall, and with transaction volume unchanged the operator also earns more (a minimal margin sketch follows below).
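The two levers can be seen in a minimal margin sketch; all numbers below are made up for illustration.

```python
# Minimal sketch of the operator margin described above: revenue is what
# users pay on Layer2, cost is what the operator pays Layer1 for DA.
# All figures are hypothetical.

def operator_profit(txs: int, fee_per_tx: float, l1_da_cost_per_tx: float) -> float:
    return txs * (fee_per_tx - l1_da_cost_per_tx)

# More transactions at a constant margin, or a lower DA cost at constant
# volume, both raise profit -- the two levers named in the text.
print(operator_profit(1_000_000, 0.05, 0.02))   # 30000.0
print(operator_profit(1_000_000, 0.05, 0.002))  # 48000.0 after cheaper DA
```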
3.2.2 Subdivisions of Layer2
Around 2018, Ethereum Layer2 schemes blossomed in four forms: Sidechain, Rollup, State Channel, and Plasma. However, because of the risk of data unavailability during off-chain transmission and a large number of griefing attacks, State Channels have gradually been marginalized among Layer2 schemes, and Plasma is relatively niche, with no entry in the Layer2 TVL top 10, so neither is discussed here. Sidechain-style solutions, which do not use Ethereum as a DA layer at all, have likewise been gradually excluded from the definition of Layer2. In this article we only discuss the mainstream Layer2 scheme, Rollup, and analyze its two sub-tracks, ZK Rollup and Op Rollup.
Optimistic Rollup
Implementation Principle: To begin with, an Optimistic Rollup chain deploys a bridge contract on the Ethereum mainnet, through which it interacts with Ethereum. Op Layer2 batches users' transactions and sends the package to Ethereum, including the latest state root of Layer2 accounts, the batch root, and the compressed transaction data. At this stage, these data are stored as Calldata in the bridge contract; although this saves a lot of gas compared with permanent storage in the MPT, it is still a considerable data overhead and creates obstacles to future performance improvements of Op Layer2 (Optimistic Rollup Layer2).

Source: Kernel Ventures
Current status: Today Op Layer2 is the leading Layer2 ecosystem, with the top five Layer2s by TVL all coming from the Optimistic Rollup camp, and the combined TVL of Optimism and Arbitrum alone exceeding 16 billion dollars.

Source: L2BEAT
One of the main reasons the Op Rollup ecosystem occupies the leading position is its friendly development environment. It completed its first round of Layer2 releases and mainnet launches before ZK Rollup, attracting a large number of DApp developers frustrated by Ethereum's fees and low TPS and shifting DApp development from Layer1 to Layer2. At the same time, Op Layer2's underlying layer is highly compatible with the EVM, clearing the way for project migration from Ethereum mainnet: DApps such as Uniswap, Sushiswap, and Curve were deployed to Layer2 in very short order, and projects such as Worldcoin even migrated over from Polygon. Today Op Layer2 hosts not only Uniswap V3, a leading Ethereum DeFi protocol, and GMX, a native DeFi project with a TVL of more than 100 million dollars, but also Friend.tech, a SocialFi project that has generated more than 20 million dollars in fees. This both accumulates project volume and lets high-quality projects in each track drive qualitative breakthroughs for the whole ecosystem. In the long run, however, ZK Layer2 (ZK Rollup Layer2) has a higher TPS ceiling and lower gas consumption per transaction, and Op Layer2 will face fierce competition once ZK Rollup technology matures.

Source: Dune
ZK Rollup (Zero-knowledge Rollup)
Implementation Principle: Transaction data in ZK Layer2 is processed much as in Op Layer2: it is packaged on Layer2 and returned to a smart contract on Layer1 for storage in Calldata. The difference is that Layer2 additionally generates a ZKP, and it does not need to return the compressed transaction data to the network; it only returns the transaction root and batch root together with the ZKP used to verify the legitimacy of the corresponding transactions. Data returned to Layer1 via ZK Rollup requires no challenge window and can update mainnet state in real time once verified.

Source: Kernel Ventures
Current status: ZK Layer2 has become the second-largest Layer2 ecosystem after Op Layer2, with 4 of the top 10 Layer2s by TVL being ZK Layer2. The general pattern, however, is that no single ZK Layer2 is as strong as the leading Op Layer2s: everyone agrees ZK Layer2 has good prospects, yet the ecosystem has struggled to develop. The first reason is that Op Layer2's early launch attracted many developers to build there, and unless migration offers sufficient benefits, they are unlikely to move projects that already generate stable income on Op Layer2. Secondly, many ZK Layer2 projects are still wrestling with low-level compatibility with Ethereum. For example, Linea, a star ZK project, is currently incompatible with many EVM opcodes, creating development obstacles for teams used to the EVM, while another star project, zkSync, cannot yet achieve bytecode-level EVM compatibility and is only compatible with some Ethereum development tools.

Source: Kernel Ventures
Imperfect compatibility with Ethereum also makes it difficult to migrate native projects. Since bytecode is not fully interoperable, projects must modify their underlying contracts to fit the ZKEVM, a process with many difficulties and risks that slows the migration of Ethereum-native projects. As a result, most projects on ZK Layer2 today are native ones, mainly DeFi protocols such as ZigZag and SyncSwap that are relatively easy to build, and both the total number and the diversity of ZK Layer2 projects await further development. The advantage of ZK Layer2 lies in its technical sophistication: if ZKEVM-EVM compatibility is achieved and ZKP generation algorithms are perfected, ZK Layer2's performance ceiling will exceed Op Layer2's. This is also why ZK Layer2 projects keep emerging in an Op-dominated market: with the Op Layer2 track already carved up, the most sensible way for latecomers to attract users away from incumbent networks is to offer an expected better solution. Even if ZK Layer2 is one day technically perfected, however, if Op Layer2 has by then formed a comprehensive ecosystem with enough deployed projects, whether users and developers would accept the huge risk of migrating to a better-performing Layer2 remains unknown. In addition, Op Layer2 is also improving to consolidate its position, including Optimism's open-source Op Stack, which helps other Op Layer2 developers build quickly, and refinements to the challenge mechanism such as the bisection challenge protocol. While ZK Layer2 improves, Op Layer2 is not slowing down, so the important task for ZK Layer2 at this stage is to push forward its cryptographic algorithms and EVM compatibility before users become locked into the Op Layer2 ecosystem.
3.2.3 Impact of Cancun Upgrade on Layer2
Transaction speed: After the Cancun upgrade, a block can carry up to 20 times more data via blobs while block production speed stays unchanged. In theory, Layer2s that use Layer1 as their DA and settlement layer can therefore gain up to a 20x TPS increase. Even at a 10x increase, any of the major Layer2 stars would exceed the highest transaction speed ever recorded on mainnet.

Source: L2BEAT
- Transaction fee: One of the most important factors limiting further fee reductions on Layer2 is the cost of the data security purchased from Layer1, currently around $3 per KB of Calldata stored in an Ethereum smart contract. After the Cancun upgrade, Layer2's packaged transaction data is stored only as blobs in Ethereum's consensus layer, where 1 GB of data costs only about $0.1 a month, greatly reducing Layer2's operating costs (a rough comparison follows this list). To attract more users, Layer2 operators will surely pass part of these savings on, thereby reducing Layer2 transaction costs.
- Scalability: The Cancun upgrade's impact on Layer2 comes mainly from its temporary storage scheme and the new blob data type. Temporary storage periodically removes old mainnet state that is useless for current validation, reducing node storage pressure and thus speeding up network synchronization and node access between Layer1 and Layer2. Blobs, with their large external capacity and a flexible adjustment mechanism based on blob gas pricing, can adapt to swings in network transaction volume, letting a block carry more blobs when volume surges and fewer when it falls.
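Using the article's quoted figures as assumptions ($3 per KB of Calldata versus roughly $0.1 per GB-month of blob space), a back-of-envelope comparison shows the order-of-magnitude difference; real costs float with gas prices, so treat this purely as an illustration.

```python
# Back-of-envelope DA cost comparison before vs after EIP-4844, using the
# figures quoted in the text as assumptions. Illustrative only.
CALLDATA_USD_PER_KB = 3.0
BLOB_USD_PER_GB_MONTH = 0.1

def calldata_cost(kb_per_day: float, days: int = 30) -> float:
    return kb_per_day * days * CALLDATA_USD_PER_KB

def blob_cost(kb_per_day: float, days: int = 30) -> float:
    gb_total = kb_per_day * days / 1_000_000  # KB -> GB
    return gb_total * BLOB_USD_PER_GB_MONTH

daily_kb = 50_000  # hypothetical rollup posting ~50 MB of data per day
print(f"calldata: ${calldata_cost(daily_kb):,.0f} / month")  # ~$4,500,000
print(f"blob:     ${blob_cost(daily_kb):.4f} / month")       # ~$0.15
```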
3.2.4 Cancun Upgrade and Various Layer2 Verticals
The Cancun upgrade will be positive for the entire Layer2 ecosystem. Since its core change reduces the cost of data storage and enlarges the effective capacity of individual blocks on Ethereum, Layer2s that use Ethereum as their DA layer will naturally see a corresponding TPS increase and a reduction in the storage fees they pay to Layer1. However, because the two Rollup families use the Ethereum DA layer to different degrees, Op Layer2 and ZK Layer2 will benefit to different degrees.
- Op Layer2: Since Op Layer2 must leave compressed transaction data on Ethereum for record-keeping, it pays more in fees to Ethereum than ZK Layer2 does. By cutting gas consumption through EIP-4844, Op Layer2 therefore gets a larger fee reduction, narrowing ZK Layer2's advantage on fees. At the same time, this round of Ethereum gas reduction is bound to attract more participants and developers; compared with ZK Layer2s, which have mostly not issued tokens and whose underlying layers struggle with EVM compatibility, more projects and capital will tend to flock to Op Layer2, especially Arbitrum, which has performed strongly recently. This may lead to a new round of Layer2 ecosystem development led by Op Layer2, especially for SocialFi and GameFi projects, which suffer from high fees and struggle to deliver a quality user experience; this phase may well produce many quality projects that approach a Web2-grade user experience. If Op captures this round of development too, it will further widen the gap with the ZK Layer2 ecosystem, making it very difficult for ZK Layer2 to catch up.
- ZK Layer2: Because ZK Layer2 does not need to store transaction-level detail on chain, it benefits less from the gas reduction than Op Layer2. Moreover, ZK Layer2 is still developing and lacks Op Layer2's large ecosystem, while Op Layer2's facilities are already well established and competition among developers there is fiercer; for new entrants attracted by the Cancun upgrade, competing head-on with mature Op Layer2 developers may not be wise. If ZK Layer2 can improve its developer tooling at this stage and provide a better development environment, then given ZK Layer2's stronger long-term expectations and the crowded Op market, new developers may choose to flock to the ZK Layer2 track, accelerating its pursuit and helping it catch up before Op Layer2 completely dominates the market.
3.2.5 Opportunities for Layer2 Post Cancun Upgrade
- DYDX: Although DYDX is a DEX deployed on Ethereum, its functions and principles differ greatly from traditional Ethereum DEXs such as Uniswap. First, it uses an order book rather than the AMM trading model of mainstream DEXs, giving users a smoother trading experience and creating good conditions for leveraged trading. In addition, it uses Layer2 solutions such as StarkEx for scalability and transaction processing, packaging transactions off chain and submitting them back on chain. Through these Layer2 mechanics, DYDX offers far lower transaction costs than traditional DEXs, at roughly $0.005 per trade. With the Cancun upgrade approaching and volatility in Ethereum and related tokens, a surge in high-risk activity such as leveraged trading is almost certain. After the upgrade, DYDX's fees should undercut those of CEXs even for small trades, while offering greater fairness and security, making it an excellent venue for high-risk and leveraged traders. From this perspective, the Cancun upgrade presents a very good opportunity for DYDX.
- Rollup Node: The data periodically pruned after the Cancun upgrade is no longer needed to validate new blocks, but that does not make it worthless. Projects about to run airdrops need complete historical data to assess the fund security of prospective recipients, and on-chain analytics organizations often need complete history to trace fund flows. One option is to query historical data from a Layer2's Rollup operator, which can charge for data retrieval. In the context of the Cancun upgrade, then, effectively improving Rollup-side data storage and retrieval mechanisms, and building related projects ahead of time, will greatly increase a project's chances of survival and further growth.
3.3 DApp
3.3.1 Profit Model
Similar to Web2 applications, DApps exist to provide services to users on Ethereum. For example, Uniswap provides real-time exchange between different ERC20 tokens; Aave provides overcollateralized lending and flash loans; Mirror gives creators decentralized publishing. The difference is the profit model: Web2 platforms attract users with cheap, high-quality services and then monetize the resulting traffic through third-party advertising, whereas a DApp makes no claim on users' attention and pushes no recommendations, instead collecting a commission on each service it renders. The value of a DApp therefore comes mainly from how often users use its services and how deep each interaction is; to increase its value, a DApp needs to provide better services than comparable DApps so that more users choose it over the alternatives.
3.3.2 Subdivisions of DApps
At this stage, Ethereum DApps are dominated by DeFi, GameFi, and SocialFi. There were some gambling projects in the early days, but owing to Ethereum's limited transaction speed and the release of EOS, a chain better suited to them, gambling projects have gradually declined on Ethereum. The three dominant DApp types provide financial, gaming, and social services respectively, and capture value from those services.
DeFi
- Implementation Principle: DeFi is essentially one smart contract, or a series of them, on Ethereum. At release, the relevant contracts (token contracts, exchange contracts, and so on) are deployed to Ethereum mainnet, and their interfaces connect the DeFi function modules to Ethereum. When users interact with DeFi, they call those contract interfaces to deposit, withdraw, and swap tokens; the DeFi contract packages the transaction data, interacts with Ethereum through the contract's script interface, and records state changes on chain. In the process, the DeFi contract charges a fee that rewards upstream and downstream liquidity providers and provides its own profit.
- Current status: DeFi holds absolute dominance among DApps. Apart from cross-chain and Layer2 projects, DeFi fills the remaining places in the top 10 DApps by contract assets on Ethereum. To date, cumulative DeFi users on Ethereum exceed 40 million. Although monthly active users declined from the peak of nearly 8 million in November 2021 under the bear market, they have recovered to about half of that peak as the market warms, awaiting the next bull run for another surge. Meanwhile, DeFi keeps growing more diverse and versatile: from early token trading and collateralized lending to today's leveraged trading, forward purchases, NFT financing, and flash loans, the financial tools of Web2 have gradually been reproduced in DeFi, and some things impossible in Web2, such as flash loans, have been realized as well.

Source: DAppRadar
SocialFi
- Implementation Principle: Like traditional social platforms, SocialFi lets individuals create content and publish it through the platform to spread it and attract followers, while users access the content and services they need. The difference is that published content, the interaction records between publishers and their fans, and the account information itself are all held in decentralized form through blockchain smart contracts, returning ownership of the information to each individual account. For a SocialFi platform, the more people are willing to create and share through it, the more revenue it can earn from providing those services. Users' interaction fees, minus the cost of storing account and transaction data, are the SocialFi project's profit.
- Current status: Although SocialFi's UAW (Unique Active Wallets) figures look comparable to DeFi's at the head of the market, that volume often comes from airdrop expectations and is unsustainable. After its initial boom, Friend.tech has recently had fewer than 1,000 UAW, and comparisons with DeFi projects outside the top 5 support the same conclusion. The root cause is that SocialFi's high service fees and inefficiency prevent it from carrying the social function it is supposed to have, reducing it to a purely speculative platform.

Source: DAppRadar
GameFi
- Implementation Principle: GameFi works much like SocialFi, except the object is a game. At this stage, GameFi's mainstream profit model is selling in-game items.
- Current status: For the project team to profit, more people essentially need to play. At this stage only two things attract users: the fun of the game, which drives users to buy items for access or a better experience, and the expectation of profit, where users believe they can resell items at a higher price later. The first model resembles Steam: the project earns real money and users enjoy the game. In the second model, both users' and the project's profits depend on a constant influx of newcomers; once new funds can no longer absorb the items the project issues, the project quickly falls into a vicious cycle of selling, declining market expectations, and further selling, making revenue hard to sustain, with clear Ponzi characteristics. Constrained by blockchain fees and transaction speed, today's GameFi largely cannot deliver the user experience the first model requires and mostly follows the second.
3.3.3 Impact of Cancun Upgrade on DApps
- Performance optimization: After the Cancun upgrade a block can carry more transaction data, so DApps can realize more state changes. Calculated at the target of 8 blobs per block, post-Cancun DApp processing speed can reach roughly ten times the original.
- Reduced costs: Data storage is a fixed expense for DApps, and DApps on both Layer1 and Layer2 use Ethereum directly or indirectly to record account state. After the Cancun upgrade, a DApp's transactions can be stored as blob data, significantly reducing operating costs.
- Functionality expansion: Because storage on Ethereum is expensive, project teams deliberately limit how much data their DApps put on chain. This has kept many Web2 experiences out of DApps: SocialFi cannot support video creation the way Twitter does, or if it did, the data would not enjoy Ethereum-grade security at the base layer, and GameFi gameplay is often simplistic because every state change must be recorded on chain. After the Cancun upgrade, project teams will have more room to experiment in these areas.
3.3.4 Cancun Upgrade and Various DApp Verticals
- DeFi: The impact of the Cancun upgrade on DeFi is relatively small, because the only thing DeFi must record is the current state of users' assets in the contract, whether pledged, borrowed, or otherwise, and the data volume is far smaller than for the other two DApp types. However, the TPS increase the upgrade brings can greatly facilitate DeFi's arbitrage business, which trades at high frequency, and its leverage business, which must open and close positions within short windows. Meanwhile the reduction in storage costs, barely visible in single swaps, adds up to significant fee savings in leveraged and arbitrage trading.
- SocialFi: The Cancun upgrade has the most immediate impact on SocialFi's performance. It improves the ability of SocialFi's smart contracts to process and store large amounts of data, enabling an experience closer to Web2. At the same time, basic interactions such as posting, commenting, and liking can be done at lower cost, attracting genuinely social long-term participants.
- GameFi: For the asset-on-chain games of the last bull market, the effect resembles DeFi's, with a relatively small storage-cost reduction, though the TPS increase still helps high-frequency and time-sensitive interactions and supports features that improve playability. Fully on-chain games are affected more directly: since all game logic, state, and data live on chain, the Cancun upgrade will significantly cut their operating and interaction costs. Initial deployment costs also fall sharply, lowering the threshold for game development and encouraging more fully on-chain games in the future.
3.3.5 Opportunities for DApps Post Cancun Upgrade
Dark Forest: Since the third quarter of 2023, perhaps because traditional asset-on-chain games were questioned as insufficiently decentralized, or simply because the traditional GameFi narrative had gone lukewarm, capital began seeking new growth points, and fully on-chain games exploded into view. But for fully on-chain games on Ethereum, a transaction speed of 15 TPS and a storage cost of 16 gas per non-zero byte of CALLDATA severely limit the ceiling. The Cancun upgrade improves both constraints, and combined with the steady progress of related projects in the second half of 2023, it is a sizable positive for this track. Considering the head effect, Dark Forest is one of the few fully on-chain games surviving from the last bull market, has a relatively well-established community base, and has not yet issued its own token; it should have good prospects if the team takes action around the time of the Cancun upgrade.
4. Conclusion
The landing of the Cancun upgrade will bring Ethereum not only higher TPS and lower storage costs but also a surge in storage pressure, and DA and Layer2 are the tracks most heavily affected. DA projects that do not use Ethereum at all for underlying data storage lack the support of the Ethereum development community; opportunities exist, but specific projects deserve extra caution. Since most ZK-family Layer2 tokens have not yet launched, and Arbitrum has strengthened notably on Cancun expectations, if ARB's price can stabilize through the pullback phase, Arbitrum and its ecosystem projects should rise well with Cancun's landing. With speculators flowing in, DYDX may also find opportunity around the Cancun milestone. Finally, Rollups have a natural advantage in storing Layer2 transaction history, so when it comes to providing historical data access services, Layer2 Rollups will also be a good choice.
Taking a longer-term perspective, the Cancun upgrade creates the conditions for DApps of all kinds to develop and perform, and we will inevitably see Web3 projects approach Web2 in interactive functionality and real-time responsiveness, carrying Ethereum toward its goal of a world computer; any pragmatic development project is worth long-term investment. Ethereum has been weak relative to Bitcoin in the recent rally: while Bitcoin has recovered to nearly 2/3 of its previous bull market high, Ethereum has not yet recovered 1/2 of its own. The arrival of the Cancun upgrade may change that trend and bring Ethereum a round of catch-up gains; after all, as a rare public chain that stays profitable while its token deflates, it does look undervalued at this stage.
Kernel Ventures is a research and dev community-driven crypto VC fund with more than 70 early-stage investments, focusing on infrastructure, middleware, DApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto, such as Account Abstraction, Data Availability, and Scalability. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
Reference
- EIPs core list: https://eips.Ethereum.org/core
- EthStorage official website: https://eth-store.w3eth.io/#/
- EIP-1153: Transient storage opcodes: https://eips.Ethereum.org/EIPS/eip-1153
- EIP-4788: Beacon block root in the EVM: https://eips.Ethereum.org/EIPS/eip-4788
- EIP-5656: MCOPY - Memory copying instruction: https://eips.Ethereum.org/EIPS/eip-5656
- EIP-6780: SELFDESTRUCT only in same transaction: https://eips.Ethereum.org/EIPS/eip-6780
- How do ZK-rollups work: https://Ethereum.org/zh/developers/docs/scaling/ZK-rollups#how-do-ZK-rollups-work
- Optimistic Rollups: https://Ethereum.org/developers/docs/scaling/optimistic-rollups
- ZK, ZKVM, ZKEVM and their future: https://foresightnews.pro/article/detail/11802
- Rebuilding and breakthrough: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
- An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html
Binance News
Bounce Brand will launch SatoshiVM native token $SAVM on January 19th
According to Shenzhen TechFlow, Bounce Brand announced that it will launch $SAVM, the native token of Bitcoin ZK Rollup layer 2 solution SatoshiVM, on the Bounce Launchpad on January 19. The token will adopt Bounce’s new initial LP revenue issuance model.
The New Narrative of Inscription — Under the Support of Different Ecosystems
Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
- This article delves into the development trends of Bitcoin inscriptions and the characteristics of the various protocols.
- Analyzing protocols on the Bitcoin chain such as Ordinals, BRC20, Atomicals, RGB, and Pipe, comparing them with other PoW chains like Dogecoin and Litecoin, with Ethereum's Ethscriptions and Evm.ink, and with Solana's SPL20 protocol. The comparison covers fees, divisibility, scalability, and user considerations, with particular emphasis on the RGB protocol's low fees and high scalability.
- Examining market and product projections for the inscription ecosystem, highlighting the maturing wallet-side infrastructure, the launch of AMM DEXs on the Bitcoin chain, and potential future functionality such as lending and derivatives. Unisat's open API opens the door to numerous tooling projects.
In conclusion, this article provides a comprehensive exploration of the dynamics in the field of Bitcoin inscription, offering insights into the future development of inscription empowered by the ecosystem, providing readers with a thorough understanding and outlook.
Inscription Market Background
Market Overview
Since the introduction of the Bitcoin Ordinals protocol in January 2023, a wave of enthusiasm has swept through the Bitcoin chain around protocols like BRC20 and Ordinals assets, often called the "world of retail investors." This is attributed to the Fair Launch model of inscriptions like BRC20, where chips are minted entirely by individual retail investors, with no institutions, project teams, or insider trading. The minting cost of Ordi was approximately $1 per inscription, but after its listing on the Gate.io exchange the price surged to $20,000 per inscription. The staggering rise fueled the continued popularity of the BRC20 protocol, drawing in numerous Ordinals players and driving gas fees on the Bitcoin chain to spike repeatedly; at the peak, the minimum fee rate for confirmation reached 400 sat/vB, surpassing the highest levels of the past three years.
Taking this as a starting point, this article explores the inscription ecosystem across chains, discussing the current state of the various protocols and anticipating how inscriptions will develop with ecosystem support.
Data Overview
The 3-year Bitcoin block-fee-rate chart vividly illustrates sharp spikes in fees during May-June and November of this year. This surge reflects the fervor of users towards script protocols, not just limited to the BRC20 protocol. Various protocols developed on the Bitcoin network were introduced during this period, sparking a wave known as "Bitcoin Summer."

Bitcoin rate in the past three years, image source: Mempool.space
The minting data for inscriptions shows that minting volume has stabilized at consistently high levels.

Ordinals inscription casting quantity, image source: Dune @dgtl_asserts
Track analysis
This article will survey the major chains and analyze the inscription protocols on each of them.
Bitcoin Chain
Ordinals / BRC20
On January 21, 2023, Bitcoin developer Casey Rodarmor introduced the Ordinals protocol, allowing metadata to be inscribed on the Bitcoin chain with each inscription assigned a number. In March of the same year, Twitter user @domodata released the BRC20 protocol, turning token minting into on-chain JSON strings (a minimal payload sketch follows). On November 7, Binance listed the flagship BRC20 token $ORDI, triggering a surge of nearly 100% in a single day.
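A BRC20 operation is just a small JSON document inscribed as text; a mint, for example, looks like the sketch below (the ticker and amount are examples only, not a recommendation).

```python
# Constructing the canonical BRC20 mint payload as inscribed text.
import json

mint_payload = {
    "p": "brc-20",   # protocol identifier
    "op": "mint",    # deploy / mint / transfer
    "tick": "ordi",  # four-character ticker (a BRC20 limitation, see below)
    "amt": "1000",   # amount, conventionally given as a string
}
print(json.dumps(mint_payload))
# {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"}
```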
As the first protocol in the inscription ecosystem, Ordinals has encountered several issues:
- BRC20 supports only four-letter tickers, a significant limitation.
- Minting names are susceptible to Sybil attacks, and minting transactions are prone to front-running.
- The Ordinals protocol leaves substantial redundant data on the Bitcoin network.
For example, after a BRC20 token is fully minted, the original inscriptions become invalid once token transfers are sent, yet they continue to occupy significant space; this is one reason some early Bitcoin enthusiasts are reluctant to support Ordinals.
Atomicals
The Atomicals protocol's ARC20 uses one satoshi to represent one unit of a deployed token and eliminates the four-character restriction, allowing more diverse gameplay. A unique project within this framework is the Realm, where each registered realm is a text prefix whose owner ultimately holds pricing rights over all suffixes under it. Functionally, a Realm can serve as a transfer and receiving address (a payment name), and it also has use cases such as building communities and DAOs, identity verification, and social profiles, aligning closely with the envisioned development of DID.

However, both ARC20 and $ATOM are still in the very early stages, and further development is required, including improvements in wallets and markets.

Realm casting quantity, image source: Dune @sankin
Pipe
Casey, the founder of Ordinals, proposed an inscription design called Rune specifically for issuing FTs (fungible tokens). It allows token data to be inserted directly into a UTXO's script, covering the token's ID, output, and quantity. Rune closely resembles ARC20 in handing token transfers directly to the BTC mainnet; the distinction is that Rune includes the token quantity in the script data.
While Rune remained at the idea stage, the founder of #Trac developed the first working protocol based on it and issued the PIPE token (a conceptual encoding sketch follows). Leveraging Casey's high profile, PIPE quickly gained momentum, capitalizing on the speculative fervor inherited from BRC20. Rune's legitimacy is relatively stronger than BRC20's, but gaining acceptance within the BTC community remains challenging.
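Conceptually, a Rune-style transfer is a list of (token ID, output index, amount) tuples serialized into a script payload carried by a Bitcoin transaction. The sketch below uses a generic varint encoding purely for illustration; it is not the proposal's actual wire format.

```python
# Illustrative encoding of (token id, output index, amount) tuples into a
# compact script payload, in the spirit of the Rune idea described above.

def encode_varint(n: int) -> bytes:
    """Generic little-endian base-128 varint (illustrative, not Rune's format)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_transfers(transfers: list[tuple[int, int, int]]) -> bytes:
    payload = bytearray()
    for token_id, output_index, amount in transfers:
        for field in (token_id, output_index, amount):
            payload += encode_varint(field)
    return bytes(payload)

# Assign 100 units of token 7 to output 1 of this transaction.
print(encode_transfers([(7, 1, 100)]).hex())  # '070164'
```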
RGB

Lightning Network Capacity, Image Source: Mempool.space
With the Ordinals protocol lifting the whole Bitcoin ecosystem, more and more developers and projects are turning their attention to the Lightning Network for its extremely low transaction fees and a theoretical capacity often cited at 40 million TPS (transactions per second).
RGB is a smart contract system based on BTC and the Lightning Network, representing a more ultimate scaling solution, though progress has been slow due to its complexity. RGB compresses the state of a smart contract into a concise proof and anchors that proof in a BTC UTXO output script. Users can verify the UTXO to inspect the contract's state; when the state is updated, a new UTXO is created to store the proof of the change.
The smart contract commitments live entirely on the BTC chain, while dedicated RGB nodes record the complete contract data and handle the computational workload of transactions. Users verify deterministic changes in contract state by scanning the UTXO set of the BTC chain.
RGB can be viewed as a BTC Layer2. This design leverages BTC's security to protect smart contracts (a simplified sketch follows); however, as the number of contracts grows, the demand for UTXOs to carry commitments will inevitably add significant redundancy to the BTC blockchain.
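The commit-and-verify pattern described above can be reduced to a toy example: hash the contract state, anchor the hash in an output, and let anyone holding the full state check it against the commitment. Real RGB uses far more elaborate constructions (client-side validation, single-use seals), so this is only a schematic.

```python
# Schematic commit-and-verify: a state hash stands in for the "concise
# proof" anchored in a BTC UTXO script. Not RGB's actual construction.
import hashlib
import json

def commit(state: dict) -> str:
    canonical = json.dumps(state, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

state_v1 = {"token": "DEMO", "balances": {"alice": 50, "bob": 50}}
onchain_commitment = commit(state_v1)  # imagine this embedded in a UTXO script

# Later: a verifier holding the full off-chain state re-hashes it and
# compares against the on-chain commitment.
assert commit(state_v1) == onchain_commitment
```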
Since 2018, RGB has remained in development without speculative assets. Tether's issuer, Tether Limited, is a significant supporter of RGB and aims to issue a large amount of USDT on RGB over BTC.
On the product side, the mainstream wallet currently in use is Bitmask, which supports Bitcoin and Lightning Network deposits as well as RGB-20 and RGB-21 assets. Bitlight Labs is also building on the RGB network, planning its own wallet system and smart contracts for a DEX; it has acquired BitSwap (bitswap-bifi.github.io) and is preparing to integrate it into the RGB network.
RGB's biggest advantages are its low transaction fees and extremely high scalability. Smart contract development on the Bitcoin network was long difficult and received little attention, but with the Ordinals protocol raising the ecosystem's profile, more developers are experimenting with smart contracts on RGB. These contracts are written in Rust and are incompatible with Ethereum's, implying a steeper learning curve and technology that still needs further evaluation.
For more information on the technical aspects of the RGB protocol, Kernel Ventures’ previous articles have introduced it in detail. Article link: https://tokeninsight.com/en/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype
Other PoW Chains
During the heyday of inscriptions on the Bitcoin chain, other PoW chains, sharing the same lineage and UTXO spending model, saw Ordinals migrated onto them. Here we analyze Dogecoin and Litecoin, the leading examples in market acceptance and development completeness.
Dogecoin:
The DRC-20 protocol on the Dogecoin chain is based on Ordinals and functions much as on the Bitcoin chain; thanks to low transaction fees and strong meme appeal, it has gained popularity.
Litecoin:
Similarly, the LTC-20 protocol on the Litecoin chain is based on Ordinals. It has drawn retweets and attention from the Litecoin official team and founder Charlie Lee, giving it a "noble pedigree." The trading markets Unilit and Litescribe, along with the Litescribe wallet, show relatively high development completeness, and the first token, $Lite, is already listed on the Gate exchange.
However, the protocol had issues before an indexer was introduced, and after the indexer launched, a bug causing over-issuance emerged; it has since been fixed, and the protocol is worth keeping an eye on. The chart makes clear that after the introduction of LTC-20, fees on the Litecoin chain surged.

Image source: Twitter @SatoshiLite

Litecoin fee rates over the past year, image source: Litecoinspace
Ethereum Chain
Ethscriptions
As of now, the Etch trading platform on the Ethscriptions protocol has reached a transaction volume of 10,500 ETH, and the floor price of the first token, Eths, is $4,300. For those who entered at the start, the minting cost on June 18th was under 1 USDT; holders who never exited are now sitting on returns of more than 6,000x.

Eths transaction data, image source: ETCH Market
Tom Lehman proposed this novel Ethereum scaling approach on August 8th. Using a technique similar to Ordinals, storing inscription data in transaction calldata, it aims to keep Ethereum mainnet gas costs low while expanding the design space of ecosystem applications.
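Mechanically, an ethscription is created by hex-encoding a data URI into the calldata of an ordinary transaction, with the recipient commonly treated as the owner by indexers. The sketch below illustrates the encoding only; the recipient address is a placeholder, nothing is signed or broadcast, and the full validity rules (uniqueness, supported content types) are defined by the Ethscriptions protocol itself.

```python
def build_ethscription_calldata(content: str, mime: str = "text/plain") -> str:
    # The inscription payload is a data URI, hex-encoded into transaction input data.
    data_uri = f"data:{mime},{content}"
    return "0x" + data_uri.encode("utf-8").hex()

calldata = build_ethscription_calldata("hello ethscriptions")
tx = {
    "to": "0xRecipientBecomesOwner",  # placeholder, not a real address
    "value": 0,                       # no ETH transferred, the data is the point
    "data": calldata,                 # the inscription lives in calldata
}
print(tx["data"][:12])  # -> 0x646174613a, the hex of "data:"
```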
At the core of Eths is the Ethscriptions Virtual Machine (ESC VM), which can be likened to the Ethereum Virtual Machine (EVM). The "Dumb Contracts" inside the ESC VM let Eths move beyond inscriptions as mere NFT speculation into genuine functionality and practicality, formally entering the competition among base-layer and L2 solutions.

Dumb Contracts running logic, image source: Ethscriptions ESIP-4 proposal
"Eths represents another approach to Ethereum Layer 2. Unlike typical Layer 2 solutions that are separate chains and may have a backdoor, Eths conducts transactions on the Ethereum mainnet with gas fees as affordable as those on Layer 2. It enables various activities such as swapping, DeFi, and GameFi on the Eths platform. The key aspect is that it operates on the mainnet, making it secure and more decentralized than Layer 2," as excerpted from the Eths community.
However, articulating this new Layer 2 narrative is challenging. First, token splitting is still under development: current inscriptions are non-fungible tokens (NFTs) that cannot yet be split into fungible tokens (FTs).
As of the latest information available, FacetSwap (https://facetswap.com/) has introduced a splitting feature, though mainstream trading markets do not yet support split inscriptions, so users will have to wait for adaptation. Split inscriptions can currently be used for activities like swapping and adding liquidity on FacetSwap. All operations are resolved through a virtual (non-existent) address, 0x000...Face7: users embed a message via IDM and send the message's hexadecimal data to the address ending in Face7 to perform operations such as approve and transfer. As this is still an early stage, its development trajectory remains to be seen.
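The pattern, then, is command-style messages interpreted by off-chain indexers rather than contract calls. Here is a hedged sketch of what such a message might look like; the JSON schema and field names are invented for illustration, and the target address is kept truncated exactly as the source gives it.

```python
import json

VIRTUAL_ADDRESS = "0x000...Face7"  # truncated in the source; no contract lives here

def build_command(op: str, params: dict) -> dict:
    # Embed an operation (e.g. approve or transfer) as hex calldata addressed to
    # the virtual address; indexers, not the EVM, give it meaning.
    message = json.dumps({"op": op, **params})
    return {"to": VIRTUAL_ADDRESS, "value": 0, "data": "0x" + message.encode().hex()}

tx = build_command("transfer", {"tick": "demo", "amt": "1000", "to": "0xSomeUser"})
```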
Other EVM Chains
Evm.ink
Evm.ink has ported the Ethscriptions protocol standard to other EVM-compatible chains, enabling them to mint inscriptions as well, and it builds the indexes for those chains. Recently popular projects such as POLS and AVAL rely on Evm.ink, which is essentially the Ethscriptions standard, for index recognition.

POLS minting data, image source: Dune @satsx

AVAL minting data, image source: Dune @helium_1990
POLS and AVAL each have a total supply of 21 million inscriptions; POLS has over 80,000 holders, AVAL more than 23,000, and both minted out in roughly 2-3 days. This signals strong community interest in low-cost Layer 2 (L2) inscriptions, which offer a high return relative to the cost of participation. Because the cost is low, long-tail users from the BTC and ETH chains are spilling over to participate, and the trend is not limited to these two chains: others such as Heco and Fantom have also seen inscription-driven gas-fee surges.

Number of daily transactions on the EVM chain, image source: Kernel Ventures
Solana
SPL20
Solana inscriptions commenced on November 17th at 4 AM and minted out by 8 AM, with a total supply of 21,000. Unlike on other networks, the body of the inscription is an NFT, and the indexed content is the actual inscription. NFTs can be created on any platform; the index decides whether one is included based on the hash of the image or file, together with the embedded text, and only inscriptions whose hash and embedded text both match are considered valid. The images are off-chain data while the text is on-chain data; major proxy platforms currently use IPFS, and others use Arweave (AR).
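An indexer's validity check under these rules can be sketched in a few lines. This is a toy model: the field names, the JSON text format, and the choice of SHA-256 are all assumptions for illustration, since the source does not specify them.

```python
import hashlib

def is_valid_spl20(nft: dict, image_bytes: bytes, expected_text: str) -> bool:
    # Valid only if BOTH the hash of the off-chain image and the on-chain
    # embedded text match what the index expects.
    image_hash = hashlib.sha256(image_bytes).hexdigest()
    return nft["image_hash"] == image_hash and nft["embedded_text"] == expected_text

# Off-chain image (e.g. fetched from IPFS or Arweave) plus the on-chain text field.
image = b"<png bytes fetched from ipfs://...>"
text = '{"p":"spl-20","op":"mint","tick":"sols","amt":"1"}'
nft = {"image_hash": hashlib.sha256(image).hexdigest(), "embedded_text": text}

assert is_valid_spl20(nft, image, text)               # both match: included
assert not is_valid_spl20(nft, b"other image", text)  # wrong image hash: rejected
```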
Solana inscriptions share a significant limitation with Eths: they cannot be split. Without splitting they function essentially as NFTs, lacking token-grade liquidity and operational convenience, let alone the vision of future DEX swaps.
The protocol's founder also created TapPunk on the Tap protocol. The team behind the largest proxy platform, Libreplex (https://www.libreplex.io/), is very proactive: since launch it has shipped quickly, completing features such as hash indexing and inscription-attribute changes (immutability), and it runs live coding and Q&A sessions on its official Discord. The trading market Tensor (https://www.tensor.trade/) has also been integrated, and development progress is swift.
The first inscription, $Sols, cost roughly $5 to mint. On the secondary market it peaked at 14 SOL, with a floor price of 7.4 SOL (about $428); daily trading volume exceeded 20,000 SOL (about $1.2 million), with active turnover.
Core comparison
Comparison of core protocols

Comparison of mainstream inscription protocols, Image source: Kernel Ventures
This chart compares several major inscription protocols based on four dimensions: fees, divisibility, scalability, and user base.
- Fees: The RGB protocol stands out with the best fee rate, leveraging the Lightning Network for virtually zero-cost transactions.
- Divisibility: Both Solana and the recent EVM protocols lack divisibility; development on this front is still awaited.
- Scalability: RGB's smart contract functionality gives it significant scalability. Solana's is still under discussion, but the team and the Solana Foundation have voiced support, suggesting it will not be short on scalability.
- User Base: EVM chains, with their naturally low gas costs, attract a larger user base thanks to the lower trial-and-error cost. BRC20, as the first inscription token and the most "orthodox," has accumulated a substantial user base.
Comparison of protocol token data

Protocol Token Comparison, Image source: Kernel Ventures
Looking at the mainstream tokens of the various protocols, their combined market capitalization currently stands at roughly $600 million (excluding small-cap assets), with Ordi alone accounting for 80% of the total, which leaves significant room for other protocols to grow. Notably, protocols such as RGB are still being refined and have not issued tokens.
In terms of the number of holders, Pols and Ordi dominate, while other protocols have fewer holders. Eths and Solana inscriptions have not been split, so a comprehensive analysis of holder distribution is pending further developments.
Innovations and risk analysis
Currently, the primary use of inscriptions is Fair Launch, allowing users to fairly access opportunities to participate in projects. However, the development of the inscription space is not limited to fair launches.
Recent developments in the inscription space have shown significant dynamism and innovation. The growth of this sector is largely attributed to key technological advancements in Bitcoin, such as SegWit, Bech32 encoding, Taproot upgrade, and Schnorr signatures. These technologies not only enhance the transaction efficiency and scalability of the Bitcoin network but also increase its programmability.
For instance, smart contracts in the RGB protocol, built on Bitcoin's Lightning Network, not only inherit the network's claimed throughput of up to 40 million transactions per second but also benefit from belonging to the largest blockchain ecosystem, Bitcoin.
On the risk side, caution is warranted, particularly around launchpads. The recent rug of the project Ordstater, following the success of MUBI and TURT, shows how launchpads have proliferated, and some platforms may pull the rug immediately after an Initial DEX Offering (IDO). Before engaging with any project, read the whitepaper thoroughly, research the team's background, and avoid blindly following KOLs out of FOMO.
Future Projections for the Inscription Ecosystem
Market Projection
Galaxy's research and mining teams predicted that the Ordinals market would reach a market value of $5 billion by 2025, with an estimated 260,000 inscriptions by then. The inscription count has already reached 33 million, roughly 126 times that estimate, in just six months, while the market capitalization of $Ordi has reached $400 million and $Sats $300 million. The predictions for the inscription market, in other words, were significant underestimates.
Product Projection
Currently, BRC20 trading is concentrated on OKX and Unisat. The Web3 wallet OKX has promoted this year offers a smooth experience for trading BRC20 assets, and this maturing wallet-side infrastructure shortens the entry path for retail investors into the new market. As protocols have multiplied, each has introduced its own trading markets and wallets (Atomicals, Dogechain, Litecoin, and so on), yet the wallets currently on the market are all modifications built on Unisat's open-source foundation.
Comparing Bitcoin (PoW) with Ethereum, the various protocols can be seen as analogous to different chains, with the fundamental difference lying in the Chain ID. Future products might therefore see Unisat integrating the different protocols and letting users switch between them inside the wallet, much like the chain-switching feature in wallets such as MetaMask (see the sketch after the comparison figure below).

Comparison of wallets across protocols, Image source: Kernel Ventures
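A toy model of such a multi-protocol wallet makes the analogy concrete. This is purely hypothetical, not an actual Unisat feature; the class, fields, and indexer URLs are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProtocolConfig:
    name: str         # e.g. "BRC20", "ARC20", "LTC-20"
    chain: str        # the underlying chain the indexer watches
    indexer_url: str  # hypothetical endpoint; a real wallet would ship its own

class MultiProtocolWallet:
    # Switches between inscription protocols the way MetaMask switches chains.
    def __init__(self) -> None:
        self._protocols: dict[str, ProtocolConfig] = {}
        self.active: str | None = None

    def register(self, cfg: ProtocolConfig) -> None:
        self._protocols[cfg.name] = cfg

    def switch(self, name: str) -> ProtocolConfig:
        self.active = name
        return self._protocols[name]

wallet = MultiProtocolWallet()
wallet.register(ProtocolConfig("BRC20", "bitcoin", "https://indexer.example/brc20"))
wallet.register(ProtocolConfig("ARC20", "bitcoin", "https://indexer.example/arc20"))
print(wallet.switch("ARC20"))  # the wallet now reads balances via the ARC20 indexer
```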
Sector Projection
With funds continuously flowing into the inscription market, users are no longer content with meme-driven speculation and are shifting their focus toward applications built on inscriptions. Unisat brought innovation to BRC20 with BRC20-Swap, which lets users exchange BRC20 tokens much as on an AMM DEX. As the first product to improve liquidity in the Ordinals ecosystem, Unisat is poised to unlock the potential of the Bitcoin DeFi ecosystem, potentially followed by features such as lending and derivatives. Unisat has also recently opened API interfaces, which is friendly to small developers: they can call various functions such as automated batch order scanning and monitoring inscriptions for automatic minting, which can give rise to numerous utility projects, as sketched below.
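As one example of such a utility, the pattern behind an inscription-monitoring bot is a simple polling loop. The endpoint below is a placeholder rather than Unisat's actual API, and the response shape is assumed for illustration.

```python
import time

import requests  # third-party package: pip install requests

INDEXER_URL = "https://api.example.com/inscriptions/latest"  # placeholder endpoint
WATCH_TICK = "demo"

def poll_new_inscriptions(seen: set[str]) -> list[dict]:
    # Fetch the latest inscriptions for a tick and return only the unseen ones.
    resp = requests.get(INDEXER_URL, params={"tick": WATCH_TICK}, timeout=10)
    resp.raise_for_status()
    fresh = [i for i in resp.json()["inscriptions"] if i["id"] not in seen]
    seen.update(i["id"] for i in fresh)
    return fresh

def main() -> None:
    seen: set[str] = set()
    while True:
        for ins in poll_new_inscriptions(seen):
            print("new inscription:", ins["id"])  # a real bot might auto-mint here
        time.sleep(30)  # simple fixed-interval polling

if __name__ == "__main__":
    main()
```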
Transaction fees on the Bitcoin network are relatively high, while Layer 2s such as Stacks and RIF, though cheaper, lack a user base and sufficient infrastructure. That makes a Bitcoin EVM a compelling narrative. BEVM, for example, is an EVM-compatible Bitcoin Layer 2 whose native on-chain token is BTC: users move Bitcoin from the mainnet to BEVM through the official cross-chain bridge, and its EVM compatibility gives DeFi, swaps, and other applications built for EVM chains a low barrier to migrating over.
However, a Bitcoin EVM raises several questions: whether the bridged assets can remain decentralized and immutable, how the EVM chain's nodes reach consensus, and how transactions are synchronized back to the Bitcoin network (or to decentralized storage). Since the barrier to launching an Ethereum-style Layer 2 is relatively low, security may be compromised, and that is the primary concern for anyone interested in a Bitcoin EVM right now.

Image source: BEVM Bridge
Summary
This article has examined development trends in the Bitcoin inscription domain and the characteristics of the various protocols. By analyzing Ordinals (BRC20), Atomicals, RGB, Pipe, and others on the Bitcoin chain, and comparing them with other PoW chains, Ethereum's Ethscriptions and Evm.ink, and Solana's SPL20 protocol, it has explored their differences in fees, divisibility, scalability, and user base.
On the inscription market itself: starting with the Ordinals protocol, the wave of inscription protocols such as BRC20 has been called the "world of retail investors." The analysis surveyed data such as Bitcoin block fees and the number of inscriptions minted via Ordinals to trace the ecosystem's development trends.
In the sector analysis, the core elements of the mainstream inscription protocols (fees, divisibility, scalability, and user numbers) were compared to show their similarities and differences. Finally, the comparisons of protocol token data and core protocols gave a comprehensive view of market value and user distribution across the mainstream protocols, closing with innovation points and risk analysis that highlight the vitality of the inscription domain.
Looking ahead, the inscription domain is expected to witness continuous technological innovation, driving the practical application of more complex functionalities. The market's robust development is anticipated to maintain steady growth, providing more opportunities for investors and participants. Meanwhile, it is expected that more creative projects and protocols will emerge, further enriching the inscription ecosystems of Bitcoin and other public chains. Miners' earnings may also increase as the inscription domain offers them new income opportunities.
Reference links
Bitcoin block fee rates (3 years): https://mempool.space/zh/graphs/mining/block-fee-rates#3y
ESIP-4: The Ethscriptions Virtual Machine: https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machine
A comprehensive scan of the inscriptions industry: https://www.theblockbeats.info/news/47753?search=1
Litecoin block fee rates (1 year): https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y
The New Narrative of Inscription — Under the Support of Different EcosystemsAuthor: Kernel Ventures Stanley Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua TLDR: This article delves into the development trends of Bitcoin inscription and the characteristics of various protocols. Analyzing protocols on the Bitcoin chain such as Ordinals, BRC20, Atomical, RGB, Pipe, comparing them with other PoW chains like Dogechain and Litecoin, as well as Ethereum chains Ethscriptions and Evm.ink, and Solana chain's SPL20 protocol. The comparison includes aspects such as fees, divisibility, scalability, and user considerations, with particular emphasis on the low fees and high scalability of the RGB protocol.Examining market and product projection for the inscription ecosystem, highlighting the completeness of infrastructure on the wallet side, the launch of Bitcoin chain AMM DEX, and the potential for additional functionalities in the future, such as lending and derivatives. Unisat's open API interface opens the door to numerous tool projects. In conclusion, this article provides a comprehensive exploration of the dynamics in the field of Bitcoin inscription, offering insights into the future development of inscription empowered by the ecosystem, providing readers with a thorough understanding and outlook. Inscription Market Background Market Overview Since the introduction of the Bitcoin Ordinals protocol in January 2023, a wave of enthusiasm has swept through the Bitcoin chain with protocols like BRC20 and Ordinals assets, often referred to as the "world of retail investors." This is attributed to the Fair Launch model of scripts like BRC20, where chips are entirely minted by individual retail investors, devoid of institutions, project teams, or insider trading. The minting cost for Ordi is approximately $1 per inscription, but after its listing on the Gate.io exchange, the price surged to $20,000 per inscription. The staggering increase in value fueled the continued popularity of the BRC20 protocol, drawing in numerous Ordinals players and leading to a continuous spike in Gas fees on the Bitcoin chain. At its peak, the minimum confirmation Gas even reached 400 s/vb, surpassing the highest Gas levels in the past three years. Using this as a starting point, this article will delve into the exploration of the script ecosystem on various chains, discussing the current state of various protocols and anticipating the developmental trends of scripts under the empowerment of the ecosystem. Data Overview The 3-year Bitcoin block-fee-rate chart vividly illustrates sharp spikes in fees during May-June and November of this year. This surge reflects the fervor of users towards script protocols, not just limited to the BRC20 protocol. Various protocols developed on the Bitcoin network were introduced during this period, sparking a wave known as "Bitcoin Summer." Bitcoin rate in the past three years, image source: Mempool.space From the casting data of Inscriptions, it is evident that the casting quantity has stabilized, consistently maintaining high levels. Ordinals inscription casting quantity, image source: Dune @dgtl_asserts Track analysis This article will categorize various chains and analyze the script protocols on each of them. Bitcoin Chain Ordinals / BRC20 On January 21, 2023, Bitcoin developer Casey Rodarmor introduced the Ordinals protocol, allowing metadata to be inscribed on the Bitcoin chain and assigned a script number. 
In March of the same year, Twitter user @domodata released the BRC20 protocol, evolving token minting into on-chain strings. On November 7, Binance listed the BRC20 flagship token $ORDI, triggering a significant surge with a nearly 100% daily increase. As the first protocol in the inscription ecosystem, Ordinals has encountered several issues: BRC20 supports only four-letter tokens, imposing significant limitations.The casting names are susceptible to Sybil attacks, making casting transactions prone to frontrunning.The Ordinals protocol results in substantial redundant data on the Bitcoin network. For example, after the BRC20 token minted out, the original inscriptions will become invalid once token transactions are sent. This causes significant data occupation, a reason why some early Bitcoin enthusiasts are reluctant to support Ordinals. Atomicals The Atomical protocol's ARC20 utilizes one satoshi to represent the deployed token and eliminates the four-character restriction, allowing for more diverse gameplay. A unique project within this framework is the "Realm", where each registered entity is a prefix text and ultimately holds pricing rights for all suffixes. In terms of basic functionality, the Realm can be used as a transfer and receipt address (payment name), and also it has various use cases such as building communities/DAOs, identity verification, social profiles, aligning seamlessly with our envisioned development of DID. However, both ARC20 and $ATOM are still in the very early stages, and further development is required, including improvements in wallets and markets. Realm casting quantity, image source: Dune @sankin Pipe Casey, the founder of Ordinals, proposed a specific inscription implementation called Rune designed for issuing FT (fungible tokens). This method allows the direct insertion of token data into the UTXO script, encompassing the token's ID, output, and quantity. Rune's implementation is very similar to ARC20, handing over token transfers directly to the BTC mainnet. The distinction lies in Rune including the token quantity in the script data. While Rune's concept is still in the ideation stage, the founder of #Trac developed the first functional protocol based on this idea, issuing PIPE tokens. Leveraging Casey's high profile, PIPE quickly gained momentum, capitalizing on the speculative fervor inherited from BRC20. Rune's legitimacy is relatively stronger compared to BRC20, but gaining acceptance within the BTC community remains challenging. RGB Lightning Network Capacity, Image Source: Mempool.space With the Ordinals protocol elevating the ecosystem of the Bitcoin network, an increasing number of developers and projects are turning their attention to the Lightning Network due to its extremely low transaction fees and 40 million TPS (transactions per second). RGB is an intelligent contract system based on BTC and the Lightning Network, representing a more ultimate scaling solution. However, progress has been slow due to its complexity. RGB transforms the state of a smart contract into a concise proof, engraving this proof into the BTC UTXO output script. Users can verify this UTXO to inspect the state of the smart contract. When the smart contract state is updated, a new UTXO is created to store the proof of this state change. All smart contract data is entirely on the BTC chain, operated by dedicated RGB nodes that record the complete data of the smart contract and handle the computational workload of transactions. 
Users verify the deterministic changes in contract status by scanning the entire UTXO of the BTC chain. RGB can be viewed as BTC's Layer 2. This design leverages BTC's security to guarantee smart contracts. However, as the number of smart contracts increases, the demand for UTXO encapsulation data will also inevitably lead to significant redundancy in the BTC blockchain. Since 2018, RGB has remained in the development stage without speculative content. Tether's issuing company, Tether Limited, is a significant supporter of RGB, aiming to issue a large amount of USDT on the BTC RGB. In terms of products, the mainstream wallet currently in use is Bitmask, which supports Bitcoin and Lightning Network deposits, as well as assets of RGB-20 and RGB-21. Bitlight Labs is also developing the RGB network, with plans to build its own wallet system and write smart contracts for DEX (decentralized exchange). The project has acquired BitSwap (bitswap-bifi.github.io) and is preparing to integrate it into the RGB network. RGB's biggest advantages lie in its low transaction fees and extremely high scalability. There was a time when smart contract development on the Bitcoin network was difficult and received little attention. However, with the Ordinals protocol raising the ecosystem's popularity, more developers are experimenting with smart contracts on the RGB network. These smart contracts are written in the Rust language, incompatible with Ethereum, leading to a higher learning curve and requiring further evaluation in terms of technology. For more information on the technical aspects of the RGB protocol, Kernel Ventures’ previous articles have introduced it in detail. Article link: https://tokeninsight.com/en/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype Other POW Chain During the heyday of inscriptions on the Bitcoin chain, as other PoW chains share the same origin and are also based on the UTXO spending model, Ordinals has been migrated to some leading PoW public chains. In this article, we will analyze the examples of Dogechain and Litecoin, which have high market acceptance and development completeness. Dogechain: The Drc-20 protocol on the Dogecoin chain is based on Ordinals and functions similarly to the Bitcoin chain. However, due to its low transaction fees and strong meme appeal, it has gained popularity. Litecoin: Similarly, the Ltc-20 protocol on the Litecoin chain is based on Ordinals. This protocol has received retweets and attention from the Litecoin official team and its founder, Charlie Lee. It can be considered as having a "noble pedigree." The trading markets Unilit and Litescribe, along with the wallet Litescribe, show a relatively high level of development completeness. The first token, $Lite, is already listed on the Gate exchange. However, there were issues with the protocol before the index was introduced. After the index was launched, a bug causing increased issuance emerged, but it has since been fixed and is worth keeping an eye on. From the graph, it is evident that after the introduction of the LTC20 protocol, gas fees on the Litecoin chain surged. Image source: Twitter @SatoshiLite Litecoin rate in the past year, image source: Litecoinspace Ethereum Chain Ethscriptions As of now, the trading platform Etch on the Ethscriptions protocol has achieved a transaction volume of 10,500 ETH. The floor price of the first token, Eths, is $4,300. 
For those who stayed in from the beginning and did not exit, the initial investment cost on June 18th was less than 1U. Those who held on have now gained returns of over 6,000 times their initial investment. Eths transaction data, image source: ETCH Market Tom Lehman proposed a novel Ethereum scaling solution on August 8th. Employing a technology similar to Ordinals, leveraging Calldata expansion, this solution aims to achieve cost-effectiveness in Ethereum mainnet gas fees and enhance the dimensionality of ecosystem applications. At the core of Eths is the Ethscriptions Virtual Machine (ESC VM), which can be likened to the Ethereum Virtual Machine (EVM). The "Dumb Contracts" within the ESC VM enable Eths to break free from the limitations of inscriptions as NFT speculation, entering the realm of functionality and practicality. Eths has officially entered the competition in the base layer and L2 solutions arena. Dumb Contracts running logic, picture source: Ethscriptions ESIP-4 proposal "Eths represents another approach to Ethereum Layer 2. Unlike typical Layer 2 solutions that are separate chains and may have a backdoor, Eths conducts transactions on the Ethereum mainnet with gas fees as affordable as those on Layer 2. It enables various activities such as swapping, DeFi, and GameFi on the Eths platform. The key aspect is that it operates on the mainnet, making it secure and more decentralized than Layer 2," as excerpted from the Eths community. However, articulating this new Layer 2 narrative is challenging. Firstly, token splitting is still in the developmental stage, and current inscriptions are still non-fungible tokens (NFTs) that can not be split into fungible tokens (FTs). As of the latest information available, the FacetSwap (https://facetswap.com/) has introduced a splitting feature. However, it was noted that mainstream trading markets do not currently support split inscriptions. Users can wait for future adaptations. Currently, split inscriptions can be used for activities like swapping and adding liquidity on Factswap. All operations are resolved by a virtual address (non-existent address) 0x000...Face7. Users can embed messages in IDM and send the hexadecimal data of the message to the address ending with Face7 to perform operations like approve and transfer. As this is still in the early stages, its development trajectory will be observed in the future. Other EVM Chain Evm.ink Evm.ink has migrated the protocol standards of Ethscriptions to other EVM-compatible chains, enabling these chains to also mint inscriptions and build indexes for other EVM chains. Recently popular projects such as POLS and AVAL use Evm.ink, which is essentially Ethscriptions' standard, for index recognition. POLS casting data, image source: Dune @satsx AVAL casting data, image source: Dune @helium_1990 POLS and AVAL both have a total supply of 21 million inscriptions. POLS has over 80,000 holders, while AVAL has more than 23,000 holders. The minting progress for both is around 2-3 days. This indicates a significant interest from the community in low-cost Layer 2 (L2) inscriptions, as they offer a high return on investment. Due to the low cost, users from the long tail of BTC and ETH chains are participating, leading to overflow. This trend is not limited to just these two chains; other chains like Heco and Fantom have also experienced a surge in gas fees, all related to inscriptions. 
Number of daily transactions on the EVM chain, image source: Kernel Ventures Solana SPL20 Solana inscriptions commenced on November 17th at 4 AM and were completed by 8 AM, with a total supply of 21,000 inscriptions. Unlike other networks, the main body of the inscription is an NFT, and the Index Content is the actual inscription. NFTs can be created through any platform, and the index determines whether it is included based on the hash of the image or file. The second point is the embedded text; only inscriptions with matching hashes and embedded text are considered valid. Images are off-chain data, and text is on-chain data. Currently, major proxy platforms use IPFS, while others use AR. Solana inscriptions share a significant limitation with Eths – They can not be split. Without the ability to split, they essentially function as NFTs, lacking the liquidity and operational convenience equivalent to tokens, let alone the vision of future Dex Swaps. The protocol's founder is also the founder of TapPunk on the Tap protocol. The team behind the largest proxy platform, Liberplex (https://www.libreplex.io/), is very proactive. Since its launch, the team has made rapid progress in development, completing operations such as hash indexing and changing inscription attributes (immutability). They also conduct live coding sessions and Q&A sessions on their official Discord. The trading market Tensor (https://www.tensor.trade/) has also been successfully integrated, and the development progress is swift. The first inscription, $Sols, had a casting cost of approximately $5. In the secondary market, it reached a peak price of 14 SOL, with a floor price of 7.4 SOL, equivalent to $428. The daily trading volume exceeded 20,000 SOL, equivalent to about $1.2 million, with active turnover rates. Core comparison Comparison of core protocols Comparison of mainstream inscription protocols, Image source: Kernel Ventures This chart compares several major inscription protocols based on four dimensions: fees, divisibility, scalability, and user base. Fees: RGB protocol stands out with the optimal fee rate, leveraging the Lightning Network for virtually zero-cost transactions.Divisibility: Both Solana and recent EVM protocols lack the capability for divisibility, with expectations for future development in this aspect.Scalability: RGB protocol's smart contract functionality provides significant scalability. Solana's scalability is still under discussion, but the team and Solana Foundation express support, suggesting it may not be lacking in scalability.User Base: EVM chains, with their naturally low gas costs, attract a larger user base due to the lower trial-and-error cost for users. BRC20, being the first inscription token and ranking first in orthodoxy, has accumulated a substantial user base. Comparison of protocol token data Protocol Token Comparison, Image source: Kernel Ventures Analyzing the mainstream tokens from various protocols, it's evident that the current market capitalization of these tokens is around $600 million, excluding smaller-cap currencies. Additionally, Ordi constitutes 80% of the total market capitalization, indicating significant development opportunities for other protocols. Notably, protocols like RGB are still in the process of refinement and haven't issued tokens. In terms of the number of holders, Pols and Ordi dominate, while other protocols have fewer holders. 
Eths and Solana inscriptions have not been split, so a comprehensive analysis of holder distribution is pending further developments. Innovations and risk analysis Currently, the primary use of inscriptions is Fair Launch, allowing users to fairly access opportunities to participate in projects. However, the development of the inscription space is not limited to fair launches. Recent developments in the inscription space have shown significant dynamism and innovation. The growth of this sector is largely attributed to key technological advancements in Bitcoin, such as SegWit, Bech32 encoding, Taproot upgrade, and Schnorr signatures. These technologies not only enhance the transaction efficiency and scalability of the Bitcoin network but also increase its programmability. For instance, in the RGB protocol, smart contracts built on the Lightning Network of Bitcoin exhibit not only extremely high transactions per second (40 million) but also benefit from being part of the largest blockchain ecosystem, Bitcoin. Regarding risks, caution is advised, particularly with some Launchpads. For example, the recent case of Rug project Ordstater, with the success of MUBI and TURT, has led to a proliferation of Launchpads. Some platforms may execute a Rug Pull directly after the Initial DEX Offering (IDO). Prior to engaging in any project, it is crucial to thoroughly read the whitepaper, research the background, and avoid blindly following KOLs due to FOMO. Future deduction of inscription ecology Market Deduction Galaxy Research and Mining predicts that by 2025, the market value of the Ordinals market will reach $5 billion, with the number of inscriptions at that time estimated to be only 260,000. Currently, the number of inscriptions has already reached 33 million, a growth of 126 times in just six months. The market capitalization of $Ordi has reached $400 million, and $Sats has reached $300 million. This suggests that the predictions for the entire inscription market were significantly underestimated. Product Deduction Currently, BRC20 trading activities are primarily concentrated on OKX and Unisat. The Web3 wallet promoted by OKX this year provides a favorable experience for trading BRC20 assets. The completeness of wallet-side infrastructure further smoothens and shortens the entry path for "retail investors," allowing them to smoothly enter this new market. With the emergence of various protocols, different protocols have introduced their own trading markets and wallets, such as Atomicals, Dogechain, Litecoin, and more. However, the wallets currently available in the market are all modifications of Unisat, built upon the open-source foundation of Unisat. Comparing Bitcoin (POW) with Ethereum, one can analogize various protocols to different chains, with the fundamental difference lying in the Chain ID. Therefore, future products might involve Unisat integrating different protocols, allowing users to switch between protocols within the wallet as needed, similar to the chain-switching functionality in wallets like Metamask. Comparison of wallets across protocols, Image source: Kernel Ventures Track deduction With funds continuously flowing into the inscription market, users are no longer satisfied with meme-driven speculation and are shifting their focus towards applications built on inscriptions. Unisat has brought innovation to BRC20 by introducing BRC20-Swap, allowing users to easily exchange BRC20 tokens similar to AMM DEX. 
As the first product enhancing liquidity in the Ordinals ecosystem, Unisat is poised to unlock the potential of the Bitcoin DeFi ecosystem, potentially leading to the development of additional features such as lending and derivatives. Recently, Unisat has also opened API interfaces, which is user-friendly for small developers, enabling them to call various functions, such as automated batch order scanning and monitoring inscriptions for automatic minting. This can give rise to numerous utility projects. While transaction fees on the Bitcoin network are relatively high, for layer2s' like Stacks and RIF, even though fees are lower, they lack a user base and sufficient infrastructure. This makes Bitcoin's EVM a compelling narrative. For example, BEVM is a project based on the Ethereum network, providing a Bitcoin ecosystem Layer2 with on-chain native tokens being BTC. Users can use the official cross-chain bridge to move Bitcoin from the mainnet to BEVM. The EVM compatibility of BEVM makes it easy to build applications on EVM chains, with low entry barriers for DeFi, swap, and more to migrate from other chains. However, there are several issues to consider with Bitcoin's EVM. Questions include whether the assets crossing over can maintain decentralization and immutability, the consensus problem of EVM chain nodes, and how to synchronize transactions to the Bitcoin network (or decentralized storage). Since the threshold for Ethereum layer 2 is relatively low, security may be compromised, making it a primary concern for anyone interested in Bitcoin EVM at the moment. Image source: BEVM Bridge Summary This article delves into the development trends in the Bitcoin inscription domain and the characteristics of various protocols. By analyzing protocols such as Ordinals (BRC20), Atomical, RGB, Pipe, and others on the Bitcoin chain, as well as comparing them with other Pow chains, Ethereum's Ethscriptions and Evm.ink, and Solana's SPL20 protocol, the differences in terms of fees, divisibility, scalability, and user aspects are explored. In the context of the inscription market, starting with the Ordinals protocol, a wave of inscription protocols like BRC20 has been referred to as the "world of retail investors." The analysis includes an overview of data such as Bitcoin block fees and the number of inscriptions forged by Ordinals, providing insights into the development trends in the inscription ecosystem. In the analysis of the racecourse, the core elements of mainstream inscription protocols, such as fees, divisibility, scalability, and user numbers, are compared to showcase their similarities and differences. Finally, through a comparison of protocol token data and core protocol comparisons, a comprehensive analysis of market value and user distribution for various mainstream protocols is provided. The conclusion emphasizes innovation points and risk analysis, highlighting the vitality and innovation within the inscription domain. Looking ahead, the inscription domain is expected to witness continuous technological innovation, driving the practical application of more complex functionalities. The market's robust development is anticipated to maintain steady growth, providing more opportunities for investors and participants. Meanwhile, it is expected that more creative projects and protocols will emerge, further enriching the inscription ecosystems of Bitcoin and other public chains. Miners' earnings may also increase as the inscription domain offers them new income opportunities. 
Reference link Bitcoin block-fee-rates (3 year):https://mempool.space/zh/graphs/mining/block-fee-rates#3yESIP-4: The Ethscriptions Virtual Machine:https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machineA comprehensive scan of the inscriptions industry:https://www.theblockbeats.info/news/47753?search=1Litecoin block-fee-rates (1 year):https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y

The New Narrative of Inscription — Under the Support of Different Ecosystems

Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
This article delves into the development trends of Bitcoin inscription and the characteristics of various protocols.
Analyzing protocols on the Bitcoin chain such as Ordinals, BRC20, Atomical, RGB, Pipe, comparing them with other PoW chains like Dogechain and Litecoin, as well as Ethereum chains Ethscriptions and Evm.ink, and Solana chain's SPL20 protocol. The comparison includes aspects such as fees, divisibility, scalability, and user considerations, with particular emphasis on the low fees and high scalability of the RGB protocol.Examining market and product projection for the inscription ecosystem, highlighting the completeness of infrastructure on the wallet side, the launch of Bitcoin chain AMM DEX, and the potential for additional functionalities in the future, such as lending and derivatives. Unisat's open API interface opens the door to numerous tool projects.
In conclusion, this article provides a comprehensive exploration of the dynamics in the field of Bitcoin inscription, offering insights into the future development of inscription empowered by the ecosystem, providing readers with a thorough understanding and outlook.
Inscription Market Background
Market Overview
Since the introduction of the Bitcoin Ordinals protocol in January 2023, a wave of enthusiasm has swept through the Bitcoin chain with protocols like BRC20 and Ordinals assets, often referred to as the "world of retail investors." This is attributed to the Fair Launch model of scripts like BRC20, where chips are entirely minted by individual retail investors, devoid of institutions, project teams, or insider trading. The minting cost for Ordi is approximately $1 per inscription, but after its listing on the Gate.io exchange, the price surged to $20,000 per inscription. The staggering increase in value fueled the continued popularity of the BRC20 protocol, drawing in numerous Ordinals players and leading to a continuous spike in Gas fees on the Bitcoin chain. At its peak, the minimum confirmation Gas even reached 400 s/vb, surpassing the highest Gas levels in the past three years.
Using this as a starting point, this article will delve into the exploration of the script ecosystem on various chains, discussing the current state of various protocols and anticipating the developmental trends of scripts under the empowerment of the ecosystem.
Data Overview
The 3-year Bitcoin block-fee-rate chart vividly illustrates sharp spikes in fees during May-June and November of this year. This surge reflects the fervor of users towards script protocols, not just limited to the BRC20 protocol. Various protocols developed on the Bitcoin network were introduced during this period, sparking a wave known as "Bitcoin Summer."

Bitcoin rate in the past three years, image source: Mempool.space
From the casting data of Inscriptions, it is evident that the casting quantity has stabilized, consistently maintaining high levels.

Ordinals inscription casting quantity, image source: Dune @dgtl_asserts
Track analysis
This article will categorize various chains and analyze the script protocols on each of them.
Bitcoin Chain
Ordinals / BRC20
On January 21, 2023, Bitcoin developer Casey Rodarmor introduced the Ordinals protocol, allowing metadata to be inscribed on the Bitcoin chain and assigned a script number. In March of the same year, Twitter user @domodata released the BRC20 protocol, evolving token minting into on-chain strings. On November 7, Binance listed the BRC20 flagship token $ORDI, triggering a significant surge with a nearly 100% daily increase.
As the first protocol in the inscription ecosystem, Ordinals has encountered several issues:
BRC20 supports only four-letter tokens, imposing significant limitations.The casting names are susceptible to Sybil attacks, making casting transactions prone to frontrunning.The Ordinals protocol results in substantial redundant data on the Bitcoin network.
For example, after the BRC20 token minted out, the original inscriptions will become invalid once token transactions are sent. This causes significant data occupation, a reason why some early Bitcoin enthusiasts are reluctant to support Ordinals.
Atomicals
The Atomical protocol's ARC20 utilizes one satoshi to represent the deployed token and eliminates the four-character restriction, allowing for more diverse gameplay. A unique project within this framework is the "Realm", where each registered entity is a prefix text and ultimately holds pricing rights for all suffixes. In terms of basic functionality, the Realm can be used as a transfer and receipt address (payment name), and also it has various use cases such as building communities/DAOs, identity verification, social profiles, aligning seamlessly with our envisioned development of DID.

However, both ARC20 and $ATOM are still in the very early stages, and further development is required, including improvements in wallets and markets.

Realm casting quantity, image source: Dune @sankin
Pipe
Casey, the founder of Ordinals, proposed a specific inscription implementation called Rune designed for issuing FT (fungible tokens). This method allows the direct insertion of token data into the UTXO script, encompassing the token's ID, output, and quantity. Rune's implementation is very similar to ARC20, handing over token transfers directly to the BTC mainnet. The distinction lies in Rune including the token quantity in the script data.
While Rune's concept is still in the ideation stage, the founder of #Trac developed the first functional protocol based on this idea, issuing PIPE tokens. Leveraging Casey's high profile, PIPE quickly gained momentum, capitalizing on the speculative fervor inherited from BRC20. Rune's legitimacy is relatively stronger compared to BRC20, but gaining acceptance within the BTC community remains challenging.
RGB

Lightning Network Capacity, Image Source: Mempool.space
With the Ordinals protocol elevating the ecosystem of the Bitcoin network, an increasing number of developers and projects are turning their attention to the Lightning Network due to its extremely low transaction fees and 40 million TPS (transactions per second).
RGB is an intelligent contract system based on BTC and the Lightning Network, representing a more ultimate scaling solution. However, progress has been slow due to its complexity. RGB transforms the state of a smart contract into a concise proof, engraving this proof into the BTC UTXO output script. Users can verify this UTXO to inspect the state of the smart contract. When the smart contract state is updated, a new UTXO is created to store the proof of this state change.
All smart contract data is entirely on the BTC chain, operated by dedicated RGB nodes that record the complete data of the smart contract and handle the computational workload of transactions. Users verify the deterministic changes in contract status by scanning the entire UTXO of the BTC chain.
RGB can be viewed as BTC's Layer 2. This design leverages BTC's security to guarantee smart contracts. However, as the number of smart contracts increases, the demand for UTXO encapsulation data will also inevitably lead to significant redundancy in the BTC blockchain.
Since 2018, RGB has remained in the development stage without speculative content. Tether's issuing company, Tether Limited, is a significant supporter of RGB, aiming to issue a large amount of USDT on the BTC RGB.
In terms of products, the mainstream wallet currently in use is Bitmask, which supports Bitcoin and Lightning Network deposits, as well as assets of RGB-20 and RGB-21. Bitlight Labs is also developing the RGB network, with plans to build its own wallet system and write smart contracts for DEX (decentralized exchange). The project has acquired BitSwap (bitswap-bifi.github.io) and is preparing to integrate it into the RGB network.
RGB's biggest advantages lie in its low transaction fees and extremely high scalability. There was a time when smart contract development on the Bitcoin network was difficult and received little attention. However, with the Ordinals protocol raising the ecosystem's popularity, more developers are experimenting with smart contracts on the RGB network. These smart contracts are written in the Rust language, incompatible with Ethereum, leading to a higher learning curve and requiring further evaluation in terms of technology.
For more information on the technical aspects of the RGB protocol, Kernel Ventures’ previous articles have introduced it in detail. Article link: https://tokeninsight.com/en/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype
Other POW Chain
During the heyday of inscriptions on the Bitcoin chain, as other PoW chains share the same origin and are also based on the UTXO spending model, Ordinals has been migrated to some leading PoW public chains. In this article, we will analyze the examples of Dogechain and Litecoin, which have high market acceptance and development completeness.
Dogechain:
The Drc-20 protocol on the Dogecoin chain is based on Ordinals and functions similarly to the Bitcoin chain. However, due to its low transaction fees and strong meme appeal, it has gained popularity.
Litecoin:
Similarly, the Ltc-20 protocol on the Litecoin chain is based on Ordinals. This protocol has received retweets and attention from the Litecoin official team and its founder, Charlie Lee. It can be considered as having a "noble pedigree." The trading markets Unilit and Litescribe, along with the wallet Litescribe, show a relatively high level of development completeness. The first token, $Lite, is already listed on the Gate exchange.
However, there were issues with the protocol before the index was introduced. After the index was launched, a bug causing increased issuance emerged, but it has since been fixed and is worth keeping an eye on. From the graph, it is evident that after the introduction of the LTC20 protocol, gas fees on the Litecoin chain surged.

Image source: Twitter @SatoshiLite

Litecoin rate in the past year, image source: Litecoinspace
Ethereum Chain
Ethscriptions
As of now, the trading platform Etch on the Ethscriptions protocol has achieved a transaction volume of 10,500 ETH. The floor price of the first token, Eths, is $4,300. For those who stayed in from the beginning and did not exit, the initial investment cost on June 18th was less than 1U. Those who held on have now gained returns of over 6,000 times their initial investment.

Eths transaction data, image source: ETCH Market
Tom Lehman proposed a novel Ethereum scaling solution on August 8th. Employing a technology similar to Ordinals, leveraging Calldata expansion, this solution aims to achieve cost-effectiveness in Ethereum mainnet gas fees and enhance the dimensionality of ecosystem applications.
At the core of Eths is the Ethscriptions Virtual Machine (ESC VM), which can be likened to the Ethereum Virtual Machine (EVM). The "Dumb Contracts" within the ESC VM enable Eths to break free from the limitations of inscriptions as NFT speculation, entering the realm of functionality and practicality. Eths has officially entered the competition in the base layer and L2 solutions arena.

Dumb Contracts running logic, picture source: Ethscriptions ESIP-4 proposal
"Eths represents another approach to Ethereum Layer 2. Unlike typical Layer 2 solutions that are separate chains and may have a backdoor, Eths conducts transactions on the Ethereum mainnet with gas fees as affordable as those on Layer 2. It enables various activities such as swapping, DeFi, and GameFi on the Eths platform. The key aspect is that it operates on the mainnet, making it secure and more decentralized than Layer 2," as excerpted from the Eths community.
However, articulating this new Layer 2 narrative is challenging. Firstly, token splitting is still in the developmental stage, and current inscriptions are still non-fungible tokens (NFTs) that can not be split into fungible tokens (FTs).
As of the latest information available, the FacetSwap (https://facetswap.com/) has introduced a splitting feature. However, it was noted that mainstream trading markets do not currently support split inscriptions. Users can wait for future adaptations. Currently, split inscriptions can be used for activities like swapping and adding liquidity on Factswap. All operations are resolved by a virtual address (non-existent address) 0x000...Face7. Users can embed messages in IDM and send the hexadecimal data of the message to the address ending with Face7 to perform operations like approve and transfer. As this is still in the early stages, its development trajectory will be observed in the future.
Other EVM Chain
Evm.ink
Evm.ink has migrated the protocol standards of Ethscriptions to other EVM-compatible chains, enabling these chains to also mint inscriptions and build indexes for other EVM chains. Recently popular projects such as POLS and AVAL use Evm.ink, which is essentially Ethscriptions' standard, for index recognition.

POLS casting data, image source: Dune @satsx

AVAL casting data, image source: Dune @helium_1990
POLS and AVAL both have a total supply of 21 million inscriptions. POLS has over 80,000 holders, while AVAL has more than 23,000 holders. The minting progress for both is around 2-3 days. This indicates a significant interest from the community in low-cost Layer 2 (L2) inscriptions, as they offer a high return on investment. Due to the low cost, users from the long tail of BTC and ETH chains are participating, leading to overflow. This trend is not limited to just these two chains; other chains like Heco and Fantom have also experienced a surge in gas fees, all related to inscriptions.

Number of daily transactions on the EVM chain, image source: Kernel Ventures
Solana
SPL20
Solana inscriptions commenced on November 17th at 4 AM and were completed by 8 AM, with a total supply of 21,000 inscriptions. Unlike other networks, the main body of the inscription is an NFT, and the Index Content is the actual inscription. NFTs can be created through any platform, and the index determines whether it is included based on the hash of the image or file. The second point is the embedded text; only inscriptions with matching hashes and embedded text are considered valid. Images are off-chain data, and text is on-chain data. Currently, major proxy platforms use IPFS, while others use AR.
Solana inscriptions share a significant limitation with Eths – They can not be split. Without the ability to split, they essentially function as NFTs, lacking the liquidity and operational convenience equivalent to tokens, let alone the vision of future Dex Swaps.
The protocol's founder is also the founder of TapPunk on the Tap protocol. The team behind the largest proxy platform, Liberplex (https://www.libreplex.io/), is very proactive. Since its launch, the team has made rapid progress in development, completing operations such as hash indexing and changing inscription attributes (immutability). They also conduct live coding sessions and Q&A sessions on their official Discord. The trading market Tensor (https://www.tensor.trade/) has also been successfully integrated, and the development progress is swift.
The first inscription, $Sols, had a casting cost of approximately $5. In the secondary market, it reached a peak price of 14 SOL, with a floor price of 7.4 SOL, equivalent to $428. The daily trading volume exceeded 20,000 SOL, equivalent to about $1.2 million, with active turnover rates.
Core comparison
Comparison of core protocols

Comparison of mainstream inscription protocols, Image source: Kernel Ventures
This chart compares several major inscription protocols based on four dimensions: fees, divisibility, scalability, and user base.
Fees: RGB protocol stands out with the optimal fee rate, leveraging the Lightning Network for virtually zero-cost transactions.Divisibility: Both Solana and recent EVM protocols lack the capability for divisibility, with expectations for future development in this aspect.Scalability: RGB protocol's smart contract functionality provides significant scalability. Solana's scalability is still under discussion, but the team and Solana Foundation express support, suggesting it may not be lacking in scalability.User Base: EVM chains, with their naturally low gas costs, attract a larger user base due to the lower trial-and-error cost for users. BRC20, being the first inscription token and ranking first in orthodoxy, has accumulated a substantial user base.
Comparison of protocol token data

Protocol Token Comparison, Image source: Kernel Ventures
Analyzing the mainstream tokens from various protocols, it's evident that the current market capitalization of these tokens is around $600 million, excluding smaller-cap currencies. Additionally, Ordi constitutes 80% of the total market capitalization, indicating significant development opportunities for other protocols. Notably, protocols like RGB are still in the process of refinement and haven't issued tokens.
In terms of the number of holders, Pols and Ordi dominate, while other protocols have fewer holders. Eths and Solana inscriptions have not been split, so a comprehensive analysis of holder distribution is pending further developments.
Innovations and risk analysis
Currently, the primary use of inscriptions is the Fair Launch, which gives users fair access to participation in projects. However, the development of the inscription space is not limited to fair launches.
Recent developments in the inscription space have shown significant dynamism and innovation. The growth of this sector is largely attributed to key technological advancements in Bitcoin, such as SegWit, Bech32 encoding, Taproot upgrade, and Schnorr signatures. These technologies not only enhance the transaction efficiency and scalability of the Bitcoin network but also increase its programmability.
For instance, smart contracts in the RGB protocol, built on Bitcoin's Lightning Network, not only claim extremely high throughput (40 million transactions per second) but also benefit from belonging to the largest blockchain ecosystem, Bitcoin.
Regarding risks, caution is advised, particularly with Launchpads. Following the success of MUBI and TURT, Launchpads have proliferated, and some platforms, like the recent rug project Ordstater, may execute a rug pull directly after the Initial DEX Offering (IDO). Before engaging with any project, it is crucial to read the whitepaper thoroughly, research the team's background, and avoid blindly following KOLs out of FOMO.
Future Deduction of the Inscription Ecosystem
Market Deduction
Galaxy Research and Mining predicted that by 2025 the market value of the Ordinals market would reach $5 billion, estimating the number of inscriptions at that time at only 260,000. The number of inscriptions has already reached 33 million, 126 times that estimate, in just six months; the market capitalization of $Ordi has reached $400 million and that of $Sats $300 million. This suggests that predictions for the overall inscription market were significantly underestimated.
Product Deduction
Currently, BRC20 trading activity is primarily concentrated on OKX and Unisat. The Web3 wallet OKX has promoted this year provides a favorable experience for trading BRC20 assets, and the maturing wallet-side infrastructure smooths and shortens the entry path for retail investors into this new market. As various protocols have emerged, each has introduced its own trading market and wallet, for Atomicals, Dogechain, Litecoin, and more. However, the wallets currently on the market are all modifications built on Unisat's open-source codebase.
Comparing Bitcoin (PoW) with Ethereum, the various protocols can be analogized to different chains, with the fundamental difference lying in the Chain ID. Future products might therefore involve Unisat integrating different protocols, allowing users to switch between protocols within the wallet as needed, similar to the chain-switching functionality in wallets like MetaMask; a rough sketch of this idea follows the figure below.

Comparison of wallets across protocols, Image source: Kernel Ventures
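To make the analogy concrete, a multi-protocol wallet could treat each inscription protocol the way an EVM wallet treats a Chain ID, roughly as below. This is a hypothetical sketch; the protocol IDs and indexer endpoints are made up, not any wallet's real configuration:

```python
# Hypothetical protocol registry for a Unisat-style multi-protocol wallet,
# mirroring chain switching in EVM wallets. IDs and endpoints are made up.
PROTOCOLS = {
    "brc20":     {"indexer": "https://indexer.example/brc20"},
    "atomicals": {"indexer": "https://indexer.example/atomicals"},
    "spl20":     {"indexer": "https://indexer.example/spl20"},
}

class Wallet:
    def __init__(self, protocol_id: str = "brc20"):
        self.switch_protocol(protocol_id)

    def switch_protocol(self, protocol_id: str) -> None:
        if protocol_id not in PROTOCOLS:        # like an unknown Chain ID
            raise ValueError(f"unknown protocol: {protocol_id}")
        self.protocol_id = protocol_id
        self.indexer = PROTOCOLS[protocol_id]["indexer"]

w = Wallet()
w.switch_protocol("atomicals")   # analogous to switching chains in MetaMask
print(w.indexer)
```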
Track Deduction
With funds continuously flowing into the inscription market, users are no longer satisfied with meme-driven speculation and are shifting their focus to applications built on inscriptions. Unisat has brought innovation to BRC20 by introducing BRC20-Swap, which lets users exchange BRC20 tokens much as on an AMM DEX. As the first product to enhance liquidity in the Ordinals ecosystem, Unisat is poised to unlock the potential of the Bitcoin DeFi ecosystem, potentially leading to further features such as lending and derivatives. Unisat has also opened API interfaces, which is friendly to small developers, enabling functions such as automated batch order scanning and monitoring inscriptions for automatic minting; this can give rise to numerous utility projects.
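For intuition, BRC20-Swap-style exchanges follow standard constant-product AMM mechanics. The sketch below is a generic x*y=k pool in miniature, with an assumed 0.3% fee and made-up reserves; it illustrates the mechanism rather than BRC20-Swap's actual parameters:

```python
# Minimal constant-product AMM sketch (x * y = k), illustrating the swap
# mechanics a BRC20-Swap-style product exposes for BRC20 pairs.
# The 0.3% fee and the reserves are illustrative assumptions.

class Pool:
    def __init__(self, reserve_a: float, reserve_b: float, fee: float = 0.003):
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b
        self.fee = fee

    def quote_a_for_b(self, amount_a: float) -> float:
        """Amount of token B received for amount_a of token A."""
        amount_in = amount_a * (1 - self.fee)      # fee stays in the pool
        k = self.reserve_a * self.reserve_b        # invariant before the swap
        new_reserve_a = self.reserve_a + amount_in
        return self.reserve_b - k / new_reserve_a  # keep x * y = k

    def swap_a_for_b(self, amount_a: float) -> float:
        amount_out = self.quote_a_for_b(amount_a)
        self.reserve_a += amount_a
        self.reserve_b -= amount_out
        return amount_out

pool = Pool(reserve_a=100_000, reserve_b=50)       # hypothetical BRC20 pair
print(pool.swap_a_for_b(1_000))                    # ~0.49 units of token B
```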
While transaction fees on the Bitcoin network are relatively high, Layer2s like Stacks and RIF, despite lower fees, lack a user base and sufficient infrastructure. This makes a Bitcoin EVM a compelling narrative. For example, BEVM is an EVM-based project providing a Bitcoin-ecosystem Layer2 whose on-chain native token is BTC. Users can use the official cross-chain bridge to move Bitcoin from the mainnet to BEVM, and BEVM's EVM compatibility makes it easy to build the applications found on EVM chains, with low entry barriers for DeFi, swaps, and more to migrate from other chains.
However, a Bitcoin EVM raises several issues: whether the bridged assets can maintain decentralization and immutability, the consensus problem of the EVM chain's nodes, and how to synchronize transactions back to the Bitcoin network (or to decentralized storage). Since the threshold for launching an Ethereum-style Layer2 is relatively low, security may be compromised, making it the primary concern for anyone interested in a Bitcoin EVM at the moment.

Image source: BEVM Bridge
Summary
This article delves into development trends in the Bitcoin inscription domain and the characteristics of the various protocols. By analyzing protocols on the Bitcoin chain such as Ordinals (BRC20), Atomicals, RGB, and Pipe, and comparing them with other PoW chains, with Ethereum's Ethscriptions and Evm.ink, and with Solana's SPL20 protocol, the differences in fees, divisibility, scalability, and user base are explored.
In the context of the inscription market, starting with the Ordinals protocol, the wave of inscription protocols such as BRC20 has been called the "world of retail investors." The analysis reviews data such as Bitcoin block fees and the number of inscriptions minted via Ordinals, providing insight into the development trends of the inscription ecosystem.
In the sector analysis, the core elements of mainstream inscription protocols, namely fees, divisibility, scalability, and user numbers, are compared to show their similarities and differences. Finally, through comparisons of protocol token data and of the core protocols, a comprehensive analysis of market capitalization and holder distribution across mainstream protocols is provided. The conclusion emphasizes innovation and risk, highlighting the vitality and inventiveness of the inscription domain.
Looking ahead, the inscription domain is expected to see continuous technological innovation driving more complex practical functionality. The market is anticipated to maintain steady growth, providing more opportunities for investors and participants, while more creative projects and protocols emerge to further enrich the inscription ecosystems of Bitcoin and other public chains. Miners' earnings may also increase as the inscription domain offers them new sources of income.
Reference links
Bitcoin block fee rates (3 years): https://mempool.space/zh/graphs/mining/block-fee-rates#3y
ESIP-4: The Ethscriptions Virtual Machine: https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machine
A comprehensive scan of the inscriptions industry: https://www.theblockbeats.info/news/47753?search=1
Litecoin block fee rates (1 year): https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y
Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design
Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
In the early stage of blockchain, maintaining data consistency across all nodes was considered essential to security and decentralization. However, as the blockchain ecosystem develops, storage pressure keeps rising, pushing node operation towards centralization. Layer1 therefore urgently needs to solve the storage cost problem brought by TPS growth.
Faced with this problem, developers should propose a solution that fully accounts for security, storage cost, data reading speed, and DA layer versatility.
In the process of solving this problem, many new technologies and ideas have emerged, including sharding, DAS, Verkle Trees, and DA middleware components. They attempt to optimize the DA layer's storage scheme by reducing data redundancy and improving data validation efficiency.
By data storage location, DA solutions are broadly categorized into main-chain DAs and third-party DAs. Main-chain DAs reduce node storage pressure through regular data cleansing and sharded data storage, while third-party DAs are purpose-built for storage services and offer reasonable solutions for large amounts of data. Among third-party DAs, the main trade-off is between single-chain compatibility and multi-chain compatibility, leading to three kinds of solutions: main-chain-specific DAs, modular DAs, and storage-public-chain DAs.
Payment-type public chains have very high requirements for historical data security and are thus suited to using the main chain as the DA layer. For public chains that have been running for a long time with many miners operating the network, it is more suitable to adopt a third-party DA that leaves the consensus layer untouched while retaining relatively high security. For comprehensive public chains, a main-chain-dedicated DA with larger data capacity, lower cost, and security is more suitable, though considering cross-chain demand, modular DA is also a good option.
Overall, blockchain is moving towards reduced data redundancy and a multi-chain division of labor.
1. Background
Blockchain, as a distributed ledger, keeps a copy of the historical data on every node to ensure that data storage is secure and sufficiently decentralized. Since the correctness of each state change depends on the previous state (the source of the transaction), a blockchain must store the entire transaction history, from the first transaction to the current one, to guarantee transaction correctness. Taking Ethereum as an example, even at an average block size of 20 KB, its total data has reached 370 GB. A full node must also record the state and transaction receipts in addition to the blocks themselves; counting these, a single node's total storage has exceeded 1 TB, which gradually centralizes node operation.

Source: Etherscan
The recent Cancun upgrade of Ethereum aims to raise Ethereum's TPS to near 1,000, at which point Ethereum's annual storage growth will exceed its entire current storage. On high-performance public chains, transaction speeds of tens of thousands of TPS could add hundreds of GB of data per day. Full data redundancy across all network nodes obviously cannot cope with such storage pressure, so Layer1 must find a suitable balance between TPS growth and node storage cost.
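As a back-of-envelope check on these orders of magnitude (assuming an average transaction size of roughly 100 bytes, an illustrative figure rather than a measured one):

```python
# Back-of-envelope storage growth at sustained TPS.
# The 100-byte average transaction size is an illustrative assumption.
def yearly_growth_gb(tps: int, avg_tx_bytes: int = 100) -> float:
    seconds_per_year = 365 * 24 * 3600
    return tps * avg_tx_bytes * seconds_per_year / 1e9

print(yearly_growth_gb(1_000))    # ~3150 GB/year, far above today's 370 GB total
print(yearly_growth_gb(50_000))   # ~157,000 GB/year, i.e. hundreds of GB per day
```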
2. Performance Indicators of DA
2.1 Safety
Compared with a database or a linked list, blockchain's immutability comes from the fact that newly generated data can be verified against historical data, so ensuring the security of historical data is the first issue to consider in DA layer storage. To judge the data security of a blockchain system, we usually analyze the amount of data redundancy and the method used to check data availability.
Amount of redundancy: Redundancy in a blockchain system plays several roles. First, more redundancy in the network gives a verifier more reference samples when checking account state, helping a node identify the data recorded by the majority of nodes as the more trustworthy version. In a traditional database, data is stored as key-value pairs on a single node, so tampering with history requires changing only that one node, making attacks cheap; in principle, the more redundancy there is, the more trustworthy the data. Moreover, the more nodes hold copies, the less likely the data is to be lost, which can be compared to the centralized servers hosting Web2 games: once the backend servers all shut down, the service closes completely. Redundancy is not simply "more is better," however, because every extra copy consumes storage space and adds pressure on the system; a good DA layer should choose a suitable redundancy scheme that balances security and storage efficiency.
Data availability checking: Redundancy ensures enough copies of the data exist in the network, but the data to be used must still be checked for accuracy and completeness. Current blockchains commonly use cryptographic commitments as the verification method: a small commitment derived from the transaction data is kept on record by the whole network. To test the authenticity of historical data, one attempts to recover the commitment from the data; if the recovered commitment matches the original, verification passes. Commonly used commitment algorithms are the Merkle root and the Verkle root. A high-security data availability check can verify historical data quickly with as little third-party data as possible.
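As a concrete illustration of the commitment-based check just described, here is a minimal Merkle-root sketch (an illustration of the principle, not any particular chain's exact leaf or padding format):

```python
# Minimal sketch of a Merkle-root commitment and the availability check
# described above: the network keeps only the 32-byte root, and anyone
# holding the full data can re-derive and compare it.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)                                  # recorded on-chain

assert merkle_root(txs) == root                          # authentic data passes
assert merkle_root([b"tx1", b"bad", b"tx3", b"tx4"]) != root  # tampering fails
```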
2.2 Storage Cost
After ensuring basic security, the next goal of the DA layer is to reduce cost and increase efficiency, beginning with the storage cost: the memory consumed by storing a unit of data, hardware differences aside. Today, the main ways to reduce blockchain storage costs are sharding and reward-based storage, which cut the number of data backups while preserving security. These methods reveal a trade-off between storage cost and data security: reducing storage occupancy usually means reduced security, so an excellent DA layer must balance the two. In addition, if the DA layer is a separate public chain, it must also minimize the intermediate steps of data exchange, because each transit leaves index data behind for later retrieval; the longer the calling path, the more index data accumulates and the higher the storage cost. Finally, storage cost is directly linked to data persistence: in general, the higher the storage cost, the harder it is for a public chain to store data persistently.
2.3 Data Reading Speed
Having achieved cost reduction, the next step is efficiency, i.e., the ability to quickly retrieve data from the DA layer when needed. The process involves two steps. The first is locating the nodes that store the data; for a public chain that has not achieved network-wide data consistency this search takes time, whereas on chains with full node synchronization it can be ignored. Second, in the mainstream blockchain systems at this stage, including Bitcoin, Ethereum, and Filecoin, nodes store data in the LevelDB database. In LevelDB, data is kept in three forms. Freshly written data goes into Memtable files until the Memtable is full, at which point the file type changes from Memtable to Immutable Memtable; both types reside in memory, but the Immutable Memtable is read-only. The hot storage used in the IPFS network keeps data in this in-memory portion so it can be read quickly when called, but an average node has only GBs of memory, which is easily exhausted, and when a node goes down, the in-memory data is lost permanently. For persistent storage, data must be written as SST files to a solid-state drive (SSD), but reading them requires loading the data into memory first, which greatly slows data indexing. Finally, in a system with sharded storage, restoring data requires sending requests to multiple nodes and reassembling the blocks, which also slows reading.

Source: Leveldb-handbook
2.4 DA Layer Generalization
With the development of DeFi and the various problems of CEXs, users' demand for cross-chain trading of decentralized assets keeps growing. Whether the cross-chain mechanism is hash-locking, a notary, or a relay chain, it cannot avoid simultaneously confirming historical data on both chains. The crux of the problem is the separation of data across the two chains: different decentralized systems cannot communicate directly. A solution is therefore proposed by changing the storage method of the DA layer: store the historical data of multiple public chains on the same trusted public chain, so that verification only needs to call data on that one chain. This requires the DA layer to establish secure communication with different types of public chains, i.e., to have good versatility.
3. Techniques Concerning DA
3.1 Sharding
In a traditional distributed system, a file is not stored in its complete form on a single node; instead, the original data is divided into multiple blocks, which are stored across the nodes. A block is usually not stored on one node only, but leaves appropriate backups on others; in existing mainstream distributed systems, the number of backups is typically set to 2. This sharding mechanism reduces the storage pressure on individual nodes, expands the system's total capacity to the sum of the nodes' storage, and ensures storage security through appropriate data redundancy. The sharding scheme adopted in blockchain is broadly similar, but differs in some details. First, since nodes in a blockchain are untrusted by default, sharding requires a number of backups large enough to support later judgments of data authenticity, so the backup count must be much higher than 2: ideally, in a blockchain storage system with T validator nodes and N shards, the number of backups per shard should be T/N. Second, the assignment of blocks differs. A traditional distributed system with few nodes often maps one node to multiple data blocks: the data is first mapped onto a hash ring by consistent hashing, then each node stores the range of block numbers assigned to it, and it is acceptable for a node to have no storage task in a given round. On a blockchain, by contrast, storing a block is no longer a random but an inevitable event for each node: every node selects a block to store by hashing the data mixed with its own node information and taking the result modulo the shard count. Assuming the data is divided into N blocks, each node actually stores only 1/N of it; by setting N appropriately, one can balance TPS growth against node storage pressure. A sketch of this selection rule follows the figure below.

Source: Kernel Ventures
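A minimal sketch of that deterministic shard-selection rule (the node IDs, hash choice, and shard count are illustrative):

```python
# Sketch of the shard-selection rule described above: each node hashes the
# data together with its own identity and takes the result modulo the shard
# count, so every node deterministically stores exactly one of N shards.
import hashlib

def assigned_shard(node_id: str, data_root: bytes, num_shards: int) -> int:
    digest = hashlib.sha256(node_id.encode() + data_root).digest()
    return int.from_bytes(digest, "big") % num_shards

# With T nodes and N shards, each shard ends up with ~T/N backups.
data_root = hashlib.sha256(b"block 18000000").digest()
N, nodes = 4, [f"node-{i}" for i in range(1000)]
counts = [0] * N
for node in nodes:
    counts[assigned_shard(node, data_root, N)] += 1
print(counts)   # roughly 250 backups per shard (T/N = 1000/4)
```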
3.2 DAS (Data Availability Sampling)
DAS technology further optimizes sharding-based storage. In the sharding process, because nodes store blocks through simple random selection, some block may end up stored by no node at all; in addition, for sharded data, confirming authenticity and integrity during restoration is essential. In DAS, these two problems are solved by erasure codes and the KZG polynomial commitment.
Erasure code: Given the large number of validator nodes in Ethereum, the event that a block is stored by no node, while possible, has only a small probability. To mitigate this risk of missing storage, instead of slicing the raw data directly into blocks, the scheme maps the raw data to the coefficients of a degree-n polynomial, takes 2n points on the polynomial, and lets nodes randomly choose among them to store. Since a degree-n polynomial can be reconstructed from any n+1 points, only half of the blocks need to survive for the original data to be recovered. Erasure coding thus improves both the security of data storage and the network's ability to recover data.
KZG polynomial commitment: A crucial aspect of data storage is verifying authenticity. In networks without erasure coding, various methods can be used; but once erasure coding is introduced, the KZG polynomial commitment is more appropriate, since it verifies the contents of a single block directly in polynomial form, eliminating the need to reduce the polynomial back to binary data. The overall verification is similar to a Merkle tree, but it requires no specific path node data: the KZG root and the block data alone suffice to verify a block's authenticity.
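The polynomial trick above can be shown in a few lines. The sketch below encodes three data chunks as the coefficients of a degree-2 polynomial over a small prime field and rebuilds lost shares by Lagrange interpolation; the field size and share counts are illustrative, and production systems use far larger fields and pair the code with commitments:

```python
# Sketch of polynomial erasure coding: n+1 data chunks become the
# coefficients of a degree-n polynomial, extra evaluations are handed out
# to nodes, and any n+1 surviving points rebuild every share.
P = 65537  # illustrative prime modulus

def encode(coeffs: list[int], num_points: int) -> list[tuple[int, int]]:
    """Evaluate the polynomial defined by coeffs at x = 1..num_points."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, num_points + 1)]

def interpolate_at(points: list[tuple[int, int]], x0: int) -> int:
    """Lagrange interpolation of the polynomial's value at x0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [104, 105, 33]              # 3 chunks -> degree-2 polynomial
shares = encode(data, 6)           # hand out 6 shares; any 3 suffice
survivors = shares[:3]             # pretend the other half were lost
for x, y in shares[3:]:            # every lost share is recoverable
    assert interpolate_at(survivors, x) == y
print("lost shares reconstructed from", survivors)
```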
3.3 Data Validation Method in DA
Data validation ensures that the data called from a node is accurate and complete. To minimize the amount of data and the computational cost of validation, the DA layer now uses tree structures as the mainstream validation method. The simplest form is the Merkle Tree, which records data as a complete binary tree: verification requires keeping only the Merkle root plus the hash values of the sibling subtrees along the node's path, giving a time complexity of O(logN) (logN defaults to log2(N)). Although this greatly simplifies validation, the amount of data required for verification still grows as the data increases. To address this, another validation method, the Verkle Tree, has been proposed: each node in a Verkle Tree stores not only its value but also an attached Vector Commitment, so data authenticity can be verified quickly using the original node's value and its commitment proof, without calling the values of sibling nodes. This makes the computation per verification depend only on the depth of the Verkle Tree, a fixed constant, greatly accelerating verification. However, computing the Vector Commitment requires the participation of all sibling nodes in the same layer, which greatly raises the cost of writing and changing data. For data such as historical data, which is permanently stored, tamper-proof, and read-only, the Verkle Tree is extremely suitable. In addition, both Merkle and Verkle Trees have K-ary variants with similar mechanisms, simply changing the number of subtrees under each node; a performance comparison is given in the table below, followed by a rough proof-size calculation.

Source: Verkle Trees
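To put rough numbers on those complexity claims (the 32-byte hash and 48-byte commitment sizes are illustrative constants, not figures from the cited paper):

```python
# Rough proof-size comparison for N leaves. A binary Merkle proof needs one
# sibling hash per level; a K-ary Merkle proof needs K-1 siblings per level;
# a Verkle proof needs roughly one commitment per level and no siblings.
import math

N = 2**20
HASH, COMM = 32, 48          # illustrative hash / commitment sizes in bytes
K = 256

merkle_proof = math.log2(N) * HASH
kary_merkle  = (K - 1) * math.log(N, K) * HASH
verkle_proof = math.log(N, K) * COMM

print(merkle_proof, kary_merkle, verkle_proof)   # 640, 20400, 120 bytes
```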
3.4 Generic DA Middleware
The continuous expansion of the blockchain ecosystem has brought an increasing number of public chains. Because each public chain has its own advantages and irreplaceability in its field, it is impossible for Layer1 public chains to become unified in a short time. However, with the development of DeFi and the problems of CEXs, users' demand for decentralized cross-chain trading of assets keeps growing, so DA-layer multi-chain data storage, which can eliminate the security problems of cross-chain data interaction, has drawn more and more attention. To accept historical data from different public chains, the DA layer needs decentralized protocols for the standardized storage and validation of data flows. For example, kvye, a storage middleware based on Arweave, actively crawls data from the main chains and stores data from all chains on Arweave in a standardized form, minimizing differences in the data transmission process. By comparison, a Layer2 that provides DA-layer storage specifically for one public chain interacts with it through internal shared nodes; although this reduces interaction costs and improves security, it is more limited and can serve only that specific chain.
4. Storage Methods of DA
4.1 Main Chain DA
4.1.1 DankSharding-like
There is no settled name for this type of storage scheme; the most prominent example is DankSharding on Ethereum, so this paper uses "DankSharding-like" to refer to the class. Such schemes mainly use the two DA storage techniques introduced above, sharding and DAS: the data is first divided into a suitable number of shards, and each node then extracts one data block to store in DAS fashion. If there are enough nodes in the network, a larger shard count N can be chosen, reducing each node's storage pressure to 1/N of the original and expanding overall storage space N-fold. To guard against the extreme case of a block being stored by no node, DankSharding encodes the data with an erasure code, so that the full data can be restored from only half of it. Finally, the data is verified with a Verkle Tree structure and polynomial commitments for fast checking.
4.1.2 Temporary Storage
For main-chain DA, one of the simplest ways to handle data is to store historical data only for a short period. Essentially, the blockchain acts as a public ledger whose content changes in full view of the entire network, and there is no need for permanent storage. In Solana's case, for example, although its historical data is synchronized to Arweave, mainnet nodes retain only the last two days of transaction data. On a public chain based on account records, the historical data at each moment preserves the final state of accounts on the blockchain, which is sufficient as a basis for verifying the changes of the next moment. Those with special needs for older data can store it on other decentralized public chains or entrust it to a trusted third party; in other words, those with additional data needs pay for historical data storage.
4.2 Third Party DA
4.2.1 DA for Main Chain: EthStorage
DA for the main chain: What matters most for the DA layer is the security of data transmission, and the most secure DA is the main chain itself; but main-chain storage is limited by storage space and resource competition, so when the network's data volume grows fast, a third-party DA is the better choice for long-term data storage. The higher a third-party DA's compatibility with the main network, the more nodes can be shared and the more secure the data interaction, so under the premise of security, a main-chain-dedicated DA has a huge advantage. Taking Ethereum as an example, a basic requirement for a main-chain-dedicated DA is EVM compatibility, ensuring interoperability with Ethereum data and contracts. Representative projects include Topia and EthStorage; among them, EthStorage is the most developed in terms of compatibility, since besides EVM compatibility it provides interfaces for Remix, Hardhat, and other Ethereum development tools.
EthStorage: EthStorage is a public chain independent of Ethereum, but the nodes running it are a superset of Ethereum nodes, meaning a node running EthStorage can also run Ethereum at the same time, and EthStorage can be operated directly through opcodes on Ethereum. EthStorage's storage model keeps only a small amount of metadata on the Ethereum mainnet for indexing, essentially creating a decentralized database for Ethereum. In the current design, EthStorage deploys an EthStorage Contract on the Ethereum mainnet to mediate the interaction between Ethereum and EthStorage. To store data, Ethereum calls the contract's put() function with two bytes parameters, key and data, where data is the payload to store and key is its identity on the Ethereum network (comparable to a CID in IPFS). Once the (key, data) pair is successfully stored on the EthStorage network, EthStorage returns a kvIdx to the Ethereum mainnet, corresponding to the data's storage address on EthStorage; the original problem of storing a large amount of data is thus reduced to storing a single (key, kvIdx) pair, greatly lowering the mainnet's storage cost. To call previously stored data, the get() function is used with the key parameter, performing a fast lookup of the data on EthStorage via the kvIdx stored on Ethereum. A hedged sketch of this put()/get() flow follows the figure below.

Source: Kernel Ventures
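A hedged sketch of that flow from a client's perspective, using web3.py; the contract address and the two-function ABI fragment are placeholders reconstructed from the description above, not EthStorage's published deployment:

```python
# Hedged sketch of the put()/get() interaction described above.
# Address and ABI are placeholders; the node is assumed to have an
# unlocked default account for the transact() call.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
abi = [
    {"name": "put", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "key", "type": "bytes32"},
                {"name": "data", "type": "bytes"}], "outputs": []},
    {"name": "get", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "key", "type": "bytes32"}],
     "outputs": [{"name": "data", "type": "bytes"}]},
]
storage = w3.eth.contract(address="0x" + "00" * 20, abi=abi)  # placeholder

key = Web3.keccak(text="my-large-blob")     # the data's identity on Ethereum
blob = b"..." * 1000                        # payload kept off the mainnet

# The mainnet only ends up holding the small (key, kvIdx) pair;
# the blob itself lives on the EthStorage network.
tx_hash = storage.functions.put(key, blob).transact()
data = storage.functions.get(key).call()
```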
In terms of how nodes store data, EthStorage borrows from the Arweave model. First, the large number of (key, value) pairs from Ethereum are sharded, with each shard containing a fixed number of pairs and a size limit on each pair, ensuring fairness of workload when rewarding miners for storage. To issue rewards, the network must first verify that a node actually stores the data. In this process, EthStorage divides each shard (TB-sized) into many chunks and keeps a Merkle root on the Ethereum mainnet for verification. A miner must then provide a nonce which, mixed with the hash of the previous block on EthStorage, randomly selects a few chunks; by supplying the data of those chunks, the miner proves that it stores the whole shard. The nonce cannot be chosen arbitrarily, otherwise a node would pick a nonce corresponding only to the chunks it happens to store and pass verification; instead, the nonce must make the hash of the mixed, generated chunks meet the network's difficulty requirement, and only the first node to submit a valid nonce and random-access proof receives the reward.
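The random-access proof can be sketched as follows; the chunk count per proof and the difficulty target are illustrative, not EthStorage's real parameters:

```python
# Sketch of a random-access storage proof: a nonce plus the previous block
# hash select chunks, and the miner must hash real chunk data toward a
# difficulty target, so only nodes storing the shard can compete.
import hashlib

CHUNKS_PER_PROOF = 4
DIFFICULTY_BITS = 16                        # illustrative target

def selected_chunks(prev_hash: bytes, nonce: int, total: int) -> list[int]:
    seed = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big")).digest()
    return [int.from_bytes(hashlib.sha256(seed + bytes([i])).digest(), "big") % total
            for i in range(CHUNKS_PER_PROOF)]

def try_prove(prev_hash: bytes, nonce: int, shard: list[bytes]) -> bool:
    # A miner can only answer if it actually holds the selected chunks.
    mix = b"".join(shard[i] for i in selected_chunks(prev_hash, nonce, len(shard)))
    digest = hashlib.sha256(mix + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

shard = [f"chunk-{i}".encode() for i in range(1024)]     # toy shard
prev_hash = hashlib.sha256(b"prev block").digest()
nonce = next(n for n in range(10**7) if try_prove(prev_hash, n, shard))
print("winning nonce:", nonce)
```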
4.2.2 Modular DA: Celestia
Blockchain modules: The work performed by a Layer1 public chain can be divided into four parts: (1) designing the underlying logic of the network, selecting validator nodes in some way, writing blocks, and allocating rewards to network maintainers; (2) packaging and processing transactions and publishing them; (3) validating the transactions to be uploaded to the chain and determining the final state; (4) storing and maintaining the blockchain's historical data. By function, we can accordingly divide the blockchain into four modules: the consensus layer, the execution layer, the settlement layer, and the data availability (DA) layer.
Modular blockchain design: For a long time these four modules were integrated in a single public chain, a so-called monolithic blockchain. This form is more stable and easier to maintain, but it puts tremendous pressure on the single chain. In practice, the four modules constrain one another, competing for the chain's limited computational and storage resources: raising the execution layer's processing speed adds storage pressure to the DA layer, while securing the execution layer demands a more complex verification mechanism that slows transaction processing. The development of a public chain therefore faces constant trade-offs among the four modules. To break through this bottleneck in public chain performance, developers proposed modular blockchains: the core idea is to strip out one or more of the four modules and hand them to a separate public chain. That chain can then focus on transaction speed or storage capacity alone, breaking the short-board effect that previously capped the blockchain's overall performance.
Modular DA: Separating the DA layer from blockchain business and placing it on a separate public chain is considered a viable solution for Layer1's growing historical data. Exploration here is still at an early stage, and the most representative project is Celestia. It uses sharded storage, dividing the data into multiple blocks with each node extracting a part for storage, and uses KZG polynomial commitments to verify data integrity. At the same time, Celestia uses advanced two-dimensional RS erasure codes, rewriting the original data as a k*k matrix so that only 25% of it is needed for recovery (this figure is sanity-checked after the figure below). However, sharded storage essentially just divides the network-wide storage pressure by a constant factor: node storage still grows linearly with data volume. As Layer1 keeps improving transaction speed, node storage pressure may one day reach an unacceptable threshold. To address this, an IPLD component is introduced in Celestia: instead of storing the k*k matrix directly on Celestia, the data is stored in the IPFS network, with only the data's CID kept on the node. When a user requests a piece of historical data, the node sends the corresponding CID to the IPLD component, which uses it to call the original data on IPFS. If the data exists on IPFS, it is returned via the IPLD component and the node; if not, it cannot be returned.

Source: Celestia Core
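That 25% figure follows from a simple counting argument (a sanity check, not Celestia's implementation): erasure-coding a k*k data matrix along both its rows and its columns yields a 2k*2k extended matrix, of which the original data is exactly one quarter:

```python
# 2D RS extension: a k*k data matrix extended row-wise and column-wise
# becomes 2k*2k, so the original data is k^2 / (2k)^2 = 25% of the shares.
k = 64
print((k * k) / (2 * k) ** 2)   # 0.25
```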
Celestia: Taking Celestia as an example of modular blockchain applied to Ethereum's storage problem: a Rollup node sends its packaged and verified transaction data to Celestia, which stores the data without interpreting it, and the Rollup node pays TIA tokens to Celestia as a storage fee according to the storage space used. The storage on Celestia uses DAS and erasure codes similar to EIP-4844, but upgrades EIP-4844's polynomial erasure code to a two-dimensional RS erasure code, raising storage security again: only 25% of the fragments are needed to recover the full transaction data. In essence, Celestia is a PoS public chain with low storage costs, and if it is to serve as a solution to Ethereum's historical data storage problem, many other specific modules are needed to work with it. For example, in terms of rollups, one model highly recommended on Celestia's official website is the Sovereign Rollup. Unlike a common Layer2 rollup, which only computes and verifies transactions, i.e., only completes the execution layer, a Sovereign Rollup includes the entire execution and settlement process, minimizing transaction processing on Celestia itself; given that Celestia's overall security is weaker than Ethereum's, this maximizes the security of the overall transaction process. As for the security of the data Celestia serves to the Ethereum mainnet, the most mainstream solution is the Quantum Gravity Bridge smart contract: for data stored on Celestia, a Merkle root (data availability certificate) is generated and kept in the Quantum Gravity Bridge contract on the Ethereum mainnet. Each time Ethereum calls historical data from Celestia, it compares the hash result against that Merkle root, and only if they match is the data indeed the authentic history.
4.2.3 Storage Chain DA
In terms of technical principles, main-chain DAs borrow many sharding-like techniques from storage public chains, and some third-party DAs even delegate part of the storage task directly to storage public chains; for example, Celestia places specific transaction data on the IPFS network. Among third-party DA solutions, besides building a separate public chain to solve Layer1's storage problem, a more direct way is to connect a storage public chain to Layer1 and let it hold Layer1's huge historical data. For high-performance blockchains, the volume of historical data is even larger: at full speed, the data volume of the high-performance public chain Solana approaches 4 PB, completely beyond the storage range of ordinary nodes. Solana's chosen solution is to store historical data on the decentralized storage network Arweave, retaining only 2 days of data on mainnet nodes for verification. To secure the storage process, Solana and Arweave designed a storage bridge protocol, Solar Bridge, which synchronizes validated data from Solana nodes to Arweave and returns the corresponding tag, allowing a Solana node to view the blockchain's historical data from any point in time. On Arweave, rather than requiring network-wide data consistency as a prerequisite for participation, the network adopts a reward-based storage approach. Arweave does not build blocks as a traditional chain but more like a graph: a new block points not only to the previous block but also to a randomly chosen earlier block, the recall block, whose exact position is determined by the hash of the previous block and its block height and is unknown until the previous block has been mined. Generating a new block requires the data of the recall block, since the PoW mechanism hashes it toward a specified difficulty, and only the first miner to compute a difficulty-compliant hash is rewarded, which encourages miners to store as much historical data as possible. Moreover, the fewer nodes storing a particular historical block, the fewer competitors a node has when generating a difficulty-compliant nonce, encouraging miners to store blocks with fewer backups in the network. Finally, to ensure that nodes store data permanently, Arweave introduces WildFire, a node scoring mechanism: nodes prefer to communicate with peers that provide historical data more and faster, so lower-rated nodes cannot obtain the latest blocks and transaction data first, losing the head start in the PoW competition. A sketch of the recall-block rule follows the figure below.

Source: Arweave Yellow-Paper
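A minimal sketch of that recall-block incentive (the hash choice and data layout are illustrative, not the yellow paper's exact construction):

```python
# Sketch of Arweave-style recall-block selection: the recall index is derived
# from the previous block's hash and height, so a miner cannot know in
# advance which historical block the next proof will demand.
import hashlib

def recall_index(prev_block_hash: bytes, height: int) -> int:
    seed = hashlib.sha256(prev_block_hash + height.to_bytes(8, "big")).digest()
    return int.from_bytes(seed, "big") % height   # one of blocks 0..height-1

def can_mine(prev_block_hash: bytes, height: int, stored: dict[int, bytes]) -> bool:
    # Only miners holding the recall block's data can join the PoW race.
    return recall_index(prev_block_hash, height) in stored

stored = {i: f"block-{i}".encode() for i in range(0, 1000, 3)}  # partial history
prev = hashlib.sha256(b"tip").digest()
print(can_mine(prev, 1000, stored))
```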
5. Synthesized Comparison
We will compare the advantages and disadvantages of each of the five storage solutions in terms of the four dimensions of DA performance metrics.
Safety: The biggest sources of data security problems are loss during transmission and malicious tampering by dishonest nodes, and the cross-chain process suffers most in transmission security, owing to the independence of the two public chains and their unshared state. Moreover, a Layer1 that needs a specialized DA layer at this stage usually has a strong consensus group, so its security far exceeds that of an ordinary storage public chain; the main-chain DA solution therefore has higher security. After securing data transmission, the next step is securing the data that is called. Considering only the short-term historical data used to verify transactions: in a temporary-storage network, the same data is backed up by the whole network, whereas in a DankSharding-like scheme the average number of backups is only 1/N of the network's node count. More data redundancy makes the data less prone to loss and provides more reference samples for verification, so temporary storage offers higher data security. Among third-party DA schemes, those sharing public nodes with the main chain can transmit data directly through these relay nodes during cross-chain operation, and thus also enjoy relatively higher security than other third-party DA schemes.
Storage cost: The factor with the greatest impact on storage cost is the amount of data redundancy. The short-term storage scheme of main-chain DA uses network-wide node synchronization, so any newly stored data is backed up by every node, giving the highest storage cost; this in turn means that, in a high-TPS network, this approach suits only temporary storage. Next come the sharded storage methods, both sharding on the main chain and sharding in third-party DAs; since the main chain usually has more nodes and therefore more backups per block, main-chain sharding costs more. The lowest storage cost belongs to storage-public-chain DAs using reward-based storage, where data redundancy fluctuates around a fixed constant; these also introduce a dynamic adjustment mechanism that attracts nodes to store under-replicated data by raising the reward, ensuring data security.
Data read speed: Data read speed is primarily affected by where the data sits in the storage medium, the data index path, and the distribution of the data among nodes. Of these, where the data sits in a node has the larger impact, since keeping data in memory versus on SSD can mean a tens-of-times difference in read speed. Storage-public-chain DAs mostly use SSD storage because their load includes not only DA-layer data but also memory-hungry personal data such as user-uploaded videos and images; without SSDs, the network could hardly carry the huge storage pressure or meet the demand for long-term storage.
For third-party DAs and main-chain DAs holding data in memory state, a third-party DA must first look up the corresponding index data on the main chain, then transfer it across chains and return the data via the storage bridge, whereas a main-chain DA can query data directly from its nodes and thus retrieves data faster. Finally, within main-chain DA, the sharding approach must call blocks from multiple nodes and restore the original data, making it slower than unsharded short-term storage.
DA layer universality: Main-chain DA universality is close to zero, since a chain whose own storage space is insufficient cannot take on historical data from other public chains. Among third-party DAs, a solution's generality and its compatibility with a particular main chain are conflicting metrics: a main-chain-specific DA makes many adaptations at the level of node types and network consensus for its particular chain, and these adaptations become major obstacles when communicating with other public chains. Within third-party DAs, storage-public-chain DAs outperform modular DAs in generality: they have larger developer communities and more expansion facilities to adapt to different public chains, and they can obtain data actively by crawling rather than passively receiving what other chains transmit. They can therefore encode data in their own way, standardize the data flow for storage, conveniently manage data from different main chains, and improve storage efficiency.

Source: Kernel Ventures
6. Conclusion
Blockchain is undergoing the conversion from Crypto to Web3, which brings an abundance of on-chain projects but also data storage problems. To let so many projects run simultaneously on Layer1 while preserving the experience of GameFi and SocialFi projects, Layer1s represented by Ethereum have adopted Rollups and Blobs to improve TPS, and the number of high-performance blockchains among new chains keeps growing. But higher TPS means not only higher performance but also greater storage pressure on the network. For the huge amount of historical data, multiple DA approaches, both main-chain and third-party based, have been proposed at this stage to adapt to the growth of on-chain storage pressure. Each improvement has its advantages and disadvantages and different applicability in different contexts. Payment-oriented blockchains, which impose very high requirements on historical data security without pursuing particularly high TPS, and which are still in the preparatory stage, can adopt a DankSharding-like storage method, realizing a huge increase in storage capacity while ensuring security. However, for a public chain like Bitcoin, already formed and with a large number of nodes, rashly modifying the consensus layer carries enormous risk, so it can adopt a main-chain-dedicated DA, the most secure of the off-chain storage options, to balance security and storage. Note, moreover, that a blockchain's function changes over time. Early Ethereum, for instance, was limited to payments and simple automated handling of assets and transactions via smart contracts, but as the blockchain landscape expanded, various SocialFi and DeFi projects joined Ethereum and pushed it in a more comprehensive direction. With the recent explosion of the inscription ecosystem on Bitcoin, transaction fees on the Bitcoin network have surged nearly 20-fold since August, reflecting that the network's transaction speed cannot meet demand at this stage, and traders must raise fees to get transactions processed as quickly as possible. The Bitcoin community now faces a trade-off: accept high fees and slow transaction speeds, or reduce network security to increase transaction speeds and thereby defeat the original purpose of the payment system. If the community chooses the latter, the storage solution will have to be adjusted under growing data pressure.

Source: OKLINK
As for comprehensive public chains, their pursuit of TPS is higher; with the enormous growth of historical data, a DankSharding-like solution struggles to keep up with rapid TPS growth in the long run. A more appropriate way is to migrate the data to a third-party DA for storage. Among these, main-chain-specific DAs have the highest compatibility and may be preferable if only the storage of a single public chain is considered. But now that Layer1 public chains are blooming, cross-chain asset transfer and data interaction have become a common pursuit of the blockchain community. Considering the long-term development of the whole ecosystem, storing historical data from different public chains on one chain eliminates many security problems in data exchange and validation, so modular DAs and storage-public-chain DAs may be the better choice. With comparable generality, a modular DA focuses on providing blockchain DA-layer services, introduces more refined index data to manage historical data, and can categorize different public chains' data reasonably, giving it advantages over storage public chains. The above, however, does not consider the cost of adjusting the consensus layer of an existing public chain, which is extremely risky: a tiny systemic loophole could cost the chain its community consensus. So, as a transitional solution during blockchain transformation, temporary storage on the main chain may be more appropriate. Finally, all of the above assumes performance during actual operation; if a chain's goal is to develop its ecology and attract more project teams and participants, it may also favor projects supported and funded by its own foundation. For example, even if the overall performance were equal to or slightly below that of a storage-public-chain solution, the Ethereum community would still favor EthStorage, a Layer2 project supported by the Ethereum Foundation, to continue developing the Ethereum ecosystem.
All in all, the increasing complexity of today's blockchains brings a greater need for storage space. With enough Layer1 validation nodes, historical data need not be backed up by every node in the network; security can be ensured once the backup count passes a certain threshold. At the same time, the division of labor among public chains has grown more and more refined: Layer1 handles consensus and execution, Rollups handle computation and verification, and a separate blockchain is used for data storage, each part focusing on one function without being limited by the performance of the others. However, exactly how much storage, or what proportion of nodes should store historical data to balance security and efficiency, and how to ensure secure interoperability between different blockchains, remain problems for blockchain developers to consider. Investors can pay attention to main-chain-specific DA projects on Ethereum, since Ethereum already has enough supporters at this stage and does not need to borrow other communities' strength to expand its influence; improving and developing its own community to attract more projects into the Ethereum ecosystem matters more. For public chains trying to catch up, such as Solana and Aptos, a single chain lacks such a complete ecosystem, so they may prefer to join forces with other communities to build a large cross-chain ecosystem and expand their influence. For emerging Layer1s, therefore, general-purpose third-party DAs deserve more attention.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, and dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto, such as Account Abstraction, Data Availability, and Scalability. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
Reference
- Celestia: 模块化区块链的星辰大海 (Celestia: The Sea of Stars of Modular Blockchains): https://foresightnews.pro/article/detail/15497
- DHT usage and future work: https://github.com/celestiaorg/celestia-node/issues/11
- Celestia-core: https://github.com/celestiaorg/celestia-core
- Solana Labs: https://github.com/solana-labs/solana
- Announcing The SOLAR Bridge: https://medium.com/solana-labs/announcing-the-solar-bridge-c90718a49fa2
- leveldb-handbook: https://leveldb-handbook.readthedocs.io/zh/latest/sstable.html
- Kuszmaul, J. Verkle Trees: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf
- Arweave Network: https://www.arweave.org/
- Arweave Yellow Paper: https://www.arweave.org/yellow-paper.pdf

Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
- In the early stages of blockchain, maintaining data consistency was considered extremely important to ensure security and decentralization. However, as the blockchain ecosystem has developed, storage pressure has also increased, leading to a trend of centralization in node operation. The storage cost problem brought by TPS growth on Layer1 therefore urgently needs to be solved.
- Faced with this problem, developers should propose a solution that fully takes into account security, storage cost, data reading speed, and DA-layer versatility.
- In the process of solving this problem, many new technologies and ideas have emerged, including sharding, DAS, Verkle Trees, and DA middleware components. They try to optimize the storage scheme of the DA layer by reducing data redundancy and improving data validation efficiency.
- From the perspective of where data is stored, DA solutions are broadly categorized into two types: main-chain DAs and third-party DAs. Main-chain DAs reduce node storage pressure through regular data cleansing and sliced data storage, while third-party DAs are designed to serve storage needs and offer reasonable solutions for large amounts of data. Within third-party DAs, the main trade-off is between single-chain compatibility and multi-chain compatibility, and we propose three kinds of solutions: main chain-specific DAs, modularized DAs, and storage public-chain DAs.
- Payment-type public chains have very high requirements for historical data security and are thus suited to using the main chain itself as the DA layer. For chains that have been running for a long time with a large number of miners, it is more suitable to adopt a third-party DA with relatively high security that does not touch the consensus layer. For comprehensive public chains, a DA dedicated to the main chain, with larger capacity, lower cost, and adequate security, is more suitable; considering cross-chain demand, modular DA is also a good option.
- Overall, blockchain is moving toward reducing data redundancy as well as a multi-chain division of labor.
1. Background
Blockchain, as a distributed ledger, needs to keep a copy of the historical data on every node to ensure that data storage is secure and sufficiently decentralized. Since the correctness of each state change depends on the previous state (the source of the transaction), to guarantee the correctness of transactions, a blockchain should store the entire transaction history from the very first transaction to the current one. Taking Ethereum as an example, even assuming an average block size of 20 kB, the total size of Ethereum's current data has reached 370 GB. For a full node, which records state and transaction receipts in addition to the blocks themselves, the total storage of a single node has exceeded 1 TB, which gradually pushes node operation toward centralization.

Source: Etherscan
The recent Cancun upgrade of Ethereum aims to raise Ethereum's TPS to nearly 1,000, at which point Ethereum's annual storage growth would exceed the total of all its current storage. In high-performance public chains, transaction speeds of tens of thousands of TPS may add hundreds of GB of data per day. Full data redundancy across all nodes in the network obviously cannot sustain such storage pressure, so Layer1 must find a suitable solution to balance TPS growth against node storage cost.
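To make the scale concrete, here is a back-of-envelope sketch of per-node storage growth under full replication; the average transaction size of 250 bytes is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope estimate of per-node storage growth under full replication.
# AVG_TX_BYTES is an illustrative assumption, not a measured figure.
SECONDS_PER_DAY = 86_400
AVG_TX_BYTES = 250

def daily_growth_gb(tps: int) -> float:
    """GB appended per day when every node stores every transaction."""
    return tps * AVG_TX_BYTES * SECONDS_PER_DAY / 1e9

for tps in (15, 1_000, 50_000):
    gb_day = daily_growth_gb(tps)
    print(f"{tps:>6} TPS -> ~{gb_day:,.1f} GB/day, ~{gb_day * 365 / 1e3:,.2f} TB/year")
```

Under these assumptions, 1,000 TPS already adds roughly 20 GB per day, and tens of thousands of TPS adds hundreds of GB per day, consistent with the figures above.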
2. Performance Indicators of DA
2.1 Safety
Compared with a database or a linked list, a blockchain's immutability comes from the fact that newly generated data can be verified against historical data, so ensuring the security of historical data is the first issue to consider in DA-layer storage. To judge the data security of a blockchain system, we usually analyze the amount of data redundancy and the method used for data availability checking.
- Amount of redundancy: redundancy in a blockchain system plays several roles. First, more redundancy in the network provides more reference samples when a verifier needs to check account state, helping a node select the data recorded by the majority of nodes, which carries higher credibility. In traditional databases, data is stored as key-value pairs on a single node, so tampering with history only has to happen at that one node, and the cost of attack is low; in principle, the more redundancy there is, the more trustworthy the data, and the more nodes that hold the data, the less likely it is to be lost. Compare the centralized servers that host Web2 games: once the backend servers shut down, the service closes completely. Redundancy is not unconditionally better, however, because it consumes additional storage space and puts more pressure on the system. A good DA layer should choose a redundancy scheme that balances security and storage efficiency.
- Data availability checking: the amount of redundancy ensures that enough records of the data exist in the network, but the data to be used must still be checked for accuracy and completeness. Current blockchains commonly use cryptographic commitments for this: the whole network records only a small commitment derived from the transaction data. To test the authenticity of historical data, one recomputes the commitment from the data; if it matches the recorded commitment, verification passes. Commonly used commitment schemes are Merkle roots and Verkle roots. A high-security data availability checking algorithm can verify historical data quickly with as little third-party data as possible, as in the sketch after this list.
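A minimal sketch of this commitment-based check, using a simplified Merkle root; the hash choice and odd-leaf handling are assumptions for illustration, not any particular chain's rules:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx-a", b"tx-b", b"tx-c"]
commitment = merkle_root(txs)            # the small value the network records

# Verification: recompute the commitment from (possibly re-downloaded) data.
assert merkle_root([b"tx-a", b"tx-b", b"tx-c"]) == commitment
assert merkle_root([b"tx-a", b"tx-b", b"tx-X"]) != commitment  # tampering detected
```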
2.2 Storage Cost
After basic security is ensured, the next goal of the DA layer is to reduce cost and increase efficiency. The first step is reducing storage cost, that is, the memory consumed per unit of data stored, setting aside differences in hardware performance. At this stage, the main ways to reduce storage cost in blockchain are to adopt sharding technology and to use reward-based storage, which lower the number of data backups while preserving security. It is not hard to see from these methods, though, that there is a tension between storage cost and data security: reducing storage occupancy usually means a decrease in security, so an excellent DA layer needs to balance the two. In addition, if the DA layer is a separate public chain, it needs to reduce cost by minimizing the intermediate steps of data exchange, since every transit step leaves index data behind for later retrieval; the longer the calling path, the more index data accumulates and the higher the storage cost. Finally, the cost of storing data is directly linked to its persistence: in general, the higher the storage cost, the harder it is for a public chain to store data persistently.
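The redundancy side of this trade-off can be made concrete with a toy cost model; all parameters below (node count, shard count, backup target) are illustrative assumptions:

```python
# Toy cost model: network-wide storage consumed per GB of ledger data
# under different redundancy schemes (all numbers illustrative).
T = 10_000           # validation nodes in the network
N = 100              # shards, for the sharding scheme
REWARD_BACKUPS = 50  # target backups under reward-based storage

data_gb = 1.0
schemes = {
    "full replication": T * data_gb,              # every node keeps everything
    "sharding":         (T / N) * data_gb,        # each shard backed up ~T/N times
    "reward storage":   REWARD_BACKUPS * data_gb, # redundancy hovers near a constant
}
for name, total in schemes.items():
    print(f"{name:>16}: {total:>8.0f} GB network-wide per GB of data")
```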
2.3 Data Reading Speed
Having achieved cost reduction, the next step is efficiency, meaning the ability to quickly recall data from the DA layer when needed. This process involves two steps. The first is searching for the nodes that store the data; this mainly concerns public chains that have not achieved network-wide data consistency, and its time cost can be ignored if nodes are fully synchronized. Second, mainstream blockchain systems at this stage, including Bitcoin, Ethereum, and Filecoin, all use the LevelDB database for node storage. In LevelDB, data is stored in three forms. Freshly written data is stored in Memtable files until the Memtable is full, at which point the file type changes from Memtable to Immutable Memtable; both types live in memory, but Immutable Memtable files are read-only. The hot storage used in the IPFS network keeps data in this tier so it can be read quickly from memory when called, but an average node has only a few GB of free memory, which can easily become a bottleneck, and when a node goes down, data in memory is lost permanently. For persistent storage, data must be written as SST files to a solid-state disk (SSD), but reading then requires loading the data back into memory first, which greatly slows data indexing. Finally, in a system with storage sharding, data restoration requires sending requests to multiple nodes and reassembling the pieces, which also slows reading.

Source: Leveldb-handbook
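To make the read path concrete, here is a toy model of the Memtable / Immutable Memtable / SST hierarchy described above. It is a sketch of the general LSM idea only, not LevelDB's actual implementation; the class and method names are invented for illustration.

```python
class ToyLSMStore:
    """Toy model of the LSM read path: memory first, then disk."""

    def __init__(self, memtable_limit=4):
        self.memtable = {}      # mutable, lives in memory
        self.immutable = None   # frozen snapshot, read-only, in memory
        self.sst_files = []     # flushed to SSD (simulated here as dicts)
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            # Memtable is full: freeze it as an immutable memtable, then
            # (in a real system, asynchronously) flush it to an SST file.
            self.immutable = self.memtable
            self.memtable = {}
            self.sst_files.append(self.immutable)
            self.immutable = None

    def get(self, key):
        # Fast path: in-memory tables.
        if key in self.memtable:
            return self.memtable[key]
        if self.immutable and key in self.immutable:
            return self.immutable[key]
        # Slow path: SST files must be read back from disk into memory.
        for sst in reversed(self.sst_files):
            if key in sst:
                return sst[key]
        return None
```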
2.4 DA Layer Generalization
With the development of DeFi and the recurring problems of CEXs, users' demand for cross-chain trading of decentralized assets keeps growing. Whether the cross-chain mechanism is hash-locking, a notary, or a relay chain, it cannot avoid simultaneously confirming historical data on two chains. The crux of the problem is that the data on the two chains is separated: different decentralized systems cannot communicate directly. A solution has therefore been proposed that changes how the DA layer stores data: the historical data of multiple public chains is stored on one trusted public chain, and verification only needs to call data on that chain. This requires the DA layer to establish secure communication with different types of public chains; in other words, the DA layer needs good versatility.
3. Techniques Concerning DA
3.1 Sharding
In a traditional distributed system, a file is not stored in its complete form on one node; instead, the original data is divided into multiple blocks, each stored on a different node. A block is usually not stored on only one node but leaves an appropriate number of backups on other nodes; in mainstream distributed systems, this backup count is typically set to 2. This sharding mechanism reduces the storage pressure on individual nodes, expands the system's total capacity to the sum of the nodes' storage capacity, and still ensures storage security through moderate data redundancy. The sharding schemes adopted in blockchains are broadly similar to those of traditional distributed systems, but they differ in some details. First, since blockchain nodes are untrusted by default, sharding requires a sufficiently large number of data backups so that data authenticity can be judged later, so the backup count must be far more than 2: ideally, in a blockchain that adopts this storage scheme, if the total number of validating nodes is T and the number of shards is N, the number of backups should be T/N. Second, the storage process for a block differs. A traditional distributed system with few nodes often maps one node to multiple data blocks: the data is first mapped onto a hash ring by consistent hashing, each node stores the blocks whose numbers fall within its assigned range, and the system tolerates a node holding no storage task in a given round. On a blockchain, by contrast, storing a block is no longer optional but an obligation for every node. Each node randomly selects a block to store by hashing the data mixed with its own node information and taking the result modulo the number of shards (a sketch follows the figure below). Assuming each piece of data is divided into N blocks, the actual storage load of each node is only 1/N. By setting N appropriately, a balance can be struck between growing TPS and node storage pressure.

Source: Kernel Ventures
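A minimal sketch of that selection rule, assuming SHA-256 as the mixing hash (the actual hash function and the encoding of node information vary by chain):

```python
import hashlib

def assign_shard(node_id: str, data: bytes, num_shards: int) -> int:
    """Pick the shard a node must store by hashing the data together with
    the node's own identity and taking the result modulo the shard count."""
    digest = hashlib.sha256(node_id.encode() + data).digest()
    return int.from_bytes(digest, "big") % num_shards

# With T validator nodes and N shards, each shard ends up with roughly
# T/N backups, and each node stores only 1/N of the full data.
nodes = [f"node-{i}" for i in range(12)]
data = b"block payload"
print([assign_shard(n, data, 4) for n in nodes])
```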
3.2 DAS (Data Availability Sampling)
DAS technology is a further optimization of sharded storage. In sharding, because nodes choose blocks by simple random selection, some block may end up lost. Second, for sharded data, confirming authenticity and integrity during reconstruction is also critical. In DAS, these two problems are solved with erasure codes and KZG polynomial commitments.
Erasure code: Given the large number of validating nodes in Ethereum, the event that some block is stored by no node at all is a low-probability event, but it is still possible. To mitigate this threat of missing storage, instead of slicing the raw data directly into blocks, this scheme maps the raw data to the coefficients of a degree-n polynomial, then takes 2n points on the polynomial and lets nodes randomly choose among them to store. Only n+1 points are needed to reconstruct a degree-n polynomial, so only half of the blocks need to survive across the nodes for the original data to be recoverable. The erasure code thus improves both the security of data storage and the network's ability to recover data.
KZG polynomial commitment: A crucial part of data storage is verifying data authenticity. In networks without erasure coding, various methods can be used; but once erasure coding is introduced for security, the KZG polynomial commitment is the more appropriate choice, because it can verify the content of a single block directly in polynomial form, with no need to decode the polynomial back into binary data. The overall verification is similar to a Merkle tree, but it requires no specific path-node data: only the KZG root and the block data are needed to verify the block's authenticity.
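Below is a minimal sketch of the erasure-coding idea over a small prime field: the chunks become polynomial coefficients, twice as many evaluation points are stored as are needed, and any sufficient subset recovers the rest via Lagrange interpolation. Production systems use Reed-Solomon codes over different fields; the field, sizes, and function names here are illustrative only.

```python
P = 2**61 - 1  # a prime modulus; real systems use other fields

def encode(chunks, num_points):
    """Treat the chunks as polynomial coefficients and evaluate the
    polynomial at num_points points; any len(chunks) of them suffice
    to recover the original data."""
    def poly_eval(x):
        acc = 0
        for c in reversed(chunks):  # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [(x, poly_eval(x)) for x in range(1, num_points + 1)]

def recover(points):
    """Lagrange interpolation: rebuild the polynomial's value at any x
    from a sufficient set of surviving points, restoring lost shares."""
    def interpolate(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            # pow(den, P - 2, P) is the modular inverse (P is prime).
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total
    return interpolate

data = [7, 21, 42]        # n+1 = 3 original chunks
shares = encode(data, 6)  # store twice as many points as needed
survivors = shares[::2]   # any 3 of the 6 are enough
f = recover(survivors)
assert all(f(x) == y for x, y in shares)  # lost shares restored
```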
3.3 Data Validation Method in DA
Data validation ensures that the data called from a node is accurate and complete. To minimize the amount of data and the computational cost of validation, the DA layer now mainly uses tree structures as the validation method. The simplest form is the Merkle tree, which records data as a complete binary tree of hashes: keeping only the Merkle root plus the hashes of the sibling subtrees along the path, a node's data can be verified with time complexity on the order of O(logN) (logN here defaults to log2(N)); a sketch follows the table below. Although this greatly simplifies validation, the amount of data needed for validation still grows as the dataset grows. To solve this, another validation method, the Verkle tree, has been proposed: each node in a Verkle tree stores not only its value but also a vector commitment, so the authenticity of data can be verified quickly using the original node's value and its commitment proof, without calling the values of sibling nodes. This makes the number of computations per verification depend only on the depth of the Verkle tree, a fixed constant, greatly accelerating verification. However, computing a vector commitment requires the participation of all siblings in the same layer, which greatly increases the cost of writing and changing data. For data such as historical records, which are stored permanently, cannot be tampered with, and are only read and never written, the Verkle tree is extremely suitable. In addition, both Merkle and Verkle trees have K-ary variants with similar mechanisms, simply changing the number of subtrees under each node; a performance comparison can be seen in the table below.

Source: Verkle Trees
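As a concrete illustration of the O(logN) path verification described above, here is a minimal Merkle-proof checker. SHA-256 and the proof encoding are illustrative choices, not any particular chain's format:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def verify_merkle_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """Check a leaf against a Merkle root using only the sibling hashes
    along the path: O(log N) hashes instead of re-reading the dataset.
    proof is a list of (sibling_hash, sibling_is_on_the_left) pairs."""
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

# Toy tree over four leaves: root = H(H(H(a)+H(b)) + H(H(c)+H(d)))
a, b, c, d = b"a", b"b", b"c", b"d"
root = h(h(h(a) + h(b)) + h(h(c) + h(d)))
proof_for_c = [(h(d), False), (h(h(a) + h(b)), True)]
print(verify_merkle_proof(c, proof_for_c, root))  # True
```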
3.4 Generic DA Middleware
The continuous expansion of the blockchain ecosystem keeps increasing the number of public chains. Because each public chain has its own advantages and irreplaceable role in its field, it is unlikely that Layer1 chains will consolidate any time soon. However, with the development of DeFi and the problems of CEXs, users' demand for decentralized cross-chain asset trading keeps growing, so multi-chain data storage in the DA layer, which can eliminate the security problems of cross-chain data interaction, has drawn more and more attention. To accept historical data from different public chains, the DA layer needs decentralized protocols for standardized storage and validation of data flows. For example, kvye, a storage middleware based on Arweave, actively crawls data from the main chains and stores data from all chains on Arweave in a standardized form, minimizing differences in the transmission process. By comparison, a Layer2 that provides DA storage for one specific public chain interacts with it through internally shared nodes; although this lowers interaction costs and improves security, it is more limited and can serve only that particular chain.
4. Storage Methods of DA
4.1 Main Chain DA
4.1.1 DankSharding-like
There is no established name for this type of storage scheme, but its most prominent representative is DankSharding on Ethereum, so this paper uses the term DankSharding-like to refer to the category. Such schemes mainly use the two DA storage techniques described above, sharding and DAS. First the data is divided into an appropriate number of shares; then each node extracts one data block for storage in DAS fashion. If there are enough nodes in the network, a larger shard count N can be chosen, so that each node's storage pressure is only 1/N of the original, expanding overall storage capacity N-fold. At the same time, to prevent the extreme case in which some block is stored by no node at all, DankSharding encodes the data with an erasure code, so that only half of the data is needed for complete recovery. Finally, the data is validated using a Verkle tree structure with polynomial commitments for fast verification.
4.1.2 Temporary Storage
For main-chain DA, one of the simplest ways to handle data is to store historical data only for a short time. Essentially, the blockchain acts as a public ledger whose contents change in front of the whole network; permanent storage is not strictly necessary. In the case of Solana, for example, although its historical data is synchronized to Arweave, mainnet nodes retain only the last two days of transaction data. On a public chain based on account records, the historical data of each moment retains the final state of the accounts on the blockchain, which is enough to serve as the basis for verifying the changes of the next moment. Users with special needs for older data can store it on other decentralized public chains or hand it to a trusted third party; in other words, anyone with additional data needs pays for historical storage.
4.2 Third Party DA
4.2.1 DA for Main Chain: EthStorage
DA for Main Chain: The most important property of the DA layer is the security of data transmission, and the most secure DA is the main chain itself. But main-chain storage is limited by storage space and resource competition, so when the network's data volume grows quickly, a third-party DA is the better choice for long-term storage. The more compatible a third-party DA is with the main network, the more node sharing it can achieve, and the more secure the data interaction will be. Under the premise of security, therefore, a DA dedicated to the main chain has a huge advantage. Taking Ethereum as an example, a basic requirement for a main-chain-dedicated DA is EVM compatibility, to guarantee interoperability with Ethereum data and contracts. Representative projects include Topia, EthStorage, and so on. Among them, EthStorage is the most well-developed in terms of compatibility: beyond EVM compatibility, it also provides interfaces to Remix, Hardhat, and other Ethereum development tools.
EthStorage: EthStorage is a public chain independent of Ethereum, but the nodes that run it are a superset of Ethereum nodes, meaning a node running EthStorage can simultaneously run Ethereum, and EthStorage can be operated directly through opcodes on Ethereum. EthStorage's storage model keeps only a small amount of metadata on the Ethereum mainnet for indexing, essentially creating a decentralized database for Ethereum. In the current design, EthStorage deploys an EthStorage Contract on the Ethereum mainnet to handle interaction between the two chains. To deposit data, Ethereum calls the contract's put() function, whose parameters are two bytes-typed variables, key and data, where data is the content to be stored and key is its identity on the Ethereum network, comparable to a CID in IPFS. Once the (key, data) pair is successfully stored in the EthStorage network, EthStorage generates a kvldx and returns it to the Ethereum mainnet, where it corresponds to the key; this value records the storage address of the data on EthStorage, so the original problem of storing a large amount of data is reduced to storing a single (key, kvldx) pair, greatly lowering the mainnet's storage cost. To call previously stored data, the get() function of EthStorage is used with the key parameter, and the kvldx stored on Ethereum allows a quick lookup of the data on EthStorage. An illustrative sketch of this interface follows the figure below.

Source: Kernel Ventures
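As a rough illustration of that put()/get() interface, the sketch below shows what a call might look like from Python with web3.py. The RPC URL, contract address, and ABI fragment are placeholders invented for this example, not EthStorage's actual deployment; only the put/get flow mirrors the description above.

```python
from web3 import Web3

# Hypothetical values for illustration only; not EthStorage's real
# deployment address, endpoint, or full ABI.
RPC_URL = "https://rpc.example.org"
ETH_STORAGE_ADDR = "0x0000000000000000000000000000000000000001"
ABI = [
    {"name": "put", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "key", "type": "bytes32"},
                {"name": "data", "type": "bytes"}], "outputs": []},
    {"name": "get", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "key", "type": "bytes32"}],
     "outputs": [{"name": "data", "type": "bytes"}]},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
store = w3.eth.contract(address=ETH_STORAGE_ADDR, abi=ABI)

key = Web3.keccak(text="my-file-v1")   # the blob's identity on Ethereum
payload = b"large off-mainnet payload"

# Store: mainnet keeps only (key, kvldx); the bytes live on EthStorage.
tx = store.functions.put(key, payload).transact({"from": w3.eth.accounts[0]})
w3.eth.wait_for_transaction_receipt(tx)

# Retrieve: the contract resolves key -> kvldx and reads from EthStorage.
data = store.functions.get(key).call()
assert data == payload
```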
In terms of how nodes store data, EthStorage borrows from the Arweave model. First, the large number of (key, data) pairs coming from Ethereum are sharded, with each shard containing a fixed number of pairs and each pair subject to a size limit, so that the mining workload behind storage rewards stays fair. Issuing rewards requires first verifying that a node actually stores the data. In this process, EthStorage divides a shard (on the order of terabytes) into many chunks and keeps a Merkle root on the Ethereum mainnet for verification. A miner then provides a nonce which, mixed with the hash of the previous block on EthStorage, selects a few chunks through a random algorithm; the miner must supply the data of those chunks to prove that it stores the whole shard. The nonce cannot be chosen arbitrarily, otherwise a node would pick a nonce that happens to select only the chunks it stores and pass verification; instead, the nonce must make the hash of the mixed chunk data meet the network's difficulty requirement, and only the first node to submit a valid nonce and random-access proof receives the reward.
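The following toy sketch illustrates that proof game: the nonce and the previous block hash pick the chunks, the miner must actually hold them, and the mixed hash must meet the difficulty target. Parameters, hashing, and names are simplified stand-ins, not EthStorage's real algorithm.

```python
import hashlib

CHUNKS_PER_PROOF = 4
DIFFICULTY = 2**240  # toy target: the final hash must fall below this

def select_chunks(prev_hash: bytes, nonce: int, total_chunks: int):
    """The nonce plus the previous block hash pseudo-randomly pick which
    chunks the miner must produce, so the challenge cannot be steered
    toward chunks the miner happens to hold."""
    picks = []
    for i in range(CHUNKS_PER_PROOF):
        seed = hashlib.sha256(
            prev_hash + nonce.to_bytes(8, "big") + bytes([i])).digest()
        picks.append(int.from_bytes(seed, "big") % total_chunks)
    return picks

def try_nonce(prev_hash: bytes, nonce: int, stored: dict, total_chunks: int):
    """Return the proof if this nonce works: the miner must actually hold
    every selected chunk, and hashing the chunk data mixed with the nonce
    must meet the difficulty target."""
    picks = select_chunks(prev_hash, nonce, total_chunks)
    if any(i not in stored for i in picks):
        return None  # missing data: this challenge cannot be answered
    mix = b"".join(stored[i] for i in picks) + nonce.to_bytes(8, "big")
    digest = int.from_bytes(hashlib.sha256(mix).digest(), "big")
    return picks if digest < DIFFICULTY else None
```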
4.2.2 Modular DA: Celestia
Blockchain Module: The work performed by a Layer1 public chain is divided into the following four parts: (1) designing the underlying logic of the network, selecting validating nodes in some way, writing blocks, and allocating rewards to network maintainers; (2) packaging and processing transactions and publishing them; (3) validating transactions to be put on chain and determining the final state; (4) storing and maintaining the blockchain's historical data. According to these functions, we can divide the blockchain into four modules: the consensus layer, the execution layer, the settlement layer, and the data availability layer (DA layer).
Modular Blockchain design: For a long time these four modules were integrated into a single public chain, a design known as a monolithic blockchain. This form is more stable and easier to maintain, but it also puts tremendous pressure on the single chain. In practice the four modules constrain each other and compete for the chain's limited computational and storage resources: raising the processing speed of the execution layer adds storage pressure to the data availability layer, while securing the execution layer requires more complex verification, which slows transaction processing. The development of a public chain therefore faces constant trade-offs among the four modules. To break through this bottleneck in public chain performance, developers proposed the modular blockchain. Its core idea is to strip out one or several of the four modules and hand them to a separate public chain; that chain can then focus on raising transaction speed or storage capacity alone, escaping the weakest-link limit that previously capped the blockchain's overall performance.
Modular DA: Separating the DA layer from the rest of the blockchain's business and placing it on a separate public chain is considered a viable solution for Layer1's growing historical data. Exploration here is still at an early stage, and the most representative project is Celestia. It uses sharded storage: the data is divided into multiple blocks, each node extracts a part for storage, and KZG polynomial commitments verify data integrity. On top of this, Celestia uses two-dimensional RS erasure codes to rewrite the original data as a k*k matrix, so that only 25% of the encoded data is needed to recover the original. However, sharded data storage essentially just divides the network-wide storage pressure by a constant factor, and node storage pressure still grows linearly with data volume. As Layer1 keeps improving transaction speed, node storage pressure may still reach an unacceptable threshold someday. To address this, Celestia introduces an IPLD component: instead of storing the k*k matrix directly on Celestia, the data is stored in the LL-IPFS network, with only the data's CID kept on the node. When a user requests historical data, the node sends the corresponding CID to the IPLD component, which calls up the original data on IPFS.
If the data exists on IPFS, it is returned via the IPLD component and the node; if it does not exist, the data cannot be returned.

Source: Celestia Core
Celestia: Taking Celestia as an example of modular blockchain applied to Ethereum's storage problem: a Rollup node sends its packaged and verified transaction data to Celestia, which stores the data without interpreting it, and the Rollup node finally pays tia tokens to Celestia as a storage fee according to the storage space used. The storage on Celestia uses DAS and erasure codes similar to those in EIP-4844, but the one-dimensional polynomial erasure code of EIP-4844 is upgraded to a two-dimensional RS code, which raises storage security again; only 25% of the encoded fractions are needed to recover all the transaction data. Celestia is essentially a POS public chain with low storage costs, and using it to solve Ethereum's historical storage problem requires many other modules to work with it. For example, on the rollup side, one rollup model highly recommended by Celestia's website is the Sovereign Rollup. Unlike the common rollup on Layer2, which only computes and verifies transactions, completing just the execution layer, a Sovereign Rollup includes the entire execution and settlement process, which minimizes the processing of transactions on Celestia itself; this maximizes the overall security of the transaction process, which matters because Celestia's own security is weaker than Ethereum's. As for the security of the data Celestia serves to the Ethereum mainnet, the most mainstream solution is the Quantum Gravity Bridge smart contract. For the data stored on Celestia, a Merkle root (a data availability attestation) is generated and kept in the Quantum Gravity Bridge contract on the Ethereum mainnet; every time Ethereum calls historical data from Celestia, it compares the hash result against that Merkle root, and a match means the data is indeed the genuine history.
4.2.3 Storage Chain DA
In their technical principles, main-chain DAs borrow many sharding-like techniques from storage public chains. Some third-party DAs even delegate part of the storage task directly to storage public chains; for example, Celestia places its concrete transaction data on the LL-IPFS network. In third-party DA solutions, besides building a dedicated public chain to solve Layer1's storage problem, a more direct way is to connect a storage public chain to Layer1 and let it hold Layer1's huge historical data. For high-performance blockchains the volume of historical data is even larger: at full speed, the data volume of the high-performance public chain Solana approaches 4 PB, far beyond what ordinary nodes can store. Solana's solution is to store historical data on the decentralized storage network Arweave, keeping only two days of data on mainnet nodes for verification. To secure this process, Solana and Arweave designed a storage bridge protocol, Solar Bridge, which synchronizes validated data from Solana nodes to Arweave and returns a corresponding tag, allowing a Solana node to view the chain's historical data from any point in time. On Arweave, rather than requiring all nodes to maintain network-wide data consistency as a condition of participation, the network uses reward-based storage. First, Arweave builds blocks not in a traditional chain structure but in something closer to a graph: a new block points not only to the previous block but also to a randomly determined earlier block, the Recall block, whose exact position is derived from the hash and height of the previous block and is unknown until that block is mined (a sketch follows the figure below). Producing a new block requires the data of the Recall block in order to run the POW computation to the specified difficulty, and only the first miner to find a qualifying hash is rewarded, which encourages miners to store as much historical data as possible. Moreover, the fewer nodes that store a particular historical block, the fewer competitors a miner faces when that block is recalled, which encourages miners to store the blocks with the fewest backups in the network. Finally, to ensure that nodes store data permanently, Arweave introduces the WildFire node-scoring mechanism: nodes prefer to communicate with peers that can deliver historical data more and faster, so low-scoring nodes cannot obtain the latest blocks and transactions first and thus lose their head start in the POW race.

Source: Arweave Yellow-Paper
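A toy version of the recall-block rule, assuming a simple SHA-256 derivation (the real Arweave computation differs in detail):

```python
import hashlib

def recall_block_index(prev_block_hash: bytes, height: int) -> int:
    """Sketch of the recall-block rule described above: the index of the
    historical block a miner must hold is derived from the previous
    block's hash and height, so it is unknown until that block exists."""
    seed = hashlib.sha256(prev_block_hash + height.to_bytes(8, "big")).digest()
    return int.from_bytes(seed, "big") % height

# A miner that stores only a few historical blocks will often be unable
# to answer the challenge and forfeits that round of POW rewards.
prev = hashlib.sha256(b"block 999").digest()
print(recall_block_index(prev, 1000))
```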
5. Synthesized Comparison
We will compare the advantages and disadvantages of each of the five storage solutions in terms of the four dimensions of DA performance metrics.
Safety: The biggest sources of data security problems are loss during transmission and malicious tampering by dishonest nodes, and the cross-chain process is where transmission security suffers most, because the two public chains are independent and share no state. In addition, a Layer1 that needs a dedicated DA layer at this stage usually has a strong consensus group, so its security far exceeds that of an ordinary storage public chain; the main-chain DA solution therefore has higher security. Once data transmission is secured, the next step is securing the data being called. Considering only the short-term historical data used to verify transactions: in a temporary-storage network the same data is backed up by the whole network, whereas in a DankSharding-like scheme the average number of backups is only 1/N of the node count. More redundancy makes data harder to lose and provides more reference samples for verification, so temporary storage offers higher data security. Among third-party DA schemes, those sharing public nodes with the main chain can transmit data directly through those relay nodes during cross-chain transfer, and thus have relatively higher security than the other DA schemes.
Storage Cost: The factor with the greatest impact on storage cost is the amount of data redundancy. The main chain's short-term storage scheme uses network-wide node synchronization, so every newly stored item is backed up across all nodes, giving it the highest storage cost; this in turn means that in a high-TPS network, the approach suits only temporary storage. Next come the sharded storage methods, including sharding on the main chain and sharding in a third-party DA; because the main chain usually has more nodes and each block therefore has more backups, the main-chain sharding scheme costs more. The lowest storage cost belongs to storage public chain DAs using reward-based storage, where data redundancy tends to fluctuate around a fixed constant; these also introduce a dynamic adjustment mechanism that raises rewards to attract nodes to store under-replicated data, maintaining security.
Data Read Speed: Data read speed is mainly affected by where data sits in the storage medium, the data index path, and the distribution of data among nodes. Of these, where data sits within a node matters most, because keeping data in memory versus on SSD can mean a tens-of-times difference in read speed. Storage public chain DAs mostly use SSD storage, because their load includes not only DA-layer data but also memory-hungry personal data such as user-uploaded videos and images; without SSDs as storage space, the network could not carry the huge storage pressure or meet the demand for long-term storage.
Second, comparing third-party DAs with main-chain DAs that hold data in memory state: a third-party DA must first look up the corresponding index data on the main chain, transfer that index across chains, and return the data through a storage bridge, whereas a main-chain DA can query data directly from its nodes and is therefore faster to retrieve. Finally, within main-chain DA, the sharded approach must call blocks from multiple nodes and reconstruct the original data, so it is slower than unsharded short-term storage.
DA Layer Universality: The universality of main-chain DA is close to zero, since a chain whose own storage space is already insufficient cannot take on historical data from other public chains. Among third-party DAs, a solution's generality and its compatibility with a particular main chain are conflicting metrics: a DA designed for one specific main chain makes many adaptations at the level of node types and network consensus, and those adaptations become major obstacles when communicating with other public chains. Within third-party DAs, storage public chain DAs outperform modular DAs in generality: they have larger developer communities and more expansion facilities to adapt to different chains, and they obtain data more actively, by crawling, rather than passively receiving what other chains transmit. They can therefore encode data in their own way, standardize the storage of data flows, conveniently manage data from different main chains, and improve storage efficiency.

Source: Kernel Ventures
6. Conclusion
Blockchain is undergoing a conversion from Crypto to Web3, which brings an abundance of on-chain projects but also data storage problems. To accommodate so many projects running on Layer1 at once and preserve the experience of GameFi and SocialFi projects, Layer1s represented by Ethereum have adopted Rollups and Blobs to improve TPS, and the number of high-performance blockchains among new chains keeps growing. But higher TPS means not only higher performance but also more storage pressure on the network. For the huge volume of historical data, multiple DA approaches, both main-chain and third-party based, have been proposed at this stage to adapt to the growth of on-chain storage pressure. Every improvement has its advantages and disadvantages and different applicability in different contexts. Payment-oriented blockchains, which have very high requirements for the security of historical data and do not pursue especially high TPS, can, if still in the preparatory stage, adopt a DankSharding-like storage method, achieving a huge increase in storage capacity while preserving security. But for a chain like Bitcoin, already formed and with a large number of nodes, rashly modifying the consensus layer carries huge risk, so it can adopt an off-chain, main-chain-dedicated DA with higher security to balance security and storage. Note, however, that the functions of a blockchain change over time. For example, in the early days Ethereum's functionality was limited to payments and simple automated processing of assets and transactions via smart contracts, but as the blockchain landscape expanded, various SocialFi and DeFi projects joined Ethereum, pushing it in a more comprehensive direction. With the recent explosion of the inscription ecosystem on Bitcoin, transaction fees on the Bitcoin network have surged nearly 20 times since August, reflecting that the network's transaction speed cannot meet current demand, so traders must raise fees to get transactions processed as quickly as possible. The Bitcoin community now needs to make a trade-off: accept high fees and slow transactions, or reduce network security to raise transaction speed and thereby undercut the original purpose of a payment system. If the community chooses the latter, the storage solution will have to adjust to the rising data pressure.

Source: OKLINK
As for public chains with comprehensive functions, their pursuit of TPS is higher. With the enormous growth of historical data, the DankSharding-like solution cannot adapt to rapid TPS growth in the long run, so the more appropriate way is to migrate data to a third-party DA for storage. Among these, a main-chain-dedicated DA has the highest compatibility and may be more advantageous if only the storage of a single public chain is considered. But today, with Layer1 chains blooming everywhere, cross-chain asset transfer and data interaction have become a common pursuit of the blockchain community. Considering the long-term development of the whole ecosystem, storing the historical data of different public chains on one chain eliminates many security problems in data exchange and validation, so modular DA and storage public chain DA may be better choices. With comparable generality, modular DA focuses on serving the blockchain DA layer: it introduces more refined index data to manage historical data and can reasonably categorize data from different chains, which gives it an edge over storage public chains. The above, however, does not consider the cost of adjusting the consensus layer of an existing public chain, which is extremely risky: a tiny systemic loophole can cost a public chain its community consensus. So if a transitional solution is needed during a blockchain's transformation, temporary storage on the main chain may be more appropriate. Finally, all the discussion above is based on performance during actual operation; but if a chain's goal is to develop its ecosystem and attract more projects and participants, it may also favor projects supported and funded by its own foundation. For example, even if the overall performance were equal to or slightly below that of a storage public chain solution, the Ethereum community would still favor EthStorage, a Layer2 project supported by the Ethereum Foundation, in order to keep developing the Ethereum ecosystem.
All in all, the increasing complexity of today's blockchains brings a greater need for storage space. With enough Layer1 validating nodes, historical data does not need to be backed up by every node in the network; security can be ensured once the number of copies passes a certain threshold. At the same time, the division of labor among public chains grows ever finer: Layer1 handles consensus and execution, Rollups handle computation and verification, and a separate blockchain handles data storage, with each part focusing on one function without being limited by the performance of the others. However, the specific number, or proportion, of nodes that should store historical data to balance security and efficiency, and how to ensure secure interoperability between different blockchains, are problems blockchain developers still need to work out. Investors can pay attention to main-chain-dedicated DA projects on Ethereum, since Ethereum already has enough supporters at this stage and does not need the power of other communities to expand its influence; what matters more is improving and developing its own community and attracting more projects to the Ethereum ecosystem. For public chains that are catching up, such as Solana and Aptos, a single chain by itself does not have such a complete ecosystem, so they may prefer joining forces with other communities to build a large cross-chain ecosystem to expand their influence. For emerging Layer1s, therefore, a general-purpose third-party DA deserves more attention.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
Reference
Celestia: The Vast Sea of Stars of Modular Blockchain: https://foresightnews.pro/article/detail/15497
DHT usage and future work: https://github.com/celestiaorg/celestia-node/issues/11
Celestia-core: https://github.com/celestiaorg/celestia-core
Solana Labs: https://github.com/solana-labs/solana
Announcing The SOLAR Bridge: https://medium.com/solana-labs/announcing-the-solar-bridge-c90718a49fa2
leveldb-handbook: https://leveldb-handbook.readthedocs.io/zh/latest/sstable.html
Kuszmaul J. Verkle Trees, 2019: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf
Arweave Network: https://www.arweave.org/
Arweave Yellow-Paper: https://www.arweave.org/yellow-paper.pdf
Kernel Ventures: Empowering DApps with Off-Chain Computing Ability — ZK Coprocessors
Author: Kernel Ventures Turbo Guo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR: The ZK coprocessor is a solution for dApps to utilize off-chain computing resources. This article explores the existing solutions, various applications, and future development of coprocessors. The main topics covered are as follows:
RISC Zero's zkVM is a ZK coprocessor solution that allows on-chain contracts to call an off-chain zkVM to run specific Rust code and return the results to the chain, while providing a zkp for on-chain verification of the correctness of the computation.
There are different solutions for ZK coprocessors. Besides zkVM, users can also write customized ZK circuits for their programs, or use pre-made frameworks to write circuits, thereby enabling contracts to utilize off-chain computing resources.
The ZK coprocessor can play a role in DeFi, such as offloading AMM calculations off-chain to capture value similar to MEV, or enabling complex and computationally intensive logic for AMMs. It can also facilitate real-time interest rate calculations for lending protocols and make margin calculations transparent, among other things. zkAMM has two implementation approaches, one using zkVM and the other using zkOracle.
The ZK coprocessor also has other potential use cases, such as wallets using it to perform off-chain identity verification. It can enable more complex computations for on-chain games and reduce the gas required for DAO governance, among other applications.
The landscape for ZK coprocessors is still uncertain, but compared to users writing their own circuits, using a solution for off-chain resource interfacing is more user-friendly. However, the question of which computation service providers are integrated behind that "interface" solution, whether traditional cloud providers or decentralized resource-sharing networks, is another important topic for discussion.
1. The Purpose and Application of ZK Coprocessors

Source: Kernel Ventures

The core idea of the ZK coprocessor is to move on-chain computation off-chain, using ZK proofs to guarantee the reliability of the off-chain computation, so that smart contracts can easily handle large amounts of computation while still verifying that the computation was done correctly. This is similar to the idea of zkRollups, but Rollups use off-chain computing resources at the protocol layer of the chain, while ZK coprocessors let dApps themselves utilize off-chain resources.
Take RISC Zero as an example of one ZK coprocessor solution. RISC Zero has developed the Bonsai ZK coprocessor architecture, whose core is RISC Zero's zkVM, in which developers can generate a zkp that "a certain piece of Rust code was executed correctly". With the zkVM, the process of implementing a ZK coprocessor is as follows; a toy simulation of the round trip appears after the figure below:
1. Developers send a request to Bonsai's relay contract, asking to run their program in the zkVM.
2. The relay contract forwards the request to the off-chain request pool.
3. Bonsai executes the request in the off-chain zkVM, performing the large-scale computation and generating a receipt.
4. These proofs, also known as "receipts", are published back to the chain by Bonsai through the relay contract.

Source: RISC Zero
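To make the round trip concrete, here is a toy, self-contained simulation of the request/receipt pattern. All names and the "seal" are invented stand-ins for illustration; they are not RISC Zero's or Bonsai's actual APIs, and the fake proof check merely marks where real zkp verification would happen.

```python
from dataclasses import dataclass

@dataclass
class Receipt:
    journal: bytes   # public output of the guest program
    seal: bytes      # stand-in for the zk proof of correct execution

class MockRelay:
    def __init__(self):
        self.pool = []                    # off-chain request pool

    def request(self, program, args):
        self.pool.append((program, args))

    def execute_offchain(self):
        receipts = []
        for program, args in self.pool:
            output = program(*args)              # heavy work, off-chain
            journal = str(output).encode()
            seal = b"zkp(" + journal + b")"      # fake proof for the demo
            receipts.append(Receipt(journal, seal))
        self.pool.clear()
        return receipts                          # posted back on-chain

def onchain_callback(receipt: Receipt) -> bytes:
    # A real contract would verify the seal before trusting the journal.
    assert receipt.seal == b"zkp(" + receipt.journal + b")"
    return receipt.journal

relay = MockRelay()
relay.request(lambda a, b: a * b, (1234, 5678))  # dApp submits a job
result = onchain_callback(relay.execute_offchain()[0])
print(result)  # b'7006652'
```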

In Bonsai, the proven program is called the Guest Program, and the receipt is used to prove that the guest program has been executed correctly. The receipt includes a journal and a seal. Specifically, the journal carries the public output of the zkVM application, while the seal is used to prove the validity of the receipt, i.e., to prove that the guest program has been executed correctly. The seal itself is a zkSTARK generated by the prover. Verifying the receipt ensures that the journal is constructed using the correct circuit, etc.
Bonsai simplifies the process for developers to compile Rust code into zkVM bytecode, upload programs, execute them in the VM, and receive proof feedback, allowing developers to focus more on logical design. It enables not only partial contract logic but the entire contract logic to run off-chain. RISC Zero also utilizes continuations, breaking down the generation of a large proof into smaller parts, enabling proof generation for large programs without consuming excessive memory. In addition to RISC Zero, there are other projects like IronMill, =nil; Foundation, and Marlin that provide similar general solutions.
2. Application of ZK Coprocessors in DeFi
2.1 AMM - Bonsai as a Coprocessor
zkUniswap is an AMM that leverages off-chain computing resources. Its core feature is to offload part of the swap computation off-chain, using Bonsai. Users initiate a swap request on-chain. Bonsai's relay contract obtains the request, initiates off-chain computation, and upon completion, returns the computation result and proof to the EVM's callback function. If the proof is successfully verified, the swap is executed.
However, the swap is not completed in one go. The request and the execution sit in different transactions, which introduces a risk: between the submission of the request and the completion of the swap, the state of the pool may change. Verification is based on the pool state at the time the request was submitted, so if the pool's state changes while a request is still pending, the verification becomes invalid. This is an important consideration in the design and security of such systems.
To address this issue, developers have designed a pool lock. When a user initiates a request, all operations other than settling the swap are temporarily locked until off-chain computing successfully triggers the on-chain swap or the swap times out (the time limit will be preset). With a time limit in place, even if there are problems with the relay or zkp, the pool will not be locked indefinitely. The specific time limit might be a few minutes.
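A minimal sketch of such a lock with a built-in deadline (timings and names are illustrative, not zkUniswap's implementation):

```python
import time

class PoolLock:
    def __init__(self, timeout_seconds=180):
        self.timeout = timeout_seconds
        self.locked_at = None
        self.pending_request = None

    def lock(self, request_id):
        if self.is_locked():
            raise RuntimeError("pool busy: a swap request is pending")
        self.locked_at = time.time()
        self.pending_request = request_id

    def is_locked(self):
        # The lock expires on its own, so a failed relay or a bad proof
        # can never freeze the pool forever.
        return (self.locked_at is not None
                and time.time() - self.locked_at < self.timeout)

    def settle(self, request_id):
        if not self.is_locked() or request_id != self.pending_request:
            raise RuntimeError("no matching pending request")
        self.locked_at = None          # proof verified: execute the swap
        self.pending_request = None
```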
zkUniswap also has a unique design for capturing MEV, as its developers want the protocol itself to benefit from it. Theoretically, zkAMMs have MEV too: the first person to submit a swap can lock the pool and front-run others, leading to gas wars, and builders can still prioritize transaction ordering. zkUniswap instead captures these MEV profits for the protocol using a method known as the Variable Rate Gradual Dutch Auction (VRGDA).
zkUniswap's approach here is quite interesting. The price of the lock falls over the course of the auction; if locks are bought up quickly, the protocol infers high demand and automatically raises the price, and if sales slow, the protocol lowers it. This could become a new source of protocol revenue: in essence, the protocol introduces its own mechanism for prioritizing transactions, and the competition for priority pays the project directly through the auction.
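Here is a toy decaying-price auction capturing that feedback loop. It is not the actual VRGDA formula, and all parameters are made up for illustration:

```python
import math
import time

class LockAuction:
    def __init__(self, base_price=1.0, decay_per_sec=0.01, bump=1.25):
        self.base = base_price
        self.decay = decay_per_sec
        self.bump = bump
        self.last_sale = time.time()

    def price(self):
        # Price decays exponentially since the last sale.
        elapsed = time.time() - self.last_sale
        return self.base * math.exp(-self.decay * elapsed)

    def buy(self):
        p = self.price()
        # Fast sales => little decay => price ratchets up (high demand).
        # Slow sales => long decay => cheaper locks (low demand).
        self.base = p * self.bump
        self.last_sale = time.time()
        return p   # paid to the protocol: MEV captured as revenue
```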
2.2 AMM - zkOracle as a Coprocessor
Besides using a zkVM, some have proposed using zkOracle to tap off-chain computing resources. It is worth noting that zkOracle is an I/O (input and output) oracle, handling both directions. Generally there are two kinds of oracles: an input oracle processes (computes over) off-chain data and puts it on-chain, while an output oracle processes (computes over) on-chain data and provides it off-chain. The I/O oracle (zkOracle) first does the output leg and then the input leg, allowing the chain to utilize off-chain computing resources.
On the one hand, zkOracle uses on-chain data as a data source, and on the other hand, it uses ZK to ensure that the oracle nodes' computations are honest, thus achieving the function of a coprocessor. Therefore, the core computation of AMM can be placed within zkOracle, allowing for traditional AMM functionality while also enabling more complex and computationally intensive operations using zkOracle.

Source: github fewwwww/zkAMM
2.3 Lending Rate Calculation, Margin Calculation, and Other Applications
Setting aside the implementation method, the addition of ZK coprocessors makes many functionalities achievable. For example, lending protocols can adjust interest rates according to real-time parameters instead of pre-defined conditions: raising the interest rate to attract supply when borrowing demand is strong, and lowering it when demand recedes. This requires the lending protocol to obtain a large amount of on-chain data in real time, preprocess it, and calculate the parameters off-chain (unless on-chain costs are extremely low).
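For instance, a toy utilization-based rate curve like the one below could be evaluated off-chain over freshly aggregated pool state and proven correct; the parameters and the linear shape are illustrative only, not any specific protocol's model:

```python
def borrow_rate(total_borrowed: float, total_supplied: float,
                base: float = 0.02, slope: float = 0.20) -> float:
    """Toy utilization-based rate curve: the scarcer the unborrowed
    supply, the higher the rate, which attracts new supply and cools
    borrowing demand."""
    if total_supplied == 0:
        return base
    utilization = total_borrowed / total_supplied
    return base + slope * utilization

# The coprocessor's job would be to aggregate these inputs from on-chain
# state in real time and prove the computation was done honestly.
print(borrow_rate(800_000, 1_000_000))  # 0.02 + 0.20 * 0.8 = 0.18
```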
Complex calculations such as determining margin balances and unrealized profits and losses can also be executed by coprocessors. The advantage is that it makes these applications more transparent and verifiable: the logic of the margin engine is no longer a secret black box, and although the calculations happen off-chain, users can fully trust that they were executed correctly. The same approach applies to options calculations.
3. Other Applications of ZK Coprocessors
3.1 Wallet - Using Bonsai as a Coprocessor
Bonfire Wallet uses the zkVM to offload the computation of identity verification off-chain. The goal of this wallet is to let users create burner wallets using biometric information (fingerprints) or cryptographic hardware such as a YubiKey. Specifically, Bonfire Wallet uses WebAuthn, a common web authentication standard, to let users complete web identity verification directly with their devices, without a password. So in Bonfire Wallet, users generate a public key with WebAuthn (not an on-chain key, but one for WebAuthn) and then use it to create a wallet. Each burner wallet has an on-chain contract containing the WebAuthn public key, and the contract needs to verify the user's WebAuthn signature. This computation is large, so Bonsai is used to move it off-chain: a zkVM guest program verifies the signature off-chain and produces a zkp for on-chain verification.

Source: Bonfire Wallet
3.2 On-Chain Data Retrieval - ZK Circuits Written by Users
Axiom is an application that does not use a zkVM but a different coprocessor solution. First, what Axiom aims to do: it leverages a ZK coprocessor to let contracts access historical on-chain information. In reality, enabling contracts to read historical data is quite challenging, because smart contracts normally obtain real-time on-chain data, and doing so can be very expensive; valuable on-chain data such as historical account balances or transaction records is hard for contracts to access.

Source: Axiom demo
Axiom nodes access the required on-chain data and perform the specified computation off-chain, then generate a zero-knowledge proof for the computation, proving that the result is correctly calculated based on valid on-chain data. This proof is verified on-chain, ensuring that the contract can trust this result.
To generate zkp for off-chain computation, it is necessary to compile programs into ZK circuits. Previously we also mentioned using zkVM for this, but Axiom suggested that there are many solutions for this, and it's necessary to balance performance, flexibility, and development experience:
Customized circuits: if developers build custom circuits for their programs, performance will certainly be the best, but development takes time.
eDSL/DSL: developers still write their own circuits, but optional frameworks help them handle the ZK-specific problems, balancing performance and development experience.
zkVM: developers run ZK directly on an existing virtual machine, which is very convenient, but Axiom considers it inefficient.
Therefore, Axiom chose the second option, and provides users with a set of optimized ZK modules, allowing them to design their own circuits.
Projects similar to Axiom include Herodotus, which aims to be a middleware for cross-chain messaging. Since information processing is off-chain, it's reasonable to allow different chains to obtain processed data. Another project, Space and Time, uses a similar architecture to implement data indexing.
3.3 On-Chain Games, DAO Governance and Other Applications
In addition to the above, on-chain games and DAO governance can also use ZK coprocessors. RISC Zero believes that any computation requiring more than 250k gas would be cheaper with a ZK coprocessor, though how this figure is calculated remains to be investigated. DAO governance involves many people and many contracts and is very computationally intensive; RISC Zero claims that using Bonsai can cut its gas fees by 50%. Many ZKML projects, such as Modulus Labs and Giza, use the same solution as ZK coprocessors, though the concept of the ZK coprocessor is broader.
It's worth mentioning that there are some auxiliary projects in the field of ZK coprocessors, such as ezkl, which provides compilers for ZK circuits, toolkits for deploying ZK, and tools for offloading on-chain computation off-chain.
4. Future Outlook
Coprocessors provide on-chain applications with external computational resources akin to the "cloud", offering cheap and abundant computation while on-chain processing focuses on essential calculations. In practice, zkVMs can themselves run in the cloud. Essentially, the ZK coprocessor is an architectural approach that moves on-chain computation off-chain, backed by an effectively unlimited pool of off-chain computational resources.
Those off-chain computing resources can be provided by traditional cloud providers, decentralized compute-sharing networks, or even local devices, and each direction has its own character: traditional cloud providers offer relatively mature off-chain computing solutions, decentralized computing resources may ultimately prove more "robust", and local computing also holds considerable potential. Currently, however, many ZK coprocessor projects remain closed-source service providers, because the ecosystem for these services has not fully formed and the specialization among projects is yet to be defined. Two possible scenarios for the future are:
Every part of the ZK coprocessor stack has a large number of projects competing with each other.
A single project with an excellent service experience may dominate the market.
From a developer's perspective, using ZK coprocessors may come down to interacting with a single "interface" project. This is similar to why Amazon Web Services holds a substantial market share: developers grow accustomed to a particular deployment method. However, which computing service providers (traditional cloud companies, decentralized resource sharing) are integrated behind that off-chain computational resource "interface" is another topic worth discussing.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
REFERENCE:
A Guide to ZK Coprocessors for Scalability: https://www.risczero.com/news/a-guide-to-zk-coprocessors-for-scalability
Defining zkOracle for Ethereum: https://ethresear.ch/t/defining-zkoracle-for-ethereum/15131
zkUniswap: a first-of-its-kind zkAMM: https://ethresear.ch/t/zkuniswap-a-first-of-its-kind-zkamm/16839
What is a ZK Coprocessor?: https://blog.axiom.xyz/what-is-a-zk-coprocessor/
A Brief Intro to Coprocessors: https://crypto.mirror.xyz/BFqUfBNVZrqYau3Vz9WJ-BACw5FT3W30iUX3mPlKxtA
Latest Applications Building on Hyper Oracle (Bonus: Things You Can Build Now): https://mirror.xyz/hyperoracleblog.eth/Tik3nBI9mw05Ql_aHKZqm4hNxfxaEQdDAKn7JKcx0xQ
Bonfire Wallet: https://ethglobal.com/showcase/bonfire-wallet-n1dzp