Kernel Ventures
@Coindaily
Kernel Ventures is a crypto venture capital fund driven by a research and development community, with more than 70 early-stage investments focused on infrastructure, middleware, and dApps, especially ZK, Rollups, DEXs, modular blockchains, and the verticals that will onboard the next billions of crypto users, such as account abstraction, data availability, and scalability. Over the past seven years, we have been committed to supporting core developer communities around the world.
Kernel Ventures: The Upsurge of Bitcoin Ecosystem — A Panoramic View of its Application Layer

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
With the rise of inscriptions, the existing application layer of the Bitcoin network can no longer sustain market activity, and building it out is the main focus of current Bitcoin ecosystem development.
There are three mainstream Layer2 solutions for Bitcoin: the Lightning network, sidechains, and Rollups.
The Lightning network enables peer-to-peer payments by establishing an off-chain payment channel, which is settled on the main network after the channel is closed.
Sidechains lock BTC assets on the mainnet at specific or multisig addresses while minting equivalent BTC assets on the sidechain. Merlin Chain supports bridging multiple types of inscription assets, is backed by the Bitmap ecosystem, and has reached a TVL of nearly 4 billion dollars.
BTC Rollups are based on Taproot circuits, which can simulate smart contracts on-chain, while packing and computation are performed outside the main Bitcoin network. The B2 Network is at the forefront of this implementation, with over $200 million of on-chain TVL.
Cross-chain bridges built specifically for Bitcoin are not very common; more common are multi-chain and full-chain bridges that integrate with mainstream blockchains. One of them, Meson.Fi, has established relationships with a number of Bitcoin Layer2s.
Stablecoin protocols on the Bitcoin network are mostly implemented through over-collateralization, and they support other DeFi protocols to bring more yield to users.
DeFi projects in the Bitcoin ecosystem are varied: some migrated from other chains, some are being built on the native Bitcoin network during the current development boom, and some were built during the last bull market and deployed as sidechains.
Overall, Alex provides the widest variety of trading products and the smoothest trading experience, but Orders Exchange has a higher growth ceiling. Bitcoin will be an important narrative in this bull market cycle, and it is worth paying close attention to top-tier projects in each vertical of the Bitcoin ecosystem.

1. Background

With the flood of inscription assets brought by the Ordinals protocol, the Bitcoin network, once characterized by a lack of smart contracts, inefficient development, and a dearth of infrastructure and scaling capability, is experiencing an on-chain data boom (refer to Kernel's previous research article, Can RGB Replicate The Ordinals Hype, for more details). Similar to what happened when the Ethereum network was first established, formatted text, images, and even videos were scrambled into 4MB Tapscript scripts that would never be executed. While this surge in on-chain activity contributed to the growth of the Bitcoin ecosystem and infrastructure, it also drove up transaction volumes and imposed a heavy storage burden on the network. In addition, with such a wide variety of inscriptions, simple transfers can no longer satisfy users' transaction needs, and users are looking forward to a broad range of derivatives trading services on Bitcoin. The development of the Bitcoin application layer has therefore become urgent. Source: CryptoQuant

2. Bitcoin Layer2

Unlike Layer2 on Ethereum, which is dominated by Rollups, the Layer2 landscape for Bitcoin is still vague. Bitcoin's scripting language cannot express smart contracts on its own, and the publication of smart contracts must rely on third-party protocols, so applying a similar solution to Bitcoin cannot guarantee the same level of security as an Ethereum Rollup. As a result, a variety of Layer2 solutions exist for Bitcoin, including the Lightning network, sidechains, and Rollups based on TapScript.
2.1 Lightning Network

The Lightning network is the earliest Bitcoin Layer2 solution, first proposed by Joseph Poon and Thaddeus Dryja in a 2015 whitepaper. The Lightning network protocol specifications, known as BOLT, were released in 2017 and have undergone upgrades and improvements since. The Lightning network allows users to make peer-to-peer, off-chain transfers of any size and number through a payment channel, without fees, until the channel is closed, at which point all previous transactions are settled with a single on-chain transaction. Thanks to its off-chain channels, the Lightning network has the potential to reach up to 10 million TPS (transactions per second). However, off-chain channels carry a risk of centralization: to transact between two addresses, a channel must be established either directly or through a third party, and both parties must be online during the transaction for secure execution. Source: Kernel Ventures

2.2 Sidechain

The sidechain solution on Bitcoin is similar to that on Ethereum: a new token pegged 1:1 to Bitcoin is issued on a new chain. This new chain is not limited by the transaction speed and development bottlenecks of the Bitcoin network, allowing Bitcoin-pegged tokens to be transferred much faster and at lower cost. The sidechain presumably inherits the asset value of the mainnet, but not its security, and all transactions are recorded and confirmed on the sidechain.

2.2.1 Stacks

Stacks 2.0 was released in 2021. Users can lock BTC on the Bitcoin mainnet and receive the equivalent value of SBTC assets on Stacks, but their transactions on the sidechain require payment of STX, Stacks' native token, as gas. Unlike Ethereum, the Bitcoin network does not allow a smart contract address to manage the locked BTC, so the locked BTC is instead sent to a specific multisig address.
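The lock-and-mint peg described above can be sketched as follows. This is a minimal, illustrative model of a two-way peg, with all class and method names hypothetical; amounts are in satoshis to keep the arithmetic exact:

```python
# Minimal sketch of a sidechain two-way peg: BTC is locked at a multisig
# address on the mainnet and an equal amount of a pegged token (e.g. SBTC)
# is minted on the sidechain; burning the pegged token releases the BTC.
# All names are illustrative, not from any real implementation.

class TwoWayPeg:
    def __init__(self):
        self.mainnet_locked = {}    # user -> sats held at the multisig address
        self.sidechain_minted = {}  # user -> pegged-token balance in sats

    def peg_in(self, user, sats):
        # Lock BTC on the mainnet, mint the same amount on the sidechain.
        self.mainnet_locked[user] = self.mainnet_locked.get(user, 0) + sats
        self.sidechain_minted[user] = self.sidechain_minted.get(user, 0) + sats

    def peg_out(self, user, sats):
        # Burn the pegged token; the multisig releases the locked BTC.
        assert self.sidechain_minted.get(user, 0) >= sats
        self.sidechain_minted[user] -= sats
        self.mainnet_locked[user] -= sats
        return sats   # BTC sent back to the user's mainnet address

peg = TwoWayPeg()
peg.peg_in("alice", 50_000_000)            # lock 0.5 BTC, mint 0.5 SBTC
released = peg.peg_out("alice", 20_000_000)  # burn 0.2 SBTC, release 0.2 BTC
```

The invariant to notice is that the pegged supply on the sidechain always equals the BTC held at the multisig address, which is what lets the sidechain "inherit the asset value" without inheriting mainnet security.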
The release process is relatively simple: since the Stacks network supports smart contract development in the Clarity language, the user sends a request to the Burn-Unlock contract on Stacks, which destroys the SBTC and sends the locked BTC back to the original address. Block production on the Stacks network uses the PoX (Proof of Transfer) consensus mechanism. Bitcoin miners send BTC bids for the opportunity to produce blocks, and the higher the bid, the higher the miner's weight. The winner is ultimately selected by a verifiable random function to package blocks on the Stacks network, and receives the corresponding STX as a reward. At the same time, the BTC used for bidding is distributed to the holders of STX tokens as a reward. Source: Kernel Ventures

In addition, Stacks is expected to ship its Nakamoto upgrade in April. First, it will include optimizations to the development language, Clarity, to lower the barrier for developers. Second, Stacks will raise the security level of the network: transactions on Stacks will be settled on the Bitcoin mainnet, upgrading Stacks from a sidechain to a Layer2 whose settlement security matches that of Bitcoin's mainnet. Finally, Stacks has also made significant improvements to its block time, reaching 5 seconds per block in testing (compared to 10-30 minutes per block currently). If the Nakamoto upgrade completes successfully, Stacks can narrow, perhaps even eliminate, the gap with Layer2s on Ethereum, which should attract attention and stimulate ecosystem development.

2.2.2 RSK

RSK (Rootstock) is a Bitcoin sidechain without a native token; transaction fees on the sidechain are currently paid in Bitcoin. Users can exchange BTC from the mainnet for RBTC at a 1:1 ratio on RSK through the built-in PowPeg protocol.
RSK is also a PoW chain, but with the introduction of merged mining, the infrastructure and setup of Bitcoin miners can be applied directly to RSK mining, which lowers the cost for Bitcoin miners to participate. At present, transactions on RSK are three times faster than on the mainnet and cost about 1/20 as much. Source: RSK White Paper

2.2.3 BEVM

BEVM is an EVM-compatible PoS sidechain that has not yet issued its own native token. It uses the Schnorr multisig algorithm on the Bitcoin network to store incoming assets in a multisig script address controlled by 1,000 addresses, corresponding to the 1,000 PoS validators on BEVM. Automated control of assets is achieved by writing MAST (Merkelized Abstract Syntax Tree) scripts in the TapScript area: the program is described as a number of independent chunks, each corresponding to a portion of the code logic, and only the hash of each chunk needs to be stored in the script rather than the full logic, which greatly reduces the amount of code stored on-chain. When a user transfers BTC to BEVM, the BTC is locked by the script program, and it can only be unlocked and sent back to the corresponding address if signed by more than 2/3 of the validators. Because BEVM is EVM-compatible, dApps originally built for Ethereum can be migrated cheaply, trading the BTC-pegged assets described above while using them for gas. Source: BTCStudy

2.2.4 Merlin Chain

Merlin Chain is an EVM-compatible Bitcoin sidechain that lets users connect to the network directly with a Bitcoin address through Particle Network, which generates a unique corresponding Ethereum address; it can also be accessed directly through an RPC node with an Ethereum account. Merlin Chain currently supports bridging BTC, Bitmap, BRC-420, and BRC-20 assets.
Like Merlin Chain, the BRC-420 protocol was developed by the Bitmap asset community on the basis of recursive inscriptions, and the community has also put forward projects such as RCSV's recursive inscription matrix and the Bitmap Game metaverse platform built on recursive inscriptions. Source: Merlin Docs

Merlin Chain went live on February 5th, followed by a round of IDOs and staking rewards allocating 21% of the governance token MERL. The direct and massive airdrop attracted a large number of participants; Merlin Chain's TVL has since surpassed $3 billion, with Bitcoin's on-chain TVL surpassing Polygon's to reach #6 among all blockchains. Source: DeFiLlama

During People's Launchpad's IDO, users could stake Ally or at least 0.00025 BTC to earn bonus points redeemable for MERL, with a cumulative bonus stake limit of 0.02 BTC, corresponding to 460 MERL tokens. The allocation in this round was relatively small, accounting for only 1% of the total MERL supply; however, at today's OTC price of $2.90 per MERL, it has produced a return of over 100%. In the second staking incentive round, Merlin allocated 20% of its total tokens, allowing users to stake BTC, Bitmap, USDT, USDC, and some BRC-20 and BRC-420 assets on Merlin Chain through Merlin's Seal. Merlin takes an hourly snapshot of the USD value of a user's staked assets, and the final daily average value multiplied by 10,000 is the number of points the user receives. The second round of staking follows Blast's team model, where users can choose to be a leader or a team member; leaders receive an invitation code to share with their team members. Merlin is relatively mature in the current Bitcoin Layer2 ecosystem, liberating the liquidity of Layer1 assets and allowing Bitcoin transfers on Layer2 at lower cost. The Bitmap ecosystem behind Merlin is very large and the technology relatively sound, so it is likely to develop well in the long run.
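The Seal points formula described above (hourly USD snapshots, averaged over the day, times 10,000) works out as in this small sketch; the function name is ours, and the snapshot values are made-up examples:

```python
# Sketch of Merlin's Seal points formula as described above: take hourly
# USD-value snapshots of a user's staked assets, average them over the
# day, and multiply by 10,000 to get the day's points.
def daily_points(hourly_usd_values):
    return sum(hourly_usd_values) / len(hourly_usd_values) * 10_000

# A user whose stake is worth $1,000 for half the day and $1,200 for the
# other half averages $1,100, earning 11,000,000 points for that day.
snapshots = [1_000.0] * 12 + [1_200.0] * 12   # 24 hourly snapshots
print(daily_points(snapshots))                # 11000000.0
```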
Staking on Merlin has a high rate of return. In addition to the expected MERL rewards, there are also opportunities to receive Meme or other tokens airdropped by ecosystem projects, such as the officially airdropped Voya tokens: staking more than 0.01 BTC earned an airdrop of 90 Voya tokens, whose price has been rising since the launch of the program, peaking at 514% of the issuance price. Voya is currently quoted at US$5.89, a yield as high as 106% based on an average Bitcoin price of US$50,000 at the time of staking. Source: CoinGecko

2.3 Rollup

2.3.1 BitVM

BitVM is an Optimistic Rollup-style Layer2 for Bitcoin. As with Optimistic Rollup on Ethereum, traders first send transactions to Layer2, where they are computed and packed, after which the results are committed back to Layer1 for verification, with a window during which a verifier can challenge the prover's claims. However, Bitcoin does not support native smart contracts, so the implementation is not as simple as Ethereum's Optimistic Rollup. The whole process involves the Bit Value Commitment, Logic Gate Commitment, and Binary Circuit Commitment, abbreviated below as BVC, LGC, and BCC. BVC (Bit Value Commitment): a BVC is essentially a logic level with only two possible values, 0 and 1, similar to a bool variable in other programming languages.
Bitcoin Script is a stack-based language in which no bool type exists, so BitVM emulates one with a combination of opcodes:

    <Input: preimage of a hash>
    OP_IF
        OP_HASH160        // hash the user's input
        <HASH1>
        OP_EQUALVERIFY    // passes if Hash(input) == HASH1
        <1>               // ... in which case the output is 1
    OP_ELSE
        OP_HASH160        // hash the user's input
        <HASH2>
        OP_EQUALVERIFY    // passes if Hash(input) == HASH2
        <0>               // ... in which case the output is 0
    OP_ENDIF

In a BVC, the user first submits an input; the Bitcoin network hashes it and unlocks the script only if the hash equals HASH1 or HASH2, with HASH1 producing an output of 1 and HASH2 producing an output of 0. In the following sections we summarize this entire snippet as a single OP_BITCOMMITMENT opcode to simplify the description.

LGC (Logic Gate Commitment): every function in a computer is ultimately a combination of boolean gates, which can be reduced to a series of NAND gates. That is to say, if we can simulate NAND gates on the Bitcoin network through bytecode, we can in principle realize any function. Although Bitcoin has no direct NAND opcode, it does have an AND opcode, OP_BOOLAND, and a NOT opcode, OP_NOT, which can be composed to reproduce NAND. From the two output levels obtained via OP_BITCOMMITMENT, we can thus form a NAND output circuit with the OP_BOOLAND and OP_NOT opcodes.

BCC (Binary Circuit Commitment): based on LGC gates, we can construct specific gate relationships between inputs and outputs. In a BCC circuit, each input comes from the corresponding hash preimage in a TapScript script, and each Taproot address corresponds to a different gate, which we call a TapLeaf; the many TapLeafs make up a Taptree, which serves as the input to the BCC circuit. Source: BitVM White Paper

Ideally, a BitVM prover compiles and computes the circuit off-chain and returns the results to the Bitcoin network for execution.
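The two building blocks above, the hash-based bit commitment (BVC) and the NAND gate composed from AND and NOT (LGC), can be sketched in Python. SHA-256 stands in for OP_HASH160 here for portability, and all names are illustrative, not Bitcoin Script:

```python
# Sketch of BitVM's building blocks: a hash-preimage bit commitment and
# a NAND gate built from AND and NOT, mirroring OP_BOOLAND / OP_NOT.
# SHA-256 stands in for OP_HASH160 (which is RIPEMD160(SHA256(x))).
import hashlib

def h(preimage: bytes) -> bytes:
    return hashlib.sha256(preimage).digest()

# The prover commits to two preimages; revealing one fixes the bit's value.
preimage_one, preimage_zero = b"secret-for-1", b"secret-for-0"
HASH1, HASH2 = h(preimage_one), h(preimage_zero)

def bit_commitment(revealed_preimage: bytes) -> int:
    digest = h(revealed_preimage)
    if digest == HASH1:
        return 1          # the script branch that pushes <1>
    if digest == HASH2:
        return 0          # the script branch that pushes <0>
    raise ValueError("script does not unlock")  # OP_EQUALVERIFY fails

def nand(a: int, b: int) -> int:
    # NAND = NOT(AND(a, b)), i.e. OP_BOOLAND followed by OP_NOT
    return int(not (a and b))

a = bit_commitment(preimage_one)    # reveal the "1" preimage -> bit is 1
b = bit_commitment(preimage_zero)  # reveal the "0" preimage -> bit is 0
print(nand(a, b))                  # 1
```

Because NAND is functionally complete, chaining such gates (the BCC step) can in principle encode any computation, which is exactly the claim the BitVM construction rests on.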
However, since the off-chain process is not automated by a smart contract, BitVM requires that provers on the network can be challenged, to prevent them from committing fraudulent transactions. The verifier first reproduces the output of a certain TapLeaf, then combines it with the other TapLeaf results provided by the prover as inputs to drive the circuit. If the output is false, the challenge succeeds, meaning the prover has provided a fraudulent message, and vice versa. To accomplish this, however, the Taproot circuit must be shared between the challenger and the prover in advance, and only interaction between a single prover and a single verifier is currently possible.

2.3.2 SatoshiVM

SatoshiVM is an EVM-compatible zkRollup Layer2 solution for Bitcoin. Smart contracts are implemented on SatoshiVM the same way as on BitVM, using Taproot circuits to simulate complex functions. SatoshiVM is divided into three layers: the Settlement Layer, the Sequencing Layer, and the Proving Layer. The Settlement Layer, i.e. the Bitcoin mainnet, provides data availability, stores the Merkle roots and zero-knowledge proofs of transactions, and settles transactions by verifying the correctness of the Layer2-packaged transactions through the Taproot circuit. The Sequencing Layer packages and processes transactions and returns the results to the mainnet along with the zero-knowledge proofs, while the Proving Layer generates zero-knowledge proofs for the tasks received from the Sequencing Layer and passes them back. Source: SatoshiVM Docs

2.3.3 BL2

BL2 is a zkRollup Bitcoin Layer2 based on the VM Common Protocol (an official preconfigured VM protocol compatible with all major VMs). Similar to other zkRollups, its Rollup Layer mainly packs transactions and generates the corresponding zero-knowledge proofs through a zkEVM.
BL2's DA layer introduces Celestia to store bulk transaction data, uses the BL2 network itself to store only the zero-knowledge proofs, and finally returns the proof validation and a small amount of validation data, including the BVC, to the main network for settlement. Source: BL2.io

BL2's official X account is updated daily, and the team has announced a development plan and a token program that will allocate 20% of its tokens to OG Mining, as well as the launch of a testnet in the near future. At this stage the project is relatively new compared to other Bitcoin Layer2s and still early, with only 33,000 followers on X. It is worth watching because it combines recent concepts such as Celestia and Bitcoin Layer2. However, there are no actual technical details on the website, only a demo of what to expect, and no whitepaper. At the same time, its goals are ambitious, such as account abstraction on Bitcoin and compatibility with the VM protocols of mainstream virtual machines. Whether the team can achieve them remains questionable, so we take a more reserved view. Source: BL2's X Account

2.3.4 B2 Network

The B2 Network is a zkRollup Layer2 with Bitcoin as the settlement and DA layer, structured into a Rollup Layer and a DA Layer. User transactions are first submitted and processed in the Rollup Layer, which uses a zkEVM scheme to execute them and output the associated proofs, after which user state is stored in the zkRollup Layer. The batched transactions and generated zero-knowledge proofs are forwarded to the DA Layer for storage and validation. The DA Layer is subdivided into three parts: decentralized storage nodes, B2 Nodes, and the Bitcoin mainnet.
The decentralized storage node receives the Rollup data, periodically generates zero-knowledge proofs of space-time based on it, and sends them to the B2 Node, which performs off-chain validation of the data and then, once validation is complete, records the transaction data and corresponding zero-knowledge proofs in TapScript on the Bitcoin mainnet. The B2 Node is responsible for confirming the authenticity of the ZKP and finalizing settlement. Source: B2 Network White Paper

B2 Network has considerable influence among major BTC Layer2 programs, with 300,000 followers on X, surpassing BEVM's 140,000 and the 166,000 of SatoshiVM, which is also a zkRollup Layer2. The project has received seed funding from OKX and HashKey, attracting a lot of attention, and its on-chain TVL has exceeded $600 million. Source: bsquared.network

B2 Network has launched B2 Buzz, and an invitation link is required to use the network. B2 Network adopts the same referral model as Blast, which creates a strong two-way benefit binding newcomers to those who have already joined, giving both sufficient motivation to promote the project. After completing simple tasks such as following the official X account, you can enter the staking interface, which supports assets on four chains: Bitcoin, Ethereum, BSC, and Polygon. In addition to Bitcoin itself, inscriptions such as ORDI and SATS can be staked on the Bitcoin network. BTC can be transferred directly, whereas staking an inscription requires an inscribe-and-transfer step. Note that since there are no smart contracts on the Bitcoin network, the assets are essentially multisig-locked to a specific BTC address.
Assets staked on the B2 network will not be released until at least April this year, and the points gained from staking during this period can be exchanged for mining components used in virtual mining: a BASIC miner requires only 10 components to activate, while an ADVANCED miner requires more than 80. The team has announced a partial token program: 5% of the total supply will reward virtual mining, and another 5% will be allocated to ecosystem projects on the B2 network for airdrops. At a time when much attention is paid to tokenomics fairness, 10% of the total supply will struggle to fully mobilize the community's enthusiasm, so B2 Network is expected to offer other staking incentives or LaunchPad plans in the future.

2.4 Comprehensive Comparison

Among the three types of BTC Layer2, the Lightning network has the fastest transactions and the lowest costs, and sees the most use in real-time payments and offline purchases. However, in terms of stability and security it is difficult to build the full range of DeFi or cross-chain protocols on the Lightning network, so competition in the application-layer market is mainly between the sidechain and Rollup camps. Sidechain solutions do not need to confirm transactions on the main network and have more mature technology and lower implementation difficulty, and thus the highest TVL of the three. Due to the lack of smart contracts on the Bitcoin mainnet, confirmation schemes for Rollup data are still under development, and it may take a while before they see actual usage. Source: Kernel Ventures

3. Bitcoin Cross-chain Bridge

3.1 Multibit

Multibit is a cross-chain bridge designed specifically for BRC-20 assets on the Bitcoin network, and currently supports migrating BRC-20 assets to Ethereum, BSC, Solana, and Polygon.
To bridge assets, users first send them to a BRC-20 address designated by Multibit and wait for Multibit to confirm the transfer on the main network; the user then gains the right to mint the corresponding assets on the other chain, paying gas there to complete the process. Among cross-chain bridges, Multibit has the best interoperability and the largest number of BRC-20 assets, supporting more than ten, including ORDI. In addition, Multibit is actively expanding beyond BRC-20: it currently supports farming and bridging of the governance token and stablecoin of BitStable, a native BTC stablecoin protocol. Multibit is at the forefront of cross-chain bridges for BTC-derived assets. The cross-chain assets that Multibit supports, Source: Multibit's X Account

3.2 Sobit

Sobit is a cross-chain protocol between the Solana and Bitcoin networks, mainly bridging BRC-20 tokens and Sobit's native token. Users collateralize BRC-20 assets on the Bitcoin mainnet to a designated Sobit address and, after Sobit's validation network verifies the deposit, can mint the mapped assets at the designated address on Solana. At the heart of Sobit's validation network is a validator-based framework that requires multiple trusted validators to approve cross-chain transactions, providing additional security against unauthorized transfers. Sobit's native token is Sobb, which can be used to pay cross-chain fees on the Sobit bridge, with a total supply of 1 billion; 74% of Sobb was distributed in a Fair Launch.
Unlike other DeFi and cross-chain tokens on Bitcoin, which have trended upward recently, Sobb's price has been in a downward cycle after a brief uptrend, dropping more than 90 percent and failing to pick up momentum alongside BTC's rise, which may be due to Sobit's chosen vertical. Sobit's and Multibit's market orientations are very similar, but at this stage Sobit only supports bridging to Solana, with only three kinds of BRC-20 assets available. Compared with Multibit, which also bridges BRC-20 assets, Sobit is far behind in ecosystem and asset coverage, and can hardly gain an advantage in the competition. The price of Sobb, Source: CoinMarketCap

3.3 Meson Fi

Meson Fi is a cross-chain bridge based on the principle of the HTLC (Hash Time Locked Contract). It supports cross-chain interactions among 17 mainstream chains including BTC, ETH, and SOL. In the cross-chain process, the user signs the transaction off-chain, then submits it to the Meson Contract for confirmation, locking the corresponding assets on the original chain. After confirming the message, the Meson Contract broadcasts it to the target chain through a Relayer. There are three types of Relayer: P2P node, centralized node, and no node. A P2P node offers better security, a centralized node offers higher efficiency and availability, and the no-node option requires the user to hold assets on both chains; users choose according to their situation. An LP on the target chain likewise calls the Lock method on the Meson Contract to lock the corresponding asset after checking the transaction through the Meson Contract's postSwap, then exposes its address to Meson Fi. Next comes the HTLC process: the user specifies the LP's address, creates a hash lock on the original chain, and retrieves the asset on the target chain by revealing the hash lock's preimage.
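The hash-lock exchange at the core of this flow can be sketched as follows. This is a toy model of an HTLC (real HTLCs also include a timeout refund path, omitted here), and all names are illustrative:

```python
# Sketch of the HTLC flow: the user locks funds behind a hash lock;
# whoever presents the matching preimage can claim, and revealing the
# preimage on one chain lets the counterparty claim on the other.
import hashlib, secrets

class HTLC:
    def __init__(self, amount, hashlock, recipient):
        self.amount, self.hashlock, self.recipient = amount, hashlock, recipient
        self.claimed = False

    def claim(self, preimage: bytes):
        # Funds are released only to the holder of the correct preimage.
        if hashlib.sha256(preimage).digest() != self.hashlock:
            raise ValueError("wrong preimage")
        self.claimed = True
        return self.amount

# 1. The user picks a secret and locks funds for the LP on the original chain.
secret = secrets.token_bytes(32)
lock = hashlib.sha256(secret).digest()
user_side = HTLC(amount=1.0, hashlock=lock, recipient="LP")
# 2. The LP locks the matching amount for the user on the target chain.
lp_side = HTLC(amount=1.0, hashlock=lock, recipient="user")
# 3. The user claims on the target chain, revealing the preimage...
lp_side.claim(secret)
# 4. ...which the LP now knows and uses to claim on the original chain.
user_side.claim(secret)
```

The key property is atomicity: the user cannot take the LP's funds without publishing the very preimage that lets the LP take the user's funds in return.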
The LP then uses the revealed preimage to retrieve the user-locked asset on the original chain. Source: Kernel Ventures

Meson Fi is not a cross-chain bridge designed specifically for Bitcoin assets but a full-chain bridge like LayerZero. However, major BTC Layer2s such as B2 Network, Merlin Chain, and BEVM have all established partnerships with Meson Fi and recommend using it to bridge assets during staking. According to official reports, Meson Fi processed more than 200,000 transactions during the three-day Merlin Chain staking event, including about 2,000 cross-chain stakes of BTC assets spanning transfers from all major chains to Bitcoin. As Layer2s on Bitcoin continue to launch and introduce staking incentives, Meson Fi is well placed to attract cross-chain assets and see increased protocol revenue.

3.4 Comprehensive Comparison

Overall, Meson Fi and the other two bridges are different kinds of cross-chain bridge. Meson Fi is essentially a full-chain bridge that happens to work with many Bitcoin Layer2s to help them bridge assets from other networks. Sobit and Multibit, on the other hand, are bridges designed for Bitcoin-native assets, serving BRC-20 assets as well as other DeFi and stablecoin protocol assets on Bitcoin. Comparatively, Multibit offers a wider variety of BRC-20 assets, including dozens such as ORDI and SATS, while Sobit supports only three so far. In addition, Multibit has partnered with some Bitcoin stablecoin protocols to provide cross-chain services and staking revenue activities, offering a more comprehensive range of services.
Finally, Multibit also offers better cross-chain liquidity, providing services for five major chains including Ethereum, Solana, and Polygon.

4. Bitcoin Stablecoin

4.1 BitSmiley

BitSmiley is a series of protocols based on the Fintegra framework on the Bitcoin network, comprising a stablecoin protocol, a lending protocol, and a derivatives protocol. Users can mint bitUSD by over-collateralizing BTC in BitSmiley's stablecoin protocol; to withdraw their collateralized BTC, they send the bitUSD back to the Vault Wallet for destruction and pay a fee. When the value of the collateral falls below a certain threshold, BitSmiley enters an automatic liquidation process for the collateralized assets, with the liquidation price calculated as follows:

$$\text{Liquidation Price} = \frac{\text{bitUSD Generated} \times \text{Liquidation Ratio}}{\text{Quantity of Collateral}}$$

The exact liquidation price depends on the real-time value of the user's collateral and the amount of bitUSD minted, where the Liquidation Ratio is a fixed constant. To prevent price fluctuations from causing losses to the liquidated party, BitSmiley includes a Liquidation Penalty as compensation, and the longer the liquidation takes, the larger this compensation. Liquidation of assets is done by Dutch auction, in order to complete liquidation in the shortest possible time. Meanwhile, the protocol's surplus is stored in a designated account and auctioned at regular intervals in an English auction with BTC bids, which maximizes the value of the surplus assets. BitSmiley uses 90% of the surplus to subsidize on-chain collateral, while the remaining 10% goes to the team for day-to-day maintenance costs.
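The liquidation-price formula above works out as in this small example. The 150% liquidation ratio is an assumed illustrative value, not BitSmiley's actual parameter:

```python
# Worked example of the liquidation-price formula:
#   Liquidation Price = bitUSD Generated * Liquidation Ratio / Quantity of Collateral
# The 1.5 (150%) liquidation ratio is an assumed value for illustration.
def liquidation_price(bitusd_generated, liquidation_ratio, collateral_qty):
    return bitusd_generated * liquidation_ratio / collateral_qty

# Mint 30,000 bitUSD against 1 BTC at an assumed 150% liquidation ratio:
price = liquidation_price(30_000, 1.5, 1.0)
print(price)   # 45000.0 -> liquidation triggers if BTC falls below $45,000
```

Note how the formula scales: minting half the bitUSD against half the collateral yields the same liquidation price, since only the debt-to-collateral ratio matters.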
BitSmiley's lending protocol also introduces a number of innovations to the settlement mechanism for the Bitcoin network. Due to the 10-minute block time of the Bitcoin mainnet, it is not possible to introduce an oracle that tracks price fluctuations in real time as on Ethereum, so BitSmiley introduces a third-party insurance mechanism against a counterparty failing to deliver on time: both parties pay a certain amount of BTC to a third-party guarantor in advance to insure the transaction, and if one party fails to complete the transaction on time, the guarantor compensates the other party for the loss. Source: BitSmiley WhitePaper

BitSmiley offers a wide range of DeFi and stablecoin features, as well as a number of innovations in its settlement mechanism that better protect users and improve compatibility with the Bitcoin network. BitSmiley is an excellent stablecoin and DeFi model in terms of both settlement and collateralization mechanisms, and with the Bitcoin ecosystem still in its infancy, it should be able to capture a significant share of the stablecoin competition.

4.2 BitStable

BitStable is a Bitcoin stablecoin protocol based on over-collateralization, currently supporting collateralization of ORDI and MUBI assets from the Bitcoin mainnet as well as USDT from Ethereum. Depending on the volatility of the three assets, BitStable sets different overcollateralization ratios: 0% for USDT, 70% for ORDI, and 90% for MUBI. Source: Bitstable.finance

BitStable has also deployed corresponding smart contracts on Ethereum, and the DALL stablecoin obtained by staking can be exchanged 1:1 on Ethereum for USDT and USDC.
Meanwhile, BitStable has adopted a dual-token mechanism: alongside the stablecoin DALL, it uses BSSB as its governance token, through which holders can participate in community governance and share the network's revenue. The total supply of BSSB is 21 million, distributed in two ways. The first is staking DALL tokens on the Bitcoin network to earn BSSB, with the project distributing 50% of the BSSB supply through staking rewards. The second was the two rounds of LaunchPad on Bounce Finance at the end of last November, in which 30% and 20% of the BSSB supply were distributed through a Staking Auction and a Fixed Price Auction respectively. However, a hacking attack during the Staking Auction led to the destruction of more than 3 million BSSB tokens.

Source: coinmarketcap

The project team responded to the attack in a timely manner: the remaining 25% of the tokens unaffected by the hack were still issued, albeit at a higher cost, a measure that restored the community's confidence and ultimately prevented the price from crashing.

5. Bitcoin DeFi
5.1 Bounce Finance
Bounce Finance consists of a series of DeFi ecosystem projects, including BounceBit, BounceBox and Bounce Auction. It is worth noting that Bounce Finance did not originally serve the BTC ecosystem; it was an auction protocol built for Ethereum and BSC that shifted gears last May to take advantage of the Bitcoin development boom. BounceBit is an EVM-compatible POS sidechain for Bitcoin that selects validators from among those staking Bitcoin from the Bitcoin mainnet.
BounceBit also introduces a hybrid yield mechanism: users can stake BTC assets on BounceBit to earn on-chain yield through POS validation and the associated DeFi protocols, and can also securely move assets to and from CEXs by mirroring them on-chain and earning yield on the CEX. BounceBox is similar to an app store in Web2: a publisher can custom-design a dApp, i.e. a box, and distribute it through BounceBox, and users then choose their favorite boxes to participate in DeFi activities. Bounce Auction, the main part of the project on Ethereum, auctions various assets and offers a variety of auction formats, including fixed-price auctions, English auctions and Dutch auctions. Bounce's native token, Auction, was released in 2021 and has been used as the designated staking token for earning points in several rounds of token LaunchPads on Bounce Finance, which has fueled the recent rise in its price. More noteworthy is that BounceBit, the new staking chain Bounce built after pivoting to Bitcoin, is now open for on-chain staking points and testnet interaction points, and the project's X account clearly states that points can be exchanged for tokens and that token issuance will take place in May this year.

Source: Coinmarketcap

5.2 Orders Exchange
Orders Exchange is a DeFi project built entirely on the Bitcoin network, currently supporting limit and market orders for dozens of BRC20 assets, with a roadmap to introduce swaps between BRC20 assets in the future. The underlying technology of Orders Exchange consists of the Ordinals protocol, PSBT and the Nostr protocol. For more information on the Ordinals protocol, please refer to Kernel's previous research article, Kernel Ventures: Can RGB Replicate The Ordinals Hype. PSBT is a key feature of Bitcoin, where users sign a PSBT consisting of an Input and an Output via SIGHASH_SINGLE | ANYONECANPAY.
PSBT (Partially Signed Bitcoin Transaction) is a Bitcoin signing technology that lets a user sign a transaction consisting of an Input and an Output, with the Input containing the transaction the user will execute and the Output containing the prerequisite for that transaction. Another user must fulfill the Output's content and sign the whole transaction with SIGHASH_ALL on the network before the Input's content finally takes effect. In Orders Exchange's pending-order transactions, the user places an order by means of a PSBT signature and waits for a counterparty to complete the transaction.

Source: orders-exchange.gitbook.io

Nostr is an asset transfer protocol set up using NIP-100 that improves the interoperability of assets between different DEXs. All of Orders Exchange's 100 million tokens have been fully released. Although the whitepaper emphasizes that the tokens are only experimental and have no value, the project's elaborate airdrop plan still shows a clear intention to build a token economy. The initial token distribution had three main directions: 45% of the tokens were distributed to traders on Orders Exchange, 40% were airdropped to early users and promoters, and 10% went to developers. However, the 40% airdrop was not described in detail on either the official website or official tweets, and there was no discussion on X or in the Orders Discord community after the airdrop was officially announced, so the actual distribution of the airdrop remains questionable. Overall, Orders Exchange's order page is intuitive and clear, showing the prices of all buy and sell orders explicitly, which puts it among the higher-quality platforms offering BRC20 trading. The subsequent launch of a BRC20 token swap service on Orders Exchange should also help the protocol's value capture.
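The SIGHASH_SINGLE | ANYONECANPAY combination used for pending orders is a bitwise OR of standard Bitcoin sighash flags, which can be shown directly:

```python
# Standard Bitcoin sighash flag values (from the Bitcoin protocol).
SIGHASH_ALL = 0x01           # sign all inputs and all outputs
SIGHASH_NONE = 0x02          # sign all inputs, no outputs
SIGHASH_SINGLE = 0x03        # sign all inputs and only the matching output
SIGHASH_ANYONECANPAY = 0x80  # sign only this input; others may be added

# The maker of a pending order signs only their own input and its paired
# output, so a taker can later attach inputs/outputs to fill the order
# and finalize the whole transaction with SIGHASH_ALL.
maker_flag = SIGHASH_SINGLE | SIGHASH_ANYONECANPAY
assert maker_flag == 0x83
```

This is why a maker's order can sit open: the signature stays valid no matter what other inputs and outputs the taker adds around it.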
5.3 Alex
Alex is a DeFi protocol built on top of the Bitcoin sidechain Stacks, currently supporting Swap, Lending, Borrowing and several other transaction types. At the same time, Alex has introduced some innovations to the traditional DeFi trading model. The first is Swap: traditional swap pricing models can be divided into two types, x*y=k for ordinary pairs and x+y=k for stablecoins, but on Alex the trading rule for a pair can be set to a linear combination of the two, mixing x*y=k and x+y=k according to a chosen ratio. Alex has also introduced an OrderBook, a combined on-chain and off-chain order-matching model that allows users to quickly cancel pending orders at zero cost. Finally, Alex offers fixed-rate lending and has established a diversified collateral pool for its lending services instead of the traditional single collateral, consisting of both risky and risk-free assets, which reduces lending risk.

Source: Alexgo Docs

Unlike other DeFi projects in the BTC ecosystem, which entered the market after the Ordinals protocol had blown up the BTC ecosystem, Alex started working on BTC DeFi as early as the last bull market and has raised a seed round of funding. Alex also performs well across its different transaction types; even many DeFi projects on Ethereum do not have much of a competitive edge over Alex's trading experience. Alex's native token, ALEX, has a total supply of 1 billion, 60% of which has already been released; it can still be earned by staking or by acting as a liquidity provider on Alex, though returns will hardly reach early-launch levels. As one of the most well-established DeFi projects on Bitcoin, Alex's market cap is arguably not that high, with the Bitcoin ecosystem likely to be an important engine in this bull market.
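The mixed invariant can be sketched as a one-variable solve: with weight alpha the pool enforces alpha*(x*y) + (1-alpha)*(x+y) = k, which reduces to constant-product at alpha=1 and constant-sum at alpha=0. This is a simplified illustration of the idea, not Alex's actual pricing formula.

```python
def swap_out(x: float, y: float, dx: float, alpha: float) -> float:
    """Amount of y paid out when dx of x is added to a pool enforcing
    alpha*(x*y) + (1-alpha)*(x+y) = k. The invariant is linear in the
    new y reserve, so it can be solved directly."""
    k = alpha * x * y + (1 - alpha) * (x + y)
    x2 = x + dx
    # alpha*x2*y2 + (1-alpha)*(x2 + y2) = k  =>  solve for y2:
    y2 = (k - (1 - alpha) * x2) / (alpha * x2 + (1 - alpha))
    return y - y2

# alpha=1 behaves like x*y=k (ordinary pairs);
# alpha=0 behaves like x+y=k (1:1 swaps for stable pairs).
```

Intermediate alpha values give a curve between the two extremes, trading off slippage against depth around the peg.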
In addition, Stacks, the sidechain on which Alex is deployed, will execute the important Nakamoto upgrade, after which Stacks will be greatly optimized in both transaction speed and transaction cost, with its security backed by the Bitcoin mainnet, making it a true Layer2. This upgrade will also greatly reduce Alex's operating costs and improve its trading experience and security, and the Stacks chain will provide Alex with a larger market and more trading demand, bringing more revenue to the protocol.

6. Conclusion
The Ordinals protocol changed the Bitcoin network's inability to implement complex logic and issue assets, and various asset protocols improving upon the idea of Ordinals have since been introduced on the Bitcoin network one after another. However, the application layer is not prepared to provide services, and amid the surge of inscription assets the functions Bitcoin applications can realize appear outdated, so developing applications on the Bitcoin network has become a hotspot that all parties are scrambling to seize. Layer2 has the highest priority among these applications, because no matter how developed other DeFi protocols are, if transaction speed is not improved and the transaction costs of the Bitcoin mainnet are not reduced, it will be difficult to release liquidity, and the chain will be flooded with new transactions made for speculative purposes. After improving the speed and cost of transactions on the Bitcoin mainnet, the next step is to improve the experience and diversity of transactions: various DeFi and stablecoin protocols provide traders with a wide range of financial derivatives. Finally, there are cross-chain protocols that allow assets on the Bitcoin mainnet to flow to and from other networks.
Cross-chain protocols on Bitcoin are relatively mature, and not exclusively as a result of recent Bitcoin mainnet development, since many multi-chain and mainstream cross-chain bridges were already designed to provide cross-chain services to the Bitcoin network. As for dApps like SocialFi and GameFi, the high gas and latency constraints of the Bitcoin mainnet mean no excellent projects have appeared so far, but as Layer2 networks speed up and scale, such projects are likely to emerge on Bitcoin Layer2. It is certain that the Bitcoin ecosystem will be at least one of the hot topics of this bull market. With plenty of enthusiasm and a huge market, and although the various ecosystems on Bitcoin are still in the early stages of development, we are likely to see excellent projects emerge from various verticals in this bull market.

Source: Kernel Ventures

Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto such as Account Abstraction, Data Availability, Scalability, etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
- BEVM White Paper: https://github.com/btclayer2/BEVM-white-paper
- What is a Bitcoin Merkelized Abstract Syntax Tree: https://www.btcstudy.org/2021/09/07/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast/#MAST-%E7%9A%84%E4%B8%80%E4%B8%AA%E4%BE%8B%E5%AD%90
- BitVM White Paper: https://bitvm.org/bitvm.pdf
- Bitcoin Scripting Principles: https://happypeter.github.io/binfo/bitcoin-scripts
- SatoshiVM Official Website: https://www.satoshivm.io/
- Multibit's Docs: https://docs.multibit.exchange/multibit/protocol/cross-chain-process
- Alex White Paper: https://docs.alexgo.io/
- Merlin Technical Docs: https://docs.merlinchain.io/merlin-docs/
- Sobit WhitePaper: https://sobit.gitbook.io/sobit/

Kernel Ventures: The Upsurge of Bitcoin Ecosystem — A Panoramic View of its Application Layer

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
- Along with the rise of inscriptions, the existing application layer of the Bitcoin network is unable to sustain market activity and is the main focus of current Bitcoin ecosystem development.
- There are three mainstream Layer2 solutions for Bitcoin: the Lightning network, sidechains, and Rollups.
- The Lightning network enables peer-to-peer payments by establishing an off-chain payment channel, which is settled on the main network after the channel is closed.
- Sidechains lock BTC assets on the mainnet through specific addresses or multisig addresses while minting equivalent BTC assets on the sidechain. Merlin Chain supports multiple types of inscription assets across the chain, backed by the Bitmap ecosystem, and its TVL has reached nearly 4 billion dollars.
- BTC Rollups are based on Taproot circuits, which can simulate smart contracts on-chain, while performing packing and computation outside the Bitcoin mainnet. The B2 Network is at the forefront of this implementation, with over $200 million of on-chain TVL.
- Cross-chain bridges built specifically for Bitcoin aren't very common; more common are multi-chain and full-chain bridges that integrate with mainstream blockchains, one of which is Meson.Fi, which has established relationships with a number of Bitcoin Layer2s.
- Stablecoin protocols on the Bitcoin network are mostly implemented through over-collateralization and support other DeFi protocols to bring users more yield.
- There are various DeFi projects in the Bitcoin ecosystem: some migrated from other chains, some were built on the native Bitcoin network during the current development boom, and some were built during the last bull market and deployed on a sidechain. Overall, Alex provides the widest variety of trading products and the smoothest trading experience, but Orders Exchange has a higher growth ceiling.
- Bitcoin will be an important narrative in this cycle's bull market.
It is necessary to pay close attention to top tier projects in each vertical of the Bitcoin ecosystem.
1. Background
With the overflow of inscription assets due to the Ordinals protocol, the Bitcoin network, once characterized by its lack of smart contracts, inefficiency for development, and dearth of infrastructure and scaling capabilities, is experiencing an on-chain data boom (refer to Kernel's previous research article, Can RGB Replicate The Ordinals Hype, for more details). Similar to what happened when the Ethereum network first came into existence, formatted text, images, and even videos are being crammed into 4MB Tapscript scripts that will never be executed. While this surge in on-chain activity contributed to the growth of the Bitcoin ecosystem and its infrastructure, it also created a surge in transaction volume and a huge storage burden on the network. In addition, with such a wide variety of inscriptions, simple transfers can no longer satisfy users' transaction needs, and users look forward to a wide range of derivatives trading services on Bitcoin. Hence the development of the Bitcoin application layer has become urgent.

Source: CryptoQuant
2. Bitcoin Layer2
Unlike Layer2 on Ethereum, which is dominated by Rollups, there is no settled Layer2 solution for Bitcoin. Bitcoin's own scripting language cannot express smart contracts, and publishing smart contracts must rely on third-party protocols, so applying an Ethereum-style solution to Bitcoin cannot guarantee the same level of security as an Ethereum Rollup. As a result, a variety of Layer2 solutions coexist for Bitcoin, including the Lightning network, sidechains, and Rollups based on TapScript.
2.1 Lightning network
The Lightning network is the earliest Bitcoin Layer2 solution, first described in the 2015 whitepaper by Joseph Poon and Thaddeus Dryja. The Lightning network specification, known as BOLT (Basis of Lightning Technology), was released in 2017 and has undergone upgrades and improvements since. The Lightning network allows users to make peer-to-peer, off-chain payment-channel transfers of any size and number without on-chain fees until the channel is closed, at which point all previous transactions are settled with a single on-chain transaction. Thanks to its off-chain channels, the Lightning network can potentially achieve up to 10 million TPS (transactions per second). However, off-chain channels carry a risk of centralization; to transact between two addresses, an off-chain channel must be established either directly or through a third party, and both parties must be online during the transaction for secure execution.
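A minimal sketch of the channel lifecycle: balances are updated off-chain per payment, and only the final state is settled on-chain when the channel closes. This is an illustration of the concept, not actual Lightning code.

```python
class PaymentChannel:
    """Toy two-party payment channel: off-chain updates, one settlement."""

    def __init__(self, alice_deposit: int, bob_deposit: int):
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.offchain_payments = 0

    def pay(self, sender: str, receiver: str, amount: int):
        # Each payment is just a mutually signed balance update;
        # nothing is broadcast to the Bitcoin network.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.offchain_payments += 1

    def close(self) -> dict:
        # A single on-chain transaction pays out the latest balances.
        return dict(self.balances)

ch = PaymentChannel(50_000, 50_000)   # amounts in satoshis
ch.pay("alice", "bob", 10_000)
ch.pay("bob", "alice", 2_500)
settlement = ch.close()               # {'alice': 42500, 'bob': 57500}
```

However many payments flow through the channel, the chain only ever sees the funding and the closing transaction, which is where the TPS headroom comes from.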

Source: Kernel Ventures
2.2 Side Chain
The sidechain solution on Bitcoin is similar to that on Ethereum: a new token pegged 1:1 to Bitcoin is issued on a new chain. This new chain is not limited by the transaction speed and development bottlenecks of the Bitcoin network, allowing Bitcoin-pegged tokens to be transferred at a much faster rate and lower cost. The sidechain solution inherits the asset value of the mainnet, but not its security, since all transactions are recorded and confirmed on the sidechain.
2.2.1 Stacks
Stacks 2.0 was released in 2021. Users can lock BTC on the Bitcoin mainnet and receive the equivalent value of SBTC assets on Stacks, but transactions on the sidechain require paying STX, Stacks' native token, as gas. Unlike Ethereum, the Bitcoin network does not allow a smart contract address to manage the locked BTC, so locked BTC is sent to a specific multisig address. The release process is relatively simple: since the Stacks network supports smart contract development in the Clarity language, a request to the Burn-Unlock contract on Stacks destroys the SBTC and sends the locked BTC back to the original address. Block production on the Stacks network uses the POX (Proof of Transfer) consensus mechanism. Miners send BTC bids for block opportunities, and the higher the bid, the higher the miner's weight; the winner is then selected by a verifiable random function to package blocks on the Stacks network and receives a reward in STX. At the same time, the bid BTC is distributed to holders of STX tokens as a reward.
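The bid-weighted selection step can be sketched as follows; a seeded pseudo-random choice stands in for the verifiable random function, which is an illustrative simplification of POX, not Stacks' implementation.

```python
import random

def select_block_producer(bids: dict, seed: int) -> str:
    """Pick a miner with probability proportional to its BTC bid.
    A seeded PRNG stands in for the verifiable random function."""
    rng = random.Random(seed)
    miners = list(bids)
    weights = [bids[m] for m in miners]
    return rng.choices(miners, weights=weights, k=1)[0]

bids = {"miner_a": 0.05, "miner_b": 0.02, "miner_c": 0.01}  # BTC bids
# Over many rounds, miner_a should win roughly 5/8 of the blocks.
wins = [select_block_producer(bids, seed) for seed in range(1000)]
```

A higher bid raises a miner's expected share of blocks without guaranteeing any single block, which is what keeps the auction competitive.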

Source: Kernel Ventures
In addition, Stacks is expected to ship its Nakamoto upgrade in April, which includes optimizations to its development language, Clarity, to lower barriers for developers. Secondly, Stacks is upgrading the network's security level: transactions on Stacks will be settled on the Bitcoin mainnet, raising Stacks from a sidechain to a Layer2 with security on par with Bitcoin's mainnet. Finally, Stacks has also made significant improvements to its block rate, reaching 5 seconds per block in the testing phase (compared to 10-30 minutes per block currently). If the Nakamoto upgrade completes successfully, Stacks can narrow, perhaps even eliminate, the gap with Ethereum Layer2s, which should attract a lot of attention and stimulate the development of its ecosystem.

2.2.2 RSK
RSK (Rootstock) is a Bitcoin sidechain without a native token, and transaction fees on the sidechain are currently paid in the Bitcoin-pegged RBTC. Users can exchange mainnet BTC for RBTC at a 1:1 ratio on RSK through the built-in PowPeg protocol. RSK is also a POW chain, but with its merged mining mechanism the infrastructure and setup of Bitcoin miners can be applied directly to RSK mining, reducing the cost for Bitcoin miners to participate. At present, transactions on RSK are three times faster than on the mainnet and cost 1/20 as much.

Source: RSK White Paper

2.2.3 BEVM
BEVM is an EVM-compatible POS sidechain that has not yet issued its own native token. It uses the Schnorr multi-signature algorithm on the Bitcoin network to store incoming assets in a multisig script address controlled by 1,000 addresses, corresponding to the 1,000 POS validators on BEVM. Automated control of assets is achieved by writing MAST (Merkelized Abstract Syntax Tree) scripts in the TapScript area: the program is described as a number of independent chunks, each corresponding to a portion of the code logic, so there is no need to store a large amount of logic in the Script, only the hash of each chunk, which greatly reduces the amount of code stored on the blockchain. When a user transfers BTC to BEVM, that BTC is locked by the script program and can only be unlocked and sent back to the corresponding address if signed by more than 2/3 of the validators. Because BEVM is EVM-compatible, dApps originally built on Ethereum can migrate cost-efficiently, trading against the above BTC-pegged assets while using them for gas.
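The MAST idea (commit on-chain only to hashes of the script chunks, then reveal a single branch at spend time) can be sketched like this; the chunk contents and hash choice are illustrative, not BEVM's actual parameters.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def mast_root(chunks: list) -> bytes:
    """Merkle root over the hashes of independent script chunks."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical branches of one locking program:
chunks = [b"branch: refund after timeout",
          b"branch: 2/3 multisig unlock",
          b"branch: emergency recovery"]
root = mast_root(chunks)
# Only `root` is committed on-chain; a spender later reveals one chunk
# plus the sibling hashes needed to recompute `root`, keeping the other
# branches private and off-chain.
```

Storing one 32-byte root instead of every branch is exactly the storage saving the paragraph describes.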

Source: BTCStudy
2.2.4 Merlin Chain
Merlin Chain is an EVM-compatible Bitcoin sidechain that allows users to connect to the network directly through a Bitcoin address via Particle Network, with a unique Ethereum address generated for it, or to connect directly to an RPC node with an Ethereum account. Merlin Chain currently supports transferring BTC, Bitmap, BRC-420 and BRC-20 assets across the chain. The BRC-420 protocol was developed, like Merlin Chain itself, by the Bitmap asset community based on recursive inscriptions, and the community has also put forward projects such as RCSV's recursive inscription matrix and the Bitmap Game metaverse platform based on recursive inscriptions.

Source: Merlin Docs
Merlin Chain went live on February 5th, followed by a round of IDOs and staking rewards allocating 21% of the governance token MERL. The direct and massive airdrop attracted a large number of participants: Merlin Chain's TVL has since surpassed $3 billion, and Bitcoin's on-chain TVL has surpassed Polygon's, reaching #6 among all blockchains.

Source: DeFiLlama
During People's Launchpad's IDO, users could stake Ally or more than 0.00025 BTC to earn bonus points redeemable for MERL, with a cumulative bonus stake limit of 0.02 BTC, which corresponded to 460 MERL tokens. This round's allocation was relatively small, accounting for only 1% of the total MERL supply; however, at today's OTC price of $2.90 per MERL, it has produced a return of over 100%. In the second staking incentive round, Merlin allocated 20% of its total tokens, allowing users to stake BTC, Bitmap, USDT, USDC, and some BRC-20 and BRC-420 assets on Merlin Chain through Merlin's Seal. Hourly snapshots are taken of the USD value of a user's assets on Merlin, and the daily average multiplied by 10,000 is the number of points the user receives. The second round of staking follows Blast's team model, where users can choose to be a leader or a team member; leaders receive an invitation code to share with their team members.
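The points rule above can be written down directly (a paraphrase of the stated rule, not Merlin's actual code):

```python
def daily_points(hourly_usd_snapshots: list) -> float:
    """Daily points = average of the hourly USD snapshots * 10,000."""
    avg = sum(hourly_usd_snapshots) / len(hourly_usd_snapshots)
    return avg * 10_000

# A day where the staked assets hover around $1,500 in value
# (three snapshots shown instead of the full 24 for brevity):
snapshots = [1_400, 1_500, 1_600]
points = daily_points(snapshots)   # 1,500 average -> 15,000,000 points
```

Averaging hourly snapshots rather than using a single end-of-day reading makes the score harder to game with last-minute deposits.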
Merlin is relatively mature in the current Bitcoin Layer2 ecosystem, liberating the liquidity of Layer1 assets and allowing Bitcoin transfers on Layer2 at a lower cost. The Bitmap ecosystem behind Merlin is very large and its technology is relatively sound, so it is likely to develop well in the long run. Staking on Merlin has a high rate of return: besides the expected MERL rewards, there are opportunities to receive corresponding Meme or other tokens airdropped by projects, such as the officially airdropped Voya tokens. Staking more than 0.01 BTC earned an airdrop of 90 Voya tokens, whose price has been rising since the launch of the program, peaking at 514% of the issuance price. At Voya's current price of US$5.89, the yield is as high as 106% when calculated against an average Bitcoin price of US$50,000 at the time of staking.

Source: CoinGecko
2.3 Rollup
2.3.1 BitVM
BitVM is an Optimistic Rollup-style Layer2 design for Bitcoin. As in Optimistic Rollup on Ethereum, traders first send transactions to Layer2, where they are computed and packed, after which the results are sent back to Layer1 for verification, with a window during which a verifier can challenge the prover's claim. However, Bitcoin does not support native smart contracts, so the implementation is not as simple as Ethereum's Optimistic Rollup. The whole process involves the Bit Value Commitment, Logic Gate Commitment, and Binary Circuit Commitment, summarized below as BVC, LGC and BCC.
BVC (Bit Value Commitment): a BVC is essentially a logic-level result with only two possibilities, 0 and 1, similar to a Bool variable in other programming languages. Bitcoin Script is a stack-based language with no bool type, so BitVM emulates one with a bytecode combination:

<Input Preimage of HASH>
OP_IF
  OP_HASH160       // hash the user's input
  <HASH1>
  OP_EQUALVERIFY   // output 1 if Hash(input) == HASH1
  <1>
OP_ELSE
  OP_HASH160       // hash the user's input
  <HASH2>
  OP_EQUALVERIFY   // output 0 if Hash(input) == HASH2
  <0>
OP_ENDIF

In a BVC, the user first submits an input; the Bitcoin network hashes it and unlocks the script only if the hash equals HASH1 or HASH2, with HASH1 yielding an output of 1 and HASH2 yielding an output of 0. In the following sections we summarize this entire snippet as an OP_BITCOMMITMENT opcode to simplify the description.
LGC (Logic Gate Commitment): every function in a computer is ultimately a combination of boolean gates, which can all be reduced to NAND gates. In other words, if we can simulate NAND gates on the Bitcoin network through bytecode, we can in principle realize any function. Although Bitcoin has no NAND opcode, it does have an AND gate, OP_BOOLAND, and a NOT gate, OP_NOT, which can be composed to reproduce NAND. For the two output levels obtained from OP_BITCOMMITMENT, we can thus form a NAND output circuit with the OP_BOOLAND and OP_NOT opcodes.
BCC (Binary Circuit Commitment): based on LGC circuits, we can construct specific gate relationships between inputs and outputs. In a BCC circuit, each input comes from the corresponding hash preimage in the TapScript script, and each Taproot address corresponds to a different gate, called a TapLeaf; the many TapLeafs make up a Taptree, which serves as the input to the BCC circuit.
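The LGC reduction can be mirrored in Python: NAND built from AND and NOT (as with OP_BOOLAND and OP_NOT), and then other gates rebuilt from NAND alone, showing its universality. This illustrates the logic only, not BitVM's actual bytecode.

```python
def op_booland(a: int, b: int) -> int:
    return 1 if (a and b) else 0          # mirrors Bitcoin's OP_BOOLAND

def op_not(a: int) -> int:
    return 0 if a else 1                  # mirrors Bitcoin's OP_NOT

def nand(a: int, b: int) -> int:
    return op_not(op_booland(a, b))       # NAND = NOT(AND)

# Universality: AND, OR and XOR rebuilt from NAND alone.
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))
```

Since any boolean circuit decomposes into NAND gates, chaining such commitments is what lets BitVM express arbitrary computation on Bitcoin.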

Source: BitVM White Paper
Ideally, a BitVM prover compiles and computes the circuits off-chain and returns the results to the Bitcoin network for execution. However, since the off-chain process is not automated by a smart contract, to prevent provers from committing fraudulent transactions BitVM lets verifiers on the network raise a challenge. The verifier first reproduces the output of a certain TapLeaf, then combines it with the other TapLeaf results provided by the prover as inputs to drive the circuit. If the output is false, the challenge succeeds, meaning the prover provided a fraudulent message, and vice versa. To accomplish this process, however, the Taproot circuit must be shared between the challenger and the prover in advance, and only interaction between a single prover and a single verifier can be realized.
2.3.2 SatoshiVM
SatoshiVM is an EVM-compatible zkRollup Layer2 solution for Bitcoin. Smart contracts on SatoshiVM are implemented the same way as on BitVM, using Taproot circuits to simulate complex functions. SatoshiVM is divided into three layers: the Settlement Layer, the Sequencing Layer and the Proving Layer. The Settlement Layer, i.e. the Bitcoin mainnet, provides data availability, stores the Merkle roots and zero-knowledge proofs of transactions, and settles transactions by verifying the correctness of the Layer2-packaged transactions through the Taproot circuit. The Sequencing Layer packages and processes transactions and returns the results to the mainnet along with the zero-knowledge proofs, while the Proving Layer generates zero-knowledge proofs for the tasks received from the Sequencing Layer and passes them back to it.

Source: SatoshiVM Docs
2.3.3 BL2
BL2 is a zkRollup Bitcoin Layer2 based on the VM Common Protocol (the official preconfigured VM protocol compatible with all major VMs). Similar to other zkRollups, its Rollup Layer mainly packs transactions and generates the corresponding zero-knowledge proofs through a zkEVM. BL2's DA layer introduces Celestia to store bulk transaction data and uses the BL2 network only to store the zero-knowledge proofs, finally returning the proof verification and a small amount of validation data, including BVC, to the mainnet for settlement.

Source: BL2.io
BL2's official X account is updated daily, and the project has announced its development plan and token program, which will allocate 20% of its tokens to OG Mining, as well as the launch of a testnet in the near future. At this stage the project is relatively new compared to other Bitcoin Layer2s and still early, with only 33,000 followers on X. It is worth watching as it incorporates recent concepts such as Celestia alongside Bitcoin Layer2. However, there are no actual technical details on the website, only a demo of what to expect, and no whitepaper for the project. At the same time its goals are ambitious, such as account abstraction on Bitcoin and compatibility with the VM protocols of mainstream virtual machines; whether the team can achieve them is still questionable, so we take a more reserved view.

Source: BL2's X Account
2.3.4 B2 Network
The B2 Network is a zkRollup Layer2 with Bitcoin as its settlement and DA layer, structured into a Rollup Layer and a DA Layer. User transactions are first submitted and processed in the Rollup Layer, which uses a zkEVM scheme to execute them and output the associated proofs, after which user state is stored in the zkRollup Layer. Batched transactions and the generated zero-knowledge proofs are forwarded to the DA Layer for storage and validation. The DA Layer can be subdivided into three parts: decentralized storage nodes, B2 Nodes, and the Bitcoin mainnet. A decentralized storage node receives the Rollup data, periodically generates zero-knowledge proofs of space-time based on it, and sends them to a B2 Node, which validates the data off-chain and then records the transaction data and corresponding zero-knowledge proofs in TapScript on the Bitcoin mainnet once validation is complete. The B2 Node is responsible for confirming the authenticity of the ZKP and finalizing settlement.

Source: B2 Network White Paper
B2 Network has a strong presence among major BTC Layer2 programs, with 300,000 followers on X, surpassing BEVM's 140,000 and the 166,000 of SatoshiVM, which is also a zkRollup Layer2. The project has received seed funding from OKX and HashKey, attracting a lot of attention, and its on-chain TVL has exceeded $600 million.

Source: bsquared.network
B2 Network has launched B2 Buzz, and an invitation link is required to use it. B2 Network uses the same referral model as Blast, which creates a strong two-way benefit binding newcomers and existing members and gives both sufficient motivation to promote the project. After completing simple tasks such as following the official X account, users can enter the staking interface, which supports assets on four chains: Bitcoin, Ethereum, BSC and Polygon. On the Bitcoin network, besides BTC, the inscriptions ORDI and SATS can also be staked. Staked BTC can be transferred directly, whereas staking an inscription requires inscribing and transferring; note that since there are no smart contracts on the Bitcoin network, the assets are essentially multisig-locked to a specific BTC address. Assets staked on the B2 network will not be released until at least April this year, and the points gained from staking during this period can be exchanged for mining components used for virtual mining, of which the BASIC miner requires only 10 components to activate, while the ADVANCED miner requires more than 80.
The team has announced part of its token plan: 5% of the total supply will reward virtual mining, and another 5% will be allocated to ecosystem projects on B2 Network for airdrops. At a time when projects are competing on tokenomics fairness, allocating only 10% of the total supply will hardly mobilize the community's full enthusiasm, so B2 Network is expected to introduce further staking incentives or LaunchPad plans later on.
2.4 Comprehensive Comparison
Among the three types of BTC Layer2, the Lightning Network has the fastest transactions and the lowest costs, and finds most of its applications in real-time payments and offline purchases. However, for building an application ecosystem on Bitcoin, the Lightning Network can hardly support DeFi or cross-chain protocols in terms of either stability or security, so competition in the application layer is mainly between the sidechain and Rollup approaches. Sidechain solutions do not need to confirm transactions on the main network and benefit from more mature technology and lower implementation difficulty, which is why they currently hold the highest TVL of the three. Because the Bitcoin mainnet lacks smart contracts, confirmation schemes for Rollup data are still under development, and practical deployment may take some time.

Source: Kernel Ventures
3. Bitcoin Cross-chain Bridge
3.1 Multibit
Multibit is a cross-chain bridge designed specifically for BRC20 assets on the Bitcoin network, and currently supports migrating BRC20 assets to Ethereum, BSC, Solana, and Polygon. In the bridging process, users first send their assets to a BRC20 address designated by Multibit; once Multibit confirms the transfer on the main network, users gain the right to mint the corresponding assets on the other chain, paying gas there to complete the process. Among cross-chain bridges, Multibit offers the best interoperability and the largest selection of BRC20 assets, supporting more than ten, including ORDI. In addition, Multibit is actively expanding beyond BRC20, and currently supports Farming and bridging of the governance token and stablecoin of BitStable, the native stablecoin protocol on BTC. Multibit is at the forefront of cross-chain bridges for BTC-derived assets.

The Cross Chain Assets that Multibit supports, Source: Multibit's X Account
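The lock-and-mint bookkeeping behind this kind of bridge can be sketched as follows. This is a minimal illustration, not Multibit's code: the invariant it enforces is simply that assets minted on destination chains never exceed what is locked at the designated Bitcoin address.

```python
class BridgeLedger:
    """Toy lock-and-mint bookkeeping for a BRC20 bridge: assets locked at a
    designated address on Bitcoin back 1:1 mints on destination chains."""
    def __init__(self):
        self.locked = {}   # (user, token) -> amount locked on Bitcoin
        self.minted = {}   # (chain, user, token) -> amount minted elsewhere

    def lock(self, user: str, token: str, amount: int):
        key = (user, token)
        self.locked[key] = self.locked.get(key, 0) + amount

    def mint(self, chain: str, user: str, token: str, amount: int):
        key = (user, token)
        # total already minted for this user/token across all chains
        total = sum(v for (c, u, t), v in self.minted.items() if (u, t) == key)
        if total + amount > self.locked.get(key, 0):
            raise ValueError("cannot mint more than is locked on Bitcoin")
        mkey = (chain, user, token)
        self.minted[mkey] = self.minted.get(mkey, 0) + amount

bridge = BridgeLedger()
bridge.lock("alice", "ORDI", 100)
bridge.mint("ethereum", "alice", "ORDI", 60)
bridge.mint("solana", "alice", "ORDI", 40)
# any further mint would exceed the locked balance and raise ValueError
```

In the real protocol the "lock" is a confirmed transfer on the Bitcoin mainnet and the "mint" costs gas on the destination chain, as described above.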
3.2 Sobit
Sobit is a cross-chain protocol between the Solana and Bitcoin networks, whose bridged assets are mainly BRC20 tokens and Sobit's native token. Users collateralize BRC20 assets on the Bitcoin mainnet to a designated Sobit address, and once Sobit's validation network verifies the transfer, they can mint the mapped assets at the designated address on Solana. At the heart of Sobit's validation network is a validator-based framework that requires multiple trusted validators to approve cross-chain transactions, providing additional security against unauthorized transfers. Sobit's native token, Sobb, can be used to pay cross-chain fees on the Sobit bridge; its total supply is 1 billion, 74% of which was distributed via a Fair Launch. Unlike other DeFi and cross-chain tokens on Bitcoin that have trended upward recently, Sobb entered a downward cycle after a brief uptrend, dropping more than 90 percent and gaining no significant momentum from BTC's rise, which may stem from Sobit's chosen vertical. Sobit's market positioning overlaps heavily with Multibit's, but at this stage Sobit only supports bridging to Solana, with just three bridgeable BRC20 assets. Compared with Multibit, which also bridges BRC20 assets, Sobit lags far behind in ecosystem and asset coverage, and can hardly gain an advantage in the competition.

The Price of Sobb, Source: Coinmarketcap
3.3 Meson Fi
Meson Fi is a cross-chain bridge based on HTLCs (Hash Time Locked Contracts). It supports cross-chain interactions among 17 mainstream chains including BTC, ETH, and SOL. In the cross-chain process, the user signs the transaction off-chain, then submits it to the Meson Contract for confirmation and locks the corresponding assets on the original chain. After confirming the message, the Meson Contract broadcasts it to the target chain through a Relayer. There are three Relayer modes: P2P node, centralized node, and no node. P2P nodes offer better security, centralized nodes offer higher efficiency and availability, and the no-node mode requires the user to hold assets on both chains; users can choose according to their situation. After verifying the transaction through the Meson Contract's postSwap, the LP on the target chain also calls the Lock method on the Meson Contract to lock the corresponding assets, then exposes its address to Meson Fi. What follows is the HTLC process: the user specifies the LP's address and creates a hash lock on the original chain, reveals the hash lock's preimage on the target chain to retrieve the assets there, and the LP then uses that preimage to retrieve the user-locked assets on the original chain.

Source: Kernel Ventures
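The HTLC mechanics above can be demonstrated concretely. The sketch below is a toy model (plain Python objects, not Bitcoin script): the same hashlock is mirrored on both chains, so revealing the preimage on one chain necessarily hands the counterparty the key to claim on the other.

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

class HTLC:
    """Toy hash time locked contract: funds unlock against the hash preimage
    before a deadline; otherwise they refund to the sender."""
    def __init__(self, sender, receiver, amount, hashlock, timeout_block):
        self.sender, self.receiver = sender, receiver
        self.amount, self.hashlock = amount, hashlock
        self.timeout_block = timeout_block
        self.claimed = False

    def claim(self, preimage: bytes, current_block: int):
        if self.claimed or current_block >= self.timeout_block:
            return None                       # expired or already spent
        if sha256(preimage) != self.hashlock:
            return None                       # wrong preimage
        self.claimed = True
        return (self.receiver, self.amount)

secret = b"cross-chain-secret"
lock = sha256(secret)

# User locks assets for the LP on the source chain; the LP mirrors the same
# hashlock on the target chain with a shorter timeout.
src = HTLC("user", "lp", 1.0, lock, timeout_block=700_000)
dst = HTLC("lp", "user", 1.0, lock, timeout_block=699_900)

# The user claims on the target chain, exposing the preimage...
assert dst.claim(secret, current_block=699_800) == ("user", 1.0)
# ...which the LP reuses to claim the user's locked assets on the source chain.
assert src.claim(secret, current_block=699_850) == ("lp", 1.0)
```

The shorter timeout on the target-chain contract is what protects the LP: the user must reveal the secret before the LP's own lock could expire.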
Meson Fi is not a cross-chain bridge designed specifically for Bitcoin assets but a full-chain bridge like LayerZero. However, major BTC Layer2s such as B2 Network, Merlin Chain, and BEVM have all established partnerships with Meson Fi and recommend using it to bridge assets during their staking processes. According to official reports, Meson Fi processed more than 200,000 transactions during the three-day Merlin Chain staking event, along with about 2,000 BTC in cross-chain staking, covering transactions from virtually all major chains to Bitcoin. As Bitcoin Layer2s continue to launch and introduce staking incentives, Meson Fi is likely to attract more assets for bridging and see its protocol revenue grow.
3.4 Comprehensive Comparison
Overall, Meson Fi belongs to a different category from the other two bridges. It is essentially a full-chain bridge that happens to work with many Bitcoin Layer2s to help them bridge assets from other networks. Sobit and Multibit, by contrast, are bridges designed for Bitcoin-native assets, serving BRC20 assets as well as other DeFi and stablecoin protocol assets on Bitcoin. Comparatively, Multibit offers a wider variety of BRC20 assets, including ORDI and SATS, while Sobit supports only three so far. In addition, Multibit has partnered with some Bitcoin stablecoin protocols to provide cross-chain services and staking yield activities, offering a more comprehensive range of services. Finally, Multibit also offers better cross-chain liquidity, serving five major chains including Ethereum, Solana, and Polygon.
4. Bitcoin Stablecoin
4.1 BitSmiley
BitSmiley is a suite of protocols built on the Fintegra framework on the Bitcoin network, comprising a stablecoin protocol, a lending protocol, and a derivatives protocol. Users can mint bitUSD by over-collateralizing BTC in BitSmiley's stablecoin protocol; to withdraw the collateralized BTC, they must send the bitUSD back to the Vault Wallet for destruction and pay a fee. When the collateral's value falls below a certain threshold, BitSmiley enters an automatic liquidation process for the collateralized assets, with the liquidation price calculated as follows:
$$\text{Liquidation Price} = \frac{\text{bitUSD Generated} \times \text{Liquidation Ratio}}{\text{Quantity of Collateral}}$$
The exact liquidation price depends on the real-time value of the user's collateral and the amount of bitUSD minted, where the Liquidation Ratio is a fixed constant. To prevent price fluctuations from causing losses to the liquidated party, BitSmiley includes a Liquidation Penalty as compensation; the longer the liquidation takes, the larger this compensation becomes. Assets are liquidated via Dutch auction, so that liquidation completes in the shortest possible time. Meanwhile, the protocol's surplus is stored in a designated account and auctioned at regular intervals in an English auction with BTC bidding, which maximizes the value extracted from the surplus assets. BitSmiley will use 90% of the surplus to subsidize on-chain collateral providers, while the remaining 10% goes to the team for day-to-day maintenance. BitSmiley's lending protocol also introduces innovations to the settlement mechanism for the Bitcoin network. Because the Bitcoin mainnet produces a block only every 10 minutes, it cannot conveniently introduce an oracle to track price fluctuations in real time the way Ethereum can. BitSmiley therefore introduces a third-party insurance mechanism against a counterparty failing to deliver on time: both parties may pay a certain amount of BTC to a third party in advance to insure the transaction, and if one party fails to complete the transaction on time, the guarantor compensates the other party for the loss.

Source: BitSmiley White Paper
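As a worked example of the liquidation formula above, the function below computes the price at which a vault becomes eligible for liquidation. The numbers are hypothetical, chosen only for illustration.

```python
def liquidation_price(bitusd_generated: float,
                      liquidation_ratio: float,
                      collateral_qty: float) -> float:
    """Liquidation Price = bitUSD Generated * Liquidation Ratio
                           / Quantity of Collateral."""
    return bitusd_generated * liquidation_ratio / collateral_qty

# Hypothetical vault: 1 BTC of collateral, 30,000 bitUSD minted,
# and an assumed liquidation ratio of 1.5.
price = liquidation_price(30_000, 1.5, 1)
print(price)  # 45000.0 -- the vault is liquidated if BTC falls below this
```

Note that minting less bitUSD or posting more collateral lowers the liquidation price, exactly as the formula implies.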
BitSmiley offers a wide range of DeFi and stablecoin features, as well as a number of innovations in its settlement mechanism to better protect users and improve its compatibility with the Bitcoin network. BitSmiley is an excellent stablecoin and DeFi model in terms of both settlement and collateralization mechanisms, and with the Bitcoin ecosystem still in its infancy, BitSmiley should be able to capture a significant share of the stablecoin competition.
4.2 BitStable
BitStable is a Bitcoin stablecoin protocol based on over-collateralization, and currently supports collateralizing ORDI and MUBI assets from the Bitcoin mainnet as well as USDT from Ethereum. Based on the volatility of the three assets, BitStable sets different over-collateralization ratios: 0% for USDT, 70% for ORDI, and 90% for MUBI.

Source: Bitstable.finance
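One hedged reading of these percentages is as extra collateral required on top of the stablecoin value minted, so that $100 of ORDI mints roughly $58.8 of DALL. The sketch below uses that assumption; BitStable's actual formula may differ.

```python
# Assumed reading: the percentage is extra collateral required on top of the
# minted value, so mintable DALL = collateral value / (1 + ratio).
OVERCOLLATERAL = {"USDT": 0.0, "ORDI": 0.70, "MUBI": 0.90}

def mintable_dall(token: str, collateral_value_usd: float) -> float:
    return collateral_value_usd / (1 + OVERCOLLATERAL[token])

print(round(mintable_dall("USDT", 100), 2))  # 100.0
print(round(mintable_dall("ORDI", 100), 2))  # 58.82
print(round(mintable_dall("MUBI", 100), 2))  # 52.63
```

The ordering matches the intuition in the text: the more volatile the collateral, the less DALL a dollar of it can mint.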
BitStable has also deployed corresponding smart contracts on Ethereum, and the DALL stablecoin obtained by staking can be exchanged 1:1 on Ethereum for USDT and USDC. Meanwhile, BitStable adopts a dual-token model: besides the stablecoin DALL, it uses BSSB as its governance token, through which holders can participate in community governance and share the network's revenue. The total supply of BSSB is 21 million, distributed in two ways. The first is staking DALL tokens on the Bitcoin network to earn BSSB, through which the project will gradually distribute 50% of the supply. The second was two rounds of LaunchPad on Bounce Finance at the end of last November, in which 30% and 20% of the BSSB were distributed through a staking auction and a fixed-price auction respectively. However, a hacking attack during the staking auction led to the destruction of more than 3 million BSSB tokens.

Source: coinmarketcap
The team responded promptly to the attack: the remaining 25% of the tokens unaffected by the hack were still issued. Although this came at a higher cost, the measure restored the community's confidence and ultimately prevented a price crash at launch.

5. Bitcoin DeFi
5.1 Bounce Finance
Bounce Finance consists of a series of DeFi ecosystem projects, including BounceBit, BounceBox, and Bounce Auction. Notably, Bounce Finance did not originally serve the BTC ecosystem; it was an auction protocol built for Ethereum and BSC that shifted gears last May to ride the Bitcoin development boom. BounceBit is an EVM-compatible POS sidechain for Bitcoin whose validators are selected based on BTC staked from the Bitcoin mainnet. BounceBit also introduces a hybrid yield mechanism: users who stake BTC assets on BounceBit can earn on-chain yield through POS validation and the associated DeFi protocols, and can also, via on-chain asset mirroring, earn yield on CEXs while moving their assets in and out securely. BounceBox resembles an app store in Web2: a publisher can custom-design a dApp, or "box," and distribute it through BounceBox, after which users choose their favorite boxes and join the DeFi activities inside. Bounce Auction, the project's original core on Ethereum, auctions various assets and offers several formats, including fixed-price, English, and Dutch auctions.
Bounce's native token, Auction, was issued back in 2021 and has served as the designated staking token for earning points in several rounds of token LaunchPads on Bounce Finance, which has fueled the recent rise in its price. More noteworthy is BounceBit, the new staking chain Bounce built after pivoting to Bitcoin: it has opened on-chain staking for points and testnet interaction points, and the project's X account clearly states that points can be exchanged for tokens, with token issuance planned for May this year.

Source: Coinmarketcap
5.2 Orders Exchange
Orders Exchange is a DeFi project built entirely on the Bitcoin network, currently supporting limit and market orders for dozens of BRC20 assets, with plans to introduce swaps between BRC20 assets in the future. Its underlying technology consists of the Ordinals Protocol, PSBT, and the Nostr Protocol. For more on the Ordinals Protocol, see Kernel's earlier research article, Kernel Ventures: Can RGB Replicate The Ordinals Hype. PSBT is a Bitcoin signing technology: via SIGHASH_SINGLE | ANYONECANPAY, a user signs a partial transaction consisting of an Input and an Output, where the Input contains the transaction the user will execute and the Output contains the prerequisite for it. The Input's content only takes effect once another user executes the Output's content, signs with SIGHASH_ALL, and publishes it on the mainnet. In Orders Exchange's order-book trading, a user places an order by means of this PSBT signature and waits for a counterparty to complete the trade.

Source: orders-exchange.gitbook.io
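The maker/taker choreography of such a PSBT order can be modeled abstractly. This is a toy data structure, not real Bitcoin transaction code: the class and field names are hypothetical, and the two signing steps merely mimic the SIGHASH_SINGLE | ANYONECANPAY (maker) and SIGHASH_ALL (taker) roles described above.

```python
from dataclasses import dataclass, field

@dataclass
class PartialOrder:
    """Toy PSBT-style maker order: the maker commits only to their own
    input/output pair; a taker later completes and finalizes it."""
    maker_input: str              # e.g. the BRC20 inscription UTXO being sold
    maker_output: str             # e.g. "pay 0.001 BTC to maker"
    maker_signed: bool = False
    taker_parts: list = field(default_factory=list)
    finalized: bool = False

    def maker_sign(self):
        # mimics SIGHASH_SINGLE | ANYONECANPAY: commit to own pair only
        self.maker_signed = True

    def taker_complete(self, taker_input: str, taker_output: str):
        if not self.maker_signed:
            raise ValueError("maker has not signed the partial order")
        self.taker_parts = [taker_input, taker_output]
        # mimics SIGHASH_ALL: the taker signs everything and broadcasts
        self.finalized = True

order = PartialOrder("inscription-utxo#123", "0.001 BTC -> maker")
order.maker_sign()
order.taker_complete("taker-btc-utxo", "inscription -> taker")
assert order.finalized
```

The key property mirrored here is that the maker's signature alone is inert: the order only takes effect once a taker supplies the missing half and publishes the whole transaction.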
Nostr is an asset transfer protocol defined by NIP-100 that improves asset interoperability between different DEXs. All 100 million of Orders Exchange's tokens have been fully released, and although the whitepaper emphasizes that the tokens are purely experimental and carry no value, the project's elaborate airdrop plan still shows a clear token-economy intent. The initial distribution had three main directions: 45% of the tokens went to traders on Orders Exchange, 40% were airdropped to early users and promoters, and 10% went to developers. However, the 40% airdrop was not described in detail on either the official website or the official account, and no discussion of it arose on X or in the Orders community on Discord after the official announcement, so the actual distribution remains questionable. Overall, Orders Exchange's order page is intuitive and clear, showing the prices of all buy and sell orders explicitly, putting it among the higher-quality platforms offering BRC20 trading. The planned launch of a BRC20 token swap service should also help the protocol's value capture.
5.3 Alex
Alex is a DeFi protocol built on the Bitcoin sidechain Stacks, currently supporting Swap, Lending, Borrowing, and several other transaction types, and it introduces some innovations to traditional DeFi trading models. First, in Swap: traditional AMM pricing falls into two types, x*y=k for ordinary pairs and x+y=k for stablecoin pairs, but on Alex the trading rule for a pair can be set as a linear combination of the two formulas in a chosen ratio. Alex has also introduced an OrderBook, a combined on-chain and off-chain order model that lets users quickly cancel pending orders at zero cost. Finally, Alex offers fixed-rate lending and, instead of a traditional single collateral, has established a diversified collateral pool for its lending services consisting of both risky and risk-free assets, reducing lending risk.

Source: Alexgo Docs
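The blended invariant idea can be illustrated concretely. The sketch below assumes a simple linear blend `alpha*(x+y) + (1-alpha)*(x*y) = const`; Alex's actual weighting and curve may differ, so treat this as an illustration of the concept, not the protocol's formula.

```python
def swap_out(x: float, y: float, dx: float, alpha: float) -> float:
    """Amount of y returned for dx of x under a blended invariant
    alpha*(x + y) + (1 - alpha)*(x * y) = const.
    alpha = 0 gives the pure constant-product curve (x*y = k);
    alpha = 1 gives the pure constant-sum curve (x + y = k)."""
    k = alpha * (x + y) + (1 - alpha) * (x * y)
    new_x = x + dx
    # solve alpha*(new_x + new_y) + (1 - alpha)*(new_x * new_y) = k for new_y
    new_y = (k - alpha * new_x) / (alpha + (1 - alpha) * new_x)
    return y - new_y

# 1000/1000 pool, selling 100 units of x:
print(round(swap_out(1000, 1000, 100, 0.0), 2))  # 90.91 (pure x*y=k, has slippage)
print(round(swap_out(1000, 1000, 100, 1.0), 2))  # 100.0 (pure x+y=k, no slippage)
```

Intermediate values of `alpha` interpolate between the two, trading stablecoin-like low slippage near balance against the constant-product curve's resistance to pool depletion.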
Unlike other DeFi projects in the BTC ecosystem, which entered the market after the Ordinals protocol ignited it, Alex began working on BTC DeFi as early as the last bull market and has raised a seed round. Alex also excels in performance and breadth of transaction types; even many DeFi projects on Ethereum hold little competitive edge over Alex's trading experience. Alex's native token, ALEX, has a total supply of 1 billion, 60% of which has already been released; tokens can still be earned by staking or providing liquidity on Alex, though the yield will hardly reach its early-launch levels. As one of the most established DeFi projects on Bitcoin, Alex's market cap remains relatively modest, while the Bitcoin ecosystem may be an important engine of this bull market. In addition, Stacks, the sidechain on which Alex is deployed, will execute its important Nakamoto upgrade, greatly optimizing transaction speed and cost and backing its security with the Bitcoin mainnet, making it a true Layer2. The upgrade will also substantially reduce Alex's operating costs, improve its trading experience and security, and bring Alex a larger market and more trading demand, increasing protocol revenue.
6. Conclusion
The Ordinals protocol has overcome the Bitcoin network's inability to implement complex logic and issue assets, and various asset protocols improving on the idea of Ordinals have since been introduced on the Bitcoin network one after another. The application layer, however, was not prepared to serve them: amid the surge in inscription assets, the functions Bitcoin applications can currently deliver appear outdated, so developing applications on the Bitcoin network has become a hotspot that all parties are rushing to seize. Layer2 has the highest priority among these applications, because however developed other DeFi protocols may be, if they cannot improve transaction speed and reduce transaction costs relative to the Bitcoin mainnet, liquidity will be hard to release and the chain will be flooded with purely speculative transactions. After improving speed and cost, the next step is to improve the experience and diversity of transactions: various DeFi and stablecoin protocols provide traders with a wide range of financial derivatives. Finally, cross-chain protocols allow assets on the Bitcoin mainnet to flow to and from other networks. Cross-chain protocols on Bitcoin are relatively mature, and not solely a product of this wave of mainnet development, as many multi-chain and mainstream cross-chain bridges were already designed to serve the Bitcoin network. As for dApps like SocialFi and GameFi, the high gas and latency of the Bitcoin mainnet have so far prevented excellent projects from appearing, but with the speedup and scaling of Layer2 networks, they are likely to emerge on Bitcoin's Layer2s. It is certain that the Bitcoin ecosystem will be at least one of the hot topics of this bull market.
With plenty of enthusiasm and a huge market, and although the various ecosystems on Bitcoin are still at an early stage of development, we are likely to see excellent projects emerge from the various verticals during this bull market.

Source: Kernel Ventures
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
BEVM White Paper: https://github.com/btclayer2/BEVM-white-paper
What is a Bitcoin Merkelized Abstract Syntax Tree: https://www.btcstudy.org/2021/09/07/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast/#MAST-%E7%9A%84%E4%B8%80%E4%B8%AA%E4%BE%8B%E5%AD%90
BitVM White Paper: https://bitvm.org/bitvm.pdf
Bitcoin Scripting Principles: https://happypeter.github.io/binfo/bitcoin-scripts
SatoshiVM Official Website: https://www.satoshivm.io/
Multibit Docs: https://docs.multibit.exchange/multibit/protocol/cross-chain-process
Alex White Paper: https://docs.alexgo.io/
Merlin Technical Docs: https://docs.merlinchain.io/merlin-docs/
Sobit White Paper: https://sobit.gitbook.io/sobit/
买卖的平台中,质量上乘。后续推出 BRC20 代币间的 Swap 服务,应该能对帮助到协议的价值捕获。 5.3 Alex Alex 是在比特币侧链 Stacks 上建立的 DeFi Protocol,现阶段支持了 Swap,Lending,Borrow 等交易类型。同时,Alex 还对传统DeFi 交易模行了创新。首先是 Swap,传统的 Swap 计价模式可以分为普通币对的 x*y=k 与稳定币的 x+y=k 两种,但在 Alex 中可以自由设置币对的交易规则,按照一定的比例设置为 x*y=k 和 x+y=k 两种模型的线性组合。同时 Alex 还引入了 OrderBook 这种链上链下相结合的订单薄模式,使用户可以零成本的快速取消挂单。最后 Alex 提供了固定利率的借贷活动,并且为借贷服务建立了多元化抵押品池而非传统的单一抵押品,其中的抵押品由风险资产和无风险资产组成,降低了借贷的风险。 Alex OrderBook 的实现原理,图片来源:Alexgo 文档 不同于比特币生态的其他 DeFi 项目在 Ordinals 协议带火比特币生态后才陆续入场,Alex 早在上一轮牛市就开始了对于比特币 DeFi 生态的布局,并拿到了种子轮的融资。无论是从交易的性能还是交易的种类上看,Alex 在现阶段 BTC 生态的 DeFi 中处于相对领先的水平,甚至许多以太坊上的 DeFi 项目都达不到 Alex 的交易体验。Alex 的原生代币 Alex Lab 总量为 10 亿枚,现阶段已经释放了总量的 60%,后续可以通过在 Alex 质押或提供流动性赚取一部分代币,但是相应的收益很难达到早期上线时的水平了。作为 BTC 上现阶段最完善的 DeFi 项目,Alex 的市值并不算高,而且这轮牛市中,BTC 生态应该会是重要甚至主要叙事,应该会给现阶段的整个 BTC 生态带来不少溢价。此外,最近 Alex 部署的侧链 Stacks 也将迎来重要的中本聪升级,届时 Stacks 无论在交易速度还是交易成本上都会有大幅优化,安全性也将背靠比特币主网,成为一条真正的 Layer2。这一升级也能大大降低 Alex 自身的运行成本并改善其交易体验与安全性,更多资金涌入 Stacks 链也可以为 Alex 提供更大的市场与交易需求,为协议带来更多的收益。 6. 总结 Ordinals 协议的应用,改变了比特币主网无法实现复杂逻辑和发行资产的原状,借鉴或改进 Ordinals 的思路,各类资产协议也在比特币网络上接连被推出。但比特币主网的应用层并没有提供这方面服务的准备,在铭文资产爆发的情况下,比特币应用所能实现的功能显得相当落后,因而现阶段比特币网络上应用的开发成了各方抢占的热点。在各类应用的发展中,Layer2 有着最高的优先级,因为其他各类 DeFi 协议无论如何发展也只是改善了交易体验,但如果无法改善主网的交易速度和降低交易成本,比特币主网的资产流动性始终难以得到释放,链上充斥着的也更多会是为了投机进行的打新交易。在比特币主网完成了对于交易速度和成本的改进后,下一步便是对于交易体验和交易多样性的改进。各类 DeFi 或者稳定币协议,为交易者提供了多样的金融衍生品交易。最后便是能够使比特币主网和其他网络资产互相流通的跨链协议,比特币上的跨链协议相对成熟,主要其不完全由于比特币主网的开发热潮兴起,许多全链桥和主流跨链桥设计之初便提供了对比特币网络的跨链服务。对于 SocialFi 和 GameFi 这类 dApp,由于比特币主网高 gas ,高延迟的限制,现阶段还未出现现象级的项目,但是随着二层网络的提速与扩容,未来在比特币二层网络上得到发展的可能性会比较大。几乎可以肯定,比特币生态至少会是这轮牛市的热点之一,甚至会是主流叙事。有了充分的热情和巨大的市场,虽然比特币的各生态还在开发早期阶段,相信这轮牛市中将可以看到各个赛道优秀项目的涌现。 比特币应用生态全景图,图片来源:Kernel Ventures Kernel Ventures是一个由研究和开发社区驱动的加密风险投资基金,拥有超过70个早期投资,专注于基础设施、中间件、dApps,尤其是ZK、Rollup、DEX、模块化区块链,以及将搭载未来数十亿加密用户的垂直领域,如账户抽象、数据可用性、可扩展性等。在过去的七年里,我们一直致力于支持世界各地的核心开发社区和大学区块链协会的发展。 参考 
BEVM白皮书:https://github.com/btclayer2/BEVM-white-paper什么是比特币默克尔化抽象语法树:https://www.btcstudy.org/2021/09/07/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast/#MAST-%E7%9A%84%E4%B8%80%E4%B8%AA%E4%BE%8B%E5%AD%90BitVM白皮书:https://bitvm.org/bitvm.pdf比特币脚本原理:https://happypeter.github.io/binfo/bitcoin-scriptsSatoshiVM 官网:https://www.satoshivm.io/Multibit's Docs:https://docs.multibit.exchange/multibit/protocol/cross-chain-processAlex 白皮书:https://docs.alexgo.io/Merlin 技术文档:https://docs.merlinchain.io/merlin-docs/Sobit 白皮书:https://sobit.gitbook.io/sobit/

Kernel Ventures: The Upsurge of Bitcoin Ecosystem — A Panoramic View of its Application Layer

Author: Kernel Ventures Jerry Luo
Reviewer(s): Kernel Ventures Mandy, Kernel Ventures Joshua, Kernel Ventures Rose
TLDR:
With the explosion of the inscription track, the existing application layer of the Bitcoin mainnet cannot satisfy the inscription market, making application development the current focus of the Bitcoin network.
There are three mainstream Bitcoin Layer2 approaches today: the Lightning Network, sidechains, and Rollups.
The Lightning Network enables peer-to-peer payments through off-chain payment channels, settling on the mainnet once a channel is closed.
Sidechains lock mainnet BTC at dedicated or multisig addresses and mint equivalent BTC assets on the sidechain. Among them, Merlin Chain supports cross-chain transfer of multiple inscription asset types, is closely tied to the BRC420 asset community, and its on-chain TVL now exceeds $3 billion.
Current BTC Rollups simulate smart contracts on-chain via Taproot circuits and perform batching and computation outside the Bitcoin mainnet. B2 Network leads this implementation, with on-chain TVL above $200 million.
Few cross-chain bridges are purpose-built for Bitcoin; most are multi-chain or omnichain bridges integrating mainstream public chains. Among them, Meson.Fi has established partnerships with a number of Bitcoin Layer2 projects.
Bitcoin stablecoin protocols are mostly implemented through over-collateralization and are complemented by other DeFi protocols built on top of them, bringing users additional yield.
DeFi projects in the Bitcoin ecosystem vary widely: some migrated from other chains, some were built on the Bitcoin mainnet during the current development boom, and some date from the last bull market and run on Bitcoin sidechains. Overall, Alex offers the most complete range of trading functions and the best trading experience, while Orders Exchange has more room to grow.
The Bitcoin ecosystem will be an important narrative of this bull market; the moves of leading projects in each Bitcoin vertical deserve attention.

1. Background
With the spillover of inscription assets brought by the Ordinals protocol, the Bitcoin network — long constrained by the absence of smart contracts, script-only development, and weak infrastructure and extended functionality — has seen a wave of data inscribed on-chain (see Kernel's earlier report: Can RGB Replicate the Ordinals Boom). As happened on Ethereum during its own frenzy, text, images, and even video are being written into the 4 MB Tapscript space that will never be executed. This wave has spurred ecosystem prosperity and infrastructure development on Bitcoin, but it has also brought surging transaction volume and heavy storage pressure. Moreover, simple transfers no longer satisfy trading demand for the various inscriptions, and users expect the rich derivative trading services of Ethereum to arrive on Bitcoin. Application-layer development on the Bitcoin mainnet has therefore become a relatively pressing market need.

Daily Bitcoin mainnet transaction volume over the past year. Source: CryptoQuant
2. Bitcoin Layer2
Unlike the relatively uniform Layer2 solutions on Ethereum, Bitcoin cannot implement smart contracts in its native script language; contract functionality must rely on third-party protocols. A Rollup-type Layer2 on Bitcoin therefore cannot approach mainnet security the way Ethereum Rollups approach Ethereum's. Several Layer2 approaches now coexist on Bitcoin: the Lightning Network, sidechains, and Rollups built on TapScript.
2.1 Lightning Network
The Lightning Network is the earliest Bitcoin Layer2 solution. Gregory Maxwell first proposed its protocol stack, BOLT, in December 2015, and Lightning Labs released an alpha version of the Lightning Network in January 2017, upgrading and improving it continuously since. By establishing point-to-point off-chain payment channels, users can make transfers of any number and size within a channel without paying fees; only when one party closes the channel are the previous transactions settled, at the cost of a single on-chain transaction. Thanks to off-chain channels, the Lightning Network can reach TPS in the tens of millions. But off-chain channels carry centralization risk; two addresses must either open a direct channel or connect through a third party that has channels with both; and both parties must remain online during a transaction for it to be safe.

How the Lightning Network works. Source: Kernel Ventures
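The channel mechanics described above can be sketched in a few lines. This is an illustrative simulation only — real channels use a 2-of-2 multisig funding transaction and revocable commitment transactions, and every class and method name here is hypothetical:

```python
# Minimal sketch of a Lightning-style payment channel (illustrative only).
class PaymentChannel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        # The funding tx locks both deposits on-chain; balances then move off-chain.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.updates = 0
        self.open = True

    def pay(self, sender: str, receiver: str, amount_sats: int) -> None:
        # Off-chain update: both parties co-sign a new commitment state.
        assert self.open and self.balances[sender] >= amount_sats
        self.balances[sender] -= amount_sats
        self.balances[receiver] += amount_sats
        self.updates += 1

    def close(self) -> dict:
        # Only the final state is settled on the Bitcoin main chain, so N
        # off-chain payments cost a single on-chain settlement.
        self.open = False
        return dict(self.balances)

channel = PaymentChannel(alice_deposit=100_000, bob_deposit=50_000)
channel.pay("alice", "bob", 30_000)
channel.pay("bob", "alice", 10_000)
final = channel.close()
print(final)            # {'alice': 80000, 'bob': 70000}
print(channel.updates)  # 2 off-chain updates, 1 on-chain settlement
```

Note how the number of on-chain transactions is independent of the number of payments, which is where the fee savings come from.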
2.2 Sidechains
Bitcoin's sidechain approach resembles Ethereum's: in essence, a new chain is issued whose on-chain token is pegged 1:1 to Bitcoin. Free from the mainnet's constraints on transaction speed and development difficulty, the new chain can move BTC-pegged tokens faster and at lower cost. Sidechains inherit the mainnet's asset value but not its security; transaction settlement takes place on the sidechain.
2.2.1 Stacks
The current Stacks project is the 2.0 version launched in 2021. Users lock BTC on the Bitcoin mainnet and receive equivalent SBTC on Stacks, but sidechain transactions require Stacks' native token STX as gas. Because the Bitcoin mainnet, unlike Ethereum, has no smart contract addresses to manage locked BTC, the locked BTC is sent to a dedicated multisig address on the mainnet. Since Stacks supports smart contract development in the Clarity language, release is relatively simple: a request to the Burn-Unlock contract on Stacks burns the SBTC and sends the locked BTC back to the original mainnet address. Stacks block production uses the POX consensus mechanism: Bitcoin mainnet miners send BTC to bid for block production, higher bids earn higher weight, and a verifiable random function selects the winner, who packages a block on the Stacks mainnet and earns rewards in the native token STX. The BTC used for bidding is in turn distributed as SBTC to STX holders as a reward.

How POX works. Source: Kernel Ventures
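The bid-weighted leader election described above can be sketched as follows. This is a simplification: the real verifiable random function is replaced by a plain hash, and all function names are illustrative, not the Stacks implementation:

```python
# Sketch of Stacks-style PoX leader selection: miners commit BTC, and a
# pseudo-random draw picks a winner with probability proportional to the
# committed amount (a hash stands in for the real VRF here).
import hashlib

def select_leader(bids: dict, seed: bytes) -> str:
    total = sum(bids.values())
    # Map the hash of the seed onto the cumulative range of all bids.
    draw = int.from_bytes(hashlib.sha256(seed).digest(), "big") % total
    cumulative = 0
    for miner, bid in sorted(bids.items()):
        cumulative += bid
        if draw < cumulative:
            return miner
    raise RuntimeError("unreachable")

bids = {"minerA": 5_000, "minerB": 3_000, "minerC": 2_000}  # sats committed
winner = select_leader(bids, seed=b"previous-block-hash")
print(winner)
```

A miner committing half the total BTC wins roughly half the blocks over many rounds, which is the economic weighting the text describes.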
Stacks also expects its Nakamoto upgrade in April. The upgrade optimizes the Clarity development language to lower the barrier for developers. It also raises the network's security level: confirmations of Stacks transactions will be anchored to the Bitcoin mainnet with 100% Bitcoin reorganization resistance, upgrading Stacks from a sidechain to a Layer2 with security equivalent to the Bitcoin mainnet. Finally, block production is sped up substantially, reaching roughly one block every 5 seconds in testing (versus one every 10-30 minutes today). If the Nakamoto upgrade lands smoothly, Stacks will approach the performance of many Ethereum Layer2s, which should attract capital inflows and raise development activity in the Stacks ecosystem.
2.2.2 RSK
RSK (Rootstock) is a Bitcoin sidechain without a native token; transactions on the sidechain currently pay fees in Bitcoin. Through the built-in PowPeg two-way peg protocol, users can convert mainnet BTC into RBTC on RSK at a 1:1 ratio. RSK is also a POW chain, but it introduces merged mining: Bitcoin miners' existing mining infrastructure and settings apply directly to RSK mining, lowering their cost of participation. RSK currently offers roughly three times the mainnet's transaction speed at about 1/20 of the cost.

RSK vs. Bitcoin mainnet performance. Source: RSK whitepaper
2.2.3 BEVM
BEVM is an EVM-compatible POS sidechain that has not yet issued a native token. On the Bitcoin mainnet, it uses the Schnorr multi-signature algorithm to store received assets in a multisig script address jointly controlled by 1,000 addresses, which correspond to 1,000 POS validators on BEVM. Automated control over these assets is achieved by writing MAST (Merkelized Abstract Syntax Tree) scripts in the TapScript area. MAST describes a program as many small independent blocks, each corresponding to part of the code logic; the Script need not store the bulk of the logic, only the hash of each code block, greatly reducing the contract code stored on-chain. When a user transfers BTC to BEVM, the script locks it; unlocking the BTC and returning it to the corresponding address requires signatures from more than 2/3 of the validators. With an EVM-compatible base layer, BEVM can migrate Ethereum dApps at no cost; transactions use BTC-pegged assets, which also serve as gas.

Data growth with the number of sub-scripts, MAST vs. non-MAST. Source: BTCStudy
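The core MAST idea — committing only to hashes of script branches via a Merkle root — can be sketched with a generic Merkle tree. This is an illustrative construction (real Taproot trees use tagged hashes, not plain SHA-256):

```python
# Sketch of MAST: commit to many script branches with one Merkle root, so
# revealing a branch needs only the leaf plus log2(N) sibling hashes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash when odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Four hypothetical unlock branches; on-chain we store only the 32-byte root.
scripts = [b"unlock-branch-%d" % i for i in range(4)]
root = merkle_root(scripts)
print(root.hex())
```

Whatever the number of branches, the on-chain commitment stays 32 bytes, which is exactly the storage saving the figure above illustrates.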
2.2.4 Merlin Chain
Merlin Chain is an EVM-compatible Bitcoin sidechain. Through Particle Network, users can connect with a Bitcoin address, which is mapped to a unique Ethereum address; they can also connect directly with an Ethereum account via RPC nodes. Merlin Chain currently supports cross-chain migration of BTC, Bitmap, BRC-420, and BRC-20 assets. The BRC-420 protocol, like Merlin Chain itself, was developed by the Bitmap asset community on top of recursive inscriptions; the community has also built projects such as the RCSV recursive-inscription matrix and the Bitmap Game metaverse platform.

Connecting a native BTC account to Merlin Chain. Source: Merlin docs
Merlin Chain launched its mainnet on February 5, followed by an IDO and a staking-reward round that distributed 21% of the governance token MERL. The direct, large-scale airdrop drew many participants: Merlin Chain's TVL now exceeds $3 billion, and Bitcoin's on-chain TVL has surpassed Polygon's, ranking 6th among public chains.

Bitcoin TVL distribution. Source: DeFi Llama
In the People's Launchpad IDO, users staking Ally or more than 0.00025 BTC earned points redeemable for the right to purchase MERL; reward-accruing staking was capped at 0.02 BTC per account, corresponding to up to 460 MERL. This round's allocation was small, just 1% of total supply, yet at MERL's current OTC price of $2.9 the return still exceeds 100%. In the second staking campaign, Merlin allocated 20% of its total supply: through Merlin's Seal, users can stake BTC, Bitmap, USDT, USDC, and selected BRC-20 and BRC-420 assets on Merlin Chain. Assets are snapshotted hourly in USD, and the day's average value multiplied by 10,000 is the user's points. The second round uses a Blast-style team model: users choose to be a captain or a member; captains receive an invite code, while members must enter a captain's code to join a team.
Among Bitcoin Layer2 solutions already deployed, Merlin's technology is mature and frees up the liquidity of Layer1 assets: mainnet BTC can circulate on Merlin at low cost. The Bitmap ecosystem community behind it is very large and its technology relatively complete, so its long-term prospects look good. Staking on Merlin currently offers very high returns: beyond the expected MERL gains, stakers may also receive Meme or other tokens airdropped by the team. For example, the official Voya airdrop gave 90 Voya to any account staking more than 0.01 BTC; the token's price has climbed since listing, peaking at 514% of its issue price and now quoting $5.89. At an average BTC price of $50,000 at staking time, the return reaches 106%.
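The staking-return arithmetic quoted above checks out as follows (all figures taken from the text):

```python
# Reproducing the Voya airdrop yield calculation quoted in the text.
btc_staked_usd = 0.01 * 50_000   # 0.01 BTC at an average price of $50,000
voya_airdrop_usd = 90 * 5.89     # 90 VOYA at the quoted $5.89
yield_pct = voya_airdrop_usd / btc_staked_usd * 100
print(round(yield_pct))  # 106
```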

Voya token price trend. Source: CoinGecko
2.3 Rollup
2.3.1 BitVM
BitVM is a Bitcoin Layer2 based on Optimistic Rollup. As with Ethereum's Optimistic Rollups, traders first send transactions from the Bitcoin mainnet to Layer2, where they are computed and batched; the result is sent back to a Layer1 contract for confirmation, with a window during which verifiers can challenge the prover's claims. But since Bitcoin has no native smart contracts, the implementation is less straightforward than Ethereum's Optimistic Rollups, involving Bit Value Commitments, Logic Gate Commitments, and Binary Circuit Commitments — abbreviated below as BVC, LGC, and BCC.
BVC (Bit Value Commitment): a BVC is essentially a logic level with only two possible values, 0 and 1, similar to a Bool variable in other programming languages. Bitcoin's stack-based script has no such type, so BitVM simulates it with a combination of opcodes:
<Input Preimage of HASH>
OP_IF
OP_HASH160          // hash the user's input
<HASH1>
OP_EQUALVERIFY      // verify Hash(input) == HASH1
<1>                 // output 1
OP_ELSE
OP_HASH160          // hash the user's input
<HASH2>
OP_EQUALVERIFY      // verify Hash(input) == HASH2
<0>                 // output 0
OP_ENDIF
In a BVC, the user first submits an input, which is hashed on the Bitcoin mainnet; the script unlocks only if the hash equals HASH1 or HASH2, outputting 1 when the result is HASH1 and 0 when it is HASH2. In what follows we package this code segment into a single OP_BITCOMMITMENT opcode to simplify the description.
LGC (Logic Gate Commitment): every function in a computer can in essence be expressed as a combination of Boolean gates, and any gate circuit can be simplified into a combination of NAND gates. In other words, if we can simulate a NAND gate with bytecode on the Bitcoin mainnet, we can in principle reproduce any function. Bitcoin has no NAND opcode, but it has AND (OP_BOOLAND) and NOT (OP_NOT), and stacking the two reproduces a NAND gate. For two levels output by OP_BITCOMMITMENT, applying OP_BOOLAND followed by OP_NOT forms a NAND output circuit.
BCC (Binary Circuit Commitment): on top of LGC circuits, we can build specific gate relations between inputs and outputs. In a BCC circuit, each input comes from the hash preimage in the corresponding TapScript script; each distinct Taproot address corresponds to a gate, called a TapLeaf, and many TapLeaves form a Taptree that serves as the BCC circuit's input.

An 8-input NAND circuit and its corresponding Taproot circuit. Source: BitVM whitepaper
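The bit-commitment-plus-NAND construction above can be mirrored in Python. This is a pure simulation of the logic (commitments here use SHA-256 rather than Bitcoin's HASH160, and all names are illustrative):

```python
# Sketch of BitVM's primitives: a bit commitment resolves to 0 or 1 by
# revealing a hash preimage, and NAND = NOT(AND) — mirroring how OP_BOOLAND
# followed by OP_NOT composes a NAND gate in Bitcoin Script.
import hashlib

def commit(preimage_for_1: bytes, preimage_for_0: bytes) -> tuple:
    # Publish two hashes; later, revealing one preimage opens the bit.
    return hashlib.sha256(preimage_for_1).digest(), hashlib.sha256(preimage_for_0).digest()

def reveal(commitment: tuple, preimage: bytes) -> int:
    hash1, hash0 = commitment
    digest = hashlib.sha256(preimage).digest()
    if digest == hash1:
        return 1
    if digest == hash0:
        return 0
    raise ValueError("preimage matches neither committed hash")

def nand(a: int, b: int) -> int:
    # OP_BOOLAND (a and b) followed by OP_NOT.
    return 0 if (a and b) else 1

c = commit(b"secret-one", b"secret-zero")
a = reveal(c, b"secret-one")    # prover opens this bit as 1
b = reveal(c, b"secret-zero")   # prover opens this bit as 0
print(nand(a, b))  # 1, since NAND(1, 0) = 1
```

Because NAND is functionally complete, chaining such gates is enough to express any Boolean function, which is the claim the LGC section rests on.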
Ideally, the BitVM prover compiles and computes the circuit off-chain, then returns the result to the Bitcoin mainnet for execution. But since the off-chain process is not executed automatically by smart contracts, verifiers on the mainnet must be able to issue challenges to prevent the prover from cheating. During a challenge, the verifier reproduces the output of some TapLeaf gate and feeds it, together with the prover's other TapLeaf results, as input to drive the circuit. If the output is False, the challenge succeeds and the prover cheated; otherwise the challenge fails. Completing this process requires the challenger and prover to share the Taproot circuit in advance, and at present only single-verifier, single-prover interaction is possible.
2.3.2 SatoshiVM
SatoshiVM is an EVM-compatible ZK Rollup Bitcoin Layer2. Its smart contracts are implemented the same way as BitVM's — Taproot circuits simulating complex functions — so we will not repeat the details. SatoshiVM has three layers: the Settlement Layer, the Sequencing Layer, and the Proving Layer. The Settlement Layer is the Bitcoin mainnet, which provides the DA layer, stores transaction Merkle roots and zero-knowledge proofs, and settles by verifying the correctness of Layer2 batches through Taproot circuits. The Sequencing Layer batches and processes transactions, returning computation results and zero-knowledge proofs to the mainnet. The Proving Layer generates zero-knowledge proofs for tasks sent by the Sequencing Layer and returns them to it.

SatoshiVM architecture. Source: SatoshiVM official docs
2.3.3 BL2
BL2 is a ZK Rollup Bitcoin Layer2 built on a general-purpose VM protocol (per the official plan, a virtual-machine protocol compatible with all mainstream VMs). Like other ZK Rollup Layer2s, its Rollup Layer batches transactions via a zkEVM and generates the corresponding zero-knowledge proofs. BL2's DA layer brings in Celestia to store the bulk transaction data, while the BL2 network stores only the zero-knowledge proofs; proof verification and a small amount of validation data, including the BVC, are returned to the mainnet for settlement.

BL2 architecture. Source: BL2.io
BL2's X account has been updating almost daily, publishing its roadmap and token plan — 20% of tokens will go to OG Mining — and teasing an upcoming testnet launch. The project remains niche compared to other Bitcoin Layer2s and is at an early stage; it taps recently hot concepts such as Celestia and Bitcoin Layer2, which gives it narrative heat. But its website offers no real functionality, only mock-up demos, and there is no project whitepaper. Its goals are also very ambitious: account abstraction on Bitcoin and a VM protocol compatible with mainstream virtual machines are both hard to build, and whether the team can ultimately deliver is uncertain, so an accurate judgment of the project is difficult at this stage.

BL2 roadmap. Source: BL2 official X
2.3.4 B2 Network
B2 Network is a zkRollup Layer2 that uses Bitcoin as its settlement and DA layer, structured into a Rollup Layer and a DA Layer. User transactions are first submitted and processed in the Rollup Layer, which executes them via a zkEVM, outputs the relevant proofs, and also stores user state in the ZK-Rollup layer. Batched transactions and the generated zero-knowledge proofs are forwarded to the DA Layer for storage and verification. The DA Layer comprises decentralized storage nodes, B2 Nodes, and the Bitcoin mainnet. After receiving Rollup data, the storage nodes periodically generate zero-knowledge proofs of space-time over it and send them to the B2 Nodes, which verify the data off-chain and then record the transaction data and the corresponding zero-knowledge proofs on the Bitcoin mainnet in TapScript form. The Bitcoin mainnet confirms the validity of the zero-knowledge proofs and performs final settlement.

B2 Network architecture. Source: B2 Network whitepaper
B2 Network enjoys solid attention among BTC Layer2 projects, with 300,000 followers on X — more than BEVM's 140,000 and fellow ZK Rollup Layer2 SatoshiVM's 166,000. It has raised a seed round including OKX and HashKey, and its on-chain TVL now exceeds $600 million.

B2 Network investors. Source: B2 Network official site
B2 Network has launched its mainnet campaign B2 Buzz; access currently requires an invite link rather than direct participation. Borrowing Blast's referral model, it binds the interests of newcomers and existing participants tightly, giving insiders strong incentives to promote the project. After completing simple tasks such as following the official Twitter, users reach the staking page, which currently supports assets from four public chains: BTC, Ethereum, BSC, and Polygon. On the Bitcoin mainnet, besides BTC itself, the inscriptions ORDI and SATS can also be staked. Staking BTC is a simple transfer, while staking inscription assets requires an inscribing step followed by a transfer. Notably, since the Bitcoin mainnet has no smart contracts, in today's BTC Layer2 cross-chain transfers the assets are in essence locked by multisig at a dedicated BTC address. Assets staked on B2 Network cannot be released until April this year at the earliest; in the meantime, staking points can be redeemed for mining-rig components for virtual mining — a BASIC rig needs only 10 components, while an ADVANCED rig needs more than 80.
Part of the token plan has been announced: 5% of total supply will reward virtual mining, and another 5% will go to airdrops for ecosystem projects on B2 Network. With projects now competing fiercely on tokenomics fairness, allocating only 10% of supply will struggle to energize the community; B2 Network will likely follow up with further staking incentives or a LaunchPad plan.
2.4 Overall Comparison
Across the three Layer2 forms, the Lightning Network has the fastest transactions and lowest costs and sees wider use in real-time Bitcoin payments and offline purchases. But for building an application ecosystem on Bitcoin — DeFi and cross-chain protocols of all kinds — the Lightning Network falls short on both stability and security, so the application-layer competition plays out mainly between sidechains and Rollups. Sidechains need not confirm transactions on the mainnet and enjoy more mature technology and lower implementation difficulty, hence the highest TVL of the three today. Because the Bitcoin mainnet lacks smart contracts, schemes for confirming the data Rollups send back are still evolving, and concrete deployment may take some time.

Overall comparison of Bitcoin Layer2s. Source: Kernel Ventures
3. Bitcoin Cross-chain Bridges
3.1 Multibit
Multibit is a cross-chain bridge purpose-built for BRC20 assets on the Bitcoin network, currently supporting migration of BRC20 assets to four chains: Ethereum, BSC, Solana, and Polygon. To bridge, a user first sends assets to a Multibit-designated BRC20 address; once Multibit confirms the transfer on the mainnet, the user gains the right to mint the corresponding assets on the other chain, paying gas there to complete the mint. Among bridges serving BRC20-type inscription assets, Multibit offers the best interactivity and the most supported BRC20 assets — more than ten, including ORDI. It has also actively expanded beyond BRC20, now supporting farming and bridging of the governance token and stablecoin of the BTC-native stablecoin protocol BitStable. Among bridges for BTC derivative assets today, Multibit is at the forefront of the track.

BRC20 assets supported by Multibit. Source: Multibit's X account
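The lock-and-mint flow described above can be sketched as a simple ledger. This is a hypothetical simulation of the accounting only — not Multibit's actual contracts or APIs:

```python
# Sketch of a lock-and-mint bridge: BRC20 tokens are sent to a
# bridge-controlled Bitcoin address, and once the transfer is confirmed
# the user may mint wrapped tokens on a destination chain.
class BridgeSketch:
    def __init__(self):
        self.locked = {}   # (user, ticker) -> amount confirmed on Bitcoin
        self.minted = {}   # (user, chain, ticker) -> wrapped amount

    def confirm_lock(self, user: str, ticker: str, amount: int) -> None:
        key = (user, ticker)
        self.locked[key] = self.locked.get(key, 0) + amount

    def mint(self, user: str, chain: str, ticker: str, amount: int) -> None:
        # The user pays gas on the destination chain to mint; the wrapped
        # amount can never exceed what is locked on Bitcoin.
        key = (user, ticker)
        if self.locked.get(key, 0) < amount:
            raise ValueError("insufficient locked balance")
        self.locked[key] -= amount
        dest = (user, chain, ticker)
        self.minted[dest] = self.minted.get(dest, 0) + amount

bridge = BridgeSketch()
bridge.confirm_lock("alice", "ORDI", 100)
bridge.mint("alice", "Ethereum", "ORDI", 60)
print(bridge.locked[("alice", "ORDI")])  # 40 still locked and unminted
```

The invariant that minted supply is backed 1:1 by locked supply is exactly what makes the custody of the designated BRC20 address the bridge's main trust assumption.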
3.2 Sobit
Sobit is a cross-chain protocol between Solana and the Bitcoin mainnet; the bridged assets are currently mainly BRC20 tokens and Sobit's native token. A user stakes BRC20 assets to a designated Sobit address on the Bitcoin mainnet and, once Sobit's validation network approves, can mint the mapped assets at the designated address on Solana. The core of the SoBit validation network is a validator-based framework requiring multiple trusted validators to approve cross-chain transactions — an extra safeguard against unauthorized transfers that significantly improves the system's security and robustness. Sobit's native token SOBB pays the bridge's cross-chain fees, with a total supply of 1 billion, 74% of which was distributed via Fair Launch. Unlike other DeFi and bridge tokens on Bitcoin after listing, SOBB entered a downtrend after a brief rise, falling more than 90%, and has shown no clear rebound with BTC's recent rally — likely a function of its chosen track. Sobit's target users overlap heavily with Multibit's, yet Sobit supports bridging only to Solana and only three BRC20 assets. Against Multibit's broader ecosystem and asset coverage, Sobit will find it hard to gain an edge in that competition.

SOBB token price trend. Source: CoinMarketCap
3.3 Meson Fi
Meson Fi is a cross-chain bridge based on HTLC (Hash Time Locked Contracts), now supporting interaction across 17 mainstream chains including BTC, ETH, and SOL. In a bridge operation, the user signs the transaction off-chain and submits it to the Meson Contract, which locks the corresponding assets on the source chain and, after confirming the message, broadcasts it to the target chain via a Relayer. Relayers come in three modes — P2P nodes, centralized nodes, and node-less: P2P nodes offer better security, centralized nodes offer better efficiency and availability, and the node-less mode requires the user to hold some assets on both chains; users choose according to their circumstances. An LP on the target chain verifies the transaction's correctness via the Meson Contract's postSwap, then calls the Lock method on the Meson Contract to lock matching assets and exposes the address to Meson Fi. What follows is the HTLC flow: the user designates the LP address and creates a hashlock on the source chain, withdraws the assets on the target chain by revealing the hashlock preimage, and the LP then uses that preimage to withdraw the user's locked assets on the source chain.

The HTLC flow on Meson Fi. Source: Kernel Ventures
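The hashlock step at the heart of the flow can be sketched in a few lines. This is an illustrative HTLC simulation, not Meson Fi's actual contract interface (all names are hypothetical):

```python
# Minimal HTLC sketch: funds unlock only with the hash preimage before a
# deadline; after the deadline the original owner can reclaim them.
import hashlib

class HTLC:
    def __init__(self, amount: float, hashlock: bytes, timeout: int):
        self.amount, self.hashlock, self.timeout = amount, hashlock, timeout
        self.claimed = False

    def claim(self, preimage: bytes, now: int) -> float:
        if now >= self.timeout:
            raise ValueError("expired: sender can refund")
        if hashlib.sha256(preimage).digest() != self.hashlock:
            raise ValueError("wrong preimage")
        self.claimed = True
        return self.amount

secret = b"swap-secret"
lock = HTLC(amount=1.5, hashlock=hashlib.sha256(secret).digest(), timeout=100)
print(lock.claim(secret, now=50))  # 1.5
```

Claiming on the destination chain necessarily reveals the preimage on-chain, which is what lets the LP reuse it to claim the user's locked funds on the source chain — the atomicity the text describes.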
Meson Fi is not a bridge designed specifically for Bitcoin assets; it is closer to an omnichain bridge like LayerZero. Yet today's mainstream BTC Layer2s — B2 Network, Merlin Chain, and BEVM among them — have all partnered with it and recommend Meson Fi for bridging assets during staking. According to official disclosures, during Merlin Chain's three-day staking campaign Meson Fi processed over 200,000 transactions and roughly 2,000 BTC of cross-chain staking, capturing nearly all flows from mainstream chains into Bitcoin. As Bitcoin Layer2s keep launching with staking incentives, Meson Fi can attract substantial bridged assets, which should help grow its ecosystem and raise its bridging revenue.
3.4 Overall Comparison
In short, Meson Fi belongs to a different category from the other two bridges. Meson Fi is essentially an omnichain bridge that happens to have partnered with many Bitcoin Layer2s, bridging assets from other networks for them. Sobit and Multibit are bridges designed for Bitcoin-native assets, serving BRC20 tokens and the assets of other DeFi and stablecoin protocols on Bitcoin. Multibit supports far more BRC20 assets — dozens, including ORDI and SATS — while Sobit still supports only three. Multibit has also partnered with some Bitcoin stablecoin protocols, offering related bridging services and staking-yield campaigns, a more complete service range. Finally, Multibit has better cross-chain liquidity, serving five mainstream chains including Ethereum, Solana, and Polygon.
4. Bitcoin Stablecoins
4.1 BitSmiley
BitSmiley is a suite of protocols on the Bitcoin mainnet built on the Fintegra framework, comprising a stablecoin protocol, a lending protocol, and a derivatives protocol. Through its stablecoin protocol, users over-collateralize BTC to mint bitUSD; to withdraw the collateralized BTC, they send the bitUSD back to the Vault Wallet to be burned and pay a fee. When collateral value falls below a certain threshold, BitSmiley enters an automatic liquidation process for the collateral, with the liquidation price computed as:
$$\text{Liquidation Price} = \frac{\text{bitUSD Generated} \times \text{Liquidation Ratio}}{\text{Quantity of Collateral}}$$
The liquidation price thus depends on the real-time value of the user's collateral and the amount of bitUSD minted, with the Liquidation Ratio being a fixed constant. To protect the liquidated party against price swings during liquidation, BitSmiley adds a Liquidation Penalty as compensation, which grows the longer the liquidation takes. Collateral is auctioned Dutch-style so that liquidation completes as quickly as possible. Meanwhile, the protocol's own surplus is stored in a designated account and periodically auctioned English-style with BTC bids, extracting maximum value from the surplus assets; 90% of this surplus subsidizes on-chain stakers, and the remaining 10% goes to the BitSmiley team for day-to-day maintenance. BitSmiley's lending protocol also innovates on settlement for the Bitcoin network: with a 10-minute block time, Bitcoin cannot conveniently use oracles for real-time price feeds the way Ethereum can, so BitSmiley introduces a third-party insurance mechanism against non-delivery. Both parties may pre-pay some BTC to a third party as transaction insurance, and if one side fails to deliver on time, the guarantor compensates the other.
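A worked example of the liquidation-price formula (the amounts and the ratio of 1.5 below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Worked example of the BitSmiley liquidation-price formula.
def liquidation_price(bitusd_generated: float,
                      liquidation_ratio: float,
                      collateral_qty: float) -> float:
    return bitusd_generated * liquidation_ratio / collateral_qty

# Mint 20,000 bitUSD against 1 BTC with an assumed liquidation ratio of 1.5:
price = liquidation_price(20_000, 1.5, 1)
print(price)  # 30000.0 -> the position is liquidated if BTC falls below $30,000
```

Minting less bitUSD against the same collateral lowers the liquidation price, which is why conservative positions survive deeper drawdowns.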

BitSmiley's third-party insurance mechanism. Source: BitSmiley whitepaper
BitSmiley offers rich DeFi and stablecoin functionality and makes many liquidation-mechanism innovations to better protect users and fit the Bitcoin network. Judged by both its settlement and collateral mechanisms, BitSmiley is a strong stablecoin-and-DeFi model, and with the Bitcoin ecosystem still in its early ascent, it should capture a meaningful share of the stablecoin competition.
4.2 BitStable
BitStable is likewise a Bitcoin stablecoin protocol based on over-collateralization, currently accepting ORDI and MUBI assets from the Bitcoin mainnet and USDT from Ethereum as collateral. Reflecting the three assets' volatility, BitStable sets different over-collateralization ratios: 0% for USDT, 70% for ORDI, and 90% for MUBI.

BitStable's token-minting model. Source: bitstable.finance
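One way to read the ratios above: the exact BitStable minting formula is not specified in the text, so this sketch assumes (hypothetically) that the quoted ratio is the extra collateral margin required, i.e. $1 of DALL needs $(1 + ratio) of collateral value:

```python
# Hedged sketch of over-collateralized minting. The interpretation of the
# quoted percentages as an extra-margin requirement is an assumption made
# for illustration, not BitStable's documented formula.
OVERCOLLATERAL_RATIO = {"USDT": 0.0, "ORDI": 0.7, "MUBI": 0.9}

def max_mintable_dall(asset: str, collateral_usd_value: float) -> float:
    return collateral_usd_value / (1 + OVERCOLLATERAL_RATIO[asset])

print(round(max_mintable_dall("USDT", 100), 2))  # 100.0  (1:1, no margin)
print(round(max_mintable_dall("ORDI", 100), 2))  # 58.82
print(round(max_mintable_dall("MUBI", 100), 2))  # 52.63
```

Under this reading, the more volatile the collateral, the less DALL each dollar of it can mint, which matches the ordering USDT < ORDI < MUBI in required margin.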
BitStable has also deployed corresponding smart contracts on Ethereum: the DALL stablecoin obtained from staking can be swapped 1:1 for USDT or USDC on Ethereum. BitStable runs a dual-token model: besides the stablecoin DALL, it uses BSSB as its governance token, which grants participation in community voting and a share of network revenue. BSSB's total supply is 21 million, distributed in two ways. The first is staking DALL tokens on the Bitcoin network to earn BSSB; the team will gradually distribute 50% of BSSB through staking rewards. The second was two LaunchPad rounds on Bounce Finance in late November last year, which distributed 30% and 20% of BSSB via an Auction staking auction and a fixed-price auction respectively. The Auction staking round, however, suffered a hack that led to the burning of more than 3 million BSSB tokens.

BSSB price trend. Source: CoinMarketCap
The team responded to the hack promptly, still distributing the remaining 25% of tokens unaffected by it. Though this came at a higher cost, the measure did much to restore community confidence and ultimately prevented a post-listing flash crash.

5. Bitcoin DeFi
5.1 Bounce Finance
Bounce Finance consists of a family of DeFi ecosystem projects in three parts: BounceBit, BounceBox, and Bounce Auction. Notably, Bounce Finance did not start as a BTC-ecosystem project; it was an auction protocol for Ethereum and BNB Chain that pivoted only last May amid the Bitcoin development boom. BounceBit is an EVM-compatible Bitcoin POS sidechain whose validator selection is backed by BTC staked from the Bitcoin mainnet. It also introduces a hybrid yield mechanism: by staking BTC on BounceBit, users earn on-chain yields via POS validation and related DeFi protocols, while an on-chain mirroring mechanism also lets them earn on CEXs and then move assets in and out safely. BounceBox resembles a Web2 app store: publishers design custom dApps — boxes — and distribute them through BounceBox, and users pick the boxes they like to join the DeFi activities inside. Bounce Auction is the project's original Ethereum mainstay, auctioning assets of all kinds via fixed-price, English, and Dutch auction formats.
Bounce's native token AUCTION was issued back in 2021 and serves as the designated staking token for earning points in the multiple token LaunchPad rounds on Bounce Finance; the staking-reward campaigns have driven AUCTION's recent price climb. More noteworthy is BounceBit, the staking chain Bounce built after pivoting to Bitcoin: it has launched on-chain staking points and testnet interaction points, and the project's X account has stated explicitly that points will be redeemable for tokens, with token issuance coming this May.

AUCTION price over the past year. Source: CoinMarketCap
5.2 Orders Exchange
Orders Exchange is a DeFi project built entirely on the Bitcoin network, currently supporting limit and market orders for dozens of BRC20 assets, with swaps between BRC20 assets planned. Its underlying stack has three parts: the Ordinals Protocol, PSBT, and the Nostr Protocol. For an introduction to the Ordinals Protocol, see Kernel's earlier report, Kernel Ventures: Can RGB Replicate the Ordinals Boom. PSBT is a signing technique on Bitcoin: a user signs content in PSBT-X format composed of an Input and an Output using SIGHASH_SINGLE | ANYONECANPAY, where the Input is the transaction the user will execute and the Output is the precondition for it; the Input takes effect only after another user executes the Output content, signs with SIGHASH_ALL, and publishes it on the mainnet. In Orders Exchange's order placement, users complete an order via a PSBT signature and wait for a counterparty to fill it.

Placing orders via PSBT. Source: orders-exchange.gitbook.io
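The maker/taker flow described above can be sketched conceptually. This is a pure simulation of the order-matching idea — not a real PSBT encoder, and all names and string contents are hypothetical:

```python
# Conceptual sketch of the PSBT order flow: a maker signs only their own
# input (SIGHASH_SINGLE | ANYONECANPAY), pinning one output as the price
# they demand; a taker completes the transaction with SIGHASH_ALL.
def make_order(maker_input: str, demanded_output: str) -> dict:
    # The maker's signature covers only this input/output pair, so anyone
    # may later add more inputs and outputs without invalidating it.
    return {"inputs": [maker_input], "outputs": [demanded_output], "complete": False}

def take_order(order: dict, taker_input: str, taker_output: str) -> dict:
    # The taker adds their side and signs everything (SIGHASH_ALL),
    # producing a complete, broadcastable transaction.
    return {
        "inputs": order["inputs"] + [taker_input],
        "outputs": order["outputs"] + [taker_output],
        "complete": True,
    }

order = make_order("maker: 1000 ORDI", "maker receives: 0.01 BTC")
tx = take_order(order, "taker: 0.01 BTC", "taker receives: 1000 ORDI")
print(tx["complete"])  # True
```

The key property is that the maker's partial signature is useless on its own but binding once completed, which is what lets an order book run without an escrow contract.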
Nostr is an asset-transfer protocol configured per NIP-100 that improves asset interoperability across different DEXs. All 100 million of the project's tokens have been fully released; although the team stresses in the whitepaper that the token is merely experimental and carries no value, the carefully designed airdrop plan still shows clear token-economic intent. The initial allocation went three ways: 45% of tokens to traders on Orders Exchange, 40% airdropped to early users and promoters, and 10% to developers. Notably, neither the website nor the official Twitter details how the 40% airdrop portion is distributed, and the officially announced airdrop stirred no discussion in the Orders communities on Twitter or Discord, so its actual delivery is in doubt. Overall, Orders Exchange's order-book interface is intuitive and clear, showing all bid and ask prices in sequence — high quality among today's BRC20 trading platforms. The planned BRC20-to-BRC20 swap service should help the protocol's value capture.
5.3 Alex
Alex is a DeFi protocol built on the Bitcoin sidechain Stacks, currently supporting Swap, Lending, Borrow, and other transaction types. Alex also innovates on traditional DeFi trading models. First, Swap: traditional pricing splits into x*y=k for ordinary pairs and x+y=k for stable pairs, but Alex lets a pair's trading rule be set freely as a linear combination of the two models in chosen proportions. Alex also introduces an OrderBook combining on-chain and off-chain components, letting users cancel orders quickly at zero cost. Finally, Alex offers fixed-rate lending and builds a diversified collateral pool — composed of risky and risk-free assets — instead of the traditional single collateral, lowering lending risk.

How the Alex OrderBook works. Source: Alexgo docs
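The blended-pricing idea above — a linear combination of constant-product and constant-sum rules — can be sketched numerically. Alex's actual formula is more involved; this only illustrates the linear-combination concept, with a weight `w` that is an assumed parameter:

```python
# Hedged sketch of a blended AMM invariant: a weighted mix of the
# constant-product (x*y=k) and constant-sum (x+y=k) rules.
def invariant(x: float, y: float, w: float) -> float:
    return w * (x * y) + (1 - w) * (x + y)

def swap_out(x: float, y: float, dx: float, w: float) -> float:
    """Amount of Y returned for dx of X, holding the blended invariant."""
    k = invariant(x, y, w)
    lo, hi = 0.0, y
    for _ in range(100):  # bisection on the output amount
        mid = (lo + hi) / 2
        if invariant(x + dx, y - mid, w) > k:
            lo = mid  # took out too little Y; invariant still above k
        else:
            hi = mid
    return (lo + hi) / 2

# w=1 reduces to pure constant-product; w=0 to pure constant-sum (1:1 swaps).
print(round(swap_out(100, 100, 10, 1.0), 4))  # 9.0909
print(round(swap_out(100, 100, 10, 0.0), 4))  # 10.0
```

Intermediate weights trade off between the low slippage of the constant-sum curve near the peg and the guaranteed liquidity of the constant-product curve, which is the flexibility the text attributes to Alex's pair configuration.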
Unlike other Bitcoin-ecosystem DeFi projects that entered only after the Ordinals protocol ignited the ecosystem, Alex began building out Bitcoin DeFi back in the last bull market and secured seed funding. On both trading performance and trading variety, Alex leads today's BTC-ecosystem DeFi — many DeFi projects on Ethereum do not match Alex's trading experience. Alex's native token, Alex Lab, has a total supply of 1 billion, 60% of which has been released; more can still be earned by staking on Alex or providing liquidity, though yields will hardly match early-launch levels. As the most complete DeFi project on BTC today, Alex's market cap is not high, and with the BTC ecosystem likely to be an important or even dominant narrative this bull market, the whole BTC ecosystem should enjoy a premium. Moreover, Stacks, the sidechain where Alex is deployed, is approaching its major Nakamoto upgrade, which will sharply improve both transaction speed and cost while anchoring security to the Bitcoin mainnet, making it a true Layer2. The upgrade should also greatly cut Alex's operating costs and improve its trading experience and security, and the capital flowing into Stacks should give Alex a larger market and more trading demand, bringing the protocol more revenue.
6. Conclusion
The application of the Ordinals protocol changed the status quo in which the Bitcoin mainnet could not implement complex logic or issue assets; borrowing from or improving on Ordinals, asset protocols of all kinds have launched on the Bitcoin network in succession. But the mainnet's application layer was unprepared to serve them: amid the inscription-asset explosion, what Bitcoin applications can actually do looks badly outdated, so application development on the Bitcoin network has become the hotspot everyone is racing to seize. Among these applications, Layer2 has the highest priority: however DeFi protocols evolve, they only improve the trading experience, and without faster and cheaper mainnet transactions, the liquidity of Bitcoin mainnet assets can never be released, leaving the chain dominated by speculative mint-and-flip trades. Once the mainnet's speed and cost are improved, the next step is trading experience and variety: DeFi and stablecoin protocols offering traders diverse financial derivatives. Last come the cross-chain protocols that let assets flow between Bitcoin and other networks. Bitcoin's cross-chain protocols are relatively mature, mainly because they did not arise solely from the Bitcoin development boom — many omnichain and mainstream bridges offered Bitcoin support from the start. For dApps like SocialFi and GameFi, Bitcoin's high gas and high latency have so far prevented any breakout project, but as Layer2 networks gain speed and capacity, such dApps are likely to develop on Bitcoin's Layer2s. It is almost certain that the Bitcoin ecosystem will be at least one of this bull market's hotspots, perhaps even its mainstream narrative. With ample enthusiasm and a huge market, and even though Bitcoin's sub-ecosystems are still early in development, this bull market should see excellent projects emerge across every track.

Panorama of the Bitcoin application ecosystem. Source: Kernel Ventures
Kernel Ventures is a crypto venture capital fund driven by a research and development community, with more than 70 early-stage investments focused on infrastructure, middleware, and dApps — especially ZK, Rollup, DEX, and modular blockchains, as well as the verticals that will onboard the next billions of crypto users, such as account abstraction, data availability, and scalability. For the past seven years, we have been committed to supporting core development communities and university blockchain associations around the world.
References
BEVM whitepaper: https://github.com/btclayer2/BEVM-white-paper
What is a Bitcoin Merklized Abstract Syntax Tree: https://www.btcstudy.org/2021/09/07/what-is-a-bitcoin-merklized-abstract-syntax-tree-mast/#MAST-%E7%9A%84%E4%B8%80%E4%B8%AA%E4%BE%8B%E5%AD%90
BitVM whitepaper: https://bitvm.org/bitvm.pdf
Bitcoin Script fundamentals: https://happypeter.github.io/binfo/bitcoin-scripts
SatoshiVM official site: https://www.satoshivm.io/
Multibit's Docs: https://docs.multibit.exchange/multibit/protocol/cross-chain-process
Alex whitepaper: https://docs.alexgo.io/
Merlin technical docs: https://docs.merlinchain.io/merlin-docs/
Sobit whitepaper: https://sobit.gitbook.io/sobit/
Kernel Ventures: Rollup Summer — The Flywheel Momentum Kicked Off by ZK Fair
Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
In just a few days, ZK Fair achieved a Total Value Locked (TVL) of $120 million, currently stabilizing at $80 million, making it one of the fastest-growing Rollups — all as a "three-no" public chain with no financing, no market makers, and no institutions. This article delves into the development of ZK Fair and offers a fundamental analysis of the momentum in the current Rollup market.
Rollup Background Introduction
Rollup is one of the Layer 2 solutions: it transfers the computation and storage of transactions from the Ethereum mainnet (Layer 1) to Layer 2 for processing and compression, then uploads the compressed data back to the Ethereum mainnet to enhance Ethereum's performance. Rollups have significantly reduced gas fees on Layer 2 compared to the mainnet, leading to savings in gas consumption, faster Transactions Per Second (TPS), and smoother transaction interactions. Mainstream Rollup chains already launched include Arbitrum, Optimism, and Base, as well as ZK Rollup solutions such as Starknet and zkSync, which are widely used in the market.
Data Overview
Rollup Chain Data Comparison. Image source: Kernel Ventures
The data shows that OP and ARB still dominate among Rollup chains. Newcomers such as Manta and ZK Fair have nonetheless accumulated significant TVL in a short period, though in terms of protocol count they may need some time to catch up: mainstream Rollups have well-developed protocols and robust infrastructure, while emerging chains still have room to grow in protocol expansion and infrastructure.
Rollup Analysis
We categorize and introduce some recently popular Rollup chains as well as well-established ones.
Existing Rollup Chains
ARB
Arbitrum is an Ethereum Layer 2 scaling solution created by Offchain Labs, based on Optimistic Rollup. While Arbitrum settlements still occur on the Ethereum mainnet, execution and contract storage take place off-chain, with only the essential transaction data submitted to Ethereum. As a result, Arbitrum incurs significantly lower gas fees than the mainnet.
OP
Optimism is built on Optimistic Rollup, utilizing a single-round interactive fraud proof mechanism to ensure that the data synchronized to Layer 1 is valid.
Polygon zkEVM
Polygon zkEVM is an Ethereum Layer 2 scaling solution built on ZK Rollup. This zkEVM expansion solution utilizes ZK proofs to reduce transaction costs and increase throughput while maintaining the security of Ethereum Layer 1.
Emerging Rollup Chains
ZK Fair
ZK Fair, as a Rollup, has several key features:
Built on the Polygon CDK, with the Data Availability (DA) layer utilizing Celestia (currently maintained by a self-operated data committee), and EVM compatibility.
Uses USDC as gas fees.
The Rollup token, ZKF, is 100% distributed to the community. 75% of the tokens were distributed in four phases, completing distribution to participants in gas-consumption activities within 48 hours. Essentially, participants engaged in the token's primary-market sale by paying gas fees to the official sequencer; the corresponding primary-market financing valuation is only $4 million.
ZK Fair TVL Growth Trends. Image source: Kernel Ventures
ZK Fair's rapid short-term TVL growth owes partly to its decentralized nature. As per community insights, the listings on mainstream exchanges like Bitget, KuCoin, and Gate resulted from the community and users establishing contact with the exchanges.
Subsequently, the official team was invited for technical integration, all initiated by the community. On-chain projects like Izumi Finance also follow a community-driven approach, with the community taking the lead and the project team providing support, showcasing strong community cohesion. According to information from Lumoz, the development team behind ZK Fair (formerly Opside), there are plans to introduce differently themed Rollup chains in the future, including Rollup chains based on current hot topics like Bitcoin as well as ones focused on social aspects and financial derivatives. The upcoming chains may be launched in collaboration with project teams, resembling the current trend of Layer 3 concepts, where each dApp has its own chain. As revealed by the team, these upcoming chains will also adopt the Fair model, distributing a portion of the original tokens to participants on the chain.
Blast
Blast is a Layer 2 network based on Optimistic Rollups and compatible with Ethereum. In just 6 days, the TVL on the chain surpassed $500 million, approaching $600 million — a surge that notably doubled the price of the $BLUR token. Blast originated from founder Pacman's observation that over a billion dollars in funds within the Blur bid pool were essentially dormant, not generating any returns. This situation is prevalent across applications on almost every chain, meaning such funds suffer passive depreciation caused by inflation. Specifically, when users deposit funds into Blast, the corresponding ETH locked on the Layer 1 network is utilized for native network staking, and the earned ETH staking rewards are automatically returned to users on the Blast platform. In essence, if a user holds 1 ETH in their Blast account, it may grow automatically over time.
Manta
Manta Network serves as the gateway for modular ZK applications, establishing a new paradigm for L2 smart contract platforms by leveraging modular blockchains and zkEVM.
It aims to build a modular ecosystem for the next generation of decentralized applications (dApps). Manta Network currently provides two networks; the focus here is Manta Pacific, a modular L2 ecosystem built on Ethereum. It addresses usability concerns through modular infrastructure design, enabling seamless integration of modular Data Availability (DA) and zkEVM. Since becoming the first Ethereum L2 platform integrated with Celestia, Manta Pacific has helped users save over $750,000 in gas fees.
Metis
Metis has been operational for over two years, but its recent introduction of a decentralized sequencer has brought it back into the spotlight. Metis is a Layer 2 solution built on the Ethereum blockchain. It is the first to innovate by using a decentralized sequencing pool (PoS Sequencer Pool) and a hybrid of Optimistic Rollup (OP) and Zero-Knowledge Rollup (ZK) to enhance network security, sustainability, and decentralization. In Metis's design, the initial sequencer nodes are created by whitelisted users, complemented by a parallel staking mechanism: users can become new sequencer nodes by staking the native token $METIS, enabling network participants to supervise the sequencer nodes. This enhances the transparency and credibility of the entire system.
Tech Stack Analysis
Polygon CDK
The Polygon Chain Development Kit (CDK) is a modular open-source software toolkit designed for blockchain developers to launch new Layer 2 (L2) chains on Ethereum. Polygon CDK utilizes zero-knowledge proofs to compress transactions and enhance scalability. It prioritizes modularity, facilitating the flexible design of application-specific chains: developers can choose the virtual machine, sequencer type, gas token, and data availability solution based on their specific needs. Its features include:
High Modularity
Polygon CDK allows developers to customize L2 chains according to specific requirements, catering to the unique needs of various applications.
Data Availability
Chains built using the CDK will have a dedicated Data Availability Committee (DAC) to ensure reliable off-chain data access.
Celestia DA
Celestia pioneered the concept of modular blockchains by decoupling the blockchain into three layers: data, consensus, and execution. In a monolithic blockchain, these three layers are typically handled by a single network. Celestia focuses on the data and consensus layers, allowing an L2 to delegate its data availability (DA) layer to reduce transaction gas fees. For instance, Manta Pacific has already adopted Celestia as its data availability layer, and according to official statements from Manta Pacific, after migrating DA from Ethereum to Celestia, costs decreased by 99.81%. For specific technical details, see Kernel Ventures' earlier article: Exploring Data Availability — In Relation to Historical Data Layer Design.
Comparison between OP and ARB
Optimism is not the sole existing rollup solution. Arbitrum provides a similar one, and in terms of functionality and popularity, Arbitrum is the closest alternative to Optimism. Arbitrum allows developers to run unmodified EVM contracts and Ethereum transactions on Layer 2 protocols while still benefiting from the security of Ethereum's Layer 1 network, offering features very similar to Optimism's. The main difference between Optimism and Arbitrum lies in the type of fraud proof they use: Optimism uses single-round fraud proofs, while Arbitrum uses multi-round fraud proofs. Optimism's single-round fraud proofs rely on Layer 1 to execute all Layer 2 transactions, ensuring that fraud-proof verification is instant. Since its launch, Arbitrum had consistently shown better performance in various Layer 2 data than Optimism, but this trend began to change gradually after Optimism started promoting the OP Stack.
OP stack is an open-source Layer 2 technology stack, meaning that other projects wishing to run Layer 2 can use it for free to quickly deploy their own Layer 2, significantly reducing development and testing costs. L2 projects adopting the OP stack can achieve security and efficiency due to technical consistency in architecture. After the launch of the OP stack, it gained initial adoption by Coinbase, and with the demonstration effect of Coinbase, OP stack has been adopted by more projects, including Binance's opBNB, NFT project Zora, and others. Future Prospects Fair Launch The Fair launch model of the current Inscription vertical has a broad audience, allowing retail investors to directly acquire original tokens. This is also the reason why Inscription remains popular to this day. ZK Fair follows the essence of this model, namely, a public launch. In the future, more chains may adopt this model, leading to a rapid increase in TVL. Rollup Absorbing L1 Market Share From a user experience perspective, Rollup and L1 have little substantive difference. Efficient transactions and low fees often attract users, as most users make decisions based on experience rather than technical details. Some rapidly growing Rollup networks offer an excellent user experience with fast transaction speeds, providing substantial incentives for both users and developers. With the precedent set by ZK Fair, future chains may continue to adopt this approach, further absorbing market share from L1. Clear Plans & Healthy Ecosystem In this narrative of the current Rollup wave, projects like ZK Fair and Blast provide significant incentives, contributing to a healthier ecosystem. This has reduced much of the unnecessary TVL and meaningless activities. For example, zkSync has been live for years without token distribution. 
Although it boasts a high TVL due to substantial fundraising and continuous engagement of technical enthusiasts, there are few new projects, especially those with new narratives and themes, running on the chain. Public Goods In the latest Rollup wave, many chains have introduced the concept of fee sharing. In the case of ZK Fair, 75% of the fees are distributed to all ZKF token stakers, and 25% is allocated to dApp deployers. Blast also allocates fees to Dapp deployers. This allows many developers to go beyond project income and ecosystem fund grants, leveraging gas revenue to develop more free public goods. Decentralized Sequencers The cost collection on Layer 2 (L2) and cost payment on Layer 1 (L1) are both executed by the L2 sequencer. The profits are also attributed to the sequencer. Currently, both OP and ARB sequencers are operated by the respective official entities, with profits going to the official treasuries. The mechanism for decentralized sequencers is likely to operate on a Proof-of-Stake (POS) basis. In this system, decentralized sequencers need to stake the native tokens of L2, such as ARB or OP, as collateral. If they fail to fulfill their duties, the collateral may be slashed. Regular users can either stake themselves as sequencers or use services similar to Lido's staking service. In the latter case, users provide staking tokens, and professional, decentralized sequencer operators execute sequencing and uploading services. Stakers receive a significant portion of the sequencers' L2 fees and MEV rewards (in Lido's mechanism, this is 90%). This model aims to make Rollup more transparent, decentralized, and trustworthy. Disruptive Business Model Almost all Layer2 solutions profit from a "subletting" model. In this context, "subletting" refers to directly renting a property from the landlord and then subleasing it to other tenants. 
Similarly, in the blockchain world, Layer2 chains generate revenue by collecting Gas fees from users (tenants) and subsequently paying fees to Layer1 (landlords). In theory, economies of scale are crucial, as long as a sufficient number of users adopt Layer2, the costs paid to Layer1 do not change significantly (unless the volume is enormous, such as in the case of OP and ARB). Therefore, if a chain's transaction volume cannot meet expectations within a certain period, it may be in a long-term loss-making state. This is also why chains like zkSync, as mentioned earlier, prefer to attract and engage users actively; with a substantial TVL, they don't worry about a lack of user transactions. However, this business model is not sustainable in the long run. While the focus has been on chains like zkSync, which has excellent financing conditions, for smaller chains, relying solely on actively engaging and retaining users might not be as effective. Therefore, the rise of "grassroots" projects like ZK Fair, as mentioned earlier, provides valuable lessons for other chains. In the pursuit of TVL, it is essential to consider the long-term sustainability of TVL, not just blindly focus on acquiring it. Summary The article starts with ZK Fair achieving a TVL of $120 million in a short period, using it as a focal point to explore the Rollup landscape. It covers established players like Arbitrum and Optimism, as well as newer entrants such as ZK Fair, Blast, Manta, and Metis. On the technical front, it delves into the modular toolkit of Polygon CDK and the modular concept of Celestia DA. It compares the differences between Optimism and Arbitrum, highlights the potential adoption of a POS mechanism for decentralized sequencers, aiming to make Rollup more transparent and decentralized. In the future outlook, the article emphasizes the widespread appeal of the fair launch model and the potential for Rollup to absorb market share from L1. 
It points out the negligible difference in user experience between Rollup and L1, with efficient transactions and low fees attracting users. The significance of public goods and the fee-sharing concept introduced by chains in the latest Rollup wave is emphasized. The article concludes by addressing the need to focus not only on acquiring TVL but also on its long-term sustainability. In essence, this new wave of Rollup is characterized by new projects with tokens, modular design, generous incentives, accelerating the initial business and token price dynamics. Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world. Reference Rollup Summer Reflection:https://www.chaincatcher.com/article/2110635ZK Fair Official Docs:https://docs.zkfair.io/

Kernel Ventures: Rollup Summer — The Flywheel Momentum Kicked Off by ZK Fair

Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
In just a few days, ZK Fair achieved a Total Value Locked (TVL) of $120 million, now stabilizing around $80 million, making it one of the fastest-growing Rollups. How did this "three-no" public chain, with no financing, no market makers, and no institutions, manage such growth? This article delves into ZK Fair's development and analyzes the fundamentals behind the flywheel of the current Rollup market.
Rollup Background
Introduction
Rollup is a Layer 2 solution that moves the computation and storage of transactions from the Ethereum mainnet (Layer 1) to Layer 2 for processing and compression; the compressed data is then posted back to the mainnet, extending Ethereum's performance. Rollups make Layer 2 gas fees far lower than the mainnet's, reduce gas consumption, and deliver higher Transactions Per Second (TPS), making transactions and interactions smoother. Mainstream Rollup chains already live include Arbitrum, Optimism, and Base, as well as ZK Rollups such as Starknet and zkSync, all widely used in the market.
Data Overview

Rollup Chain Data Comparison, Image Source: Kernel Ventures
From the data, it is evident that OP and ARB still dominate among Rollup chains, while newcomers such as Manta and ZK Fair have accumulated significant TVL in a short period. In terms of protocol count, however, they will need time to catch up: mainstream Rollups have well-developed protocols and robust infrastructure, while emerging chains still have room to grow in protocol expansion and infrastructure.
Rollup Analysis
This section introduces, chain by chain, some recently popular Rollup chains as well as the established ones.
Existing Rollup Chains
ARB
Arbitrum is an Ethereum Layer 2 scaling solution created by Offchain Labs, based on Optimistic Rollup. While Arbitrum transactions still settle on the Ethereum mainnet, execution and contract storage take place off-chain, with only the essential transaction data submitted to Ethereum. As a result, Arbitrum incurs significantly lower gas fees than the mainnet.
OP
Optimism is built on the Optimistic Rollup, utilizing a single-round interactive fraud proof mechanism to ensure that the data synchronized to Layer 1 is valid.
Polygon zkEVM
Polygon zkEVM is an Ethereum Layer 2 scaling solution built on ZK Rollup. This zkEVM expansion solution utilizes ZK proofs to reduce transaction costs, increase throughput, and concurrently maintain the security of the Ethereum Layer 1.
Emerging Rollup Chains
ZK Fair
ZK Fair, as a Rollup, has several key features:
- Built on Polygon CDK, with the Data Availability (DA) layer to be handled by Celestia (currently maintained by a self-operated data committee), and full EVM compatibility.
- Uses USDC for gas fees.
- The Rollup token, ZKF, is 100% distributed to the community. 75% of the tokens were distributed in four phases, completing distribution to participants in gas-consumption activities within 48 hours. Essentially, participants took part in the token's primary-market sale by paying gas fees to the official sequencer, at an implied primary-market valuation of only $4 million.

ZK Fair TVL Growth Trends, Image Source: Kernel Ventures
ZK Fair's TVL climbed rapidly in the short term, owing partly to its community-driven, "ownerless" nature. According to the community, listings on mainstream exchanges such as Bitget, KuCoin, and Gate came about through the community and users contacting the exchanges first, with the official team invited afterward for technical integration, all initiated by the community. On-chain projects such as Izumi Finance follow the same community-led, team-supported model, reflecting strong community cohesion.
According to Lumoz (formerly Opside), the development team behind ZK Fair, they plan to introduce Rollup chains with different themes in the future, including chains based on current hot topics such as Bitcoin, as well as social and financial-derivative themes. Upcoming chains may launch in collaboration with project teams, resembling the currently popular Layer 3 concept of one chain per dApp. The team has indicated that these future chains will also adopt the Fair model, distributing a portion of the original tokens to on-chain participants.
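As a back-of-the-envelope illustration, the gas-for-tokens launch described above amounts to a pro-rata distribution within each phase. The total supply below is a hypothetical placeholder; only the 75%-over-four-phases split comes from the article:

```python
# Hypothetical sketch of ZK Fair's gas-burn launch: 75% of ZKF supply split
# over four phases, each distributed pro-rata to gas (USDC) spent.
# TOTAL_SUPPLY is an assumed placeholder, not an official figure.
TOTAL_SUPPLY = 10_000_000_000
PHASE_POOL = TOTAL_SUPPLY * 0.75 / 4  # one phase's share of the community 75%

def zkf_allocation(user_gas_usdc: float, phase_gas_usdc: float,
                   phase_pool: float = PHASE_POOL) -> float:
    """Tokens a participant receives: their fraction of the phase's gas spend."""
    return phase_pool * user_gas_usdc / phase_gas_usdc
```

Paying a larger share of a phase's total gas yields a proportionally larger share of that phase's pool, which is why the launch functioned as a primary-market sale.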
Blast
Blast is a Layer 2 network based on Optimistic Rollups and compatible with Ethereum. In just six days, its on-chain TVL surpassed $500 million, approaching $600 million, a surge that also roughly doubled the price of the BLUR token.
Blast originated from founder Pacman's observation that hundreds of millions of dollars in the Blur bid pool sat dormant, earning no yield, a situation found in applications on almost every chain, meaning those funds suffer passive depreciation through inflation. Specifically, when users deposit funds into Blast, the corresponding ETH locked on the Layer 1 network is used for native staking, and the ETH staking rewards earned are automatically returned to users on Blast. In essence, 1 ETH held in a Blast account may grow automatically over time.
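A minimal sketch of that native-yield idea, assuming daily compounding at an illustrative staking APR (neither the rate nor the rebasing schedule is specified here):

```python
# Sketch of Blast-style native yield: deposited ETH backs L1 staking, and
# rewards rebase the user's L2 balance. The APR value and daily compounding
# are illustrative assumptions, not Blast's actual parameters.
def rebased_balance(principal_eth: float, apr: float, days: int) -> float:
    """Balance after `days` of daily-compounded staking rewards."""
    return principal_eth * (1 + apr / 365) ** days
```

At an assumed 4% APR, 1 ETH would grow to a little over 1.04 ETH after a year without the user taking any action.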
Manta
Manta Network serves as the gateway for modular ZK applications, establishing a new paradigm for L2 smart contract platforms by leveraging modular blockchain and zkEVM. It aims to build a modular ecosystem for the next generation of decentralized applications (dApps). Currently, Manta Network provides two networks.
The focus here is Manta Pacific, a modular L2 ecosystem built on Ethereum. It addresses usability through modular infrastructure design, enabling seamless integration of modular Data Availability (DA) and zkEVM. Since becoming the first Ethereum L2 to integrate Celestia, Manta Pacific has helped users save over $750,000 in gas fees.
Metis
Metis has been operational for over two years, but its recent introduction of a decentralized sequencer has brought it back into the spotlight. Metis is a Layer 2 built on Ethereum and the first Rollup to use a decentralized PoS Sequencer Pool together with a hybrid of Optimistic Rollup (OP) and Zero-Knowledge Rollup (ZK) to enhance network security, sustainable operation, and decentralization.
In Metis' design, the initial sequencer nodes are created by whitelisted users, complemented by a parallel staking mechanism. Users can become new sequencer nodes by staking the native token $METIS, enabling network participants to supervise the sequencer nodes. This enhances the transparency and credibility of the entire system.
Tech Stack Analysis
Polygon CDK
Polygon Chain Development Kit (CDK) is a modular open-source software toolkit designed for blockchain developers to launch new Layer 2 (L2) chains on Ethereum.
Polygon CDK utilizes zero-knowledge proofs to compress transactions and enhance scalability. It prioritizes modularity, facilitating the flexible design of application-specific chains. This enables developers to choose the virtual machine, sequencer type, Gas token, and data availability solution based on their specific needs. It features:
High Modularity
Polygon CDK allows developers to customize L2 chains according to specific requirements, catering to the unique needs of various applications.
Data Availability
Chains built using CDK will have a dedicated Data Availability Committee (DAC) to ensure reliable off-chain data access.
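To make this configurability concrete, here is a hypothetical sketch of the choices such a toolkit exposes. The field names are invented for illustration and are not Polygon CDK's actual schema; the values mirror options mentioned in this article:

```python
# Invented illustration of CDK-style chain configuration choices.
# Field names are hypothetical; values mirror options mentioned in the article.
chain_config = {
    "vm": "zkEVM",                # execution environment
    "sequencer": "centralized",   # sequencer type
    "gas_token": "USDC",          # custom gas token, as ZK Fair chose
    "data_availability": "DAC",   # dedicated committee, or external DA like Celestia
}

def validate(config: dict) -> bool:
    """Every modular slot must be filled before launching the chain."""
    required = {"vm", "sequencer", "gas_token", "data_availability"}
    return required.issubset(config)
```

The point is that each slot can be swapped independently, which is what lets a chain like ZK Fair pick USDC gas and committee-backed DA without touching the execution layer.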
Celestia DA
Celestia pioneered the concept of modular blockchains by decoupling blockchain into three layers: data, consensus, and execution. In a monolithic blockchain, these three layers are typically handled by a single network. Celestia focuses on the data and consensus layers, allowing L2 to delegate the data availability layer (DA) to reduce transaction gas fees. For instance, Manta Pacific has already adopted Celestia as its data availability layer, and according to official statements from Manta Pacific, after migrating DA from Ethereum to Celestia, costs have decreased by 99.81%.
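The quoted 99.81% figure implies DA posting costs fall to well under a five-hundredth of the Ethereum baseline; a one-line check of the arithmetic:

```python
# Arithmetic behind the quoted 99.81% DA cost reduction (Manta Pacific figure).
def cost_after_migration(eth_da_cost: float, reduction: float = 0.9981) -> float:
    """Remaining DA cost after moving data availability off Ethereum."""
    return eth_da_cost * (1 - reduction)
```

For example, a workload that cost $100,000 in Ethereum DA fees would cost about $190 after migration, per the stated reduction.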
For specific technical details, refer to a previous Kernel Ventures article: Exploring Data Availability — In Relation to Historical Data Layer Design.
Comparison between OP and ARB
Optimism is not the sole existing rollup solution. Arbitrum also provides a similar solution, and in terms of functionality and popularity, Arbitrum is the closest alternative to Optimism. Arbitrum allows developers to run unmodified EVM contracts and Ethereum transactions on Layer 2 protocols while still benefiting from the security of Ethereum's Layer 1 network. In these aspects, it offers features very similar to Optimism.
The main difference between Optimism and Arbitrum lies in the type of fraud proof used: Optimism uses single-round fraud proofs, while Arbitrum uses multi-round fraud proofs. Optimism's single-round fraud proof relies on Layer 1 to re-execute the disputed Layer 2 transactions, so verification completes immediately.
Since its launch, Arbitrum had consistently outperformed Optimism across Layer 2 business metrics, but this began to change after Optimism started promoting the OP Stack. While OP has still not matched ARB's volume, it has shown a clear rising trend, and the OP token nearly doubled over three months in this cycle. The OP Stack is an open-source Layer 2 technology stack: other projects wishing to run a Layer 2 can use it for free to deploy their own quickly, significantly reducing development and testing costs, and L2s adopting it can interoperate securely and efficiently thanks to architectural consistency. After launch, the OP Stack was first adopted by Coinbase, which built its L2, Base, on it; with Coinbase's demonstration effect, it has since been adopted by more projects, including Binance's opBNB and the NFT project Zora.
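The practical difference between the two proof styles can be seen in a toy model of Arbitrum-style interactive proofs: each round bisects the disputed execution trace, so only a single step is ever re-executed on L1. This sketch ignores protocol details such as timeouts and bonds:

```python
import math

# Toy model of multi-round (interactive) fraud proofs: each dispute round
# halves the contested execution trace, so isolating one instruction takes
# about log2(n) rounds, after which L1 re-executes only that single step.
def bisection_rounds(trace_len: int) -> int:
    """Rounds needed to narrow a disputed trace of `trace_len` steps to one."""
    return math.ceil(math.log2(trace_len)) if trace_len > 1 else 0
```

A million-step trace needs only about 20 rounds, trading more L1 round-trips for far less on-chain execution than a single-round proof.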
Future Prospects
Fair Launch
The fair launch model of the current Inscription vertical has a broad audience, allowing retail investors to acquire tokens directly at origination, which is why inscriptions remain popular to this day. ZK Fair adopted the essence of this model, a fully public launch. More chains may follow suit, driving rapid TVL growth.
Rollup Absorbing L1 Market Share
From a user experience perspective, Rollup and L1 have little substantive difference. Efficient transactions and low fees often attract users, as most users make decisions based on experience rather than technical details. Some rapidly growing Rollup networks offer an excellent user experience with fast transaction speeds, providing substantial incentives for both users and developers. With the precedent set by ZK Fair, future chains may continue to adopt this approach, further absorbing market share from L1.
Clear Plans & Healthy Ecosystem
In the current Rollup wave, projects like ZK Fair and Blast offer generous incentives openly, contributing to a healthier ecosystem by reducing airdrop farming and meaningless TVL. zkSync, by contrast, has been live for years without distributing a token; sustained by substantial fundraising and the continued engagement of technically minded participants, it boasts a high TVL, yet few new projects, especially those with new narratives and themes, have emerged on the chain.
Public Goods
In the latest Rollup wave, many chains have introduced fee sharing. ZK Fair distributes 75% of fees to all ZKF token stakers and 25% to dApp deployers; Blast likewise shares fees with dApp deployers. This lets many developers look beyond project income and ecosystem-fund grants: with gas revenue, they can build more free public goods.
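ZK Fair's stated split is straightforward to express:

```python
# ZK Fair's fee-sharing split as stated in the article:
# 75% of sequencer fees to ZKF stakers, 25% to dApp deployers.
def split_fees(total_fees: float) -> dict:
    return {"stakers": total_fees * 0.75, "deployers": total_fees * 0.25}
```

A deployer's share scales directly with the gas their dApp generates, which is what turns gas revenue into a funding stream for public goods.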
Decentralized Sequencers
Fee collection on Layer 2 (L2) and cost payment to Layer 1 (L1) are both executed by the L2 sequencer, and the profits accrue to the sequencer as well. Currently, both the OP and ARB sequencers are operated by their respective official teams, with profits flowing to official treasuries.
The mechanism for decentralized sequencers is likely to operate on a Proof-of-Stake (PoS) basis: sequencers stake the L2's native token, such as ARB or OP, as collateral, which can be slashed if they fail to fulfill their duties. Regular users could either stake and run a sequencer themselves or use a Lido-style service, in which users supply the staked tokens while professional, decentralized operators perform sequencing and uploading. Stakers would receive the bulk of the sequencers' L2 fees and MEV rewards (90% under Lido's mechanism). This model aims to make Rollups more transparent, decentralized, and trustworthy.
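The PoS mechanism described above can be sketched as follows. The slashing fraction and the 90% staker share are illustrative (the latter mirrors Lido's split); none of these parameters are specified by any live sequencer design:

```python
from dataclasses import dataclass

# Hedged sketch of a PoS sequencer set: sequencers post L2-token collateral,
# misbehavior is slashed, and stakers receive most of the fee + MEV revenue.
@dataclass
class Sequencer:
    stake: float          # staked L2 tokens (e.g. ARB or OP)
    slashed: bool = False

    def slash(self, fraction: float) -> float:
        """Penalize a misbehaving sequencer; returns the amount slashed."""
        penalty = self.stake * fraction
        self.stake -= penalty
        self.slashed = True
        return penalty

def staker_reward(l2_fees: float, mev: float, staker_share: float = 0.9) -> float:
    """Portion of sequencer revenue passed through to delegating stakers."""
    return (l2_fees + mev) * staker_share
```

The slashable collateral is what substitutes for trust in the operator: sequencing honestly earns fee revenue, while equivocating or censoring costs stake.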
Disruptive Business Model
Almost all Layer 2s profit from a "subletting" model: renting a property directly from the landlord and subleasing it to other tenants. Similarly, a Layer 2 chain earns revenue by collecting gas fees from users (the tenants) and then paying fees to Layer 1 (the landlord). In theory, economies of scale are crucial: as long as enough users adopt the Layer 2, the cost paid to Layer 1 does not change much (unless volume is enormous, as with OP and ARB). If a chain's transaction volume cannot meet expectations within a certain period, it may therefore run at a loss long term. This is also why chains like zkSync, mentioned earlier, prefer to court and retain users aggressively: with a substantial TVL, they need not worry about a lack of transactions.
However, this business model is not sustainable in the long run. The spotlight has been on chains like zkSync, which enjoy excellent financing; for smaller chains, stringing users along in the same way may not work. The rise of a "grassroots" project like ZK Fair therefore offers valuable lessons: in the pursuit of TVL, chains must consider its long-term sustainability, not just blindly acquire it. Moreover, as more and more L2s launch, L1 fees will fall, eroding L2's greatest advantage; how to win users on more than low prices is another problem L2s must solve.
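The "subletting" economics reduce to simple arithmetic, with all numbers below purely illustrative:

```python
# Illustrative L2 "subletting" margin: revenue is user gas collected,
# cost is roughly fixed L1 posting fees per batch, so margin hinges on volume.
def l2_margin(txs: int, fee_per_tx: float,
              batches: int, l1_cost_per_batch: float) -> float:
    return txs * fee_per_tx - batches * l1_cost_per_batch
```

At low volume, the same per-batch L1 cost swamps fee revenue, which is exactly the long-term-loss scenario described above; at high volume, the near-fixed cost amortizes away.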
Summary
The article starts with ZK Fair achieving a TVL of $120 million in a short period, using it as a focal point to explore the Rollup landscape. It covers established players like Arbitrum and Optimism, as well as newer entrants such as ZK Fair, Blast, Manta, and Metis.
On the technical front, it delves into Polygon CDK's modular toolkit and Celestia DA's modular concept, compares the differences between Optimism and Arbitrum, and highlights the potential adoption of a PoS mechanism for decentralized sequencers, which aims to make Rollups more transparent and decentralized.
In the future outlook, the article emphasizes the widespread appeal of the fair launch model and the potential for Rollup to absorb market share from L1. It points out the negligible difference in user experience between Rollup and L1, with efficient transactions and low fees attracting users. The significance of public goods and the fee-sharing concept introduced by chains in the latest Rollup wave is emphasized. The article concludes by addressing the need to focus not only on acquiring TVL but also on its long-term sustainability.
In essence, this new wave of Rollups is characterized by new projects with tokens, modular design, and generous incentives, spinning the initial flywheel of business and token price faster.
Kernel Ventures is a crypto VC fund driven by a research and development community, with more than 70 early-stage investments focused on infrastructure, middleware, and dApps, especially ZK, Rollup, DEX, and modular blockchains, along with verticals that will onboard the next billion crypto users, such as Account Abstraction, Data Availability, and Scalability. For the past seven years, we have committed ourselves to supporting the growth of core developer communities and university blockchain associations across the world.
Reference
Rollup Summer Reflection: https://www.chaincatcher.com/article/2110635
ZK Fair Official Docs: https://docs.zkfair.io/

Kernel Ventures: 由 ZK Fair 开启的 Rollup 飞轮

作者:Kernel Ventures Stanley
审稿:Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
ZK Fair 在短短几天 TVL 达到 1.2 亿美金,且目前维持稳定在 8000 万美金,是增速最快的 Rollup 之一。这一条无融资,无做市商,无机构的“三无”公链是如何做到的,本文会介绍 ZK Fair 的发展以及从本质分析这一轮 Rollup 行情的飞轮。
Rollup 赛道背景
赛道介绍
Rollup 是 Layer2 方案之一,通过将以太坊主网(即 Layer1)上交易的计算和存储转移至 Layer2 处理并压缩,再将压缩后的数据上传至以太坊主网以拓展以太坊性能。Rollup 的出现,使得 Layer2 的 Gas 费远低于主网,节约Gas消耗,更快的 TPS 等,使得交易和交互更加流畅。已经上线的一些主流 Rollup 链比如 Arbitrum、Optimism,Base,还有 ZK Rollup 比如 Starknet 和 zkSync 都是市面上比较常用的链。
数据概览

Rollup 链数据对比,图片来源:Kernel Ventures
从数据可以看出,目前Rollup链当中,OP 和 ARB 仍占主导地位,但是后起之秀比如 Manta & ZK Fair 能在短期积累到一定量的 TVL,但是在协议数量方面,还需增长一段时间,主流 Rollup 的协议开发完善,基础设施完善,其他新兴链还需发展。
赛道解析
本文将以各条链为分类,介绍最近较火热的一些 Rollup 链以及老牌 Rollup 链。
老牌 Rollup 链
ARB
Arbitrum 是 Offchain Labs 团队基于 Optimistic Rollup 技术创建的以太坊 Layer2 扩容解决方案。 虽然 Arbitrum 交易仍然在以太坊上结算,但 Arbitrum 仅将原始交易数据提交给以太坊,执行和合约存储发生在链下,因此 Arbitrum 所需的 Gas 费用与主网相比非常少。
OP
Optimism 基于 Optimistic Rollup 打造,通过单轮交互型欺诈证明保证同步到 Layer1 的数据是有效的。
Polygon zkEVM
Polygon zkEVM 是基于 ZK Rollup 搭建的以太坊 Layer 2 层的 zkEVM 扩容方案,此方案采用 ZK 证明来降低交易费用并提高吞吐量,同时保持以太坊 L1 的安全性。
新兴 Rollup链
ZK Fair
ZK Fair作为一条Rollup的主要特点是:
基于 Polygon CDK 构建,DA 层将使用 Celestia(目前由自营的数据委员会保存),EVM 兼容以 USDC 作为 Gas feeRollup 代币 ZKF 100% 分发给社区。其中 75% 的代币分四期,在 48 小时内完成对 Gas 消耗活动参与者的分发,本质上是参与者通过给官方排序器支付 Gas 的方式来参与代币的一级市场发售,对应的一级市场的融资估值仅 400 万美金

ZK Fair TVL 增长趋势,图片来源:Kernel Ventures
ZK Fair 在短期 TVL 急速攀升,有一方面得益于其“无主”,从社区了解到,目前主流的交易所比如说 Bitget & Kucoin & Gate 等的上架,均是由社区以及用户与交易所建立联系,随后再邀请官方团队进行技术对接,均是社区自发行为,包括链上的 izumi Finance,均是社区主导,项目方为辅的模式进行对接,社区凝聚力强。
我们从 ZK Fair 开发团队 Lumoz(前 Opside)了解到,团队计划后续还会有不同主题的 Rollup 链上新,比如说当下热点的基于比特币的 Rollup 链,以及社交、金融衍生品等,未来上新的链可能是以与项目方合作的形式进行发射,类似于现在较火的 Layer 3 概念,即一个 Dapp 一条链,通过与团队得知,后续的一些链也会有 Fair 模式,团队会分发一部分原始筹码给链上的参与者。
Blast
Blast 是一个基于 Optimistic Rollups 技术的、且兼容以太坊的第二层网络,仅耗时 6 天链上的 TVL 就已经突破了 5 亿美金,直逼 6 亿美金,更是将 Blur 代币价格直接拉升了一倍之多。
Blast 始于创始人 Pacman 认为 Blur bid 池内的上亿美元的资金一直在被动沉睡,未能赚取任何收益,且该状况几乎存在于每条链的每一个应用之上,这意味着这些资金正在遭受通胀而带来的被动折旧。具体而言,当用户将资金存入 Blast 后,Blast 随即将把锁定于 Layer 1 网络上的对应 ETH 用于网络原生质押,并将所获得的 ETH 质押收益自动返还给 Blast 之上的用户。简而言之,如果用户在 Blast 上的账户内持有 1 个 ETH,随着时间的推移,它可能会自动增长。
Manta
Manta Network 是模块化 ZK 应用程序的网关,它利用模块化区块链和 zkEVM 建立了 L2 智能合约平台的新范例,为下一代 dApps 构建的模块化生态系统,目前提供两个网络:
本文主要介绍的是 Manta Pacific,是建立在以太坊上的模块化 L2 生态系统,通过模块化基础设施设计解决了可用性问题,允许模块化 DA 和 zkEVM 无缝集成。Manta自成为首个整合到 Celestia 的以太坊 L2 以来,Manta Pacific 已经帮助用户节省了超过 75 万美元的 Gas 费。
Metis
Metis has been running for two years, but its recently proposed decentralized sequencer has brought it back into the spotlight. Metis is an Ethereum Layer2 and the first Rollup to innovate with a decentralized PoS Sequencer Pool and a hybrid OP/ZK design, improving the network's security, operational sustainability, and decentralization.
In Metis' design, the initial sequencer nodes are created by whitelisted users, alongside a parallel staking mechanism. Users can stake the native token $METIS to become new sequencer nodes, and network participants are allowed to supervise sequencer nodes, improving the transparency and credibility of the whole system.
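The stake-to-join, slash-on-misbehavior mechanics of such a sequencer pool can be sketched minimally as below. The minimum stake and slash rate are made-up illustrative values, not Metis' actual parameters.

```python
# Minimal sketch of a PoS sequencer pool in the spirit of Metis'
# design: anyone can stake the native token to join the sequencer
# set, and a node that fails its duties is slashed.
# Threshold and slash rate are hypothetical.

class SequencerPool:
    MIN_STAKE = 20_000      # hypothetical minimum METIS stake
    SLASH_RATE = 0.10       # hypothetical penalty for misbehavior

    def __init__(self):
        self.stakes = {}

    def join(self, node: str, stake: float):
        if stake < self.MIN_STAKE:
            raise ValueError("stake below sequencer threshold")
        self.stakes[node] = stake

    def slash(self, node: str) -> float:
        """Burn part of a misbehaving node's bond and return the amount."""
        penalty = self.stakes[node] * self.SLASH_RATE
        self.stakes[node] -= penalty
        return penalty

pool = SequencerPool()
pool.join("node-a", 50_000)
penalty = pool.slash("node-a")   # node-a missed its batch duty
```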
Core Technology Analysis
Polygon CDK
The Polygon Chain Development Kit (CDK) is a modular, open-source software toolkit for blockchain developers to launch new L2 chains on Ethereum.
Polygon CDK uses zero-knowledge proofs to compress transactions and enhance scalability. It also prioritizes modularity, making it easy to design application-specific chains: developers can choose the virtual machine, sequencer type, gas token, and data-availability solution. Its features include:
High modularity
Polygon CDK lets developers tailor an L2 chain to specific requirements, meeting the unique needs of a wide range of applications.
Data availability
Chains built with the CDK have a dedicated Data Availability Committee (DAC) to guarantee reliable off-chain data access.
Celestia DA
Celestia pioneered the modular blockchain concept, decoupling a blockchain into data, consensus, and execution layers — work that a monolithic chain performs within a single network. Celestia focuses on the data and consensus layers; an L2 can delegate its data-availability (DA) layer to Celestia to lower interaction gas fees. Manta Pacific, for example, has adopted Celestia as its DA layer, and according to official Manta Pacific figures, fees dropped 99.81% after migrating DA from Ethereum to Celestia.
For technical details, see our previous article: Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design.
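A saving like the 99.81% figure quoted above is simply the ratio of per-byte DA prices. The sketch below checks the arithmetic with placeholder prices chosen to reproduce that percentage; they are not actual market quotes.

```python
# Back-of-the-envelope check of a DA-migration saving such as the
# 99.81% Manta Pacific reported after moving DA from Ethereum
# calldata to Celestia. The per-MB prices are placeholders.

def da_saving(old_cost_per_mb: float, new_cost_per_mb: float) -> float:
    """Fee reduction as a fraction, e.g. 0.9981 for a 99.81% saving."""
    return 1 - new_cost_per_mb / old_cost_per_mb

# Hypothetical prices: $3,000/MB as calldata vs $5.70/MB on a DA layer.
saving = da_saving(3_000.0, 5.70)
print(f"{saving:.2%}")  # → 99.81%
```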
OP vs. ARB
Optimism is not the only rollup solution available; Arbitrum offers something similar. In functionality and popularity, Arbitrum is the closest alternative to Optimism. Arbitrum lets developers run unmodified EVM contracts and Ethereum transactions on a Layer2 protocol while still enjoying the security of Ethereum's Layer1 network. In these respects, it offers characteristics very similar to Optimism's.
The main difference between Optimism and Arbitrum is that the former uses single-round fraud proofs while Arbitrum uses multi-round fraud proofs. Optimism's single-round fraud proof (FP) relies on L1 to execute the L2 transactions, which ensures that FP verification is immediate.
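A multi-round fraud proof of the kind Arbitrum uses can be illustrated with a bisection game: the disputed execution trace is halved repeatedly until a single step remains, and only that step must be re-executed on L1. The sketch below models traces as plain lists of state hashes and omits all protocol detail.

```python
# Sketch of a multi-round (bisection) fraud-proof dispute: the
# challenge narrows a disputed execution trace by halving until one
# step remains. Traces are plain lists of state strings; everything
# here is simplified for illustration.

def bisect_dispute(claimed: list, correct: list) -> int:
    """Return the index of the first disputed step via bisection.

    Assumes both parties agree on step 0 and disagree on the last step.
    """
    lo, hi = 0, len(claimed) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if claimed[mid] == correct[mid]:  # challenger agrees up to mid
            lo = mid
        else:
            hi = mid
    return hi  # the single step L1 must re-execute

correct = [f"state-{i}" for i in range(8)]
claimed = correct[:5] + ["bad-5", "bad-6", "bad-7"]  # diverges at step 5
assert bisect_dispute(claimed, correct) == 5
```

Single-round proofs skip this narrowing and instead re-execute the whole disputed transaction on L1 at once, which is why their verification is immediate but heavier per dispute.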
Since launch, ARB has clearly outperformed OP across L2 business metrics, but this began to change once OP started pushing the OP Stack. OP still has not matched ARB's scale, but it is on the rise, and the OP token's gains this cycle have been huge — nearly doubling in three months. The OP Stack is an open-source L2 tech stack, meaning any project that wants to run an L2 can use it for free to quickly deploy its own, greatly lowering development and testing costs. L2s adopting the OP Stack share a consistent technical architecture and can therefore interoperate securely and efficiently. After its launch, the OP Stack was first adopted by Coinbase, whose L2 Base is built on it; with Coinbase's demonstration effect, more and more projects followed, such as Binance's opBNB and the NFT project Zora.
Future Outlook
Fair Launch
This cycle's inscription-style Fair model has broad appeal because retail users can acquire tokens at the initial price — which is why inscriptions remain hot to this day. ZK Fair carried over the essence of this model, a public fair launch, and more chains may adopt it in the future to drive rapid TVL growth.
Rollups Eating into L1 Market Share
In user experience, Rollups are now essentially indistinguishable from L1s — if anything, their efficient transactions and low fees are more attractive, since most users decide based on experience rather than technical details. Some fast-growing Rollups offer an excellent experience and very fast transactions while openly incentivizing users and developers. With ZK Fair as a precedent, future chains may follow suit and eat further into L1 market share.
Clear Plans & Healthy Ecosystems
In this round of the Rollup narrative, chains like ZK Fair and Blast hand out incentives openly, which makes for healthier ecosystems with less airdrop farming and meaningless TVL. Contrast zkSync, live for years without issuing a token, sustaining participants on high funding and technology — effectively stringing them along. Its TVL is certainly high, but few new projects, especially ones with fresh narratives, have broken out on the chain.
Public Goods
In this new round of Rollups, many chains have proposed fee sharing. ZK Fair distributes 75% of fees to all ZKF token stakers and 25% to dApp deployers; Blast also shares fees with dApp deployers. With gas revenue on top of project income and ecosystem-fund grants, developers can devote more effort to building free public goods.
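The 75%/25% split described above can be sketched as a pro-rata payout over two pools, stakers and dApp deployers. The weights and amounts are invented for illustration.

```python
# Sketch of a fee-sharing scheme like the one described above:
# 75% of sequencer fees to token stakers, 25% to dApp deployers,
# each pool split pro rata by weight. All numbers are illustrative.

def split_fees(total_fees: float, staker_weights: dict, dapp_weights: dict,
               staker_share: float = 0.75, dapp_share: float = 0.25) -> dict:
    staker_pool = total_fees * staker_share
    dapp_pool = total_fees * dapp_share
    payout = {}
    for name, w in staker_weights.items():
        payout[name] = staker_pool * w / sum(staker_weights.values())
    for name, w in dapp_weights.items():
        payout[name] = payout.get(name, 0.0) + dapp_pool * w / sum(dapp_weights.values())
    return payout

out = split_fees(1_000.0, {"staker1": 2, "staker2": 1}, {"dex": 1})
# staker1: 500, staker2: 250, dex: 250
```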
Decentralized Sequencers
An L2's fee collection and its L1 cost payments are both handled by the L2's sequencer, and the profit accrues to the sequencer. Today the sequencers of both OP and ARB are run by the teams, with profits going to the official treasuries.
A decentralized sequencer mechanism would most likely run on PoS: sequencers stake the L2's native token (e.g., ARB or OP) as a bond, which is slashed if they fail their duties. Ordinary users could stake to run a sequencer themselves, or use a Lido-like staking service in which users supply the staked tokens while professional, distributed sequencer operators perform the ordering and posting; stakers would receive most of the sequencer's L2 fees and MEV rewards (90% under Lido's mechanism). This model would make Rollups more transparent, more decentralized, and more trustworthy.
Breaking the Business Model Deadlock
Almost every Layer2 profits as a "second landlord" — renting from the landlord and subletting to tenants. In blockchain terms, the Layer2 chain collects gas fees from users (the tenants) and then pays Layer1 (the landlord). Economies of scale are therefore crucial: as long as enough people use the Layer2, the fees paid to Layer1 stay roughly flat (unless volume is enormous, as with OP or ARB). So if a chain's transaction volume fails to meet expectations within a certain window, it may run at a loss long-term. This is why chains like zkSync, as mentioned above, like to string users along: with enough TVL, transactions will follow.
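The scale effect in this "second landlord" model can be sketched as revenue from user fees against a largely per-batch L1 cost: an underfilled batch loses money, while full batches at the same prices turn a profit. All prices here are invented for illustration.

```python
# Sketch of the "second landlord" economics: an L2's margin is what
# users pay minus what the L2 pays L1 for posting data. The L1 cost
# is mostly per batch, so margin improves with transaction volume.
# All prices are hypothetical.

def l2_margin(tx_count: int, fee_per_tx: float,
              batch_size: int, l1_cost_per_batch: float) -> float:
    revenue = tx_count * fee_per_tx
    batches = -(-tx_count // batch_size)          # ceiling division
    return revenue - batches * l1_cost_per_batch

# An underused chain posts a nearly empty batch and loses money;
# at scale, the same prices yield a profit.
low = l2_margin(100, 0.05, 1000, 40.0)       # 5 - 40 = -35
high = l2_margin(100_000, 0.05, 1000, 40.0)  # 5000 - 4000 = 1000
```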
But this business model cannot be sustained by everyone. The spotlight is on zkSync, a chain with excellent funding; for smaller chains, the same tactic of stringing users along may not work. The "grassroots" rise of ZK Fair described above is therefore worth learning from — beyond blindly chasing TVL, chains must think about sustaining TVL long-term. Moreover, as more and more L2s launch, L1 fees will fall, eroding L2s' biggest advantage. This is another challenge L2s face: how to win users other than purely on price.
Summary
This article took ZK Fair's rapid climb to $120 million in TVL as its starting point and surveyed the Rollup sector, covering incumbents such as Arbitrum and Optimism and newcomers such as ZK Fair, Blast, Manta, and Metis.
On the technology side, it covered the Polygon CDK modular toolkit and Celestia's modular DA concept, compared Optimism and Arbitrum, and noted that decentralized sequencers may adopt a PoS mechanism, making Rollups more transparent and decentralized.
Looking ahead, it highlighted the broad appeal of the fair-launch model and the possibility of Rollups taking L1 market share, noting that Rollups are now experientially indistinguishable from L1s, with efficient transactions and low fees attracting users. It stressed the importance of public goods and the fee-sharing schemes proposed by the new round of Rollups, and closed with breaking the business-model deadlock and sustaining TVL long-term.
Put simply, this round of new Rollups is characterized by: new projects, live tokens, modularity, and openly generous incentives — spinning up the flywheel of early business activity and token price faster.
Kernel Ventures is a crypto venture capital fund driven by a research and development community, with more than 70 early-stage investments focused on infrastructure, middleware, and dApps — especially ZK, Rollups, DEXes, and modular blockchains — as well as verticals that will onboard the next billions of crypto users, such as account abstraction, data availability, and scalability. For the past seven years, we have been committed to supporting core development communities and university blockchain associations around the world.
References:
Thoughts on Rollup Summer: https://www.chaincatcher.com/article/2110635
ZK Fair official documentation: https://docs.zkfair.io/
Kernel Ventures: Cancun Upgrade — And Its Impact on the Broader Ethereum Ecosystem
Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
- Ethereum has completed the first three upgrade phases, which addressed development thresholds, DoS attacks, and the POS transition, respectively; the main goal of the current upgrade phase is to reduce transaction fees and optimize the user experience.
- EIP-1153, EIP-4788, EIP-5656, and EIP-6780 respectively reduce the cost of inter-contract interactions, improve the efficiency of beacon-chain access, reduce the cost of data copying, and limit the authority of the SELFDESTRUCT bytecode.
- By introducing blob data held outside the block, EIP-4844 can greatly increase Ethereum's TPS and reduce data storage costs.
- The Cancun upgrade will additionally benefit Ethereum-specific DAs, while the Ethereum Foundation is not open to DA solutions that do not use Ethereum at all in their data storage.
- The Cancun upgrade is likely to be relatively more favorable for Op Layer2, given its more mature development environment and the increased demand for the Ethereum DA layer.
- The Cancun upgrade will raise the performance ceiling of DApps, allowing functionality closer to that of Web2 apps. On-chain games, which remain popular but need a lot of storage space on Ethereum, are worth watching.
- Ethereum is undervalued at this stage, and the Cancun upgrade could be the signal for Ethereum to start soaring.
1. Ethereum's Upgrade
From October 16th of last year, when Cointelegraph published fake news that the Bitcoin ETF had passed, to January 11th this year, when the ETF finally passed, the crypto market experienced a price surge. Because Bitcoin is more directly affected by the ETF, Ethereum's and Bitcoin's prices diverged during this period: Bitcoin peaked at nearly $49,000, recovering two-thirds of its previous bull-market peak, while Ethereum peaked at around $2,700, just over half of its own. Since the Bitcoin ETF landed, however, the ETH/BTC ratio has rebounded significantly. Beyond the expectation of an upcoming Ethereum ETF, another important reason is that the delayed Cancun upgrade recently announced public testing on the Goerli test network, signaling that it is imminent. As things stand, the Cancun upgrade will take place in the first quarter of 2024 at the earliest. The Cancun upgrade is part of Ethereum's Serenity phase, designed to address Ethereum's low TPS and high transaction costs at this stage. Prior to Serenity, Ethereum went through the Frontier, Homestead, and Metropolis phases, which separately addressed development thresholds, DoS attacks, and the POS transition. The Ethereum roadmap clearly states that the main goal of the current phase is cheaper transactions and a better user experience.

Source: TradingView
2. Content of the Cancun Upgrade
As a decentralized community, Ethereum bases its upgrades on proposals from the developer community that ultimately win the support of the majority of the Ethereum community. These proposals — those already adopted and those still under discussion or soon to reach mainnet — are collectively referred to as EIPs. Five EIPs are expected to be adopted in the Cancun upgrade: EIP-1153, EIP-4788, EIP-5656, EIP-6780, and EIP-4844.
2.1 Essential Mission EIP-4844
Blob: EIP-4844 introduces a new transaction type to Ethereum carrying blobs, ~125 KB data blocks. Blobs compress and encode transaction data; unlike CALLDATA bytecode, they are not permanently stored on Ethereum, which greatly reduces gas consumption, but they cannot be accessed directly from the EVM. After EIP-4844, each transaction can carry up to two blobs and each block up to 16. However, the Ethereum community recommends eight blobs per block; beyond eight, blobs can still be included but face a steadily increasing gas cost until the 16-blob maximum is reached.
In addition, two other core technologies in EIP-4844 are KZG polynomial commitments and temporary storage, analyzed in detail in our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. In summary, EIP-4844's changes to individual block capacity and to where transaction data is stored significantly increase the Ethereum network's TPS while reducing its gas costs.
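The "steadily increasing gas cost" above comes from blob pricing: EIP-4844 sets the blob base fee as an exponential function of the excess blob gas, approximated in integer arithmetic by the `fake_exponential` routine given in the EIP. The sketch below uses small illustrative parameters; the mainnet constants are much larger.

```python
# Integer approximation of factor * e^(numerator/denominator) via its
# Taylor series, as specified in EIP-4844 for blob base-fee pricing.
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

# Illustrative parameters, not the mainnet constants: the fee grows
# e-fold for every `UPDATE_FRACTION` units of excess blob gas.
MIN_BLOB_BASE_FEE = 1
UPDATE_FRACTION = 100  # hypothetical; the EIP uses a far larger value

fees = [fake_exponential(MIN_BLOB_BASE_FEE, x, UPDATE_FRACTION)
        for x in (0, 100, 200)]
# ~e^0, e^1, e^2, floored → [1, 2, 7]
```

Because the excess blob gas rises whenever blocks carry more than the target number of blobs, the fee climbs exponentially under sustained demand and decays back when demand drops.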
2.2 Side Missions
EIP-1153: This proposal reduces storage costs during contract interactions. A transaction on Ethereum can be broken into multiple frames created by the CALL instruction set; these frames may belong to different contracts, so a transaction may involve passing information across multiple contracts. There are two ways to pass state between contracts: as input/output, or via the SSTORE/SLOAD bytecodes for permanent on-chain storage. Passing data in memory is cheaper, but if the transmission passes through any untrusted third-party contract there is a serious security risk; using SSTORE/SLOAD instead incurs considerable storage overhead and burdens on-chain storage. EIP-1153 solves this by introducing the transient storage opcodes TSTORE and TLOAD. Variables stored by these bytecodes have the same properties during the transaction as those stored via SSTORE/SLOAD, but transiently stored data does not remain on chain after the transaction ends — it is destroyed like a temporary variable — achieving both a secure state-passing process and a relatively low storage cost.

Source: Kernel Ventures
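The TSTORE/TLOAD behavior described above can be modeled with a toy EVM state: transient slots read and write like storage within a transaction but are wiped when it ends. This is a simplification that ignores gas, call frames, and reverts.

```python
# Toy model of EIP-1153 transient storage: TSTORE/TLOAD behave like
# SSTORE/SLOAD while a transaction runs, but the slots are wiped when
# the transaction ends instead of being written to state.
# EVM details (gas, frames, reverts) are omitted.

class ToyEVM:
    def __init__(self):
        self.storage = {}    # persistent, survives the transaction
        self.transient = {}  # EIP-1153, cleared after each transaction

    def sstore(self, key, value): self.storage[key] = value
    def tstore(self, key, value): self.transient[key] = value
    def tload(self, key): return self.transient.get(key, 0)

    def end_transaction(self):
        self.transient.clear()  # transient slots never reach state

evm = ToyEVM()
evm.tstore("reentrancy_lock", 1)  # visible to nested calls in this tx
assert evm.tload("reentrancy_lock") == 1
evm.end_transaction()
assert evm.tload("reentrancy_lock") == 0  # gone after the transaction
```

A reentrancy lock is the canonical use case: the lock must be visible across frames within one transaction but has no reason to persist afterward.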
EIP-4788: In the beacon chain after Ethereum's POS upgrade, each new execution block contains the root of its parent beacon block. Because the consensus layer has already reliably stored older roots, only the latest roots need to be kept when creating a new block, even if some older roots are missing. However, frequently requesting data from the EVM to the consensus layer during block creation is inefficient and creates opportunities for MEV. EIP-4788 therefore proposes a dedicated Beacon Root Contract to store the latest roots, exposing the parent beacon roots to the EVM and greatly improving the efficiency of data access.

Source: Kernel Ventures
EIP-5656: Copying data in memory is a very frequent basic operation on Ethereum, but performing it in the EVM incurs significant overhead. To solve this, the Ethereum community proposed the MCOPY opcode in EIP-5656, which enables efficient copying in the EVM. MCOPY uses a dedicated data structure for short-term storage of the data being copied, including efficient slice access and in-memory object replication. A dedicated MCOPY instruction also provides forward-looking protection against changes to the gas cost of CALL instructions in future Ethereum upgrades.

Source: Kernel Ventures
EIP-6780: In Ethereum, SELFDESTRUCT destroys a contract and clears all the code and state associated with it. In the Verkle Tree structure that Ethereum will use in the future, however, this poses a serious problem. In an Ethereum that stores state in a Verkle Trie, the emptied storage is marked as previously written but empty. This causes no observable difference in EVM execution, but a contract that was created and then deleted yields a different Verkle commitment than if those operations had never taken place, creating data-validation problems under the Verkle Tree structure. EIP-6780 therefore restricts SELFDESTRUCT to returning a contract's ETH to a specified address, leaving the contract's code and storage state on Ethereum.
3. Prospect of Different Verticals Post Cancun Upgrade
3.1 DA
3.1.1 Profit Model
For an introduction to the principles of DA and the various DA types, see our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. For DA projects, revenue comes from the fees users pay to store data, and expenses come from the cost of keeping the storage network running and the stored data durable and secure; the remainder is the value the network accumulates. The main way a DA project increases its value is by raising the utilization of its storage space, attracting as many users as possible to store data on the network. Meanwhile, storage-technology improvements such as data compression or sharded storage can on the one hand reduce network expenses and on the other allow higher value accumulation.
3.1.2 Sub-sectors of DA
There are three main types of DA services today — DA for the main chain, modular DA, and storage-chain DA — which are described and differentiated in Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design.
3.1.3 Impact of Cancun Upgrade on DA
- User demand: After the Cancun upgrade, Ethereum's historical transaction data will grow by tens of times, bringing correspondingly larger storage needs. Since post-Cancun Ethereum does not itself improve storage performance — the main chain's DA layer simply cleans up this history on a regular schedule — this slice of the data-storage market will naturally fall to DA projects of all kinds, bringing them greater user demand.
- Direction of development: The growth of Ethereum's historical data after the Cancun upgrade will push major DA projects to improve the efficiency and interoperability of their data interaction with Ethereum in order to better capture this market. Foreseeably, cross-chain storage-bridge technologies will become the development focus for storage-chain DAs and modular DAs, while Ethereum's main-chain DAs must consider how to further improve compatibility with the mainnet and minimize transmission costs and risks.
3.1.4 Cancun Upgrade and Various DA Verticals
The Cancun upgrade brings faster data growth to Ethereum without changing how data is stored and synchronized across the network, forcing the main chain to regularly clean up large amounts of historical data and delegate its long-term storage. Yet this historical data is still needed — by project teams conducting airdrops and by on-chain analytics organizations. The value behind the data will attract competition among DA projects, and market share will be decided by each project's data security and storage cost.
- DA for the main chain: At this stage, the storage market for main-chain DA projects such as EthStorage mainly comes from large-memory data — images, music, and the like — of NFT projects on Ethereum. Thanks to the high compatibility between their node clusters and Ethereum, main-chain DAs can interact securely with the Ethereum mainnet at low cost. They also keep storage-index data in Ethereum smart contracts rather than fully detaching the DA layer from Ethereum, an approach strongly supported by the Ethereum Foundation. For the storage market created by Ethereum, main-chain-specific DAs hold a natural advantage over other DAs.
- Modular DA and storage-chain DA: Compared with main-chain DA, these projects will find it hard to win a competitive edge in historical-data storage performance after the Cancun upgrade. However, main-chain DA is still in testing and not fully deployed while the Cancun upgrade is imminent; if the dedicated DA projects cannot ship a working storage solution before the upgrade, this round of data-value mining may still be dominated by modular DAs.
3.1.5 Opportunities for DA Post Cancun Upgrade
EthStorage: Main-chain DA such as EthStorage will be the biggest beneficiary of the Cancun upgrade and deserves attention. Since the recent news that the upgrade may take place in February this year, EthStorage's official X account has been very active, releasing its new official website and annual report; the marketing seems quite successful.
Let's celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design!
Meet the Frontier of Scalability
Real-time Cost Comparison with Ethereum
How EthStorage Works
Core Features of EthStorage
Applications Enabled by EthStorage
However, comparing the latest official website with the 2022 version, aside from a slicker front end and a more detailed introduction, it has not added many new service functions; the main offerings are still storage and the Web3Q domain-name service. If interested, you can use the link below to get the test token W3Q and try the EthStorage service on the Galileo Chain network. To receive tokens, you need a W3Q domain name or an account holding more than 0.1 ETH on mainnet. Judging from recent faucet activity, participation has not been large at this stage despite the publicity. However, given that EthStorage raised a $7 million seed round in July this year with no visible deployment of the funds, the project may quietly be preparing infrastructure advances, waiting to release them around the Cancun upgrade to capture maximum attention.

EthStorage's Faucet, Source: Web3q.io
Celestia: Celestia is currently the leading modular DA project. Compared with the main-chain DA projects still in development, Celestia began making its mark in the last bull market, when it received its first round of funding. After more than two years of groundwork, Celestia refined its rollup model and token model and, after a long testing period, completed its mainnet launch and first airdrop on October 31st. The token's price has risen since trading opened, recently exceeding $20; at the current circulating supply of 150 million TIA, the project's market capitalization has already reached $3 billion. However, considering the limited user base of the blockchain historical-storage sector, TIA's market cap has far exceeded that of Arweave — a traditional storage chain with a richer profit model — and is closing in on Filecoin's. Even though there is still room to grow relative to the bull market, TIA looks somewhat overvalued at this stage. Still, with star-project support and airdrop enthusiasm that has not dissipated, Celestia remains one to watch if the Cancun upgrade proceeds in the first quarter of this year as expected. One risk is worth noting: the Ethereum Foundation has repeatedly emphasized in discussions involving Celestia that any project departing from Ethereum's DA layer is not Layer2, signaling rejection of third-party storage projects such as Celestia. How the Foundation positions itself around the Cancun upgrade adds uncertainty to Celestia's pricing.

Source: CoinmarketCap
3.2 Layer2
3.2.1 Profit Model
Due to the growing number of users and projects on Ethereum, its low TPS has become a major obstacle to further ecosystem development, and high transaction fees make it hard to promote projects involving complex interactions at scale. Yet many projects have already landed on Ethereum, migration carries huge costs and risks, and — apart from the payments-focused Bitcoin chain — it is hard to find a public chain with security comparable to Ethereum's. Layer2 emerged as an attempt to solve these problems: it places all transaction processing and computation on another chain (Layer2), verifies the packaged data through smart contracts bridged to Layer1, and updates state on the mainnet. Layer2 focuses on transaction processing and validation, using Ethereum as the DA layer to store compressed transaction data, yielding faster speeds and lower computation costs. Users who wish to execute transactions on Layer2 must buy Layer2 tokens and pay the network operator in advance. The operator in turn pays for the security of the data stored on Ethereum, so Layer2's revenue is what users pay for the security of Layer2 data minus what Layer2 pays for data security on Layer1. Two kinds of improvement can therefore bring an Ethereum Layer2 more revenue. On the revenue side, the more active the Ethereum ecosystem and the more projects it has, the more users and projects need cheaper gas and faster transactions, bringing a larger user base to Layer2; with profit per transaction unchanged, more transactions mean more revenue for the operator.
On the cost side, if Ethereum's storage cost falls, the DA-layer storage fees paid by the Layer2 project fall, and with transaction volume unchanged, the Layer2 operator also earns more.
3.2.2 Sub-sectors of Layer2
Around 2018, Ethereum Layer2 schemes blossomed into four kinds: Sidechain, Rollup, State Channel, and Plasma. However, because of the risk of data unavailability during off-chain transmission and numerous griefing attacks, State Channels have gradually been marginalized among Layer2 schemes, and Plasma is relatively niche — it cannot crack the Layer2 top 10 by TVL — so neither is discussed here. Sidechain-style solutions, which do not use Ethereum as a DA layer at all, have likewise been gradually excluded from the definition of Layer2. This article therefore discusses only the mainstream Layer2 scheme, Rollup, analyzing its two sub-tracks: ZK Rollup and Op Rollup.
Optimistic Rollup
Implementation principle: An Optimistic Rollup chain first deploys a bridge contract on the Ethereum mainnet, through which it interacts with Ethereum. Op Layer2 (Optimistic Rollup Layer2) batches users' transaction data and sends it to Ethereum, including the latest state root of accounts on Layer2, the batch root, and the compressed transaction data. At this stage, these data are stored as Calldata in the bridge contract; although this saves a great deal of gas compared with permanent storage in the MPT, it is still a considerable data overhead and also stands in the way of future performance improvements for Op Layer2.

Source: Kernel Ventures
Current status: Today, Op Layer2 is the leading Layer2 ecosystem, with the top five Layer2s by TVL all from the Optimistic Rollup camp; the combined TVL of Optimism and Arbitrum alone exceeds $16 billion.

Source: L2BEAT
One of the main reasons the Op Rollup ecosystem occupies the leading position is its friendly development environment. It completed its first round of Layer2 releases and mainnet launches before ZK Rollup, attracting a large number of DApp developers frustrated by Ethereum's fees and low TPS and shifting DApp development from Layer1 to Layer2. At the same time, Op Layer2's underlying layer is highly compatible with the EVM, clearing obstacles for projects migrating from the Ethereum mainnet: DApps of every kind on Ethereum — Uniswap, Sushiswap, Curve, and so on — deployed to Layer2 in the shortest possible time, and projects such as Worldcoin even migrated over from the Polygon mainnet. Today Op Layer2 hosts not only Uniswap V3, a leading Ethereum DeFi, and GMX, a native DeFi project with more than $100 million in TVL, but also Friend.tech, a SocialFi project with more than $20 million in transaction fees — an ecosystem that has not only accumulated projects in number but also achieved qualitative breakthroughs through high-quality projects in each track. In the long run, however, ZK Layer2 (ZK Rollup Layer2) has a higher TPS ceiling and lower gas per transaction, and Op Layer2 will face fierce competition as ZK Rollup technology gradually matures.

Source: Dune
ZK Rollup (Zero-knowledge Rollup)
Implementation principle: Transaction data in ZK Layer2 is processed much as in Op Layer2 — packaged on Layer2 and then returned to a Layer1 smart contract to be stored as Calldata. However, ZK Layer2 adds a step of generating a ZK proof (ZKp), and it does not need to return the compressed transaction data to the network: it returns only the transaction root and batch root, with the ZKp used to verify the legitimacy of the corresponding transactions. Data returned to Layer1 via ZK Rollup requires no challenge window and can update state on the mainnet in real time once verified.

Source: Kernel Ventures
Current status: ZK Layer2 has become the second-largest Layer2 ecosystem after Op Layer2, with four of the top ten Layer2s by TVL being ZK Layer2. But no single ZK Layer2 is as strong as the Op Layer2 leaders; although ZK Layer2 is widely seen as having good prospects, its development keeps lagging. The first reason is that Op Layer2's earlier launch attracted many developers to build there, and unless migration offers sufficient benefits, they are unlikely to move projects already generating stable income on Op Layer2. Secondly, many ZK Layer2 projects are still wrestling with low-level compatibility with Ethereum. For example, Linea, a ZK star project, is currently incompatible with many EVM opcodes, creating development obstacles for teams adapting to the EVM; another star project, zkSync, cannot yet achieve low-level EVM compatibility and is only compatible with some Ethereum development tools.

Source: Kernel Ventures
Imperfect compatibility with Ethereum also makes it hard to migrate native projects. Since bytecode is not fully interoperable, projects must modify their underlying contracts to fit the zkEVM — a process with many difficulties and risks that slows the migration of Ethereum-native projects. At this stage, most projects on ZK Layer2 are native ones, mainly relatively simple DeFi such as ZigZag and SyncSwap, and both the total number and the diversity of ZK Layer2 projects await further development. ZK Layer2's advantage, however, lies in its technical sophistication: if zkEVM–EVM compatibility is achieved and ZKp generation algorithms are perfected, ZK Layer2's performance ceiling will exceed Op Layer2's. This is why ZK Layer2 projects keep emerging in an Op Layer2-dominated market: with the Op Layer2 track already carved up, the best way for latecomers to attract users away from their current networks is to offer an expected better solution. Yet even if ZK Layer2 is one day technically perfected, and Op Layer2 has by then formed a comprehensive ecosystem with enough projects on the ground, whether users and developers would take the enormous risk of migrating to a better-performing Layer2 remains unknown. Op Layer2 is also improving at this stage to cement its position, including Optimism's open-source OP Stack, which helps other Op Layer2 developers build quickly, and improvements to the challenge mechanism such as the bisection challenge.
While ZK Layer2 improves, Op Layer2 is not slowing down, so ZK Layer2's priority at this stage is to push forward its cryptographic algorithms and EVM compatibility before users become locked into the Op Layer2 ecosystem.
3.2.3 Impact of Cancun Upgrade on Layer2
Transaction speed: After the Cancun upgrade, a block can carry up to 20 times more data via blobs while the block production rate stays unchanged. In theory, then, a Layer2 that uses Layer1 as its DA and settlement layer can gain up to a 20x TPS increase. Even at a 10x increase, any of the major Layer2 stars would exceed the mainnet's highest historical transaction speed.

Source: L2BEAT
- Transaction fee: One of the biggest factors keeping Layer2 fees from falling is the cost of data security paid to Layer1 — currently close to $3 per KB of Calldata stored in an Ethereum smart contract. After the Cancun upgrade, Layer2's packaged transaction data is stored only as blobs in Ethereum's consensus layer, where 1 GB of storage costs only about $0.1 per month, greatly reducing Layer2 operating costs. To attract more users, Layer2 operators will surely pass part of this new margin on to users, reducing Layer2 transaction costs.
- Scalability: The Cancun upgrade's impact on Layer2 comes mainly from its temporary-storage scheme and the new blob data type. Temporary storage periodically removes old mainnet state that is useless for current validation, reducing storage pressure on nodes and thus speeding up network synchronization and node access between Layer1 and Layer2. Blobs, with their large external space and a flexible adjustment mechanism based on the gas price, adapt better to changes in network transaction volume: a block carries more blobs when volume is high and fewer when volume drops.
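The gap between the two price points quoted above ($3 per KB of calldata versus roughly $0.1 per GB-month as a blob) can be made concrete with a rough monthly-cost comparison. The prices are the article's snapshot figures, not live quotes, and the throughput is an invented example.

```python
# Rough comparison of the DA cost figures quoted above: ~$3 per KB of
# calldata versus ~$0.1 per GB-month as a blob. Snapshot prices from
# the text; the 100 MB/day throughput is a hypothetical example.

CALLDATA_USD_PER_KB = 3.0
BLOB_USD_PER_GB_MONTH = 0.1

def monthly_da_cost(mb_per_day: float, use_blobs: bool) -> float:
    kb_per_month = mb_per_day * 1024 * 30
    if use_blobs:
        return kb_per_month / (1024 * 1024) * BLOB_USD_PER_GB_MONTH
    return kb_per_month * CALLDATA_USD_PER_KB

before = monthly_da_cost(100, use_blobs=False)  # ~$9.2M per month
after = monthly_da_cost(100, use_blobs=True)    # under $1 per month
```

The pricing models differ (calldata is a one-off cost, blobs are priced while retained), so this is only an order-of-magnitude illustration of why blob storage slashes Layer2 operating costs.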
3.2.4 Cancun Upgrade and Various Layer2 Verticals
The Cancun upgrade will be positive for the entire Layer2 ecosystem. Since its core change reduces the cost of data storage and enlarges individual blocks on Ethereum, Layer2s that use Ethereum as their DA layer will naturally see a corresponding increase in TPS and a reduction in the storage fees they pay to Layer1. However, because the two Rollup types use the Ethereum DA layer to different degrees, Op Layer2 and ZK Layer2 will benefit to different degrees.
Op Layer2: Since Op Layer2 must leave the full compressed transaction data on Ethereum for recording, it pays more in transaction fees to Ethereum than ZK Layer2 does. By reducing gas consumption, EIP-4844 therefore gives Op Layer2 a larger fee reduction, narrowing ZK Layer2's advantage in fees. At the same time, this round of Ethereum gas reduction is bound to attract more participants and developers, and compared with ZK Layer2, much of which has not yet issued tokens and whose underlying layer is difficult to make EVM-compatible, more projects and capital will tend to flock to Op Layer2, especially Arbitrum, which has performed strongly recently. This may lead to a new round of development in the Layer2 ecosystem led by Op Layer2, particularly for SocialFi and GameFi projects, which are hurt by high fees and struggle to provide a quality user experience. Along with that, this phase of Layer2 is likely to see the emergence of many quality projects that approach the Web2 user experience. If Op Layer2 captures this round of development as well, it will further widen the gap with the ZK Layer2 ecosystem and make any later catch-up by ZK Layer2 very difficult.
ZK Layer2: Because ZK Layer2 does not need to store full transaction data on chain, it benefits less from the gas reduction than Op Layer2. Although ZK Layer2 is still maturing and lacks Op Layer2's large ecosystem, the facilities on Op Layer2 are already well established and competition among developers there is fierce, so it may not be wise for new entrants attracted by the Cancun upgrade to compete head-on with mature Op Layer2 developers. If ZK Layer2 can improve its developer tooling at this stage and provide a better development environment, then, given ZK Layer2's better long-term prospects and the intensity of competition on Op Layer2, new developers may choose to flock to the ZK Layer2 track instead. That would accelerate ZK Layer2's catch-up and let it overtake Op Layer2 before the latter completely dominates the market.
3.2.5 Opportunities for Layer2 Post Cancun Upgrade
DYDX: Although DYDX is a DEX deployed on Ethereum, its functions and principles differ greatly from traditional Ethereum DEXs such as Uniswap. First, it uses an order book instead of the AMM trading model adopted by mainstream DEXs, which gives users a smoother trading experience and creates good conditions for leveraged trading. In addition, it uses Layer2 solutions such as StarkEx for scalability and transaction processing, batching transactions off-chain and submitting them back on-chain. Thanks to these Layer2 underpinnings, DYDX offers transaction costs far below those of traditional DEXs, at only about $0.005 per trade. Around the Cancun upgrade, with Ethereum and related tokens volatile, a surge in high-risk activity such as leveraged trading is almost certain. After the upgrade, fees on DYDX should beat those of CEXs even on small trades, while offering greater fairness and security, providing an excellent trading environment for high-risk investors and leverage enthusiasts. From this perspective, the Cancun upgrade will bring DYDX a very good opportunity.
Rollup Node: The data periodically purged after the Cancun upgrade is no longer needed to validate newly produced blocks, but that does not mean the purged data has no value. For example, project teams preparing airdrops need complete historical data to assess the fund security of each account due to receive an airdrop, and on-chain analytics firms often need complete historical data to trace fund flows. In such cases, one option is to query the historical data from a Layer2's Rollup operator, and the operator can charge for the retrieval. Therefore, in the context of the Cancun upgrade, effectively improving the data storage and retrieval mechanisms on Rollups, and developing related projects in advance, will greatly increase a project's chances of survival and further development.
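To make the order-book model concrete, here is a minimal price-time matching sketch in Python. It is a generic illustration of how an order book fills a market order against resting asks, not dYdX's actual matching engine (which runs off-chain on StarkEx); all prices and sizes are made up:

```python
import heapq

class OrderBook:
    """Minimal price-time priority order book sketch (asks side only)."""

    def __init__(self):
        self._asks = []  # min-heap of (price, arrival_seq, size): best price first
        self._seq = 0

    def place_ask(self, price: float, size: float) -> None:
        heapq.heappush(self._asks, (price, self._seq, size))
        self._seq += 1

    def market_buy(self, size: float):
        """Fill a market buy against the best asks; returns (filled, total_cost)."""
        filled = cost = 0.0
        while size > 0 and self._asks:
            price, seq, avail = heapq.heappop(self._asks)
            take = min(size, avail)
            filled += take
            cost += take * price
            size -= take
            if avail > take:  # put the unfilled remainder back at the same priority
                heapq.heappush(self._asks, (price, seq, avail - take))
        return filled, cost

book = OrderBook()
book.place_ask(100.0, 5)
book.place_ask(101.0, 5)
filled, cost = book.market_buy(8)
print(filled, cost)  # 8 units filled: 5 @ 100 + 3 @ 101 = 803
```

Unlike an AMM, the taker here pays exactly the resting quotes rather than a curve-determined price, which is what makes fills predictable enough for leveraged trading.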
3.3 DApp
3.3.1 Profit Model
Similar to Web2 applications, a DApp serves to provide services to users on Ethereum. For example, Uniswap provides real-time exchange between different ERC20 tokens; Aave provides over-collateralized lending and flash loans; and Mirror gives creators decentralized content-creation opportunities. The difference is that in Web2, the main way to profit is to attract more users to the platform through cheap, high-quality services, then treat that traffic as value, attracting third-party advertising and profiting from the ads. A DApp, by contrast, makes no claim on users' attention throughout the process and pushes no recommendations; instead, it collects a commission for each individual service it provides. The value of a DApp therefore comes mainly from how often users use its services and the depth of each interaction; to increase its value, a DApp needs to offer services better than those of similar DApps, so that more users tend to choose it over alternatives.
3.3.2 DApp Verticals
At this stage, Ethereum DApps are dominated by DeFi, GameFi, and SocialFi. There were some gambling projects in the early days, but due to the limits of Ethereum's transaction speed and the release of better-suited public chains such as EOS, gambling projects have gradually declined on Ethereum. These three types of DApps provide financial, gaming, and social services respectively, and capture value from them.
DeFi
Implementation Principle: DeFi is essentially one or a series of smart contracts on Ethereum. In the release phase, the relevant contracts (such as token contracts and exchange contracts) are deployed on the Ethereum mainnet, and these contracts connect the DeFi function modules to Ethereum through their interfaces. When users interact with the protocol, they call the contract interfaces to deposit, withdraw, and exchange tokens; the DeFi smart contract packages the transaction data, interacts with Ethereum through the contract's script interface, and records the state changes on the Ethereum chain. In this process, the DeFi contract charges a fee, both to reward upstream and downstream liquidity providers and for its own profit.
Current status: DeFi holds an absolutely dominant position among DApps. Apart from cross-chain and Layer2 projects, DeFi occupies every other place in the top 10 DApps by contract assets on Ethereum. To date, the cumulative number of DeFi users on Ethereum has exceeded 40 million. Although monthly active users have declined from the peak of nearly 8 million in November 2021 under the bear market, they have recovered to about half of the peak as the market warms up, waiting for the next bull market to surge again. Meanwhile, DeFi is becoming more diverse and versatile: from the early token swaps and collateralized lending to today's leveraged trading, forward buying, NFT finance, and flash loans, the financial instruments available in Web2 have gradually been reproduced in DeFi, and some instruments impossible in Web2, such as flash loans, have been realized as well.

Source: DAppRadar
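The exchange-contract logic described above can be illustrated with a minimal constant-product AMM in the style of Uniswap V2. This is a simplified Python sketch under assumed parameters (a 0.3% pool fee and made-up reserves), not any project's actual contract code:

```python
class ConstantProductPool:
    """Minimal x*y=k AMM sketch with a 0.3% fee, in the style of Uniswap V2."""

    def __init__(self, reserve_a: float, reserve_b: float, fee: float = 0.003):
        self.reserve_a = reserve_a
        self.reserve_b = reserve_b
        self.fee = fee  # taken from the input amount; accrues to liquidity providers

    def swap_a_for_b(self, amount_in: float) -> float:
        # Apply the fee to the input, then keep the invariant x * y = k.
        amount_in_after_fee = amount_in * (1 - self.fee)
        k = self.reserve_a * self.reserve_b
        amount_out = self.reserve_b - k / (self.reserve_a + amount_in_after_fee)
        # The full input (including the fee portion) stays in the pool's reserves,
        # which is why k grows slightly with every trade.
        self.reserve_a += amount_in
        self.reserve_b -= amount_out
        return amount_out

pool = ConstantProductPool(1_000_000, 1_000_000)  # assumed initial reserves
out = pool.swap_a_for_b(10_000)
print(f"10,000 A swapped for {out:,.0f} B")  # slightly under 10,000: fee plus slippage
```

The fee that stays in the pool is the on-chain analogue of the commission described above: it rewards liquidity providers on every swap without the protocol ever selling user attention.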
SocialFi
Implementation Principle: Like traditional social platforms, SocialFi lets individuals create content and publish it through the platform to spread it and attract followers, while users can browse the content they want and obtain the services they need. The difference is that the content users publish, the interaction records between creators and their fans, and the account information itself are all recorded in a decentralized way through blockchain smart contracts, returning ownership of the information to each individual account. For a SocialFi platform, the more people are willing to create and share content through it, the more revenue it can generate from providing these services. The fees users pay to interact on the platform, minus the cost of storing account and transaction data, are the SocialFi project's profit.
Current status: Although the UAW (Unique Active Wallets) of head SocialFi projects may look comparable to DeFi's, that volume often comes from airdrop expectations and is unsustainable. After the initial boom, Friend.tech has had fewer than 1,000 UAW in recent days, and comparing SocialFi with DeFi outside the top 5 supports this conclusion even more clearly. The root cause is that SocialFi's high service fees and inefficiency prevent it from carrying the social functions it is supposed to have, reducing it to a purely speculative platform.

Source: DAppRadar
GameFi
Implementation Principle: GameFi works much like SocialFi, except that the object of the application is a game. At this stage, the mainstream way for GameFi projects to profit is to sell in-game props.
Current status: To earn more, a project essentially needs more people to join the game, and at this stage only two things attract users. One is the fun of the game, which drives users to buy props to gain access or a better gaming experience. The other is the expectation of profit: users believe they can sell the props at a higher price in the future. The first model is similar to Steam, where the project earns real money and users enjoy the game. In the second model, the profits of both users and the project come from a constant influx of new users; once new funds can no longer offset the project's prop issuance, the project quickly falls into a vicious cycle of selling, declining market expectations, and further selling, making revenue hard to sustain. This gives the model a Ponzi character. Due to the limits of blockchain fees and transaction speed, GameFi at this stage largely cannot deliver the user experience the first model requires, so most projects follow the second model.
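The dynamic of the second model can be shown with a toy simulation: when new-user inflows grow more slowly than prop issuance, the implied prop price must trend downward. Every number here is hypothetical, chosen only to illustrate the mechanism:

```python
def prop_price_over_time(inflow0: float, inflow_growth: float,
                         supply0: float, issuance_rate: float, rounds: int):
    """Toy model: implied prop price = new money in a round / props issued that round.

    All parameters are hypothetical. When inflows grow slower than issuance,
    the per-round price ratio is below 1, so the price series declines.
    """
    prices = []
    inflow, supply = inflow0, supply0
    for _ in range(rounds):
        issued = supply * issuance_rate        # props minted this round
        prices.append(inflow / issued)         # price new money can support
        supply += issued
        inflow *= inflow_growth
    return prices

# Inflows grow 5% per round while issuance grows 10% per round:
prices = prop_price_over_time(inflow0=1_000_000, inflow_growth=1.05,
                              supply0=100_000, issuance_rate=0.10, rounds=12)
print("price falls:", prices[0] > prices[-1])
```

Once the price trend turns down, the selling-and-declining-expectations loop described above takes over, which is exactly the Ponzi fragility of the second model.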
3.3.3 Impact of Cancun Upgrade on DApps
Performance optimization: After the Cancun upgrade, a block can carry more transaction data, which for a DApp means more state changes per block. Calculated on an average expansion of 8 blobs per block, DApp processing speed after the upgrade can reach roughly ten times the original.
Reduced costs: Data storage is a fixed expense for DApps, and DApps on both Layer1 and Layer2 directly or indirectly use Ethereum to record the state of their accounts. With the Cancun upgrade, transactions in a DApp can be stored as blob data, significantly reducing the cost of running the DApp.
Functionality expansion: Because storage on Ethereum is expensive, project teams deliberately minimize the data they put on chain during DApp development. This has kept many Web2 experiences from migrating to DApps: SocialFi cannot support video creation the way Twitter does, and even if it could, the data would not enjoy Ethereum-level security at the base layer; GameFi interactions are often low-level and uninteresting because every state change must be recorded on chain. With the Cancun upgrade, project teams have more room to experiment in these areas.
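The "roughly ten times" figure can be reproduced with one line of arithmetic. Both inputs are illustrative assumptions: Layer2 batches are taken to occupy about 100 KB of calldata per block today, and the post-Cancun target is 8 blobs of ~128 KB per block:

```python
# Rough estimate of the DApp throughput gain from blobs.
# Assumed inputs: ~100 KB of L2 batch calldata per block today, and a
# post-Cancun target of 8 blobs (~128 KB each) per block.

BLOB_BYTES = 128 * 1024
TARGET_BLOBS_PER_BLOCK = 8

calldata_bytes_per_block = 100 * 1024                 # assumed current L2 data per block
blob_bytes_per_block = TARGET_BLOBS_PER_BLOCK * BLOB_BYTES

speedup = blob_bytes_per_block / calldata_bytes_per_block
print(f"~{speedup:.0f}x more batch data per block")   # on the order of 10x
```

Since block time is unchanged, a ~10x increase in batch data per block translates directly into a ~10x ceiling on DApp state changes per second.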
3.3.4 Cancun Upgrade and Various DApp Verticals
DeFi: The impact of the Cancun upgrade on DeFi is relatively small, because the only thing DeFi needs to record is the current state of users' assets in the contract, whether staked, borrowed, or otherwise, and the amount of data to store is much smaller than for the other two types of DApps. However, the increase in Ethereum's TPS brought by the upgrade can greatly facilitate DeFi's high-frequency arbitrage business and leveraged trading, which must open and close positions in a short time. Meanwhile, the reduction in storage costs, though barely noticeable in a single token swap, adds up to significant fee savings across leveraged and arbitrage trades.
SocialFi: The Cancun upgrade has the most direct impact on SocialFi's performance. It improves the ability of SocialFi smart contracts to process and store large amounts of data, enabling a user experience closer to Web2. At the same time, basic interactions such as creating, commenting, and liking can be done at lower cost, attracting genuinely social long-term participants.
GameFi: For the asset-on-chain games of the last bull market, the effect is similar to DeFi: the drop in storage cost is relatively small, but the TPS increase still benefits high-frequency interactions, interaction timeliness, and complex interactive features that improve playability. Fully on-chain games are affected more directly. Since all game logic, state, and data live on chain, the Cancun upgrade will significantly reduce their operating and user-interaction costs. The initial deployment cost of a game will also fall sharply, lowering the threshold for game development and encouraging more fully on-chain games in the future.
3.3.5 Opportunities for DApps Post Cancun Upgrade
Dark Forest: Since the third quarter of 2023, perhaps because of doubts that traditional asset-on-chain games are not decentralized enough, or simply because the traditional GameFi narrative seemed lukewarm and capital began looking for new growth points, fully on-chain games began to explode and attracted much attention. But for fully on-chain games on Ethereum, the transaction speed of about 15 TPS and the storage cost of 16 gas per byte of CALLDATA severely limit the ceiling of their development. The landing of the Cancun upgrade improves both problems, and combined with the continued development of related projects in the second half of 2023, it is a relatively large positive for this track. Considering the head effect, Dark Forest is one of the few fully on-chain games surviving from the last bull market, has a relatively well-established community base, and has not yet issued its own token. It should have good prospects if the project team takes action around the time of the Cancun upgrade.
4. Conclusion
The landing of the Cancun upgrade will bring not only higher TPS and lower storage costs to Ethereum, but also a surge in storage pressure. DA and Layer2 are the tracks most heavily affected by the upgrade. By contrast, DA projects that do not use Ethereum at all in their underlying data storage are not supported by the Ethereum development community; while opportunities exist, one needs to be more cautious with specific projects. Since most ZK-system Layer2 tokens have not yet been issued, and Arbitrum has strengthened significantly recently in anticipation of the Cancun upgrade, if the price of ARB can stabilize through the pullback phase, ARB and related projects in its ecosystem should see a good rise as Cancun lands. Due to the influx of speculators, the DYDX project may also find an opportunity at the Cancun upgrade node. Finally, Rollup operators have a natural advantage in storing Layer2 transaction history, so when it comes to providing historical data access services, Rollup nodes on Layer2 will also be a good choice.
Taking a longer-term perspective, the Cancun upgrade creates the conditions for the functionality and performance of all kinds of DApps, and in the future we will inevitably see Web3 projects gradually approach Web2 in interactive features and real-time performance, bringing Ethereum closer to its goal of a world computer; any pragmatic development project is worth a long-term investment. Ethereum has been weak relative to Bitcoin in the recent market rally: while Bitcoin has recovered to nearly 2/3 of its previous bull-market high, Ethereum has not yet recovered 1/2 of its own. The arrival of the Cancun upgrade may change this trend and bring Ethereum a round of catch-up gains; after all, as a rare public chain that remains profitable while its token is deflationary, it is indeed undervalued at this stage.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, DApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
Ethereum core EIPs: https://eips.Ethereum.org/core
EthStorage official website: https://eth-store.w3eth.io/#/
EIP-1153: Transient storage opcodes: https://eips.Ethereum.org/EIPS/eip-1153
EIP-4788: Beacon block root in the EVM: https://eips.Ethereum.org/EIPS/eip-4788
EIP-5656: MCOPY - Memory copying instruction: https://eips.Ethereum.org/EIPS/eip-5656
EIP-6780: SELFDESTRUCT only in same transaction: https://eips.Ethereum.org/EIPS/eip-6780
How do ZK-rollups work: https://Ethereum.org/zh/developers/docs/scaling/ZK-rollups#how-do-ZK-rollups-work
Optimistic Rollups: https://Ethereum.org/developers/docs/scaling/optimistic-rollups
ZK, ZKVM, zkEVM and their future: https://foresightnews.pro/article/detail/11802
Rebuilding and breakthrough: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html
LIVE
Kernel Ventures
--
Kernel Ventures:坎昆升级下的泛以太坊生态展望
作者:Kernel Ventures Jerry Luo
审稿:Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
以太坊已经完成前三个升级阶段,分别解决了开发门槛,DoS 攻击以及 POS 转型的问题,现阶段的主要升级目标是降低交易费用并优化用户体验。EIP-1553,EIP-4788,EIP-5656,EIP-6780 四个提案分别实现了降低合约间交互成本,提高信标链访问效率,降低数据复制成本以及限制 SELFDESTRUCT 字节码的作用权限。EIP-4844 通过引入外挂在区块上的 blob 数据,可以大大提高以太坊的 TPS 并降低数据存储成本。坎昆升级对于 DA 赛道中的以太坊专用 DA 会有额外利好并且现阶段以太坊基金会对于数据存储中完全没有借助以太坊的 DA 方案持排斥态度。由于 Op Layer2 更成熟的开发环境同时以及对以太坊 DA 层更多的需求,坎昆升级可能会给其带来相对更多的利好。坎昆升级可以提高 DApp 的性能上限,使得 DApp 具有更接近 Web2 中 App 的功能。热度没有消散又需要以太坊上大量存储空间的全链游戏值得关注。现阶段以太坊生态存在低估,坎昆升级可能成为以太坊开始走强的信号。
1. 以太坊升级之路
自去年 10 月 16 日 Cointelegraph 发布比特币 ETF 通过的假新闻到今年 1 月 11 日 ETF 的最终通过,整个加密市场经历了一段持续的上涨。由于 ETF 最直接的利好对象是比特币,这段时间内以太坊和比特币走势出现了背离的情况,比特币最高接近 49000 美金,已收复上轮牛市峰值的 2/3,而以太坊最高仅至 2700 美金附近,刚刚超过上轮牛市峰值的一半。但自比特币 ETF 落地以来,ETH/BTC 走势出现了显著回升,除了即将到来的以太坊 ETF 预期外,另一个重要原因便是一拖再拖的坎昆升级最近也宣布了在 Goerli 测试网的公开测试,释放出了即将进行的信号。从目前的情况来看,坎昆升级的时间将不会早于 2024 第一季度。坎昆升级致力于解决现阶段在以太坊上 TPS 低下与交易费用高昂的问题,属于以太坊 Serenity 升级阶段的一部分。Serenity 之前,以太坊已经经历了 Frontier,Homestead,Metropolis,三个阶段。前三个阶段分别解决了以太坊上开发门槛,DoS 攻击以及 POS 转型的问题。在以太坊 Roadmap 中明确指出,现阶段的主要目标则是实现 ”Cheaper Transactions“ 和 “Better User Experience”。

近一年 ETH/BTC 汇率走势,图片来源:TradingView
2. 坎昆升级核心内容
以太坊作为一个去中心化社区,其升级方案来源于开发者社区提出并最终经以太坊社区多数赞同的提案,其中得以通过的是 ERC 提案,还在讨论中或者即将在主网施行的统称 EIP 提案。此次坎昆升级预期将通过 5 个 EIP 提案,分别是 EIP-1153,EIP-4788,EIP-5656,EIP-6780 与 EIP-4844。
2.1 主线任务:EIP-4844
Blob:在EIP-4844 中为以太坊引入了一种新的交易类型 blob,一个大小为 125kb 的数据包。Blob 对交易数据进行了压缩和编码并且并没有以 CALLDATA 字节码的形式在以太坊上进行永久存储,从而大大降低了 gas 消耗,但无法在 EVM 中直接访问。EIP-4844 推行后的以太坊中,每笔交易可以携带最多两个 blob,而每个区块最多可以携带 16 个 blob。但是以太坊社区建议每个区块携带的 blob 数量为 8 个,当超过这个数量后,虽然还可以继续携带,但是会面临相对不断增加的 gas 费,直到 16 个 blob 的上限。
此外,EIP-4844 中利用的另外两项核心技术分别是 KZG 多项式承诺与临时存储,这部分在我们机构前一篇文章 Kernel Ventures:一文探讨 DA 和历史数据层设计 中有详细分析。总而言之,通过 EIP-4844 对以太坊单个区块容量大小以及交易数据的存储位置进行了改动,在降低以太坊主网 gas 的同时大幅提升了主网的 TPS 。
2.2 支线任务:EIP-1153
EIP-1153:这个提案的提出旨在降低合约交互过程中的存储成本。以太坊上一笔交易可以拆解为多个由 CALL 指令集创建的框架,这些框架可能隶属于不同合约,因而可能涉及多个合约的信息传输。状态在不同合约间存在两种传输方式,一种是以输入/输出的方式,另一种则是调用 SSTORE/SLOAD 字节码实现链上永久存储。前者中数据以内存的形式进行存储传输,具有较低的成本,但如果整个传输过程经过了任一不可信的第三方合约,都会存在巨大安全风险。但如果选择后者,又会带来一笔不小的存储开销,也会加重链上存储的负担。而 EIP-1153 的通过引入了瞬时存储的操作码 TSTORE 和 TLOAD 解决了这一问题。通过这两个字节码存储的变量具有和 SSTORE/SLOAD 字节码存储的变量有一样的性质,在传输过程中无法修改。但不同之处在于,瞬时存储的数据在这笔交易结束后不会留在链上,而是会和临时变量一样湮灭,通过这一方式,实现了状态传输过程的安全与相对较低的存储成本。

三种操作码的区别,图片来源:Kernel Ventures
EIP-4788:在以太坊 POS 升级后的信标链中,每个新的执行块包含父信标块的 Root。即使遗失了部分产生时间较早的 Root ,由于共识层完成存储的 Root 具有可靠性,因而我们在创建新区块的过程中,我们只需要留有最新的某几个 Root 便可。但是在创建新区块的过程中,频繁的从 EVM 向共识层请求数据会造成执行效率的低下并为 MEV 创造可能。因而在 EIP-4788 中提出使用一个专门的 Beacon Root Contract 对最新的 Root 进行存储,这使得父信标块的 Root 都是 EVM 暴露的,大大提高了对数据的调用效率。

Beacon Root 的调用方式,图片来源:Kernel Ventures
EIP-5656:对内存中的数据进行复制是以太坊上非常高频的一项基本操作,但在 EVM 上执行这项操作会产生许多开销。为了解决这一问题,以太坊社区在 EIP-5656 中提出了可以在 EVM 上高效进行复制的 MCOPY 操作码。MCOPY 使用了特殊的数据结构来短期存储被负责的数据,包括高效的分片访问和内存对象复制。拥有专用的 MCOPY 指令还能提供前瞻性保护,可以更好的应对未来以太坊升级中 CALL 指令的 gas 成本发生变化。

以太坊数据拷贝 gas 消耗的变化过程,来源:Kernel Ventures
EIP-6780:以太坊中,通过 SELFDESTRUCT 可以对某个合约进行销毁,并清空该合约的所有代码和所有与该合约相关的状态。但在以太坊未来即将使用的 Verkle Tree 结构中,这会带来巨大隐患。在使用 Verkle Tire 存储状态的以太坊中,清空的存储空间将被标记为之前已写入但为空,这不会导致 EVM 执行中出现可观察到的差异,但与未发生的操作相比,已创建和删除的合约会生成不同的 Verkle Commitment,这会导致 Verkle Tree 结构下的以太坊出现数据校验的问题。因而 EIP-6780 中的 SELFDESTRUCT 仅保留了将合约中的 ETH 退还指定地址的功能,会将与该合约相关的代码与存储状态继续保存在以太坊上。
3. 坎昆升级后的各大赛道
3.1 DA 赛道
3.1.1 生态价值探讨
关于 DA 原理与各种 DA 类型的介绍,可以参考我们的上一篇文章Kernel Ventures:一文探讨 DA 和历史数据层设计。对于 DA 项目来说,其收益来源于用户在其上进行数据存储并支付的费用,而其支出来源于维护存储网络运行与存储数据持久性和安全性所支付的费用。收益与支出相减,所余下的便是网络积累下的价值。DA 项目要实现价值的提升,最主要的手段便是提高网络存储空间利用率,吸引尽可能多的用户利用网络进行存储。另一方面,在存储技术上的改进比如压缩数据或者分片存储可以减少网络的支出,从另一方面实现价值更高的积累。
3.1.2 DA 赛道细分
现阶段提供 DA 服务的项目主要分为三种类型,分别是主链专用 DA,模块化 DA 以及存储公链 DA。三者的具体介绍与区别见Kernel Ventures:一文探讨 DA 和历史数据层设计 。
3.1.3 坎昆升级对 DA 项目的影响
用户需求:坎昆升级后,以太坊的历史交易数据将会有数十倍于原来的增长速度。这些历史数据随之也会带来更大的存储需求,但是由于当下坎昆升级后的以太坊并未实现存储性能的提升,因而主链的 DA 层对这些历史采取了简单的定期清理方式,这一部分数据的存储市场便自然落到了各类 DA 项目头上,从而为其带来了更大的用户需求。发展方向:坎昆升级后以太坊历史数据的增加会促使各大 DA 项目方提高与以太坊的数据交互效率和互操作性来更好的抢占这部分市场。可以预见,各类跨公链的存储桥技术会成为存储公链 DA 与模块化 DA 的发展重点,而对于以太坊的主链专用 DA 来说,其也需要考虑如何进一步增强其与主网的兼容性,最小化传输成本与传输风险。
3.1.4 坎昆升级下的不同 DA 赛道
坎昆升级给以太坊带来更快数据增长的同时并没有改变全网同步的数据存储方式,这使得主链不得不对大量历史数据进行定期清理,下放交易数据长期存储的职能。但这部分历史数据在项目方进行空投,链上分析机构的数据分析等过程中仍存在需求。其背后的数据价值将引来不同 DA 项目方的争夺,而决定市场份额走向的关键便在于 DA 项目的数据安全性以及存储成本。
主链专用 DA:现阶段的主链 DA 项目比如 EthStorage 中,存储市场主要来源于以太坊上 NFT 项目的一些图片,音乐等大内存数据。主链 DA 由于在节点集群上和以太坊有高兼容性,可以低成本的与以太坊主网实现安全的数据交互。同时,其将存储索引数据存储在了以太坊主网智能合约上,并未将 DA 层完全脱离以太坊,从而得到了以太坊基金会的大力支持。对于以太坊带来的存储市场,主链专用 DA 相对其他 DA 有天然的优势。存储公链 DA 和模块化 DA:这类非主链 DA 项目相对于以太坊的专用 DA 难以在坎昆升级中取得历史数据存储性能上的竞争优势。但现阶段的以太坊专用 DA 还处于测试阶段,未能实现完全的落地,而坎昆升级已迫在眉睫,如果在坎昆升级前专用 DA 项目无法给出一个已实现的存储方案,这轮数据价值的挖掘仍有可能被模块化 DA 主导。
3.1.5 坎昆升级下 DA 的机遇
EthStorage:EthStorage 类的主链项目将是坎昆升级中最大的受益者,所以坎昆升级前后可以重点关注EthStorage 项目。此外,在近期坎昆升级可能于今年 2 月进行的消息放出后,EthStorage 的官推也是动作频频,先后发布了自己的最新官网与年度报告,宣传上显得是十分卖力。
Let’s celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design!Meet the Frontier of ScalabilityReal-time Cost Comparison with EthereumHow EthStorage WorksCore Features of EthStorageApplications Enabled by EthStorage
但是对比起最新官网的内容与 2022 版官网的内容,除了更酷炫的前端效果与更详细的介绍外,并未实现太多服务功能上的革新,主推的仍然是存储和 Web3Q 域名服务。有兴趣的话可以点击下面链接(https://galileo.web3q.io/faucet.w3q/faucet.html)领取测试代币 W3Q, 在 Galileo Chain network 网络上体验 EthStorage的服务,参与领取代币需要拥有一个 W3Q 域名或者主网余额超过 0.1 ETH 的账户。从水龙头最近的出水情况来看,尽管有了一定的宣传,现阶段并没有一个非常大的参与量。不过结合今年 7 月份 EthStorage 刚拿到 700 万美金的种子轮融资并且并没有看到这笔资金的明显出处,也有可能项目方在暗地酝酿某些基础设施的推进,等待这坎昆升级到来的前期发布以吸引最大热度。

EthStorage 的水龙头出水情况,来源:Web3q.io
Celestia:Celestia 是现阶段模块化 DA 的龙头项目。相对于还在发展中的以太坊专用 DA 项目,Celestia 早在上轮牛市就开始发迹并拿到了首轮融资。经过了两年多的沉淀,Celestia 完善了其 Rollup 模型,代币模型并经过了长时间的测试网检验最终于 23 年的 10 月 31 号完成了其主网上线与首批空投。可以看到其币价从开盘以来便经历了一路的攀升,近日币价一度突破了 20 美金,按照现阶段 1.5 亿 TIA 的流通量来看,这一项目的市值已经达到了 30 亿美金附近。但是考虑到区块链历史存储赛道这一有限的服务群体,TIA 的市值已经远超了盈利模式更丰富的传统存储公链 Arweave 并直逼 Filecoin 的市值,尽管相对于牛市还有一定的上涨空间,现阶段 TIA 的市值存在一定的高估情况。不过在明星项目以及未消散的空投热情的加持下,如果坎昆升级能在今年第一季度如期推动,Celestia 仍是非常值得关注的项目。但有一点风险也很值得注意,以太坊基金会在涉及 Celestia 的讨论中多次强调,脱离了以太坊 DA 层的项目都不会是 Layer2,表现出了 Celestia 这类非以太坊原生存储项目的排斥态度。坎昆升级前后以太坊基金会可能的表态也将对 Celestia 价格的走势带来不确定性。

TIA 代币价格走势,图片来源:CoinmarketCap
3.2 Layer2 赛道
3.2.1 生态价值探讨
由于以太坊上用户数量的不断增加与项目的不断开发,以太坊低下的 TPS 成为其生态进一步开发的巨大阻碍,同时以太坊上高昂的交易费用也使得一些涉及复杂交互的项目难以大范围推广。但是,许多项目已经落地以太坊,进行迁移存在着巨大的成本与风险,同时,除了专注于支付的比特币公链,再难找到具有以太坊同样安全性的公链。 Layer2 的出现便是尝试解决上述问题,其将交易的处理与计算全部放在另一条公链(Layer2)上进行,数据打包好后通过与 Layer1 桥接的智能合约进行验证;并在主网上更改状态。Layer2 专注于交易的处理与验证,以以太坊作为 DA 层存储压缩后的交易数据,因而有更快的速度与更低的计算成本。用户如果想使用 Layer2 执行交易,需要预先购买 Layer2 相应 token 并向网络运营者支付。而 Layer2 的网络运营者则需要为存储在以太坊的数据安全支付相应费用,用户为 Layer2 数据安全支付的费用减去 Layer2 向 Layer1 上数据安全支付的费用就是 Layer2 的网络营收。所以对于以太坊上的 Layer2 来说,下面两方面的提高可以带来更多的收益。从开源的角度来说,以太坊生态越活跃,项目越多,就会有更多用户和项目方有降低 gas 与加速交易的需求,从而为 Layer2 生态带来更大的用户基数,单笔交易获利不变前提下,更多的交易便会给 Layer2 网络运营者带来更多的收益。从节流的角度出发,如果以太坊自身的存储成本下降,Layer2 项目方所需支付的 DA 层存储费用下降,在交易数量不变的前提下,Layer2 运营者也可以获取更多的收益。
3.2.2 Layer2赛道细分
2018年前后,以太坊的 Layer2 方案呈现百花齐放的状况,存在着侧链,Rollup,状态通道和 Plasma 共计 4 种方案。但状态通道由于在链下通道传输过程中的数据不可用风险以及大量的悲伤攻击,现阶段已经被从 Layer2 的方案中逐渐边缘化。Plasma 类型比较小众,并且 TVL 总量在 Layer2 中也无法进入前 10,也不多加讨论。最后,对于侧链形式的 Layer2 方案,其完全没有采用以太坊作为 DA 层,因而逐渐被排除在了 Layer2 的定义之外。本文仅对当下主流的 Layer2 方案 Rollup进行讨论,并结合其细分赛道 ZKRollup 与 OpRollup 来分析。
Optimistic Rollup
实现原理:初始化阶段,Optimistic Rollup 链需要在以太坊主网上部署一个链桥合约,通过这个合约实现和以太坊主网的交互。Op Layer2 会将用户的交易数据进行批量打包后发送至以太坊,其中包括了 Layer2 上账户最新的状态根,批处理的根与压缩后的交易数据。现阶段,这些数据以 Calldata 的形式存储在链桥合约中,虽然相对在 MPT 中的永久存储已经减少了不少 gas,但仍是一笔不小的数据开销,同时也为 Op Layer2(Optimistic Rollup Layer2)未来可能的性能提升产生了不少阻碍。

Optimistic Rollup 原理,图片来源:Kernel Ventures
现状:现阶段的 Op Layer2 是 Layer2 的第一大生态,TVL 排名前五的公链全部来自 Optimistic Rollup 生态,光是 Optimism 与 Arbitrium 两条公链的 TVL 总量之和便超过了 160 亿美金。

以太坊 Layer2 TVL 总量,图片来源:L2BEAT
而 Op Rollup 生态现今能够占据领跑位置的一个主要原因便是其友好的开发环境,抢先 ZK Rollup 完成了第一轮 Layer2 的发布与主网上线,吸引了大量饱受以太坊手续费与低下 TPS 限制的 DApp 开发者,将 DApp 开发阵地从 Layer1 向 Layer2 迁移的转移。同时 Op Layer2 在底层有着与 EVM 更高的兼容性,为以太坊主网项目的迁移扫清了障碍,以最快的时间实现了以太坊上 Uniswap,Sushiswap,Cureve 等各类 DApp 在 Layer2 的部署,甚至还吸引了 Wordcoin 等项目从 Polygon 主网进行迁移。现阶段的 Op Layer2 既有 Uniswap V3 这类以太坊龙头 DeFi,还有 GMX 这类 TVL 超过1亿美金的原生 DeFi 项目,又有 Friend.tech 这类交易费破 2000 万的 SocialFi 项目,不仅完成了项目数量上的积累,各个赛道的高质量项目还带动了整个生态质的突破。但是从长期来看,ZK Layer2(ZK Rollup Layer2)有着更高的 TPS 上限以及单笔交易更低的 gas 消耗,当后续 ZK Rollup 技术逐渐完善后 Op Layer2 将会面临一场与 ZK Layer2 的激烈竞争。

Friend.tech 的交易费用与 GMX V2 的 TVL,图片来源:Dune
ZK Rollup(Zeroknowledge Rollup)
实现原理:ZK Layer2 中的交易数据有着与 Op Layer2 相近的处理方式,在 Layer2 上进行打包处理后返回到 Layer1 的智能合约中以 Calldata 存储。但是交易数据在 Layer2 上多出了一步生成 ZKp 的计算过程,同时不需要向网络返回压缩后的交易数据而只需返还交易根和批处理根并附带用于对相应交易合法性验证的 ZKp 。通过 ZK Rollup 返回给 Layer1 的数据无需任何窗口期,验证通过后可以在主网得到实时更新。

Zeroknowledge Rollup 原理,图片来源:Kernel Ventures
现状:现阶段的 ZK Layer2 已经发展为了第二大 Layer2 生态,紧跟 Op Layer2 之后。TVL 排名前 10 的 Layer2 中,ZK 系的数量也占据了 4 个,但总体上呈现出多而不强的现象,大家都认为 ZK 系的 Layer2 很有发展前景,但就是无法发展起来。首先是因为早期 Op 系 Layer2 的率先发布已经吸引了许多开发者在其上的项目落地,如果无法从项目迁移中获得足够多的优势,项目方不太可能将已经在 Op Layer2 上产生稳定收益的项目进行迁移。其次是 ZK 系 Layer2 现在许多还在底层与以太坊的兼容性上进行努力,比如 ZK 系的明星项目 Linea 现阶段还无法兼容许多 EVM 操作码,为适应了 EVM 的开发者带来不少开发障碍,而另一个明星项目 zkSync 现阶段甚至几乎无法实现与 EVM 底层的兼容而只能兼容以太坊的一些开发工具。

现有 ZK Layer2 项目与以太坊的兼容性,图片来源:Kernel Ventures
与以太坊的兼容性还为其上原生项目的迁移带来了巨大难度。由于字节码不具有完全互操作性,项目方需要对合约底层进行更改以适配 zkEVM,这个过程存在着许多困难与风险因而大大拖慢了以太坊原生项目的迁移过程。可以看到,现阶段 ZK 系 Layer2 上的项目多为原生项目,并且以 Zigzag,SyncSwap 这类开发难度相对较低的 DeFi 为主,ZK Layer2 上项目的总量以及多样性都等待着进一步开发。但是,ZK Layer2 的优势在于其技术上的先进性,如果能够实现 zkEVM 与 EVM 的兼容和 ZKp 生成算法的完善,其相对 Op Layer2 会有更好的性能上限。这也是为什么即便现阶段 Op Layer2 主导的市场下也会不断有 ZK Layer2 项目出现的原因,在 Op Layer2 赛道已经被瓜分殆尽的情况下,后来者最合适的方式也只能是通过提出一种预期更好的方案以吸引用户从原有网络的迁移。但是即便有一天 ZK Layer2 实现了技术上的完善,如果 Op Layer2 上已经形成了一个足够全面的生态,有了足够多的项目落地,此时即便有更好性能的 Layer2,用户与开发者能否愿意承担巨大风险进行迁移也会是一个未知数。此外,Op Layer2 在这个阶段也在不断进行完善以稳固自身的生态地位,包括 Optimism 开源 Op Stack 以协助其他 Op Layer2 开发者的快速开发以及二分挑战法等对挑战方式的改进。当 ZK Layer2 在进行完善的过程中,Op Layer2 也没有放慢发展的脚步,所以现阶段 ZK Layer2 的重要任务便是抓紧密码学算法的完善与 EVM 的兼容以防止用户对 Op Layer2 生态依赖性的形成。

3.2.3 坎昆升级对 Layer2 的影响
交易速度:坎昆升级后,一个区块通过 blob 可以携带最多 20 倍于原来的数据,而保持区块的出块速度不变。因而理论上来说,以 Layer1 作为 DA 层和结算层的 Layer2 也可以得到相对原来最多 20 倍的 TPS 提升。即便按照 10 倍的增长进行估计,几大 Layer2 明星项目中任何一者的交易速度都将超过以太坊主网历史最高的交易速度。

主流 Layer2 项目当前 TPS,图片来源:L2BEAT
交易费用:限制 Layer2 网络无法下降的一大重要原因来自向 Layer1 提供的数据安全费用,按当前报价计算,以太坊智能合约上 1KB Calldata 数据的存储价格就接近 3 美金。但通过坎昆升级,Layer2 打包的交易数据仅以 blobs 的形式存储在以太坊的共识层,1 GB 数据存储一个月也只花费大约 0.1 美金,这大大减少了 Layer2 的运营成本。而对于这部分开源产生的收益,Layer2 运营者肯定会让利一部分给用户以吸引更多的使用者从而降低 Layer2 的交易成本。可拓展性:坎昆升级中对 Layer2 的影响主要来源于其临时存储的方案和新增的 blob 数据类型。临时存储会定期在主网上删除对当下验证作用不大的旧状态,减小了节点的存储压力,从而同时加快了 Layer1 与 Layer2 的网络同步与节点访问速度。而 blob 通过外带的巨大空间以及基于 gas 价格的灵活的调节机制,可以更好的适应网络交易量的变化,当交易量过大时增加一个区块携带的 blobs 数量,而当交易量下降时也可以随之减少。
3.2.4 坎昆升级下的不同 Layer2 赛道
坎昆升级的到来将对整个 Layer2 生态都会形成利好。因为坎昆升级中最核心的变动是降低了以太坊上数据存储的成本以及单个区块的大小,以以太坊为 DA 层的 Layer2 自然也可以得到相应 TPS 的上升并减少向 Layer1 支付的存储费用。但是由于两种 Rollup 对于以太坊 DA 层使用程度上的不同,对 Op Layer2 与 ZK Layer2 的利好程度会有所差异。
Op Layer2:由于 Op Layer2 上需要将压缩后的完整交易数据留在以太坊上进行记录,导致其相对 ZK Layer2 需要向以太坊支付更多的交易费用。因而通过 EIP-4844 降低 gas 消耗后,Op Layer2 上相对可以获得更大幅度的手续费下调,从而相对缩小相对 ZK Layer2 在手续费差价上的劣势。同时,这轮以太坊的 gas 下调也必然吸引更多参与者和开发者的涌入,相对于没有发币并且底层难以兼容 EVM 的 ZK Layer2,更多的项目和资本会倾向涌入 Op Layer2,尤其是近段表现强势的 Arbitrium。这或许会带来以 Op Layer2 为主导的 Layer2 生态的新一轮开发,特别是受到高昂手续费影响而难以提供优质用户体验的 SocialFi 与 GameFi 项目。伴随着,这个阶段的 Layer2 上可能涌现许多可以接近 Web2 用户体验的优质项目。如果这轮开发高地再次被 Op 夺下,那么其将进一步拉开与 ZK Layer2 生态整体的差距,为后续 ZK Layer2 可能的追赶制造足够多的困难。ZK Layer2:相对 Op Layer2,由于ZK Layer2 不需要在链上存储交易的具体信息,gas 下调的利好会小于 Op Layer2。虽然 ZK Layer2 整体处于发展过程中并且没有 Op Layer2 上庞大的生态,但是 Op Layer2 上各项设施已经趋于完善,在其上的开发存在着更激烈的竞争,而对于坎昆升级吸引进来的新入局的开发者,要与已经很成熟的 Op Layer2 开发者竞争或许并非明智的选择。如果 ZK Layer2 能够在此阶段实现开发者配套设施的完善,为开发者提供更好的开发环境,考虑到 ZK Layer2 更好的预期以及市场竞争的激烈程度,或许新晋的开发者会选择涌入 ZK Layer2 赛道,这一过程反而会加速 ZK Layer2 的追赶过程,实现在 Op Layer2 彻底形成统治性优势前的超越。
3.2.5 坎昆升级下 Layer2 的机遇
DYDX:DYDX 虽然是一个部署在以太坊上的 DEX,但其功能和原理与 Uniswap 这类以太坊上的传统 DEX 有很大区别。首先是其选用了订单薄而非主流 DEX 使用的 AMM 这种交易模式,使得用户可以获得更丝滑的交易体验,这也为其上进行杠杆交易创造了一个良好的条件。此外,其利用了 StarkEx 等第 2 层方案来实现可扩展性与处理交易,对交易在链外打包后传回链上。通过 Layer2 的底层原理,DYDX 使用户可以获得远低于传统 DEX 的交易成本,每笔交易的费用仅在 0.005 美金左右。而在坎昆升级这一以太坊以及相关 token 剧烈波动之际,几乎可以肯定会出现高风险投资比如杠杆交易资金量的激增。而通过坎昆升级,DYDX 上的交易费用即便在小额交易上也将实现对 CEX 的超越,同时还具有更高的公平性与安全性,因而对高风险投资以及杠杆爱好者提供了一个绝佳的交易环境。从上述角度考虑,坎昆升级将会给 DYDX 带来一个非常好的机遇。Rollup Node:对新出块的验证来说,坎昆升级中被定期清理的数据已经没有意义,但并非代表这些被清理的数据不存在价值。比如即将空投的项目方便需要完整的历史数据以确定每个即将接收空投的项目资金的安全性,还有一些链上分析的机构,往往也需要完整的历史数据对资金流向进行追溯。这个时候,一个选择便是向 Layer2 的 Rollup 运营者查询历史数据,在这个过程中 Rollup 运营者便可以对数据检索进行收费。因而在坎昆升级的大背景下,如果能有效的完善 Rollup 上的数据存储与检索机制,提前开发相关项目进行布局,将会大大提高项目存活与进一步发展的可能。
3.3 DApp 赛道
3.3.1 生态价值探讨
与 Web2 的应用相似,DApp 的作用也是为以太坊上的用户提供某项服务。比如 Uniswap 可以为用户实时提供不同 ERC20 token 的交换;Aave 为用户提供了超额抵押借贷与闪电贷的服务;Mirror 则为创作者提供了去中心化的内容创作机会。但不同的是,在 Web2 中,应用主要的获利方式是通过低成本与优质的服务吸引更多的用户引入其平台,然后以流量为价值,吸引第三方投放广告而从广告中获利。 但 DApp 全过程保持了对用户注意力的零侵犯,不向用户提供任何推荐,而是通过为用户提供某项服务后从单次服务中收取对应手续费。因而 DApp 的价值主要来自用户对 DApp 服务的使用次数以及每次交互过程的交互深度,如果 DApp 想要提高自身价值,就需要提供优于同类 DApp 的服务,从而使更多的开发者倾向于使用其而非其他 DApp 进行操作。
3.3.2 DApp 赛道细分
现阶段的以太坊 DApp 以 DeFi,GameFi,SocialFi 为主,早期存在一些 Gamble 项目,但由于以太坊交易速度的限制以及 EOS 这类更适合的公链的发布,Gamble 类项目现在在以太坊上已逐渐势微。这三类 DApp 分别提供了金融,游戏与社交方面的服务,并从中实现价值捕获。
DeFi
实现原理:本质来说,DeFi 是以太坊上的一个或一系列智能合约。DeFi 的发布阶段,需要在以太坊主网上部署相关合约(如币种合约、兑换合约等),合约通过接口实现 DeFi 功能模块与以太坊的交互。用户进行交互时,会调用合约接口进行存币、取币、兑换等操作,DeFi 智能合约会将交易数据打包,通过合约的脚本接口同以太坊交互,在以太坊链上记录状态变更。这个过程中,DeFi 合约会收取一定费用作为上下游流动性提供者的奖励以及自身获利。现状:现阶段的以太坊上,DeFi 在 DApp 中占据了绝对的优势。除了跨链项目和 Layer2 项目外,DeFi 占据了以太坊上合约资产排名前 10 DApp 的其他席位。截止目前,以太坊上 DeFi 的累计用户数量已经超过了 4000 万,虽然受到熊市的影响,月活跃用户量经历了从 2021 年 11 月 峰值近 800 万的冲高回落,但是随着市场的回暖,现在的月用户量也回升到了 峰值的一半左右,并等待着下一轮牛市进行再次冲高。同时 DeFi 的类型也是越来越多样,功能越来越全面。从最早的币币交易、抵押借贷到现在的杠杆交易、定期购、NFT 金融、闪电贷等。Web2 中可以实现的金融方式在 DeFi 中都逐渐得到了实现,而 Web2 中不能实现的包括闪电贷等功能在 DeFi 中也得到了实现。

Top 10 DApps on Ethereum by contract assets. Source: DAppRadar
SocialFi
Implementation principle: Like traditional social platforms, SocialFi lets individuals create content, publish it through the platform to reach an audience and attract followers, while users can browse the content and services they need. The difference is that published content, the interactions between creators and followers, and the account information itself are all recorded in a decentralized way via blockchain smart contracts — returning ownership of the data to each individual account. For a SocialFi platform, the more people willing to create and share through it, the more revenue it earns from providing those services; its profit is users' interaction fees minus the cost of storing account and transaction data.
Current status: Although the UAW (User Active Wallets) of leading SocialFi projects can seemingly rival DeFi, this usually stems from airdrop expectations and is far from durable. friend.tech, for instance, saw its UAW fall below 1,000 after the hype passed, and the comparison between DeFi and SocialFi projects beyond the top five makes the gap clear. The root cause is that SocialFi's high fees and inefficiency prevent it from fulfilling its intended social function, leaving it as little more than a speculation venue.

UAW comparison of leading SocialFi and DeFi projects on Layer1 and Layer2. Source: DAppRadar
GameFi
Implementation principle: GameFi works much like SocialFi, with games as the object. The mainstream revenue model for GameFi teams today is selling in-game items.
Current status: To earn more, a project must attract more players, and at this stage only two things draw users in. The first is genuine fun: players buy items to gain access to the game or a better experience. The second is profit expectation: users believe they can later sell those items at a higher price. The first model resembles Steam — the project earns real money and players enjoy the game. In the second model, if both users' and the project's profits come from a continuous inflow of new users, then once new funds can no longer offset the project's item issuance, the project quickly spirals into selling, falling market expectations, and further selling — unsustainable and Ponzi-like. Because of blockchain fee and throughput constraints, today's GameFi largely cannot deliver the user experience the first model requires, so the second model dominates.
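The unsustainability of the second model can be seen in a toy cash-flow simulation — every number here is an invented assumption, used only to show why a decaying inflow of new money against fixed item issuance eventually collapses the price:

```python
# Toy simulation of the inflow-driven GameFi model: item price holds only
# while new-user inflow covers the project's item issuance. All numbers
# are invented for illustration.

def simulate(rounds: int, initial_inflow: float, decay: float, issuance: float):
    """Each round, new money enters (decaying as hype fades) while the
    project issues a fixed dollar-value of items; price scales with the ratio."""
    price = 1.0
    history = []
    inflow = initial_inflow
    for _ in range(rounds):
        price *= inflow / issuance   # inflow > issuance -> price rises
        history.append(round(price, 3))
        inflow *= decay              # new money shrinks each round
    return history

print(simulate(6, initial_inflow=150.0, decay=0.8, issuance=100.0))
# early rounds rise; once inflow drops below issuance the price collapses
```

The turning point is mechanical: the moment inflow falls below issuance, each round multiplies the price by a factor below one, which is exactly the sell-off spiral described above.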
3.3.3 Impact of the Cancun Upgrade on DApps
Performance: After the Cancun upgrade, a single block can carry far more transaction data, which for DApps means more state changes per block. Assuming the recommended average of 8 blobs per block, DApp processing throughput could rise roughly tenfold.
Cost reduction: Data storage is a fixed expense for DApp teams; whether on Layer1 or Layer2, DApps directly or indirectly rely on Ethereum to record the state of their accounts. After the upgrade, each DApp transaction can be stored as blob data, greatly reducing operating costs.
Functional expansion: Constrained by Ethereum's high storage costs, teams have deliberately minimized on-chain data. As a result, many experiences Web2 users enjoy cannot migrate to DApps — SocialFi, for example, cannot support Twitter-style video creation, or cannot do so while giving the data Ethereum-level security at the base layer — and GameFi interactions are often crude and dull because every state change must be recorded on-chain. The Cancun upgrade gives teams far more room to experiment in all these areas.
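The "roughly tenfold" figure can be sanity-checked with back-of-the-envelope arithmetic; the blob size follows EIP-4844 (4096 × 32-byte field elements ≈ 128 KB), while the pre-Cancun block payload is a rough assumption, not a measured value:

```python
# Rough check of the ~10x throughput claim under 8 blobs per block.
# BLOB_SIZE_KB follows EIP-4844; PRE_CANCUN_BLOCK_KB is a ballpark
# assumption about a typical block's transaction payload.

BLOB_SIZE_KB = 128
BLOBS_PER_BLOCK = 8          # community-recommended average
PRE_CANCUN_BLOCK_KB = 100    # assumed typical block payload

blob_capacity = BLOB_SIZE_KB * BLOBS_PER_BLOCK   # extra KB per block
gain = (PRE_CANCUN_BLOCK_KB + blob_capacity) / PRE_CANCUN_BLOCK_KB
print(f"extra blob capacity: {blob_capacity} KB per block, ~{gain:.0f}x data throughput")
```

With these assumptions the multiplier lands in the 10–11x range, consistent with the article's estimate; a larger assumed baseline block shrinks the gain proportionally.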
3.3.4 The Cancun Upgrade and Various DApp Verticals
DeFi: The storage-cost reduction matters relatively little for DeFi, which only needs to record the current state of user assets in the contract — staked, lent, or otherwise — far less data than the other two DApp types. But the TPS increase brought by the upgrade can greatly boost high-frequency arbitrage and leveraged positions that must be opened and closed within short windows. Moreover, storage savings that are invisible on a single token swap accumulate into substantial fee savings across leveraged and arbitrage strategies.
SocialFi: SocialFi gains the most direct performance boost. The upgrade raises SocialFi smart contracts' capacity to process and store large volumes of data, enabling a user experience closer to Web2, while basic interactions — posting, commenting, liking — become cheaper, attracting genuinely social, long-term participants.
GameFi: For the asset-on-chain games of the last bull market, the effect resembles DeFi's: modest storage savings, but higher TPS enables high-frequency interaction, better real-time responsiveness, and the complex interaction features that improve playability. Fully on-chain games are affected more directly: with all game logic, state, and data stored on-chain, the upgrade sharply cuts operating and user-interaction costs. Initial deployment costs also fall substantially, lowering the development threshold and encouraging more fully on-chain games to appear.
3.3.5 Opportunities for DApps Post Cancun Upgrade
Dark Forest: Since Q3 2023 — perhaps due to doubts about how decentralized traditional asset-on-chain games really are, or simply fatigue with the traditional GameFi narrative — fully on-chain games have taken off. But for fully on-chain games on Ethereum, 15 TPS and a CALLDATA storage cost of 16 gas per byte sharply cap the ceiling. The Cancun upgrade improves both problems substantially, and combined with the steady project development through the second half of 2023, it could be a major tailwind for this sector. Given the head effect, Dark Forest is one of the few Fully On-Chain Games to have emerged from a previous bull market; it has a fairly solid community base and has not yet issued its own token. If the team moves in that direction around the Cancun upgrade, it should see a decent trajectory.
4. Summary
The Cancun upgrade will bring Ethereum higher TPS and lower storage fees, but also a surge in storage pressure. The sectors most clearly affected are DA and Layer2. For DA, Ethereum-specific DA layers will benefit enormously from the surging storage demand — related projects such as EthStorage deserve attention — whereas DA projects whose underlying storage does not use Ethereum at all lack the support of the Ethereum developer community; opportunities exist, but individual projects warrant more caution. Since most ZK Layer2s have not yet issued tokens, and Arbitrum has already strengthened noticeably on Cancun-upgrade expectations, Arb and its ecosystem projects will hold an advantage over other Layer2s through the upgrade, barring a major blow-up. With large numbers of speculators pouring in, DYDX may also present an opportunity around the upgrade. Finally, Rollups have a natural advantage in storing Layer2 historical transaction data; for historical-data access services, Layer2 Rollups are a good candidate as well.
Taking a longer view, the Cancun upgrade creates the conditions for DApp development and performance; Web3 projects will inevitably converge toward Web2 in interactivity and responsiveness, moving Ethereum closer to its goal of becoming the world computer — any team building pragmatically is worth a long-term investment. In the recent broad-market rally, Ethereum has stayed weak relative to Bitcoin: Bitcoin has recovered to nearly 2/3 of its previous bull-market high, while Ethereum has yet to reclaim 1/2 of its own. The arrival of the Cancun upgrade may reverse this trend and bring Ethereum a catch-up rally — after all, as one of the few public chains that remains profitable with a deflationary token, it does look undervalued at this stage.
Kernel Ventures is a crypto venture capital fund driven by a research and development community, with more than 70 early-stage investments, focusing on infrastructure, middleware, and DApps — especially ZK, Rollup, DEX, and modular blockchains — and on verticals that will onboard the next billions of crypto users, such as account abstraction, data availability, and scalability. For the past seven years, we have been committed to supporting the growth of core developer communities and university blockchain associations around the world.
References
Ethereum core EIPs: https://eips.ethereum.org/core
EthStorage official website: https://eth-store.w3eth.io/#/
EIP-1153: Transient storage opcodes: https://eips.ethereum.org/EIPS/eip-1153
EIP-4788: Beacon block root in the EVM: https://eips.ethereum.org/EIPS/eip-4788
EIP-5656: MCOPY - Memory copying instruction: https://eips.ethereum.org/EIPS/eip-5656
EIP-6780: SELFDESTRUCT only in same transaction: https://eips.ethereum.org/EIPS/eip-6780
How do ZK-rollups work: https://ethereum.org/zh/developers/docs/scaling/zk-rollups#how-do-zk-rollups-work
Optimistic Rollups: https://ethereum.org/developers/docs/scaling/optimistic-rollups
zk, zkVM, zkEVM and their future: https://foresightnews.pro/article/detail/11802
Rebuilding and breaking through: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html
Kernel Ventures: Cancun Upgrade — And Its Impact on the Broader Ethereum Ecosystem
Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
Ethereum has completed its first three upgrade phases, which addressed the problems of development thresholds, DoS attacks, and the POS transition, respectively; the main goal of the current phase is to reduce transaction fees and optimize the user experience.
EIP-1153, EIP-4788, EIP-5656, and EIP-6780 respectively reduce the cost of inter-contract interaction, improve the efficiency of beacon-chain access, reduce the cost of data copying, and limit the authority of the SELFDESTRUCT bytecode.
By introducing blob data that lives outside the block, EIP-4844 can greatly increase Ethereum's TPS and reduce data storage costs.
The Cancun upgrade will bring additional benefits to Ethereum-specific DAs, while the Ethereum Foundation is not open to DA solutions that do not utilize Ethereum at all in their data stores.
The Cancun upgrade is likely to be relatively more favorable for Op Layer2, due to its more mature development environment and the increased demand for the Ethereum DA layer.
The Cancun upgrade will raise DApps' performance ceiling, allowing functionality closer to that of Web2 apps. On-chain games that remain popular while requiring substantial storage on Ethereum are worth watching.
Ethereum is undervalued at this stage, and the Cancun upgrade could be the signal for it to start soaring.
1. Ethereum's Upgrades
From October 16th of last year, when Cointelegraph published fake news about the approval of the Bitcoin ETF, to January 11th this year, when the ETF was finally approved, the crypto market experienced a price surge. As bitcoin is more directly impacted by the ETF, Ethereum's and bitcoin's prices diverged during this period.
With bitcoin peaking at nearly $49,000, having recovered 2/3 of its previous bull-market peak, Ethereum peaked at around $2,700, just over half of its previous bull-market peak. But since the Bitcoin ETF landed, the ETH/BTC ratio has rebounded significantly; besides expectations of an upcoming Ethereum ETF, another important reason is that the long-delayed Cancun upgrade recently began public testing on the Goerli testnet, signaling that it is imminent. As things stand, the Cancun upgrade will take place in the first quarter of 2024 at the earliest. The Cancun upgrade is part of Ethereum's Serenity phase, designed to address Ethereum's low TPS and high transaction costs at this stage. Prior to Serenity, Ethereum went through the Frontier, Homestead, and Metropolis phases, which respectively addressed the problems of development thresholds, DoS attacks, and the POS transition. The Ethereum roadmap clearly states that the main goal of the current phase is cheaper transactions and a better user experience. Source: TradingView
2. Content of the Cancun Upgrade
As a decentralized community, Ethereum bases its upgrades on proposals made by the developer community and ultimately supported by the majority of the Ethereum community — collectively referred to as EIPs, covering proposals already adopted as well as those still under discussion or soon to reach mainnet. In the Cancun upgrade, five EIPs are expected to be adopted: EIP-1153, EIP-4788, EIP-5656, EIP-6780, and EIP-4844.
2.1 Essential Mission: EIP-4844
Blob: EIP-4844 introduces a new transaction type to Ethereum carrying the blob, a 128 KB data block.
Blobs compress and encode transaction data and are not permanently stored on Ethereum as CALLDATA bytecode, which greatly reduces gas consumption, but they cannot be accessed directly from the EVM. After EIP-4844, each transaction can carry up to two blobs, and each block up to 16. However, the Ethereum community recommends that each block carry eight; beyond 8, blobs can still be added, but at a steadily rising gas cost, up to the maximum of 16. In addition, two other core technologies in EIP-4844 are KZG polynomial commitments and temporary storage, analyzed in detail in our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. In summary, EIP-4844's changes to the capacity of individual Ethereum blocks and to where transaction data is stored significantly increase the Ethereum network's TPS while reducing its gas costs.
2.2 Side Missions
EIP-1153: This proposal reduces storage costs during contract interactions. A transaction on Ethereum can be broken down into multiple frames created by the CALL instruction set; these frames may belong to different contracts, so a transaction may involve transferring information across multiple contracts. There are two ways to transfer state between contracts: in the form of input/output, or by calling the SSTORE/SLOAD bytecodes for permanent on-chain storage. In the past, data passed in memory had a lower cost, but if the transfer passes through any untrustworthy third-party contract, there is a serious security risk.
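The "steadily rising gas cost" beyond the recommended blob count is implemented in EIP-4844 by an exponentially priced blob base fee; the sketch below transcribes the spec's `fake_exponential` helper into Python (the update-fraction constant is taken from the EIP draft and may differ in the final mainnet parameters):

```python
# Blob base fee in EIP-4844 grows exponentially with "excess blob gas" --
# blob gas consumed above the per-block target. This mirrors the
# fake_exponential helper from the EIP-4844 specification.

MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # constant from the EIP-4844 draft

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# With zero excess blob gas the fee floors at 1 wei per blob gas and
# rises exponentially while blocks keep exceeding the target.
print(blob_base_fee(0), blob_base_fee(10 * BLOB_BASE_FEE_UPDATE_FRACTION))
```

This is why carrying more than the target number of blobs "faces a relatively constant increase in gas cost": persistent over-target usage compounds the base fee multiplicatively, much like EIP-1559 does for execution gas.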
Using the SSTORE/SLOAD bytecodes, on the other hand, incurs considerable storage overhead and increases the burden of on-chain storage. EIP-1153 solves this problem by introducing the transient storage opcodes TSTORE and TLOAD. Variables stored by these two bytecodes have the same properties as those stored via SSTORE/SLOAD and cannot be modified in transit. The difference is that transiently stored data does not remain on-chain after the transaction ends but is destroyed like a temporary variable — securing the state-transfer process at a relatively low storage cost. Source: Kernel Ventures
EIP-4788: In the beacon chain after Ethereum's POS upgrade, each new execution block contains the root of the parent beacon block. Even if some older roots are missing, a node only needs to keep the most recent roots while creating a new block, thanks to the reliability of the roots already stored by the consensus layer. However, frequently requesting data from the EVM to the consensus layer while building new blocks is inefficient and creates opportunities for MEV. EIP-4788 therefore proposes a dedicated Beacon Root Contract to store the latest roots, exposing the parent beacon roots to the EVM and greatly improving the efficiency of data access. Source: Kernel Ventures
EIP-5656: Copying data in memory is a very high-frequency basic operation on Ethereum, but performing it in the EVM incurs a lot of overhead. To solve this, the Ethereum community proposed the MCOPY opcode in EIP-5656, which enables efficient copying in the EVM. MCOPY uses a dedicated data structure for short-term storage of the data being copied, supporting efficient slice access and in-memory object replication.
A dedicated MCOPY instruction also provides forward-looking protection against changes to the gas cost of CALL instructions in future Ethereum upgrades. Source: Kernel Ventures
EIP-6780: In Ethereum, SELFDESTRUCT can destroy a contract and clear all the code and state associated with it. In the Verkle Trie structure that Ethereum will adopt in the future, however, this poses a serious problem. In an Ethereum that uses a Verkle Trie to store state, emptied storage is marked as previously written but empty; this produces no observable difference in EVM execution, but it gives contracts that were created and then deleted a different Verkle commitment than if those operations had never taken place, causing data-validation problems under the Verkle Trie structure. As a result, SELFDESTRUCT in EIP-6780 retains only the ability to return a contract's ETH to a specified address, leaving the contract's code and storage state on Ethereum.
3. Prospects for Different Verticals Post Cancun Upgrade
3.1 DA
3.1.1 Profit Model
For an introduction to the principles of DA and the various DA types, see our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. For DA projects, revenue comes from the fees users pay to store data, and expenses are the costs of keeping the storage network running and the stored data durable and secure. The value remaining in the network is the value it accumulates, and the main way a DA project grows its value is by raising the utilization of its storage space — attracting as many users as possible to store data on the network.
On the one hand, improvements in storage technology such as data compression or sharded storage can reduce network expenses; on the other, they enable higher value accumulation.
3.1.2 Segmentation of DA
There are three main types of DA service today — main-chain DA, modular DA, and storage-chain DA — which are described and compared in Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design.
3.1.3 Impact of the Cancun Upgrade on DA
User demand: After the Cancun upgrade, Ethereum's historical transaction data will grow by tens of times, bringing correspondingly greater storage needs. Since post-Cancun Ethereum does not improve its own storage performance — the main-chain DA layer simply prunes this history on a regular schedule — this slice of the data-storage market will naturally fall to DA projects of all kinds, bringing them greater user demand.
Direction of development: The growth of Ethereum's historical data after the upgrade will push the major DA projects to improve the efficiency and interoperability of data exchange with Ethereum in order to better capture this market. Foreseeably, cross-chain storage-bridge technologies will become a development focus for storage-chain DAs and modular DAs, while Ethereum main-chain DAs must consider how to further improve mainnet compatibility and minimize transfer costs and risks.
3.1.4 The Cancun Upgrade and Various DA Verticals
The Cancun upgrade accelerates Ethereum's data growth without changing its network-wide synchronized storage model, forcing the main chain to regularly prune large amounts of historical data and delegate the function of long-term transaction-data storage.
However, this historical data remains in demand — by project teams conducting airdrops and by on-chain analytics organizations. The value behind the data will attract competition among DA projects, and market share will hinge on each project's data security and storage cost.
Main-chain DA: For current main-chain DA projects such as EthStorage, the storage market mainly comes from large-memory data — images, music, and so on — of NFT projects on Ethereum. Thanks to the high compatibility between its node clusters and Ethereum, a main-chain DA can exchange data securely with the Ethereum mainnet at low cost. At the same time, it stores its storage-index data in Ethereum smart contracts rather than fully detaching the DA layer from Ethereum, which has earned strong support from the Ethereum Foundation. For the storage market created by Ethereum, main-chain-specific DAs hold a natural advantage over other DAs.
Modular DA and storage-chain DA: Compared with main-chain DA, these projects will find it hard to win on historical-data storage performance in the Cancun upgrade. However, main-chain DA is still in testing and not yet fully deployed, while the Cancun upgrade is imminent; if the dedicated DA projects cannot ship a working storage solution before the upgrade, this round of data-value mining may still be dominated by modular DAs.
3.1.5 Opportunities for DA Post Cancun Upgrade
EthStorage: Main-chain DAs like EthStorage stand to be the biggest beneficiaries of the Cancun upgrade and deserve attention.
In addition, after recent news that the Cancun upgrade may take place in February this year, EthStorage's official X account has been very active, releasing its new official website and annual report — the marketing seems to have been quite successful. "Let's celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design! Meet the Frontier of Scalability / Real-time Cost Comparison with Ethereum / How EthStorage Works / Core Features of EthStorage / Applications Enabled by EthStorage" However, comparing the latest website with the 2022 version, aside from flashier front-end effects and a more detailed introduction, not much has changed in service functionality; the main offerings are still storage and the Web3Q domain-name service. If interested, you can use the link below to get the test token W3Q and try the EthStorage service on the Galileo Chain network; to claim it you need a W3Q domain name or a mainnet account holding more than 0.1 ETH. Judging from recent faucet outflows, participation has not been very large at this stage despite the publicity. However, given that EthStorage raised a $7 million seed round in July last year with no obvious deployment of those funds visible, the project may be quietly building out infrastructure, waiting for the run-up to the Cancun upgrade to capture maximum attention. EthStorage's Faucet, Source: Web3q.io
Celestia: Celestia is currently the leading modular DA project. Compared with the main-chain DA projects still in development, Celestia has been making its mark since the last bull market, when it received its first round of funding.
After more than two years of development, Celestia refined its rollup model and token model and finally, after a long testing period, completed its mainnet launch and first airdrop on October 31st. The token's price has climbed since trading opened, recently exceeding US$20; at the current circulating supply of 150 million TIA, the project's market capitalization has already reached 3 billion US dollars. Considering the limited user base of the blockchain historical-storage track, however, TIA's market capitalization has far exceeded that of Arweave — a traditional storage chain with a richer profit model — and is closing in on Filecoin's; even granting some room to grow relative to the last bull market, TIA looks somewhat overvalued at this stage. Still, as a star project with airdrop enthusiasm that has not yet dissipated, Celestia remains one to watch if the Cancun upgrade proceeds in the first quarter of this year as expected. One risk is worth noting: the Ethereum Foundation has repeatedly emphasized in discussions involving Celestia that any project departing from Ethereum's DA layer will not be Layer2, signaling a rejection of third-party storage projects such as Celestia. Statements the Ethereum Foundation may make around the Cancun upgrade add further uncertainty to Celestia's pricing. Source: CoinmarketCap
3.2 Layer2
3.2.1 Profit Model
With ever more users and projects on Ethereum, its low TPS has become a huge obstacle to the further development of the ecosystem, and high transaction fees make it difficult to promote projects involving complex interactions at scale.
However, many projects have already landed on Ethereum and migration carries huge costs and risks; at the same time, apart from the payment-focused Bitcoin chain, it is hard to find a public chain with security comparable to Ethereum's. Layer2 emerged to solve these problems: it moves transaction processing and computation onto another chain (Layer2), verifies the packaged data through smart contracts bridged to Layer1, and changes the state on the mainnet. Layer2 focuses on transaction processing and validation, using Ethereum as the DA layer to store compressed transaction data, resulting in faster speeds and lower computation costs. Users who want to execute transactions on a Layer2 purchase its tokens and pay the network operator in advance; the operator in turn pays Ethereum for the security of the data stored there. A Layer2's revenue is what users pay for Layer2 data security minus what the Layer2 pays Layer1 for data security. So for a Layer2 on Ethereum, two improvements can bring more revenue. On the revenue side, the more active the Ethereum ecosystem and the more projects it hosts, the more users and projects need cheaper gas and faster transactions, bringing a larger user base to the Layer2 ecosystem; with per-transaction profit unchanged, more transactions mean more revenue for the Layer2 operator. On the cost side, if Ethereum's storage cost falls, the DA-layer storage fees paid by the Layer2 team fall, and with transaction volume unchanged the operator again earns more.
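The operator economics described above reduce to simple arithmetic — revenue from user fees minus the DA bill paid to Layer1. The sketch below makes that concrete; every figure (transaction count, fee, bytes per transaction, per-KB DA prices) is an invented assumption, with only the ~$3/KB calldata figure taken from the article itself:

```python
# Illustrative Layer2 operator P&L: profit = user fees - DA cost paid to L1.
# All figures are invented for illustration; only the ~$3/KB calldata price
# comes from the article.

def operator_profit(txs: int, fee_per_tx: float, bytes_per_tx: int,
                    da_cost_per_kb: float) -> float:
    user_revenue = txs * fee_per_tx
    da_cost = txs * bytes_per_tx / 1024 * da_cost_per_kb
    return user_revenue - da_cost

# Before Cancun: compressed tx data stored as calldata at ~$3/KB.
before = operator_profit(1_000_000, 0.25, 50, 3.0)
# After Cancun: blob storage assumed roughly 1000x cheaper per KB.
after = operator_profit(1_000_000, 0.25, 50, 0.003)
print(f"monthly profit before: ${before:,.0f}, after: ${after:,.0f}")
```

The point of the toy model is the lever: because the DA bill is the operator's dominant variable cost, cutting the per-KB storage price flows almost entirely into operator margin — or, under competition, into lower user fees.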
3.2.2 Segmentation of Layer2
Around 2018, Ethereum Layer2 schemes flourished in four varieties: Sidechain, Rollup, State Channel, and Plasma. However, owing to the risk of data unavailability during off-chain transmission and numerous griefing attacks, State Channels have been gradually marginalized among Layer2 schemes, and Plasma is niche enough not to crack the Layer2 top 10 by TVL, so neither is discussed here. Likewise, sidechain-style solutions that do not use Ethereum as a DA layer at all have gradually been excluded from the definition of Layer2. This article therefore discusses only the mainstream Layer2 scheme, Rollup, through its two sub-tracks: ZK Rollup and Op Rollup.
Optimistic Rollup
Implementation principle: An Optimistic Rollup chain first deploys a bridge contract on the Ethereum mainnet, through which it interacts with Ethereum. Op Layer2 batches users' transaction data and sends it to Ethereum, including the latest state root of accounts on Layer2, the batch root, and the compressed transaction data. At this stage, the data is stored as calldata in the bridge contract; although this saves a lot of gas compared with permanent storage in the MPT, it is still a considerable data overhead and an obstacle to the future performance improvements of Op Layer2 (Optimistic Rollup Layer2). Source: Kernel Ventures
Current status: Today, Op Layer2 is the top Layer2 ecosystem, with the top five Layer2s by TVL all from the Optimistic Rollup camp; the combined TVL of Optimism and Arbitrum alone exceeds 16 billion dollars. Source: L2BEAT
One of the main reasons the Op Rollup ecosystem occupies the leading position is its friendly development environment.
It completed the first round of Layer2 releases and mainnet launches before ZK Rollup, attracting a large number of DApp developers frustrated by Ethereum's fees and low TPS and shifting the center of DApp development from Layer1 to Layer2. Op Layer2's underlying layer is also highly EVM-compatible, clearing obstacles for migrating mainnet projects: DApps of every type on Ethereum — Uniswap, Sushiswap, Curve, and so on — deployed to Layer2 in the shortest possible time, and projects such as Worldcoin even migrated over from Polygon's mainnet. Today, Op Layer2 hosts not only Uniswap V3, a leading Ethereum DeFi, and GMX, a native DeFi project with a TVL above 100 million dollars, but also Friend.tech, a SocialFi project with more than 20 million dollars in transaction fees — an accumulation of project numbers, with the high-quality projects in each track driving a qualitative breakthrough for the whole ecosystem. In the long run, however, ZK Layer2 (ZK Rollup Layer2) has a higher TPS ceiling and lower gas per transaction, and Op Layer2 will face fierce competition as ZK Rollup technology gradually matures. Source: Dune
ZK Rollup (Zero-knowledge Rollup)
Implementation principle: Transaction data in ZK Layer2 is processed much as in Op Layer2 — packaged on Layer2 and then returned to the Layer1 smart contract to be stored as calldata.
However, Layer2 transactions go through an extra step of generating a ZKp (zero-knowledge proof), and the network need not return the compressed transaction data — only the transaction root and batch root, together with the ZKp used to verify the legitimacy of the corresponding transactions. Data returned to Layer1 via ZK Rollup requires no challenge window and can be finalized on the mainnet in real time after verification. Source: Kernel Ventures
Current status: ZK Layer2 has become the second-largest Layer2 ecosystem after Op Layer2, with 4 of the top 10 Layer2s by TVL being ZK Layer2s. Yet no single ZK Layer2 is as strong as the leading Op Layer2s: everyone agrees ZK Layer2 has good prospects, but its development keeps lagging. The first reason is that Op Layer2's earlier launch attracted many developers to build there, and unless migration brings sufficient benefit they are unlikely to move projects already generating stable income on Op Layer2. Second, many ZK Layer2 projects are still struggling with low-level compatibility with Ethereum. Linea, a ZK star project, is currently incompatible with many EVM opcodes, creating substantial obstacles for developers accustomed to the EVM; another star project, zkSync, cannot yet achieve EVM compatibility at the bytecode level and is only compatible with some Ethereum development tools. Source: Kernel Ventures
This imperfect compatibility with Ethereum also makes it difficult to migrate native projects: since bytecode is not fully interoperable, projects must modify their underlying contracts to fit the zkEVM — a process with many difficulties and risks that slows the migration of Ethereum-native projects.
As a result, most projects on ZK Layer2 today are native ones, mainly DeFi protocols such as Zigzag and SyncSwap that are comparatively easy to build, and both the total number and the diversity of ZK Layer2 projects await further development. ZK Layer2's advantage, however, lies in its technological sophistication: if zkEVM-EVM compatibility can be achieved and the ZKp generation algorithm perfected, ZK Layer2's performance ceiling is higher than Op Layer2's. This is why ZK Layer2 projects keep emerging in an Op Layer2-dominated market: with the Op Layer2 track already carved up, the best way for latecomers to lure users away from their current networks is to offer a solution expected to be better. Yet even if ZK Layer2 is one day technically perfected, once Op Layer2 has formed a comprehensive ecosystem with enough deployed projects, whether users and developers will accept the huge risk of migrating to a better-performing Layer2 remains unknown. Meanwhile, Op Layer2 is also improving to secure its position — including Optimism's open-source OP Stack, which helps other Op Layer2 developers build quickly, and refinements to the challenge mechanism such as the bisection challenge protocol. While ZK Layer2 matures, Op Layer2 is not slowing down, so ZK Layer2's key task at this stage is to push forward its cryptographic algorithms and EVM compatibility before users become locked into the Op Layer2 ecosystem.
3.2.3 Impact of the Cancun Upgrade on Layer2
Transaction speed: After the Cancun upgrade, a block can carry up to 20 times more data via blobs while block production speed stays unchanged.
Therefore, in theory, a Layer2 that uses Layer1 as its DA and settlement layer can also gain up to a 20x TPS increase. Even at a 10x increase, any of the major Layer2 stars would exceed the highest transaction speed in mainnet history. Source: L2BEAT
Transaction fee: One of the most important factors preventing further declines in Layer2 fees is the cost of the data security purchased from Layer1 — currently close to $3 per KB of calldata stored in an Ethereum smart contract. After the Cancun upgrade, Layer2's packaged transaction data is stored only as blobs in Ethereum's consensus layer, where 1 GB of storage costs only about $0.1 a month — greatly reducing Layer2's operating costs. As for the revenue freed up by this saving, Layer2 operators will surely pass a portion on to users to attract more of them, thereby lowering Layer2 transaction costs.
Scalability: The Cancun upgrade's impact on Layer2 scalability comes mainly from its temporary-storage scheme and the new blob data type. Temporary storage periodically removes old mainnet state that is useless for current validation, reducing node storage pressure and thus speeding up both network synchronization and node access between Layer1 and Layer2. Blobs, with their large external capacity and a flexible gas-price-based adjustment mechanism, can better adapt to swings in network transaction volume — carrying more blobs per block when volume surges and fewer when it drops.
3.2.4 The Cancun Upgrade and Various Layer2 Verticals
The Cancun upgrade will be positive for the entire Layer2 ecosystem.
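The ~$3/KB calldata figure can be reproduced from CALLDATA's 16 gas-per-byte pricing under assumed market conditions — the gas price and ETH price below are illustrative assumptions, and real costs move with the market:

```python
# Reproduce the order of magnitude of the ~$3/KB calldata cost cited above.
# Gas price and ETH price are assumed values; actual costs fluctuate.

GAS_PER_CALLDATA_BYTE = 16   # nonzero-byte calldata cost in the Ethereum spec
GAS_PRICE_GWEI = 80          # assumed busy-network gas price
ETH_PRICE_USD = 2300         # assumed ETH price

def calldata_usd_per_kb() -> float:
    gas = 1024 * GAS_PER_CALLDATA_BYTE          # gas to post 1 KB of calldata
    eth = gas * GAS_PRICE_GWEI * 1e-9           # gwei -> ETH
    return eth * ETH_PRICE_USD

print(f"~${calldata_usd_per_kb():.2f} per KB of calldata")
```

Under these assumptions the result lands near $3/KB, matching the article's quote; the same arithmetic against blob pricing (quoted above at roughly $0.1 per GB-month) shows why moving rollup data out of calldata cuts the DA bill by orders of magnitude.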
Since the core change in the Cancun upgrade is to reduce the cost of data storage and increase effective block capacity on Ethereum, Layer2s that use Ethereum as their DA layer will naturally see a corresponding TPS increase and a reduction in the storage fees paid to Layer1. However, because the two Rollup types use the Ethereum DA layer to different degrees, Op Layer2 and ZK Layer2 will benefit to different degrees.
Op Layer2: Since Op Layer2 must leave compressed transaction data on Ethereum for recording, it pays more transaction fees to Ethereum than ZK Layer2 does. By reducing gas consumption through EIP-4844, Op Layer2 therefore gets a larger fee reduction, narrowing ZK Layer2's advantage on fees. At the same time, this round of gas reduction is bound to attract more participants and developers; compared with ZK Layer2s, which have mostly not issued tokens and whose underlying layers are hard to make EVM-compatible, more projects and capital will tend to flock to Op Layer2, especially Arbitrum, which has performed strongly recently. This may lead to a new round of Layer2 ecosystem development led by Op Layer2, especially for SocialFi and GameFi projects, which are hurt by high fees and struggle to deliver a quality user experience. Along with that, this phase of Layer2 may see the emergence of many quality projects approaching a Web2 user experience.
If this round of development is again captured by Op Layer2, it will further widen the gap with the ZK Layer2 ecosystem, making it very difficult for ZK Layer2 to catch up.
ZK Layer2: Compared with Op Layer2, ZK Layer2 benefits less from the gas reduction because it does not need to store transaction-level data on chain. ZK Layer2 is still maturing and lacks Op Layer2's large ecosystem, but the facilities on Op Layer2 are already well established and competition among its developers is fierce, so it may not be wise for new entrants attracted by the Cancun upgrade to compete head-on with mature Op Layer2 developers. If ZK Layer2 can improve its developer tooling at this stage and provide a better development environment, then given ZK Layer2's better long-term expectations and the intense competition on Op Layer2, new developers may choose to flock to the ZK Layer2 track, accelerating its catch-up and helping it close the gap before Op Layer2 completely dominates the market.
3.2.5 Opportunities for Layer2 Post Cancun Upgrade
DYDX: Although DYDX is a DEX deployed on Ethereum, its functions and principles are very different from traditional Ethereum DEXs such as Uniswap. First, it uses an order book rather than the AMM model adopted by mainstream DEXs, giving users a smoother trading experience and creating good conditions for leveraged trading. In addition, it uses Layer2 solutions such as StarkEx for scalability, processing and packaging transactions off-chain and submitting them back on-chain.
Through Layer2's underlying design, DYDX offers transaction costs far lower than traditional DEXs, at only about $0.005 per transaction. Around the Cancun upgrade, volatility in Ethereum and related tokens is almost certain to drive a surge in high-risk strategies such as leveraged trading. After the upgrade, transaction fees on DYDX could undercut even those of CEXs for small transactions, while offering greater fairness and security, providing an excellent environment for high-risk investors and leveraged traders. From this perspective, the Cancun upgrade presents a very good opportunity for DYDX.
Rollup Node: The data periodically purged after the Cancun upgrade is no longer needed to validate newly produced blocks, but that does not mean the purged data has no value. For example, projects preparing airdrops need complete historical data to assess the fund security of each prospective recipient, and on-chain analytics firms often need complete historical data to trace fund flows. One option is then to query historical data from a Layer2's Rollup operator, which can charge for data retrieval. Therefore, in the context of the Cancun upgrade, effectively improving Rollup data storage and retrieval mechanisms and positioning related projects in advance will greatly increase their chances of survival and further development.
3.3 DApp
3.3.1 Profit Model
Similar to Web2 applications, DApps provide services to users on Ethereum.
For example, Uniswap offers users real-time exchange between ERC20 tokens; Aave offers over-collateralized lending and flash loans; Mirror gives creators a decentralized publishing platform. The difference lies in the profit model: in Web2, the main path is to attract users with cheap, high-quality services and then monetize that traffic through third-party advertising. A DApp, by contrast, makes no claim on users' attention and pushes no recommendations; it simply collects a commission each time it performs a service for a user. Thus a DApp's value comes mainly from how often users invoke its services and the depth of each interaction, and a DApp that wants to increase its value must offer services better than those of comparable DApps so that more users choose it over the alternatives.
3.3.2 Categories of DApps
At this stage, Ethereum DApps are dominated by DeFi, GameFi, and SocialFi. There were some gambling projects in the early days, but limited by Ethereum's transaction speed, and with the release of EOS, a chain better suited to them, gambling projects gradually declined on Ethereum. The three dominant categories provide financial, gaming, and social services respectively, and capture value from them.
DeFi Implementation Principle: DeFi is essentially one or a series of smart contracts on Ethereum. At release, the relevant contracts (token contracts, exchange contracts, etc.) are deployed on the Ethereum mainnet, and these contracts connect the DeFi protocol's function modules to Ethereum through their interfaces. When users interact with the protocol, they call the contract interfaces to deposit, withdraw, swap tokens, and so on.
The DeFi smart contracts package the transaction data, interact with Ethereum through the contracts' script interfaces, and record state changes on the Ethereum chain. In the process, the DeFi contract charges a fee, which rewards upstream and downstream liquidity providers and generates its own profit.
Current status: DeFi holds absolute dominance among DApps; apart from cross-chain and Layer2 projects, DeFi fills the remaining places in the top 10 Ethereum DApps by contract assets. To date, the cumulative number of DeFi users on Ethereum exceeds 40 million. Although monthly active users have declined from the peak of nearly 8 million in November 2021 under the bear market, they have recovered to about half of that peak as the market rebounds, awaiting another surge in the next bull market. Meanwhile, DeFi is becoming more diverse and versatile: from early cryptocurrency trading and collateralized lending to today's leveraged trading, forward purchases, NFT financing, and flash loans, the financial instruments available in Web2 have been progressively recreated in DeFi, and some that Web2 cannot offer, such as flash loans, have been realized as well.
Source: DAppRadar
SocialFi Implementation Principle: Like traditional social platforms, SocialFi lets individuals create and publish content through the platform to spread it and attract followers to their accounts, while users access the content and services they need through the platform.
The difference is that the content users publish, the interaction records between publishers and their fans, and the account information itself are all kept decentralized through blockchain smart contracts, returning ownership of this information to each individual account. For a SocialFi platform, the more people are willing to create and share content through it, the more revenue it earns from providing these services. The fees from user interactions, minus the cost of storing account and transaction data, are the SocialFi project's profit.
Current status: Although the UAW (Unique Active Wallets) of head SocialFi projects looks comparable to DeFi's, that volume often comes from airdrop expectations and is unsustainable. After the initial boom, Friend.tech has recently had fewer than 1,000 UAW, and a comparison with DeFi projects outside the top 5 further supports this conclusion. The root cause is that SocialFi's high service fees and inefficiency have prevented it from taking on the social character it is supposed to have, reducing it to a purely speculative platform.
Source: DAppRadar
GameFi Implementation Principle: GameFi works similarly to SocialFi, except that the object is a game. At this stage, GameFi's mainstream profit model is selling in-game items.
Current status: To earn more, a project fundamentally needs more people playing its game. At this stage, only two things attract users to a game: one is the fun of the game itself, which drives users to buy items for access or a better gaming experience.
The other is the expectation of profit, as users believe they can resell items at a higher price later. The first model is similar to Steam: the developer earns real money and users enjoy the game. In the second model, both users' and the project's profits come from a constant influx of new users; once new funds can no longer absorb the items the project issues, the project quickly falls into a vicious cycle of selling, declining expectations, and further selling, making sustainable revenue difficult and giving it a Ponzi-like character. Because of the constraints of blockchain fees and transaction speed, GameFi at this stage largely cannot deliver the user experience the first model requires, and mostly relies on the second.
3.3.3 Impact of Cancun Upgrade on DApps
Performance optimization: After the Cancun upgrade, a block can carry more transaction data, so a DApp can realize correspondingly more state changes. Assuming the expected average of 8 blobs per block, DApp processing speed after the upgrade could reach roughly ten times the original.
Reduced costs: Data storage is a fixed expense for DApps, and DApps on both Layer1 and Layer2 directly or indirectly use Ethereum to record account state. With the Cancun upgrade, a DApp's transactions can be stored as blob data, significantly reducing operating costs.
Functionality expansion: Because storage on Ethereum is expensive, project teams deliberately minimize the data uploaded when developing DApps. This has made it impossible to migrate many Web2 experiences to DApps: SocialFi cannot support video creation the way Twitter does (or, if it could, the data would not sit on a chain as secure as Ethereum), and GameFi's gameplay options are often simple and uninteresting because every state change must be recorded on chain.
With the Cancun upgrade, project teams will have far more room to experiment in these areas.
3.3.4 Cancun Upgrade and Various DApp Verticals
DeFi: The impact of the Cancun upgrade on DeFi is relatively small, because the only thing DeFi must record is the current state of a user's assets in the contract, whether staked, borrowed, or otherwise, and the data volume required is much smaller than for the other two DApp categories. However, the TPS increase the upgrade brings to Ethereum greatly benefits DeFi's high-frequency arbitrage business and leveraged trading, which must open and close positions within a short window. Meanwhile, the reduction in storage costs, though barely noticeable in a single token swap, adds up to significant fee savings across leveraged and arbitrage activity.
SocialFi: The Cancun upgrade has the most immediate impact on SocialFi's performance. It improves the ability of SocialFi smart contracts to process and store large amounts of data, enabling a user experience closer to Web2's. At the same time, basic interactions such as posting, commenting, and liking can be done at lower cost, attracting genuinely social long-term participants.
GameFi: For the asset-on-chain games of the last bull market, the effect resembles DeFi's: a relatively small decrease in storage cost, though the TPS increase still benefits high-frequency and time-sensitive interactions and supports interactive features that improve playability. Fully on-chain games are affected more directly. Since all game logic, state, and data live on chain, the Cancun upgrade will significantly reduce their operating and user-interaction costs.
At the same time, the initial deployment cost of a game will also drop sharply, lowering the threshold for game development and encouraging more fully on-chain games to emerge.
3.3.5 Opportunities for DApps Post Cancun Upgrade
Dark Forest: Since the third quarter of 2023, perhaps because traditional asset-on-chain games were questioned as insufficiently decentralized, or simply because the traditional GameFi narrative felt lukewarm, capital began looking for new growth points, and fully on-chain games exploded in attention. But for fully on-chain games on Ethereum, a transaction speed of 15 TPS and a storage cost of 16 gas per non-zero byte of CALLDATA severely cap their potential. The Cancun upgrade improves both constraints, and combined with the steady development of related projects in the second half of 2023, it should be a significant positive for this track. Considering the head effect, Dark Forest is one of the few fully on-chain games surviving from the last bull market, with a relatively well-established community and no token issued yet; it should have good prospects if the team takes action around the Cancun upgrade.
4. Conclusion
The landing of the Cancun upgrade will bring not only higher TPS and lower storage costs to Ethereum, but also a surge in storage pressure. DA and Layer2 are the verticals most heavily impacted by the upgrade. Among them, DA projects that do not use Ethereum at all for underlying data storage lack the support of the Ethereum development community; opportunities exist, but specific projects should be approached with extra caution.
Since most ZK Layer2 tokens have not yet been issued, and Arbitrum has strengthened significantly in anticipation of the Cancun upgrade, if ARB's price can stabilize through the pullback phase, ARB and its ecosystem of related projects should see a good rise when Cancun lands. Due to the influx of speculators, the DYDX project may also find an opportunity at the Cancun upgrade node. Finally, Rollups have a natural advantage in storing Layer2 transaction history, so when it comes to providing historical data access services, Rollup operators on Layer2 will also be a good choice. Taking a longer-term view, the Cancun upgrade creates conditions for the functionality and performance of all kinds of DApps; Web3 projects will inevitably approach Web2 in interactive features and real-time performance, moving Ethereum toward its goal of a world computer, and pragmatic development projects are worth long-term investment. Ethereum has been weak relative to Bitcoin in the recent rally: while Bitcoin has recovered to nearly 2/3 of its previous bull market high, Ethereum has not yet recovered 1/2 of its own. The arrival of the Cancun upgrade may change this trend and bring Ethereum a round of catch-up gains; after all, as a rare public chain that maintains profitability while its token is deflationary, it does appear undervalued at this stage.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, DApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto such as Account Abstraction, Data Availability, and Scalability.
For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
Ethereum Core EIPs: https://eips.Ethereum.org/core
EthStorage official website: https://eth-store.w3eth.io/#/
EIP-1153: Transient storage opcodes: https://eips.Ethereum.org/EIPS/eip-1153
EIP-4788: Beacon block root in the EVM: https://eips.Ethereum.org/EIPS/eip-4788
EIP-5656: MCOPY - Memory copying instruction: https://eips.Ethereum.org/EIPS/eip-5656
EIP-6780: SELFDESTRUCT only in same transaction: https://eips.Ethereum.org/EIPS/eip-6780
How do ZK-rollups work: https://Ethereum.org/zh/developers/docs/scaling/ZK-rollups#how-do-ZK-rollups-work
Optimistic Rollups: https://Ethereum.org/developers/docs/scaling/optimistic-rollups
ZK, ZKVM, ZKEVM and their future: https://foresightnews.pro/article/detail/11802
Rebuilding and breakthrough: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html

Kernel Ventures: Cancun Upgrade — And Its Impact on the Broader Ethereum Ecosystem

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
Ethereum has completed its first three upgrade phases, which addressed development thresholds, DoS attacks, and the POS transition respectively; the main goal of the current phase is to reduce transaction fees and optimize the user experience.
EIP-1153, EIP-4788, EIP-5656 and EIP-6780 respectively reduce the cost of inter-contract interaction, improve the efficiency of beacon chain access, reduce the cost of data copying, and limit the authority of the SELFDESTRUCT opcode.
By introducing blob data that lives outside the block, EIP-4844 can greatly increase Ethereum's TPS and reduce data storage costs.
The Cancun upgrade will bring additional benefits to Ethereum-specific DAs, while the Ethereum Foundation remains unreceptive to DA solutions that do not use Ethereum at all for data storage.
The Cancun upgrade is likely to favor Op Layer2 relatively more, given its more mature development environment and the increased demand for the Ethereum DA layer.
The Cancun upgrade will raise the performance ceiling of DApps, allowing functionality closer to that of Web2 apps. On-chain games that remain popular yet need large amounts of storage on Ethereum are worth watching.
Ethereum is undervalued at this stage, and the Cancun upgrade could be the signal for Ethereum to start soaring.
1. Ethereum's Upgrade
From October 16th of last year, when Cointelegraph published fake news that the Bitcoin ETF had been approved, to January 11th of this year, when the ETF was finally approved, the crypto market experienced a price surge. Because Bitcoin is more directly affected by the ETF, Ethereum's and Bitcoin's prices diverged during this period: Bitcoin peaked at nearly $49,000, recovering 2/3 of its previous bull market peak, while Ethereum peaked at around $2,700, just over half of its own. But since the Bitcoin ETF landed, the ETH/BTC ratio has rebounded significantly. Beyond the expectation of an upcoming Ethereum ETF, another important reason is that the delayed Cancun upgrade recently announced public testing on the Goerli test network, signaling that it is imminent. As things stand, the Cancun upgrade will not take place until the first quarter of 2024 at the earliest. The Cancun upgrade belongs to Ethereum's Serenity phase, which follows the Frontier, Homestead, and Metropolis phases; those three phases respectively addressed Ethereum's development threshold, DoS attack, and POS transition problems, while Serenity is designed to tackle Ethereum's low TPS and high transaction costs. The Ethereum roadmap clearly states that the main goal of the current phase is cheaper transactions and a better user experience.

Source: TradingView
2. Content of the Cancun Upgrade
As a decentralized community, Ethereum's upgrades are based on proposals from the developer community that ultimately win the support of the majority of the Ethereum community; these proposals, including adopted ERC standards and those still under discussion or soon to be implemented on mainnet, are collectively referred to as EIPs. In the Cancun upgrade, five EIPs are expected to be adopted: EIP-1153, EIP-4788, EIP-5656, EIP-6780 and EIP-4844.
2.1 Essential Mission EIP-4844
Blob: EIP-4844 introduces a new transaction type to Ethereum carrying blobs, data blocks of roughly 125 KB each. Blobs compress and encode transaction data; unlike CALLDATA bytes, they are not permanently stored on Ethereum and cannot be accessed directly by the EVM, which greatly reduces gas consumption. After EIP-4844 is implemented, each transaction can carry up to two blobs and each block up to 16. The Ethereum community recommends that each block carry eight blobs; beyond eight, blobs can still be included, but the gas cost rises steadily until the maximum of 16 is reached.
In addition, two other core technologies used in EIP-4844 are KZG polynomial commitments and temporary storage, which we analyzed in detail in our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. In summary, EIP-4844's changes to Ethereum's per-block capacity and to where transaction data is stored significantly increase the network's TPS while reducing its gas costs.
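The escalating blob gas cost described above is implemented in EIP-4844 through an "excess blob gas" counter and an integer exponential. Below is a minimal Python sketch: the `fake_exponential` routine and the 1-wei minimum fee follow the EIP's specification, the 8-blob target matches the figure quoted in this section, and `UPDATE_FRACTION` should be treated as an illustrative constant.

```python
# Sketch of EIP-4844's blob base fee mechanism.
GAS_PER_BLOB = 2**17              # 131072 blob gas per blob (per the EIP)
TARGET_BLOBS_PER_BLOCK = 8        # target described in the text
TARGET_BLOB_GAS = TARGET_BLOBS_PER_BLOCK * GAS_PER_BLOB
MIN_BLOB_BASE_FEE = 1             # wei per blob gas
UPDATE_FRACTION = 3338477         # controls how fast the fee reacts (illustrative)

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator),
    computed as a Taylor series in integer arithmetic, as in EIP-4844."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def next_excess_blob_gas(excess: int, used: int) -> int:
    # Excess blob gas only accumulates when a block uses more than the target.
    return max(excess + used - TARGET_BLOB_GAS, 0)

def blob_base_fee(excess: int) -> int:
    return fake_exponential(MIN_BLOB_BASE_FEE, excess, UPDATE_FRACTION)

# Sustained full blocks (16 blobs) push the fee up exponentially:
excess = 0
for _ in range(100):
    excess = next_excess_blob_gas(excess, 16 * GAS_PER_BLOB)
print(blob_base_fee(0), blob_base_fee(excess))  # fee is 1 wei at zero excess
```

The design choice to price blobs off a running excess rather than per-block demand is what produces the "relatively constant increase in gas cost" the text mentions: fees stay at the floor while usage averages the target and climb exponentially only under sustained overload.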
2.2 Side Missions EIP-1153
EIP-1153: This proposal reduces storage costs during contract interactions. A transaction on Ethereum can be broken down into multiple frames created by the CALL instruction family; these frames may belong to different contracts, so a transaction may involve passing state across multiple contracts. There are two existing ways to pass state between contracts: one is via input/output in memory, the other is to call the SSTORE/SLOAD opcodes for permanent on-chain storage. Passing data through memory is cheaper, but if the call path passes through any untrusted third-party contract, there is a serious security risk. Using SSTORE/SLOAD instead incurs considerable storage overhead and increases the burden of on-chain storage. EIP-1153 solves this by introducing the transient storage opcodes TSTORE and TLOAD. Variables stored with these opcodes share the properties of SSTORE/SLOAD storage and cannot be modified in transit. The difference is that transiently stored data does not remain on chain after the transaction ends; like temporary variables, it is destroyed, achieving both a secure state-passing process and a relatively low storage cost.

Source: Kernel Ventures
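The TSTORE/TLOAD behaviour described above can be illustrated with a toy model. This is a sketch, not EVM code: the gas numbers are approximate (TSTORE/TLOAD cost 100 gas each under EIP-1153; a cold zero-to-non-zero SSTORE costs 22100 gas under EIP-2929/3529), and the `ToyEVM` class is invented for illustration.

```python
# Toy model of EIP-1153 semantics: transient storage behaves like regular
# storage within one transaction, but is wiped when the transaction ends.

class ToyEVM:
    def __init__(self):
        self.storage = {}        # persistent, survives transactions
        self.transient = {}      # per-transaction scratch space
        self.gas_used = 0

    def sstore(self, key, value):
        self.gas_used += 22100   # worst case: cold slot, zero -> non-zero
        self.storage[key] = value

    def sload(self, key):
        self.gas_used += 2100    # cold SLOAD
        return self.storage.get(key, 0)

    def tstore(self, key, value):
        self.gas_used += 100     # flat cost under EIP-1153
        self.transient[key] = value

    def tload(self, key):
        self.gas_used += 100
        return self.transient.get(key, 0)

    def end_transaction(self):
        self.transient.clear()   # EIP-1153: discarded, never persisted

evm = ToyEVM()
# A reentrancy-lock style flag shared between frames of one transaction:
evm.tstore("lock", 1)
assert evm.tload("lock") == 1
evm.end_transaction()
assert evm.tload("lock") == 0    # gone once the transaction is over
```

A reentrancy lock is the canonical use case: before EIP-1153 it had to be written with an expensive SSTORE and cleared again at the end, whereas transient storage gives the same in-transaction guarantee at a fraction of the gas.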
EIP-4788: In the beacon chain after Ethereum's POS upgrade, each new execution block contains the root of its parent beacon block. Because the consensus layer already stores these roots reliably, only a window of the latest roots needs to be kept available when creating new blocks, even if some older roots are missing. However, having the EVM frequently request data from the consensus layer while creating new blocks is inefficient and creates opportunities for MEV. EIP-4788 therefore proposes a dedicated Beacon Roots Contract to store the latest roots, exposing the parent beacon block roots to the EVM and greatly improving the efficiency of these lookups.

Source: Kernel Ventures
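The Beacon Roots Contract described above is essentially a ring buffer keyed by timestamp. A minimal Python sketch follows; the buffer length 8191 follows the EIP-4788 specification, while the class name, timestamps, and root values are made-up placeholders.

```python
# Sketch of the EIP-4788 beacon roots contract as a timestamp-keyed
# ring buffer. Old roots are silently overwritten as the buffer wraps.
HISTORY_BUFFER_LENGTH = 8191

class BeaconRootsContract:
    def __init__(self):
        self.timestamps = [0] * HISTORY_BUFFER_LENGTH
        self.roots = [b"\x00" * 32] * HISTORY_BUFFER_LENGTH

    def set(self, timestamp: int, parent_beacon_root: bytes) -> None:
        """Called by the system at the start of each new block."""
        i = timestamp % HISTORY_BUFFER_LENGTH
        self.timestamps[i] = timestamp
        self.roots[i] = parent_beacon_root

    def get(self, timestamp: int) -> bytes:
        """EVM callers pass a block timestamp and receive the parent
        beacon root, but only while the slot has not been overwritten."""
        i = timestamp % HISTORY_BUFFER_LENGTH
        if self.timestamps[i] != timestamp:
            raise ValueError("root no longer available")
        return self.roots[i]

c = BeaconRootsContract()
c.set(1700000012, b"\x01" * 32)
print(c.get(1700000012).hex()[:8])  # -> 01010101
```

Storing the timestamp alongside each root lets the contract detect a wrapped (overwritten) slot instead of returning a stale root, which is what makes a fixed-size buffer safe for EVM consumers such as staking pools and bridges.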
EIP-5656: Copying data in memory is a very high-frequency basic operation on Ethereum, but performing it on the EVM has historically incurred a lot of overhead. To solve this, the Ethereum community proposed the MCOPY opcode in EIP-5656, which enables efficient copying within EVM memory, including efficient slice access and in-memory object duplication. A dedicated MCOPY instruction also provides forward-looking protection against changes to the gas cost of CALL-family instructions in future Ethereum upgrades.

Source: Kernel Ventures
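The saving from MCOPY can be estimated with simple arithmetic. Per EIP-5656, MCOPY costs 3 gas plus 3 gas per 32-byte word (plus memory expansion, identical for both approaches and ignored here); the pre-MCOPY workaround of unrolled MLOAD/MSTORE pairs costs about 6 gas per word plus stack-management overhead, modelled here as 9 gas per word, which is an assumption for illustration.

```python
# Back-of-envelope gas comparison for copying memory before and after
# EIP-5656 (memory expansion cost omitted; it is the same for both).
import math

def mcopy_gas(length: int) -> int:
    # EIP-5656: 3 static gas + 3 gas per 32-byte word copied.
    return 3 + 3 * math.ceil(length / 32)

def mload_mstore_gas(length: int, overhead_per_word: int = 9) -> int:
    # Pre-MCOPY loop: MLOAD (3) + MSTORE (3) per word, plus assumed
    # per-word overhead for offset bookkeeping.
    words = math.ceil(length / 32)
    return words * (6 + overhead_per_word)

for length in (32, 256, 4096):
    print(length, mcopy_gas(length), mload_mstore_gas(length))
```

Even under these rough assumptions, MCOPY cuts the per-word cost by more than half, and the gap widens with the size of the copy, which is why the EIP targets exactly the slice-access and object-duplication patterns mentioned above.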
EIP-6780: In Ethereum, SELFDESTRUCT can destroy a contract and clear all of its code and associated state. However, this poses a serious problem for the Verkle Tree structure Ethereum plans to adopt. In an Ethereum that stores state in a Verkle Tree, emptied storage is marked as previously written but empty; this causes no observable difference in EVM execution, but it makes the Verkle commitments for a contract that was created and then deleted differ from those where the operations never took place, creating data validation problems under the Verkle Tree structure. As a result, under EIP-6780, SELFDESTRUCT retains only the ability to return a contract's ETH to a specified address, leaving the contract's code and storage state on Ethereum, unless it is called in the same transaction that created the contract.
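The changed SELFDESTRUCT semantics can be summarised in a toy state model. This is a sketch of the behaviour named in EIP-6780's title ("SELFDESTRUCT only in same transaction"); the `Contract` class and the placeholder bytecode are invented for illustration.

```python
# Toy model of EIP-6780: full destruction happens only when the contract
# was created within the same transaction; otherwise SELFDESTRUCT merely
# sweeps the ETH balance to the beneficiary.

class Contract:
    def __init__(self, balance=0):
        self.balance = balance
        self.code = b"\x60\x00"   # placeholder bytecode
        self.storage = {"x": 1}   # placeholder state

def selfdestruct(state, addr, beneficiary, created_this_tx: bool):
    c = state[addr]
    state.setdefault(beneficiary, Contract()).balance += c.balance
    c.balance = 0
    if created_this_tx:
        # Pre-6780 behaviour is still allowed in the creation transaction,
        # because nothing about the contract ever reaches the state tree.
        del state[addr]

state = {"a": Contract(balance=100), "b": Contract()}
selfdestruct(state, "a", "b", created_this_tx=False)
assert state["b"].balance == 100                 # ETH swept to beneficiary
assert state["a"].code and state["a"].storage    # code and state survive
```

The same-transaction carve-out is what keeps patterns like ephemeral "create, use, destroy" contracts working while eliminating the deleted-slot commitments that would break Verkle Tree validation.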
3. Prospect of Different Verticals Post Cancun Upgrade
3.1 DA
3.1.1 Profit Model
For an introduction to the principles of DA and the various DA types, see our previous article Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design. For DA projects, revenue comes from the fees users pay to store data, and expenses come from the costs of keeping the storage network running and the stored data durable and secure. The difference between the two is the value accumulated by the network, and the main way a DA project increases its value is to raise the utilization of its storage space, attracting as many users as possible to store data on the network. Meanwhile, improvements in storage technology, such as data compression or sharded storage, reduce network expenses and so realize higher value accumulation.
3.1.2 Categories of DA
There are three main types of DA services today: DA for the main chain, modular DA, and storage chain DA; their descriptions and differences can be found in Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design.
3.1.3 Impact of Cancun Upgrade on DA
User demand: After the Cancun upgrade, Ethereum's historical transaction data will grow by tens of times, bringing correspondingly greater storage needs. Since the upgrade does not improve Ethereum's own storage capacity, the main chain DA layer simply purges this history on a regular schedule, so this slice of the data storage market naturally falls to the various DA projects, bringing them greater user demand.
Direction of development: The growth of Ethereum's historical data after the Cancun upgrade will push major DA projects to improve the efficiency and interoperability of their data exchange with Ethereum in order to better capture this market. Foreseeably, cross-chain storage bridge technologies will become a development focus for storage chain DAs and modular DAs, while Ethereum's own main chain DAs must consider how to further improve compatibility with the mainnet and minimize transfer costs and risks.
3.1.4 Cancun Upgrade and Various DA Verticals
The Cancun upgrade brings faster data growth to Ethereum while leaving the network-wide synchronized storage model unchanged, forcing the mainnet to regularly purge large amounts of historical data and delegate the long-term storage of transaction data. Yet this historical data is still in demand, both for airdrops conducted by project teams and for analysis by on-chain analytics firms. The value behind this data will attract competition among DA projects, and the keys to market share are a DA project's data security and storage cost.
DA for main chain: For current main chain DA projects such as EthStorage, the storage market mainly consists of large objects such as images and music from NFT projects on Ethereum. Thanks to the high compatibility between their node clusters and Ethereum, main chain DAs can exchange data securely with the Ethereum mainnet at low cost. At the same time, they store their storage indexes in Ethereum smart contracts rather than fully detaching the DA layer from Ethereum, which has earned them strong support from the Ethereum Foundation. For the storage market generated by Ethereum, main-chain-specific DAs hold a natural advantage over other DAs.
Modular DA and Storage Chain DA: Compared to main chain DAs, these projects will find it hard to gain a competitive edge in historical data storage after the Cancun upgrade. However, main chain DA is still in the testing stage and not fully deployed, while the Cancun upgrade is imminent; if the dedicated DA projects fail to ship a working storage solution before the upgrade, this round of data value mining may still be dominated by modular DAs.
3.1.5 Opportunities for DA Post Cancun Upgrade
EthStorage: Main chain DAs such as EthStorage will be the biggest beneficiaries of the Cancun upgrade and deserve attention. After the recent news that the upgrade may take place in February this year, EthStorage's official X account has been very active, releasing a new official website and an annual report, and the marketing appears to have been effective.
Let's celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design!
- Meet the Frontier of Scalability
- Real-time Cost Comparison with Ethereum
- How EthStorage Works
- Core Features of EthStorage
- Applications Enabled by EthStorage
However, comparing the latest official website with the 2022 version, apart from a more polished front end and a more detailed introduction, not much has changed in terms of service features; the main offerings are still storage and the Web3Q domain name service. If interested, you can follow the link below to get the test token W3Q and try the EthStorage service on the Galileo Chain network. To claim the token, you need a W3Q domain name or a mainnet account with a balance of more than 0.1 ETH. Judging from recent faucet activity, participation has not been large at this stage despite the publicity. However, given that EthStorage closed a $7 million seed round in July this year with no obvious destination for the funds, the project may be quietly building infrastructure, waiting to release it around the Cancun upgrade to capture the greatest attention.

EthStorage's Faucet, Source: Web3q.io
Celestia: Celestia is currently the leading modular DA project. Compared with the main chain DA projects still under development, Celestia began making its mark in the last bull market, when it raised its first round of funding. After more than two years of building, Celestia refined its rollup model and token model, and after a long testing period completed its mainnet launch and first airdrop on October 31st. The token price has risen since trading opened, recently exceeding $20; at the current circulating supply of 150 million TIA, the project's market capitalization has reached $3 billion. However, given the limited user base of the blockchain historical storage track, TIA's market capitalization has far exceeded that of Arweave, a traditional storage chain with a richer profit model, and is closing in on Filecoin's; although there is still room to grow relative to bull market valuations, TIA looks somewhat overvalued at this stage. Even so, with its star-project status and undissipated airdrop enthusiasm, Celestia remains one to watch if the Cancun upgrade proceeds in the first quarter of this year as expected. One risk is worth noting: the Ethereum Foundation has repeatedly emphasized in discussions involving Celestia that any project departing from Ethereum's DA layer will not be Layer2, signaling a rejection of third-party storage projects such as Celestia. The Foundation's posture before and after the Cancun upgrade adds uncertainty to Celestia's pricing.

Source: CoinmarketCap
3.2 Layer2
3.2.1 Profit Model
Due to Ethereum's growing user base and project ecosystem, its low TPS has become a major obstacle to further development, and its high transaction fees make it hard to scale projects that involve complex interactions. Yet many projects have already launched on Ethereum, migration carries large costs and risks, and, apart from the payment-focused Bitcoin chain, it is hard to find a public chain with security comparable to Ethereum's. Layer2 emerged to solve these problems: it moves all transaction processing and computation to another chain (Layer2), verifies the packaged data through smart contracts bridged to Layer1, and updates state on the mainnet. Layer2 focuses on transaction processing and validation, using Ethereum as the DA layer to store compressed transaction data, which yields faster speeds and lower computational costs. Users who wish to execute transactions on Layer2 must acquire the Layer2 network's token and pay the operator in advance; the operator in turn pays Ethereum for the security of the data stored there. The operator's revenue is therefore what users pay for Layer2 data security minus what Layer2 pays for data security on Layer1. Two improvements can thus bring an Ethereum Layer2 more revenue.
From the revenue side, the more active the Ethereum ecosystem and the more projects there are, the more users and projects will need lower gas and faster transactions, bringing a larger user base to Layer2; with per-transaction profit unchanged, more transactions mean more operator revenue.
From the cost side, if Ethereum's storage cost falls, the DA-layer storage cost paid by the Layer2 operator falls, and with transaction volume unchanged the operator earns more.
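As a toy illustration of the two levers above (every number here is hypothetical), operator profit can be written as user fees minus the cost of posting batch data to Layer1:

```python
# Toy model of Layer2 operator economics; all figures are hypothetical.

def operator_profit(num_txs: int,
                    fee_per_tx: float,
                    bytes_per_tx: int,
                    l1_cost_per_byte: float) -> float:
    """Revenue collected from users minus the cost of posting batch data to Layer1."""
    revenue = num_txs * fee_per_tx
    da_cost = num_txs * bytes_per_tx * l1_cost_per_byte
    return revenue - da_cost

base       = operator_profit(100_000, 0.10, 100, 5e-7)  # baseline scenario
more_users = operator_profit(200_000, 0.10, 100, 5e-7)  # revenue lever: 2x volume
cheaper_da = operator_profit(100_000, 0.10, 100, 5e-8)  # cost lever: 10x cheaper L1 storage
```

Both levers raise profit independently, which is why Layer2 operators benefit both from ecosystem growth and from any reduction in Ethereum's storage price.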
3.2.2 Classification of Layer2
Around 2018, Ethereum Layer2 schemes flourished, with four approaches: Sidechain, Rollup, State Channel, and Plasma. However, because of the risk of data unavailability during off-chain transmission and numerous griefing attacks, State Channels have been gradually marginalized, and Plasma remains a niche that does not make the Layer2 top 10 by TVL, so neither will be discussed here. Sidechain-style solutions that do not use Ethereum as a DA layer at all have likewise been gradually excluded from the definition of Layer2. This paper therefore discusses only the mainstream Layer2 scheme, Rollup, and analyzes its two sub-tracks, ZK Rollup and Optimistic Rollup.
Optimistic Rollup
Implementation Principle: An Optimistic Rollup chain first deploys a bridge contract on the Ethereum mainnet, through which it interacts with Ethereum. Op Layer2 batches users' transaction data and posts it to Ethereum; the batch includes the latest state root of the Layer2 accounts, the batch root, and the compressed transaction data. At this stage, the data is stored as Calldata in the bridge contract; although this saves substantial gas compared with permanent storage in the MPT, it is still a considerable data overhead and creates obstacles to future performance improvements of Op Layer2 (Optimistic Rollup Layer2).

Source: Kernel Ventures
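As a rough sketch of the overhead described above: before the Cancun upgrade, calldata costs 16 gas per non-zero byte and 4 gas per zero byte (EIP-2028 pricing), so the Layer1 gas bill for posting one batch can be estimated as follows (the batch contents are hypothetical):

```python
# Estimate the Layer1 gas an optimistic rollup pays to post one batch as
# calldata: 16 gas per non-zero byte, 4 gas per zero byte (EIP-2028 pricing).

def calldata_gas(batch: bytes) -> int:
    zeros = batch.count(0)
    nonzeros = len(batch) - zeros
    return 4 * zeros + 16 * nonzeros

# A hypothetical 100 KB compressed batch, 90% non-zero bytes:
batch = bytes([0x01] * 90_000) + bytes(10_000)
gas_used = calldata_gas(batch)  # 90_000 * 16 + 10_000 * 4 = 1_480_000 gas
```

At mainnet gas prices, a bill of this size per batch is exactly the "considerable data overhead" the text refers to.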
Current status: Op Layer2 is currently the top Layer2 ecosystem, with the top five Layer2s by TVL all coming from the Optimistic Rollup camp; the combined TVL of Optimism and Arbitrum alone exceeds $16 billion.

Source: L2BEAT
One of the main reasons the Op Rollup ecosystem now leads is its friendly development environment. It completed its first round of Layer2 releases and mainnet launches before ZK Rollup, attracting many DApp developers frustrated by Ethereum's fees and low TPS and shifting DApp development from Layer1 to Layer2. Op Layer2 is also highly EVM-compatible at the bottom layer, which cleared obstacles for migrating mainnet projects and allowed DApps such as Uniswap, Sushiswap, and Curve to deploy to Layer2 quickly; it even attracted projects such as Worldcoin to migrate from Polygon. Today Op Layer2 hosts not only Uniswap V3, a leading Ethereum DeFi, and GMX, a native DeFi project with more than $100 million TVL, but also Friend.tech, a SocialFi project with more than $20 million in transaction fees; beyond accumulating projects, these high-quality projects in each track have driven a qualitative breakthrough for the whole ecosystem. In the long run, however, ZK Layer2 (ZK Rollup Layer2) has a higher TPS ceiling and lower gas consumption per transaction, and Op Layer2 will face fierce competition as ZK Rollup technology matures.

Source: Dune
ZK Rollup (Zero-knowledge Rollup)
Implementation Principle: Transaction data in ZK Layer2 is processed much as in Op Layer2: it is batched on Layer2 and then returned to a smart contract on Layer1 to be stored in Calldata. However, Layer2 additionally generates a ZKP (zero-knowledge proof), and it does not need to return the compressed transaction data to the network; it returns only the transaction root and batch root along with the ZKP used to verify the legitimacy of the corresponding transactions. Data returned to Layer1 via ZK Rollup requires no challenge window and can be finalized on the mainnet in real time once the proof is verified.

Source: Kernel Ventures
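A toy contrast of the two payloads may help. The hashes below merely stand in for real roots and SNARK/STARK proofs, but they show why the data a ZK rollup returns to Layer1 stays constant-size while an optimistic rollup's grows with the batch:

```python
import hashlib

# Toy contrast between what each rollup style returns to Layer1.
# sha256 digests stand in for real state roots, batch roots, and ZK proofs.

txs = [b"alice->bob:5", b"bob->carol:2", b"carol->dan:1"]

def fold_root(items):
    """Hash-chain stand-in for a batch root (not a real Merkle tree)."""
    acc = b""
    for item in items:
        acc = hashlib.sha256(acc + item).digest()
    return acc

state_root = hashlib.sha256(b"post-batch state").digest()
batch_root = fold_root(txs)
mock_proof = hashlib.sha256(state_root + batch_root).digest()  # stands in for a ZKP

op_payload = b"".join(txs) + state_root + batch_root  # compressed txs travel too
zk_payload = state_root + batch_root + mock_proof     # roots and proof only
```

Here `zk_payload` is 96 bytes no matter how many transactions the batch holds, while `op_payload` grows with every transaction; this is the data-overhead gap between the two designs.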
Current status: ZK Layer2 has become the second-largest Layer2 ecosystem after Op Layer2, with 4 of the top 10 Layer2s by TVL being ZK Layer2. Yet no single ZK Layer2 is as strong as the leading Op Layer2s: despite broad agreement that ZK Layer2 has good prospects, its ecosystem has been slow to develop. The first reason is that Op Layer2's earlier launch attracted many developers; unless migration offers sufficient benefit, they are unlikely to move projects that already generate stable income on Op Layer2. Second, many ZK Layer2 projects are still struggling with bottom-layer compatibility with Ethereum. For example, Linea, a ZK star project, is currently incompatible with many EVM opcodes, creating development obstacles for developers used to the EVM, while another star project, zkSync, cannot yet achieve bottom-layer EVM compatibility and is only compatible with some Ethereum development tools.

Source: Kernel Ventures
Limited compatibility with Ethereum also makes it difficult to migrate native projects. Since bytecode is not fully interoperable, projects must modify their underlying contracts to adapt to the ZKEVM, a process involving enough difficulty and risk to slow the migration of Ethereum-native projects. Accordingly, most projects on ZK Layer2 today are native projects, mainly relatively simple DeFi such as ZigZag and SyncSwap, and both the number and diversity of ZK Layer2 projects await further development. The advantage of ZK Layer2, however, lies in its technical sophistication: if ZKEVM-EVM compatibility is achieved and the ZKP generation algorithm is perfected, ZK Layer2's performance ceiling will exceed Op Layer2's. This is why ZK Layer2 projects keep emerging in an Op Layer2-dominated market: with the Op Layer2 track already carved up, the best way for latecomers to attract users away from their existing networks is to promise a better solution. Yet even if ZK Layer2 is one day technically perfected, if Op Layer2 has by then built a comprehensive ecosystem with enough deployed projects, whether users and developers will accept the huge risk of migrating to a better-performing Layer2 remains unknown. Moreover, Op Layer2 is also improving to consolidate its position, including Optimism's open-source OP Stack to help other Op Layer2 developers build quickly, and refinements to the challenge mechanism such as bisection-based fraud proofs.
While ZK Layer2 improves, Op Layer2 is not slowing down, so the pressing task for ZK Layer2 at this stage is to advance its cryptographic algorithms and EVM compatibility before users become locked into the Op Layer2 ecosystem.
3.2.3 Impact of Cancun Upgrade on Layer2
Transaction speed: After the Cancun upgrade, a block can carry up to 20 times more data via blobs while the block production rate stays unchanged. In theory, a Layer2 that uses Layer1 as its DA and settlement layer can therefore gain up to a 20x TPS increase. Even at a 10x increase, any of the major Layer2s would exceed the highest transaction speed in the mainnet's history.

Source: L2BEAT
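A back-of-envelope sketch of this ceiling, under the assumptions that Layer2 TPS is bounded purely by Layer1 data capacity and that the illustrative figures below (120 KB of usable calldata per block, 100 bytes per compressed Layer2 transaction) are roughly right:

```python
# Back-of-envelope TPS ceiling for a Layer2 that is limited only by how much
# batch data fits into each Layer1 block. All figures are illustrative.

BLOCK_TIME_S = 12  # Ethereum slot time in seconds

def l2_tps_ceiling(data_per_block_bytes: int, bytes_per_l2_tx: int) -> float:
    return data_per_block_bytes / bytes_per_l2_tx / BLOCK_TIME_S

pre_cancun  = l2_tps_ceiling(120_000, 100)       # assumed usable calldata per block
post_cancun = l2_tps_ceiling(120_000 * 20, 100)  # the "up to 20x" blob capacity above
```

The ceiling scales linearly with data capacity, so a 20x increase in per-block data translates directly into a 20x higher theoretical Layer2 TPS.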
Transaction fee: One of the main factors keeping Layer2 fees from falling is the cost of data security paid to Layer1, currently close to $3 per KB of Calldata stored in an Ethereum smart contract. After the Cancun upgrade, Layer2's packaged transaction data is stored only as blobs in Ethereum's consensus layer, where 1 GB of storage costs only about $0.1 per month, greatly reducing Layer2's operating costs. To attract more users, Layer2 operators will surely pass a portion of these savings on to users, reducing Layer2 transaction costs.
Scalability: The Cancun upgrade's impact on Layer2 scalability comes mainly from its temporary storage scheme and the new blob data type. Temporary storage periodically removes old mainnet state that is not needed for current validation, reducing storage pressure on nodes and thereby speeding up network synchronization and node access between Layer1 and Layer2. The blob, with its large capacity and a flexible adjustment mechanism based on gas price, adapts better to changes in network transaction volume: the number of blobs per block rises when volume is high and falls when volume drops.
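Plugging in the figures quoted above (around $3 per KB of calldata versus about $0.1 per GB-month of blob space) shows the scale of the saving. Note the two are not strictly comparable, since blob data is pruned after a retention window rather than stored permanently:

```python
# Cost comparison using the figures quoted in the text:
# ~$3 per KB of calldata (pre-Cancun) vs ~$0.10 per GB-month of blob space.

CALLDATA_USD_PER_KB   = 3.00
BLOB_USD_PER_GB_MONTH = 0.10
KB_PER_GB = 1_000_000  # decimal units, rough estimate

calldata_cost = KB_PER_GB * CALLDATA_USD_PER_KB  # cost to post 1 GB as calldata
blob_cost     = 1 * BLOB_USD_PER_GB_MONTH        # cost of 1 GB-month as blobs

savings_factor = calldata_cost / blob_cost       # per month of blob retention
```

Even allowing for the pruning caveat, a gap of this magnitude is why the upgrade so sharply cuts the operating costs of any Layer2 that posts its batches to Ethereum.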
3.2.4 Cancun Upgrade and Various Layer2 Verticals
The Cancun upgrade will be positive for the entire Layer2 ecosystem. Since its core change is to reduce the cost of data storage and the size of individual blocks on Ethereum, Layer2s that use Ethereum as their DA layer will naturally see a corresponding increase in TPS and a reduction in the storage fees they pay to Layer1. However, because the two Rollup types use the Ethereum DA layer to different degrees, Op Layer2 and ZK Layer2 will benefit to different degrees.
Op Layer2: Since Op Layer2 must leave compressed transaction data on Ethereum for record-keeping, it pays more in fees to Ethereum than ZK Layer2 does. By reducing gas consumption through EIP-4844, Op Layer2 therefore gains a larger fee reduction, narrowing ZK Layer2's fee advantage. This round of gas reduction is also bound to attract more participants and developers; compared with ZK Layer2, which has mostly not issued tokens and whose bottom layer is hard to make EVM-compatible, more projects and capital will tend to flow to Op Layer2, especially Arbitrum, which has performed strongly recently. This may lead to a new round of Layer2 ecosystem development led by Op Layer2, especially for SocialFi and GameFi projects, which suffer from high fees and struggle to deliver a quality user experience; this phase of Layer2 may well produce many quality projects approaching the Web2 user experience. If this round of development is again captured by Op, it will further widen the gap with the ZK Layer2 ecosystem, making it very difficult for ZK Layer2 to catch up.
ZK Layer2: Compared with Op Layer2, ZK Layer2 benefits less from cheaper gas because it does not need to store transaction-level data on chain, and it is still developing without Op Layer2's large ecosystem. However, the facilities on Op Layer2 are already well established and competition among its developers is intense, so it may not be wise for new entrants attracted by the Cancun upgrade to compete head-on with mature Op Layer2 developers. If ZK Layer2 can improve its developer tooling at this stage and provide a better development environment, then given ZK Layer2's stronger long-term prospects and the crowded competition elsewhere, new developers may flock to the ZK Layer2 track, accelerating its effort to catch up with Op Layer2 before Op Layer2 completely dominates the market.
3.2.5 Opportunities for Layer2 Post Cancun Upgrade
DYDX: Although DYDX is a DEX deployed on Ethereum, its functions and principles differ greatly from traditional Ethereum DEXs such as Uniswap. First, it uses an order book rather than the AMM trading model of mainstream DEXs, giving users a smoother trading experience and creating good conditions for leveraged trading. In addition, it uses Layer2 solutions such as StarkEx for scalability, packaging transactions off-chain and transmitting them back on-chain. Through these Layer2 underpinnings, DYDX offers transaction costs far below those of traditional DEXs, at only about $0.005 per trade. Around the Cancun upgrade, volatility in Ethereum and related tokens is almost certain to drive a surge in high-risk strategies such as leveraged trading. After the upgrade, DYDX's fees should be competitive with those of CEXs even for small trades, while offering greater fairness and security, providing an excellent venue for high-risk investors and leverage traders. From this perspective, the Cancun upgrade presents a very good opportunity for DYDX.
Rollup Node: The data regularly purged after the Cancun upgrade is no longer needed to validate new blocks, but that does not mean it has no value. Projects about to conduct airdrops need complete historical data to assess the fund security of prospective recipients, and on-chain analytics organizations often need complete history to trace fund flows. One option is to query historical data from a Layer2's Rollup operator, which can charge for data retrieval. In the context of the Cancun upgrade, therefore, effectively improving data storage and retrieval mechanisms on Rollup, and building related projects in advance, will greatly increase a project's chances of survival and further growth.
3.3 DApp
3.3.1 Profit Model
Similar to Web2 applications, DApps provide services to users on Ethereum. For example, Uniswap offers real-time exchange between ERC20 tokens; Aave offers overcollateralized lending and flash loans; and Mirror gives creators decentralized publishing. The difference lies in the profit model: Web2 platforms attract users with cheap, high-quality services and then monetize the resulting traffic through third-party advertising. A DApp, by contrast, makes no claim on users' attention and offers no recommendations; it simply collects a commission on each service it provides. The value of a DApp therefore comes mainly from how often users use its services and the depth of each interaction; to increase its value, a DApp must provide better services than comparable DApps so that more users choose it over alternatives.
3.3.2 Classification of DApps
At this stage, Ethereum DApps are dominated by DeFi, GameFi, and SocialFi. There were some gambling projects in the early days, but given Ethereum's limited transaction speed and the release of EOS, a chain better suited to them, gambling projects have gradually declined on Ethereum. The three dominant DApp types provide financial, gaming, and social services respectively, and capture value from them.
DeFi
Implementation Principle: DeFi is essentially one or a series of smart contracts on Ethereum. At release, the relevant contracts (token contracts, exchange contracts, and so on) are deployed on the Ethereum mainnet, and these contracts connect DeFi's functional modules to Ethereum through their interfaces. When users interact with DeFi, they call contract interfaces to deposit, withdraw, or exchange tokens; the DeFi smart contract packages the transaction data, interacts with Ethereum through the contract's script interface, and records the state changes on the Ethereum chain. In the process, the DeFi contract charges a fee, both to reward upstream and downstream liquidity providers and for its own profit.
Current status: DeFi holds absolute dominance among DApps: apart from cross-chain and Layer2 projects, DeFi occupies every other place in the top 10 DApps by contract assets on Ethereum. To date, the cumulative number of DeFi users on Ethereum has exceeded 40 million. Monthly active users have declined from the peak of nearly 8 million in November 2021 under the bear market, but with the market recovering they have rebounded to about half of the peak, awaiting the next bull market for another surge. Meanwhile, DeFi keeps growing more diverse and versatile: from early cryptocurrency trading and collateralized lending to today's leveraged trading, forward buying, NFT financing, and flash loans, financial services available in Web2 have been progressively recreated in DeFi, and some things impossible in Web2, such as flash loans, have been realized as well.

Source: DAppRadar
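As a minimal sketch of how such a contract captures value per interaction, here is a toy constant-product AMM swap in the style of Uniswap V2's 0.3% input fee (pool sizes are hypothetical):

```python
# Toy constant-product AMM swap (Uniswap V2-style) showing where the
# per-interaction commission comes from. Pool sizes and fee are hypothetical.

FEE = 0.003  # 0.3% of the input amount

def swap(reserve_in: float, reserve_out: float, amount_in: float):
    """Return (amount_out, fee_collected) for an x*y=k pool charging FEE on input."""
    fee_collected = amount_in * FEE
    effective_in = amount_in - fee_collected
    k = reserve_in * reserve_out
    amount_out = reserve_out - k / (reserve_in + effective_in)
    return amount_out, fee_collected

# Swap 1 ETH into a hypothetical 100 ETH / 200,000 USDC pool:
usdc_out, eth_fee = swap(100.0, 200_000.0, 1.0)
# usdc_out lands a bit under the 2,000 USDC spot price; eth_fee accrues to LPs.
```

The fee skimmed from each swap is exactly the commission described above: it rewards liquidity providers and funds the protocol, and it scales with how often and how deeply users interact.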
SocialFi
Implementation Principle: Like traditional social platforms, SocialFi lets individuals create and publish content through the platform to spread it and attract followers, while users access the content and services they need. The difference is that published content, the interaction records between creators and their fans, and the account information itself are all kept decentralized in blockchain smart contracts, returning ownership of the information to each individual account. For the SocialFi platform, the more people willing to create and share content through it, the more revenue it can earn from providing these services; the fees from user interactions minus the cost of storing account and transaction data is the SocialFi project's profit.
Current status: Although the UAW (Unique Active Wallets) of SocialFi's head project looks comparable to DeFi's, that volume often comes from airdrop expectations and is unsustainable. After the initial boom, Friend.tech has recently had fewer than 1,000 UAW, and comparisons with DeFi outside the top 5 further support this conclusion. The root cause is that SocialFi's high service fees and inefficiency prevent it from carrying the social function it is supposed to have, reducing it to a purely speculative platform.

Source: DAppRadar
GameFi
Implementation Principle: GameFi works much like SocialFi, except the application is a game. At this stage, GameFi's mainstream profit model is selling in-game items.
Current status: For a project to earn more, it essentially needs more people playing the game. At this stage, only two things attract users: the fun of the game, which drives users to buy items for access or a better experience; and the expectation of profit, where users believe they can resell items at a higher price later. The first model resembles Steam: the developer earns real money and users enjoy the game. In the second model, users' and the project's profits come from a constant influx of new users; once new funds can no longer absorb the items the project issues, the project quickly falls into a vicious cycle of selling, declining market expectations, and further selling, making revenue hard to sustain, a Ponzi-like dynamic. Because of blockchain fee and speed limitations, GameFi at this stage largely cannot deliver the user experience the first model requires, and is mostly built on the second.
3.3.3 Impact of Cancun Upgrade on DApps
Performance optimization: After the Cancun upgrade, a block can carry more transaction data, so a DApp can realize more state changes. Calculated at an average expansion of 8 blobs per block, DApp processing speed after the upgrade can reach ten times the original.
Reduced costs: Data storage is a fixed expense for DApps, and both Layer1 and Layer2 DApps directly or indirectly use Ethereum to record account state. After the Cancun upgrade, each DApp transaction can be stored as blob data, significantly reducing operating costs.
Functionality expansion: Because of Ethereum's high storage cost, project teams deliberately limit how much data their DApps upload. Many Web2 experiences therefore cannot migrate to DApps: SocialFi cannot support Twitter-style video creation (or, if it could, the data would not enjoy Ethereum-level security at the base layer), and GameFi's gameplay is often shallow and uninteresting because every state change must be recorded on chain. After the Cancun upgrade, project teams will have more room to experiment in these areas.
3.3.4 Cancun Upgrade and Various DApp Verticals
DeFi: The Cancun upgrade's impact on DeFi is relatively small, because the only thing DeFi must record is the current state of users' assets in the contract (pledged, borrowed, or otherwise), which requires far less storage than the other two DApp types. However, the TPS increase the upgrade brings to Ethereum greatly helps DeFi's high-frequency arbitrage business and leverage business, which must open and close positions quickly. At the same time, the storage cost reduction, while not noticeable in single-token swaps, adds up to significant fee savings in leveraged and arbitrage trading.
SocialFi: The Cancun upgrade has the most immediate impact on SocialFi's performance. It improves the ability of SocialFi's smart contracts to process and store large amounts of data, enabling a user experience closer to Web2's. Basic interactions on SocialFi such as creating, commenting, and liking can also be done at lower cost, attracting genuinely social long-term participants.
GameFi: For the asset-on-chain games of the last bull market, the effect resembles DeFi's: the storage cost reduction is relatively small, but the TPS increase still benefits high-frequency and time-sensitive interactions and supports features that improve playability. Fully on-chain games are affected more directly. Since all game logic, state, and data live on chain, the Cancun upgrade will significantly reduce their operating and user-interaction costs. The initial deployment cost of a game will also fall sharply, lowering the threshold for game development and encouraging more fully on-chain games in the future.
3.3.5 Opportunities for DApps Post Cancun Upgrade
Dark Forest: Since the third quarter of 2023, perhaps because traditional asset-on-chain games were questioned as insufficiently decentralized, or simply because the traditional GameFi narrative felt lukewarm, capital began looking for new growth points, and fully on-chain games exploded in attention. But for fully on-chain games on Ethereum, a transaction speed of 15 TPS and a storage cost of 16 gas per byte of CALLDATA severely limit the ceiling of development. The Cancun upgrade improves both problems, and combined with the track's steady development in the second half of 2023, it is a relatively large positive for this sector. Considering the head effect, Dark Forest, one of the few fully on-chain games from the last bull market, has a relatively well-established community base and has not yet issued its own token. It should have good prospects if the project team takes action around the Cancun upgrade.
4. Conclusion
The landing of the Cancun upgrade will bring Ethereum not only higher TPS and lower storage costs but also a surge in storage pressure; DA and Layer2 are the tracks most heavily affected. DA projects whose underlying storage does not use Ethereum are not supported by the Ethereum development community, so while there are opportunities, one needs to be cautious with specific projects. Since most ZK-family Layer2 tokens have not yet launched, and Arbitrum has strengthened significantly recently in anticipation of the upgrade, if ARB's price can stabilize through the pullback phase, ARB and its ecosystem of related projects should rise well alongside the landing of Cancun. With the influx of speculators, the DYDX project may also find opportunity at the Cancun upgrade node. Finally, Rollup has a natural advantage in storing Layer2 transaction history, so when it comes to providing historical data access services, Rollups on Layer2 will also be a good choice.
If we take a longer-term perspective, the Cancun upgrade creates the conditions for the development and performance of all kinds of DApps. In the future we will inevitably see Web3 projects gradually approach Web2 in interactive functionality and real-time performance, bringing Ethereum closer to its goal of being a world computer; any pragmatically developing project is worth a long-term investment. Ethereum has been weak relative to Bitcoin in the recent market rally: while Bitcoin has recovered nearly 2/3 of its previous bull market high, Ethereum has not yet recovered 1/2 of its previous high. The arrival of the Cancun upgrade may change this trend and bring Ethereum a round of catch-up gains; after all, as a rare public chain that maintains profitability while its token is deflationary, it is indeed undervalued at this stage.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, DApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto such as Account Abstraction, Data Availability, Scalability, etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References
Ethereum core EIPs: https://eips.ethereum.org/core
EthStorage official website: https://eth-store.w3eth.io/#/
EIP-1153: Transient storage opcodes: https://eips.ethereum.org/EIPS/eip-1153
EIP-4788: Beacon block root in the EVM: https://eips.ethereum.org/EIPS/eip-4788
EIP-5656: MCOPY - Memory copying instruction: https://eips.ethereum.org/EIPS/eip-5656
EIP-6780: SELFDESTRUCT only in same transaction: https://eips.ethereum.org/EIPS/eip-6780
How do ZK-rollups work: https://ethereum.org/zh/developers/docs/scaling/zk-rollups#how-do-zk-rollups-work
OPTIMISTIC ROLLUPS: https://ethereum.org/developers/docs/scaling/optimistic-rollups
zk, zkVM, zkEVM and their future: https://foresightnews.pro/article/detail/11802
Rebuilding and breakthrough: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html

Kernel Ventures: Outlook for the Pan-Ethereum Ecosystem Under the Cancun Upgrade

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
- Ethereum has completed its first three upgrade phases, which respectively addressed the development barrier, DoS attacks, and the PoS transition; the main goals of the current phase are lowering transaction fees and improving user experience.
- The four proposals EIP-1153, EIP-4788, EIP-5656, and EIP-6780 respectively reduce the cost of inter-contract interaction, improve the efficiency of beacon chain access, lower the cost of data copying, and restrict the scope of the SELFDESTRUCT opcode.
- EIP-4844 introduces blob data attached to blocks, which can greatly increase Ethereum's TPS and reduce data storage costs.
- The Cancun upgrade is an additional tailwind for Ethereum-specific DA within the DA sector, and the Ethereum Foundation is currently dismissive of DA solutions that do not use Ethereum at all for data storage.
- Thanks to its more mature development environment and its greater demand on Ethereum's DA layer, Op Layer2 may benefit relatively more from the Cancun upgrade.
- The Cancun upgrade raises the performance ceiling of DApps, bringing their functionality closer to Web2 apps. Fully on-chain games, whose hype has not faded and which need large amounts of storage on Ethereum, are worth watching.
- The Ethereum ecosystem is currently undervalued; the Cancun upgrade may be the signal for Ethereum to start strengthening.
1. Ethereum's Upgrade Path
From Cointelegraph's fake news of a Bitcoin ETF approval on October 16 last year to the ETF's final approval on January 11 this year, the entire crypto market went through a sustained rally. Since Bitcoin was the most direct beneficiary of the ETF, Ethereum and Bitcoin diverged during this period: Bitcoin peaked near $49,000, recovering 2/3 of its previous bull market high, while Ethereum only reached around $2,700, barely above half of its previous peak. Since the Bitcoin ETF landed, however, ETH/BTC has rebounded markedly. Besides expectations for an upcoming Ethereum ETF, another important reason is that the repeatedly delayed Cancun upgrade recently announced public testing on the Goerli testnet, signaling that it is imminent. From the current situation, the Cancun upgrade will not take place before the first quarter of 2024. The Cancun upgrade aims to solve Ethereum's current problems of low TPS and high transaction fees, and is part of Ethereum's Serenity upgrade phase. Before Serenity, Ethereum went through three phases: Frontier, Homestead, and Metropolis, which respectively addressed the development barrier, DoS attacks, and the PoS transition. The Ethereum roadmap explicitly states that the main goals of the current phase are "Cheaper Transactions" and "Better User Experience".

ETH/BTC exchange rate over the past year. Source: TradingView
2. Core Contents of the Cancun Upgrade
As a decentralized community, Ethereum draws its upgrades from proposals raised by the developer community and ultimately approved by a majority of the Ethereum community; those that have been adopted are ERC proposals, while those still under discussion or about to go live on mainnet are collectively called EIPs. The Cancun upgrade is expected to adopt five EIPs: EIP-1153, EIP-4788, EIP-5656, EIP-6780, and EIP-4844.
2.1 Main Quest: EIP-4844
Blob: EIP-4844 introduces a new transaction payload to Ethereum, the blob, a data packet of 125 KB. Blobs carry compressed and encoded transaction data and are not permanently stored on Ethereum as CALLDATA, which greatly reduces gas consumption, though blob contents cannot be accessed directly from the EVM. After EIP-4844 rolls out, each transaction can carry up to two blobs, and each block up to 16. The Ethereum community recommends eight blobs per block; beyond that number, blocks can still carry more blobs but face steadily increasing gas fees, up to the 16-blob cap.
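Blobs are priced in a separate fee market: each block tracks an "excess blob gas" counter, and the blob base fee is an exponential function of that excess, computed with the integer helper `fake_exponential` from the EIP. A minimal Python sketch follows; the constants are taken from the final EIP-4844 text, which may differ from the draft per-block blob counts cited above.

```python
# Sketch of EIP-4844's blob base fee calculation (constants per the EIP text).
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17  # each blob consumes 131072 blob gas

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def get_base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# With zero excess blob gas the blob base fee sits at its 1-wei floor,
# and it rises exponentially as blocks keep carrying more blobs than the target.
print(get_base_fee_per_blob_gas(0))           # 1
print(get_base_fee_per_blob_gas(10 * 2**20))  # grows super-linearly with excess
```

The exponential update is what lets blob space stay near-free when demand is low while still rationing it under sustained congestion.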
In addition, two other core technologies used in EIP-4844 are KZG polynomial commitments and temporary storage, analyzed in detail in our previous article, Kernel Ventures: Discussing DA and Historical Data Layer Design. In short, EIP-4844 changes the capacity of a single Ethereum block and where transaction data is stored, greatly increasing mainnet TPS while lowering mainnet gas.
2.2 Side Quests: EIP-1153 and Others
EIP-1153: This proposal aims to reduce storage costs during contract interactions. A transaction on Ethereum can be decomposed into multiple frames created by the CALL instruction family; these frames may belong to different contracts, so a transaction may involve passing information between multiple contracts. State can be passed between contracts in two ways: via inputs/outputs, or via the SSTORE/SLOAD opcodes for permanent on-chain storage. In the former, data is stored and passed in memory at low cost, but if the transfer passes through any untrusted third-party contract there is a significant security risk. The latter incurs considerable storage overhead and adds to the on-chain storage burden. EIP-1153 solves this by introducing the transient storage opcodes TSTORE and TLOAD. Variables stored through these opcodes behave like those stored via SSTORE/SLOAD and cannot be modified in transit. The difference is that transiently stored data does not remain on chain after the transaction ends; like a temporary variable, it vanishes. This achieves secure state passing at a relatively low storage cost.
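The two lifetimes can be illustrated with a toy state model (hypothetical names; Python stands in for the EVM): persistent storage written with SSTORE survives the transaction, while transient storage written with TSTORE is visible across call frames within one transaction and wiped when it ends.

```python
# Toy model of EIP-1153 semantics: transient storage lives only for the
# duration of one transaction, persistent storage survives it.
class ToyEVMState:
    def __init__(self):
        self.persistent = {}  # survives transactions (SSTORE/SLOAD)
        self.transient = {}   # cleared after each transaction (TSTORE/TLOAD)

    def sstore(self, key, value): self.persistent[key] = value
    def sload(self, key): return self.persistent.get(key, 0)
    def tstore(self, key, value): self.transient[key] = value
    def tload(self, key): return self.transient.get(key, 0)

    def end_transaction(self):
        # Transient storage vanishes like a temporary variable.
        self.transient.clear()

state = ToyEVMState()
state.sstore("balance", 100)        # permanent write, billed as storage
state.tstore("reentrancy_lock", 1)  # e.g. a lock shared between call frames
assert state.tload("reentrancy_lock") == 1  # visible within the transaction
state.end_transaction()
assert state.tload("reentrancy_lock") == 0  # gone after the transaction
assert state.sload("balance") == 100        # persistent data remains
```

A reentrancy lock is the canonical use case: it must be shared across frames during one transaction, but keeping it on chain afterwards would be wasted storage.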

Differences between the three kinds of opcodes. Source: Kernel Ventures
EIP-4788: In the beacon chain after Ethereum's PoS upgrade, each new execution block contains the root of its parent beacon block. Even if some older roots are lost, the roots stored by the consensus layer are reliable, so when creating new blocks we only need to keep the most recent few roots. However, frequently requesting data from the consensus layer from within the EVM during block creation is inefficient and creates opportunities for MEV. EIP-4788 therefore proposes storing the latest roots in a dedicated Beacon Root Contract, making parent beacon block roots visible to the EVM and greatly improving data access efficiency.

How the Beacon Root is accessed. Source: Kernel Ventures
EIP-5656: Copying data in memory is a very frequent basic operation on Ethereum, but performing it in the EVM incurs significant overhead. To solve this, the Ethereum community proposed the MCOPY opcode in EIP-5656 for efficient in-memory copying within the EVM, including efficient slice access and memory object copying. A dedicated MCOPY instruction also offers forward-looking protection, coping better with possible changes to the gas cost of CALL instructions in future Ethereum upgrades.
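Ignoring memory-expansion charges, EIP-5656 prices MCOPY like the other copy opcodes: a 3-gas base plus 3 gas per 32-byte word. The sketch below compares that with the pre-MCOPY workaround of an MLOAD/MSTORE loop; the per-iteration loop overhead is an assumption for illustration, not a measured figure.

```python
import math

def mcopy_gas(length_bytes: int) -> int:
    # EIP-5656: static cost 3 + 3 gas per 32-byte word copied
    # (memory expansion cost ignored in this sketch).
    words = math.ceil(length_bytes / 32)
    return 3 + 3 * words

def loop_copy_gas(length_bytes: int, loop_overhead_per_word: int = 12) -> int:
    # Pre-MCOPY workaround: one MLOAD (3 gas) + one MSTORE (3 gas) per word,
    # plus assumed stack/jump overhead per loop iteration.
    words = math.ceil(length_bytes / 32)
    return words * (3 + 3 + loop_overhead_per_word)

print(mcopy_gas(1024))      # 3 + 3*32 = 99 gas
print(loop_copy_gas(1024))  # 32 * 18 = 576 gas under the assumed overhead
```

Even under conservative overhead assumptions, a single MCOPY is several times cheaper than looping, and the gap widens with the copy length.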

Evolution of gas consumption for data copying on Ethereum. Source: Kernel Ventures
EIP-6780: On Ethereum, SELFDESTRUCT can destroy a contract, clearing all of its code and all state associated with it. But under the Verkle Tree structure Ethereum will adopt in the future, this creates a serious hazard. In an Ethereum that stores state in a Verkle Trie, cleared storage is marked as previously written but empty. This causes no observable difference in EVM execution, but compared with operations that never happened, a contract that was created and then deleted produces a different Verkle commitment, which causes data verification problems for Ethereum under the Verkle Tree structure. EIP-6780 therefore retains only SELFDESTRUCT's ability to refund the contract's ETH to a designated address (except when the contract was created in the same transaction), while the contract's code and storage state remain on Ethereum.
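A toy model of the rule change (hypothetical names, accounts as plain dicts): under EIP-6780, SELFDESTRUCT always sends the contract's ETH balance to the beneficiary, but it only removes code and storage when the contract was created within the same transaction.

```python
# Toy model of EIP-6780: SELFDESTRUCT only deletes code/storage when the
# contract was created within the same transaction.
def selfdestruct(account: dict, beneficiary: dict, created_this_tx: bool) -> None:
    beneficiary["balance"] += account["balance"]  # the ETH refund always happens
    account["balance"] = 0
    if created_this_tx:
        # Only same-transaction creations are fully removed; pre-existing
        # state is never cleared, avoiding inconsistent Verkle commitments
        # for "created then deleted" contracts.
        account["code"] = b""
        account["storage"] = {}

old = {"balance": 5, "code": b"\x60\x00", "storage": {0: 42}}
payee = {"balance": 0}
selfdestruct(old, payee, created_this_tx=False)
assert payee["balance"] == 5
assert old["code"] != b"" and old["storage"] == {0: 42}  # state survives

fresh = {"balance": 1, "code": b"\x60\x00", "storage": {0: 1}}
selfdestruct(fresh, payee, created_this_tx=True)
assert fresh["code"] == b"" and fresh["storage"] == {}   # fully removed
```

The same-transaction carve-out keeps patterns like ephemeral "deploy, use, destroy" contracts working while eliminating the troublesome case of deleting long-lived state.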
3. Major Sectors After the Cancun Upgrade
3.1 The DA Sector
3.1.1 Ecosystem Value
For an introduction to DA principles and the various DA types, see our previous article, Kernel Ventures: Discussing DA and Historical Data Layer Design. For a DA project, revenue comes from the fees users pay to store data on it, while costs come from keeping the storage network running and keeping stored data durable and secure. Revenue minus cost is the value the network accumulates. The main way for a DA project to increase its value is to raise storage utilization and attract as many users as possible to store on the network. Separately, improvements in storage technology such as data compression or sharded storage can reduce the network's costs, accumulating value from the other direction.
3.1.2 DA Sector Segments
Projects offering DA services today fall into three types: main-chain-specific DA, modular DA, and storage public chain DA. For details and the differences between the three, see Kernel Ventures: Discussing DA and Historical Data Layer Design.
3.1.3 Impact of the Cancun Upgrade on DA Projects
- User demand: After the Cancun upgrade, Ethereum's historical transaction data will grow dozens of times faster than before, bringing correspondingly larger storage demand. But since post-Cancun Ethereum does not improve storage performance itself, the main chain's DA layer simply prunes this history periodically, and the market for storing it naturally falls to the various DA projects, bringing them greater user demand.
- Development direction: The growth of Ethereum's historical data after Cancun will push major DA projects to improve their data-exchange efficiency and interoperability with Ethereum to capture this market. Predictably, cross-chain storage bridge technologies will become a development focus for storage public chain DA and modular DA, while Ethereum's main-chain-specific DA will need to further strengthen its compatibility with mainnet and minimize transfer costs and risks.
3.1.4 Different DA Sub-sectors Under the Cancun Upgrade
While the Cancun upgrade brings Ethereum faster data growth, it does not change the network-wide synchronized storage model, forcing the main chain to periodically prune large amounts of historical data and delegate the long-term storage of transaction data. Yet this history is still needed when project teams run airdrops or on-chain analytics firms analyze data. The value behind this data will be fought over by different DA projects, and the keys to market share are a DA project's data security and storage cost.
- Main-chain-specific DA: For current main-chain DA projects such as EthStorage, the storage market mainly comes from large objects belonging to Ethereum NFT projects, such as images and music. Thanks to its high compatibility with Ethereum at the node cluster level, main-chain DA can exchange data with Ethereum mainnet securely and cheaply. It also stores its storage indexes in smart contracts on Ethereum mainnet, so the DA layer is not fully detached from Ethereum, which has earned it strong support from the Ethereum Foundation. For the storage market Ethereum generates, main-chain-specific DA has a natural advantage over other DA.
- Storage public chain DA and modular DA: These non-main-chain DA projects can hardly beat Ethereum-specific DA on historical-data storage performance in the Cancun upgrade. But Ethereum-specific DA is still in testing and has not fully landed, while the Cancun upgrade is imminent; if the specific DA projects cannot deliver a working storage solution before Cancun, this round of data-value mining may still be dominated by modular DA.
3.1.5 DA Opportunities Under the Cancun Upgrade
EthStorage: Main-chain projects like EthStorage will be the biggest beneficiaries of the Cancun upgrade, so EthStorage deserves close attention around the upgrade. Moreover, after recent news that the Cancun upgrade might happen in February, EthStorage's official Twitter has been busy, publishing its new website and annual report in quick succession and clearly putting real effort into promotion.
Let's celebrate the reveal of our new website! Please visit http://EthStorage.io to see the brand new design! (New site sections: Meet the Frontier of Scalability / Real-time Cost Comparison with Ethereum / How EthStorage Works / Core Features of EthStorage / Applications Enabled by EthStorage)
Comparing the new website with the 2022 version, however, apart from flashier front-end effects and more detailed introductions, there is little innovation in actual services; the main offerings are still storage and the Web3Q domain name service. If interested, you can use the link below (https://galileo.web3q.io/faucet.w3q/faucet.html) to claim the test token W3Q and try EthStorage's services on the Galileo Chain network; claiming requires owning a W3Q domain or an account with a mainnet balance above 0.1 ETH. Judging by the faucet's recent outflow, participation is not very large despite the promotion. But considering that EthStorage raised a $7 million seed round in July this year with no obvious visible use of the funds, the team may be quietly preparing some infrastructure push, waiting to release it just before the Cancun upgrade to capture maximum attention.

EthStorage faucet outflow. Source: Web3q.io
Celestia: Celestia is the current leader in modular DA. Compared with Ethereum-specific DA projects that are still developing, Celestia rose as early as the last bull market and secured its first funding round. After more than two years of buildout, Celestia refined its Rollup model and token model, went through a long testnet phase, and finally completed its mainnet launch and first airdrop on October 31, 2023. Its token price has climbed ever since listing, recently breaking above $20; with a circulating supply of about 150 million TIA, the project's market cap has reached around $3 billion. Considering the limited clientele of the blockchain history storage sector, however, TIA's market cap already far exceeds that of Arweave, a traditional storage chain with a richer profit model, and is approaching Filecoin's; although there is still room to rise relative to a bull market, TIA looks somewhat overvalued at this stage. Still, with its star status and undissipated airdrop enthusiasm, Celestia remains well worth watching if the Cancun upgrade proceeds in the first quarter as expected. One risk deserves attention: in discussions involving Celestia, the Ethereum Foundation has repeatedly stressed that projects detached from Ethereum's DA layer are not Layer2s, showing its rejection of non-Ethereum-native storage projects like Celestia. Possible Foundation statements around the Cancun upgrade add uncertainty to TIA's price trajectory.

TIA token price. Source: CoinMarketCap
3.2 The Layer2 Sector
3.2.1 Ecosystem Value
With ever more users and projects on Ethereum, its low TPS has become a major obstacle to further ecosystem development, and its high transaction fees make projects involving complex interactions hard to scale. Yet many projects have already landed on Ethereum, and migration carries huge cost and risk; meanwhile, apart from the payments-focused Bitcoin chain, it is hard to find another public chain with Ethereum's security. Layer2 emerged to solve these problems: it moves transaction processing and computation entirely onto another chain (the Layer2), then, after packaging the data, verifies it through smart contracts bridged to Layer1 and updates state on mainnet. Layer2 focuses on transaction processing and verification, using Ethereum as the DA layer to store compressed transaction data, and thus achieves higher speed and lower computation cost. To transact on a Layer2, users pre-purchase the corresponding token and pay the network operator; the operator in turn pays Ethereum for the security of the data stored there. What users pay the Layer2 for data security minus what the Layer2 pays Layer1 for data security is the Layer2's network revenue. So for a Layer2 on Ethereum, two kinds of improvement bring more income. On the revenue side, the more active the Ethereum ecosystem and the more projects there are, the more users and teams need lower gas and faster transactions, bringing a larger user base; with per-transaction profit unchanged, more transactions mean more revenue for the operator. On the cost side, if Ethereum's own storage costs fall, the DA-layer storage fees the Layer2 must pay fall, so with transaction volume unchanged the operator also earns more.
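The revenue logic above can be written down directly: operator revenue is user fees minus the DA fee paid to Layer1, so it grows both with transaction count and with cheaper L1 storage. A sketch with purely illustrative numbers (all figures are assumptions, not measurements of any real network):

```python
def l2_operator_revenue(num_txs: int, user_fee_per_tx: float,
                        l1_cost_per_tx: float) -> float:
    # Revenue = what users pay the operator minus what the operator
    # pays Ethereum for data availability.
    return num_txs * (user_fee_per_tx - l1_cost_per_tx)

# Illustrative only: 1M txs at $0.20 each, with $0.15/tx of L1 DA cost.
before = l2_operator_revenue(1_000_000, 0.20, 0.15)
# If the Cancun upgrade cut the DA cost to $0.015/tx and volume doubled:
after = l2_operator_revenue(2_000_000, 0.20, 0.015)
print(before, after)  # 50000.0 vs 370000.0
```

Note how the two levers compound: the margin per transaction widens at the same time as volume grows, which is why DA-cost reductions matter so much to Layer2 economics.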
3.2.2 Layer2 Sector Segments
Around 2018, Ethereum Layer2 solutions blossomed: sidechains, Rollups, state channels, and Plasma, four approaches in all. But state channels have gradually been marginalized due to the data-unavailability risk during off-chain channel transfers and numerous griefing attacks. Plasma is niche, and its total TVL does not even reach the Layer2 top 10, so we will not dwell on it. Finally, sidechain-style Layer2s do not use Ethereum as the DA layer at all and have gradually been excluded from the definition of Layer2. This article therefore discusses only the mainstream Layer2 approach, the Rollup, analyzed through its sub-sectors ZK Rollup and Op Rollup.
Optimistic Rollup
How it works: At initialization, an Optimistic Rollup chain deploys a bridge contract on Ethereum mainnet, through which it interacts with mainnet. The Op Layer2 batches users' transaction data and sends it to Ethereum, including the latest state root of Layer2 accounts, the batch root, and the compressed transaction data. Today this data is stored in the bridge contract as calldata; although this costs much less gas than permanent storage in the MPT, it is still considerable data overhead, and it also creates obstacles for future performance gains of Op Layer2 (Optimistic Rollup Layer2).
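The calldata the batch posts is billed at Ethereum's standard rate of 16 gas per nonzero byte and 4 gas per zero byte, so the byte content of the compressed batch directly drives the rollup's L1 bill. A small helper illustrates the pricing (a sketch; real batches also pay fixed per-transaction and per-batch overhead):

```python
def calldata_gas(data: bytes) -> int:
    # Ethereum calldata pricing: 4 gas per zero byte, 16 gas per nonzero byte.
    return sum(4 if b == 0 else 16 for b in data)

# A compressed 1 KB batch that happens to be half zero bytes:
batch = bytes(512) + bytes([0xFF]) * 512
print(calldata_gas(batch))  # 512*4 + 512*16 = 10240 gas
```

This is also why rollups compress aggressively before posting: every zero byte they can produce costs a quarter as much as a nonzero one.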

How Optimistic Rollup works. Source: Kernel Ventures
Status: Op Layer2 is currently the largest Layer2 ecosystem; the top five chains by TVL all come from the Optimistic Rollup camp, and the combined TVL of Optimism and Arbitrum alone exceeds $16 billion.

Total TVL of Ethereum Layer2s. Source: L2BEAT
A main reason the Op Rollup ecosystem leads today is its friendly development environment. It beat ZK Rollup to the first round of Layer2 launches and mainnet releases, attracting many DApp developers weary of Ethereum's fees and low TPS and shifting the center of DApp development from Layer1 to Layer2. Op Layer2 is also more compatible with the EVM at the bottom layer, which cleared the way for mainnet projects to migrate, quickly bringing Uniswap, Sushiswap, Curve, and other DApps onto Layer2 and even attracting projects like Worldcoin to migrate from the Polygon mainnet. Op Layer2 now hosts flagship Ethereum DeFi such as Uniswap V3, native DeFi projects such as GMX with TVL above $100 million, and SocialFi projects such as Friend.tech with over $20 million in transaction fees; beyond sheer project count, high-quality projects in each sector have driven a qualitative breakthrough for the whole ecosystem. In the long run, however, ZK Layer2 (ZK Rollup Layer2) has a higher TPS ceiling and lower per-transaction gas; as ZK Rollup technology matures, Op Layer2 will face fierce competition from ZK Layer2.

Friend.tech transaction fees and GMX V2 TVL. Source: Dune
ZK Rollup (Zero-Knowledge Rollup)
How it works: Transaction data on a ZK Layer2 is handled much as on an Op Layer2: it is batched on Layer2 and returned to a Layer1 smart contract for storage as calldata. But on Layer2 there is an extra computation step that generates a ZKP, and instead of returning the compressed transaction data, the rollup returns only the transaction root and batch root together with a ZKP proving the validity of the corresponding transactions. Data returned to Layer1 via ZK Rollup needs no challenge window; once verified, it is reflected on mainnet in real time.

How Zero-Knowledge Rollup works. Source: Kernel Ventures
Status: ZK Layer2 has grown into the second-largest Layer2 ecosystem, right behind Op Layer2. Four of the top 10 Layer2s by TVL are ZK-based, but overall the picture is many-but-not-strong: everyone agrees ZK Layer2s are promising, yet they have failed to take off. First, the earlier launch of Op Layer2s already attracted many developers to land projects there; without sufficient advantages from migrating, teams are unlikely to move projects already generating steady income on Op Layer2. Second, many ZK Layer2s are still working on low-level compatibility with Ethereum; the star project Linea, for example, still cannot support many EVM opcodes, creating development obstacles for those used to the EVM, while another star project, zkSync, can currently achieve almost no low-level EVM compatibility and is only compatible with some Ethereum development tools.

Compatibility of existing ZK Layer2 projects with Ethereum. Source: Kernel Ventures
Compatibility with Ethereum also makes migrating native projects extremely difficult. Because bytecode is not fully interoperable, teams must modify the contract internals to fit the zkEVM, a process full of difficulty and risk that has greatly slowed the migration of Ethereum-native projects. As a result, most projects on ZK Layer2s today are native ones, dominated by relatively easy-to-build DeFi such as Zigzag and SyncSwap; both the total number and the diversity of ZK Layer2 projects await further development. ZK Layer2's advantage, however, lies in its technical superiority: if zkEVM-EVM compatibility and better ZKP generation algorithms are achieved, it will have a higher performance ceiling than Op Layer2. That is why new ZK Layer2 projects keep appearing even in an Op-dominated market: with the Op Layer2 sector already carved up, the most sensible path for latecomers is to attract users away from existing networks by proposing a solution with better prospects. But even if ZK Layer2 one day perfects its technology, if Op Layer2 has by then formed a sufficiently complete ecosystem with enough projects landed, it is an open question whether users and developers would bear the substantial risk of migrating even to a better-performing Layer2. Moreover, Op Layer2 keeps improving to consolidate its position, including Optimism open-sourcing the OP Stack to help other Op Layer2 developers build quickly, as well as improvements to the challenge mechanism such as the bisection challenge method. While ZK Layer2 refines itself, Op Layer2 is not slowing down, so ZK Layer2's key task now is to hurry the completion of its cryptographic algorithms and EVM compatibility before users become dependent on the Op Layer2 ecosystem.

3.2.3 Impact of the Cancun Upgrade on Layer2
Transaction speed: After the Cancun upgrade, a block can carry up to 20 times as much data via blobs while block time stays unchanged. In theory, a Layer2 using Layer1 as its DA and settlement layer can thus gain up to a 20x TPS increase. Even estimating a 10x increase, any one of the major Layer2 star projects would exceed the Ethereum mainnet's historical peak transaction speed.

Current TPS of mainstream Layer2 projects. Source: L2BEAT
- Transaction fees: A major reason Layer2 fees cannot fall further is the data-security fee paid to Layer1; at current prices, storing 1 KB of calldata in an Ethereum smart contract costs nearly $3. After the Cancun upgrade, the transaction data a Layer2 packages is stored only as blobs on Ethereum's consensus layer, where storing 1 GB for a month costs only about $0.1, greatly reducing Layer2 operating costs. Operators will surely pass part of these savings on to users to attract more of them, lowering Layer2 transaction costs.
- Scalability: Cancun's impact on Layer2 comes mainly from its temporary storage scheme and the new blob data type. Temporary storage periodically deletes from mainnet old state of little use to current validation, reducing node storage pressure and thereby speeding up network synchronization and node access on both Layer1 and Layer2. Blobs, through their large sidecar capacity and a flexible gas-price-based adjustment mechanism, adapt better to changing transaction volume: when volume surges, a block carries more blobs; when it falls, fewer.
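The fee argument can be made concrete with rough arithmetic comparing calldata storage with blob storage for the same payload. All prices below (L1 gas price, ETH price, post-4844 blob gas price) are illustrative assumptions, not live quotes; blob gas is metered at roughly one blob gas per byte of blob space.

```python
# Rough cost comparison: 1 KB of rollup data as calldata vs. as blob space.
# All prices are illustrative assumptions, not live quotes.
GAS_PER_NONZERO_BYTE = 16
GAS_PRICE_GWEI = 40          # assumed L1 execution gas price
ETH_PRICE_USD = 2500         # assumed ETH price
BLOB_GAS_PER_BYTE = 1        # one blob gas per byte of blob space
BLOB_GAS_PRICE_GWEI = 0.001  # assumed post-4844 blob gas price (near the floor)

def calldata_cost_usd(n_bytes: int) -> float:
    gas = n_bytes * GAS_PER_NONZERO_BYTE  # worst case: all nonzero bytes
    return gas * GAS_PRICE_GWEI * 1e-9 * ETH_PRICE_USD

def blob_cost_usd(n_bytes: int) -> float:
    blob_gas = n_bytes * BLOB_GAS_PER_BYTE
    return blob_gas * BLOB_GAS_PRICE_GWEI * 1e-9 * ETH_PRICE_USD

print(round(calldata_cost_usd(1024), 2))  # ~$1.64 under these assumptions
print(blob_cost_usd(1024))                # orders of magnitude cheaper
```

The exact ratio depends entirely on the prevailing blob fee, but as long as blob demand stays below target, the gap is several orders of magnitude, which is the source of the operating-cost savings discussed above.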
3.2.4 Different Layer2 Sub-sectors Under the Cancun Upgrade
The arrival of the Cancun upgrade is positive for the entire Layer2 ecosystem: its core changes lower the cost of data storage on Ethereum and enlarge individual blocks, so Layer2s using Ethereum as their DA layer naturally gain TPS and pay Layer1 less for storage. But because the two Rollup types use Ethereum's DA layer to different degrees, Op Layer2 and ZK Layer2 benefit to different extents.
- Op Layer2: Because Op Layer2 must leave the full compressed transaction data on Ethereum for recording, it pays Ethereum more in transaction fees than ZK Layer2 does. After EIP-4844 lowers gas consumption, Op Layer2 can therefore cut fees by a larger margin, narrowing its fee disadvantage against ZK Layer2. This round of Ethereum gas reduction will also inevitably attract more participants and developers; relative to ZK Layer2s that have no token and poor low-level EVM compatibility, more projects and capital will tend to flow into Op Layer2, especially the recently strong Arbitrum. This may trigger a new round of development in an Op-dominated Layer2 ecosystem, especially for SocialFi and GameFi projects that high fees have kept from delivering a quality user experience. Along the way, many quality projects approaching Web2-grade user experience may emerge on Layer2. If Op takes this development high ground again, it will further widen the gap with the ZK Layer2 ecosystem as a whole, creating plenty of difficulty for any later ZK Layer2 catch-up.
- ZK Layer2: Since ZK Layer2 does not need to store full transaction details on chain, it benefits less from the gas reduction than Op Layer2 does. Although ZK Layer2 is still developing and lacks Op Layer2's vast ecosystem, the facilities on Op Layer2 are nearly complete and competition there is fiercer; for new developers drawn in by the Cancun upgrade, competing with mature Op Layer2 developers may not be wise. If ZK Layer2 can complete its developer tooling in this phase and offer a better development environment, then given ZK Layer2's better prospects and the intensity of market competition, new developers may pour into the ZK Layer2 sector instead, accelerating ZK Layer2's catch-up and enabling it to overtake before Op Layer2 forms a thoroughly dominant position.
3.2.5 Layer2 Opportunities Under the Cancun Upgrade
- DYDX: Although DYDX is a DEX deployed on Ethereum, its functionality and mechanics differ greatly from traditional Ethereum DEXs like Uniswap. First, it uses an order book rather than the AMM model used by mainstream DEXs, giving users a smoother trading experience and good conditions for leveraged trading. It also uses Layer2 solutions such as StarkEx for scalability and transaction processing, batching trades off chain and posting them back on chain. Through these Layer2 mechanics, DYDX offers users far lower costs than traditional DEXs, only about $0.005 per trade. Around the Cancun upgrade, when Ethereum and related tokens will be highly volatile, a surge in high-risk capital such as leveraged trading is almost certain. With Cancun, DYDX's fees will beat CEXs even on small trades, with greater fairness and security, offering high-risk investors and leverage enthusiasts an excellent trading venue. From this angle, the Cancun upgrade will be a very good opportunity for DYDX.
- Rollup Node: For validating new blocks, the data periodically pruned under the Cancun upgrade is no longer needed, but that does not mean the pruned data is worthless. Teams preparing airdrops need full history to verify the safety of each recipient's funds, and on-chain analytics firms often need full history to trace fund flows. One option then is to query historical data from Layer2 Rollup operators, who can charge for the retrieval. Against the backdrop of Cancun, projects that effectively build out data storage and retrieval on Rollups and position themselves early will greatly improve their odds of survival and further growth.
3.3 The DApp Sector
3.3.1 Ecosystem Value
Like Web2 applications, a DApp's role is to provide some service to users on Ethereum. Uniswap offers real-time swaps between different ERC20 tokens; Aave offers over-collateralized lending and flash loans; Mirror gives creators a decentralized publishing venue. The difference is how they profit. In Web2, applications mainly profit by attracting users to their platform with low-cost, high-quality services, then treating that traffic as value and selling third-party advertising. A DApp, by contrast, never intrudes on user attention and offers no recommendations; it simply charges a fee per service rendered. A DApp's value therefore comes mainly from how often users use its services and how deep each interaction goes; to increase its value, a DApp must offer better service than its peers so that more users prefer to transact through it rather than other DApps.
3.3.2 DApp Sub-tracks
Today's Ethereum DApps are dominated by DeFi, GameFi, and SocialFi. Some gambling projects existed early on, but Ethereum's transaction-speed limits and the launch of better-suited chains such as EOS have marginalized them on Ethereum. These three DApp categories provide financial, gaming, and social services respectively, and capture value from them.
DeFi
How it works: At its core, DeFi is one or a set of smart contracts on Ethereum. At launch, the relevant contracts (token contracts, swap contracts, and so on) are deployed to mainnet, and their interfaces connect the DeFi modules to Ethereum. When users interact, whether depositing, withdrawing, or swapping, they call these contract interfaces; the DeFi contract packages the transaction data, interacts with Ethereum through its script interface, and records the state change on-chain. Along the way, the contract collects fees that reward upstream and downstream liquidity providers and generate its own profit.
Current state: DeFi holds an overwhelming lead among Ethereum DApps today. Apart from cross-chain and Layer 2 projects, DeFi fills every other seat in the top 10 DApps by contract assets on Ethereum. Cumulative DeFi users on Ethereum now exceed 40 million. Although the bear market pulled monthly active users down from a peak of nearly 8 million in November 2021, the recovering market has brought monthly users back to roughly half of that peak, poised to climb again in the next bull run. DeFi has also grown steadily more diverse and complete: from the earliest token swaps and collateralized lending to today's leveraged trading, recurring purchases, NFT finance, and flash loans. Every financial primitive available in Web2 has gradually been reproduced in DeFi, and features impossible in Web2, flash loans among them, have been realized there as well.

Top 10 Ethereum DApps by contract assets. Image source: DAppRadar
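The swap mechanism behind AMM-style DEXes such as Uniswap can be sketched as a toy constant-product pool. This is an illustrative model of the x*y=k invariant, not the code of any specific contract:

```python
# Minimal constant-product AMM, the mechanism behind Uniswap-style DEXes,
# sketched to show how a swap updates pooled reserves and collects a fee.

class ConstantProductPool:
    def __init__(self, reserve_x: float, reserve_y: float, fee: float = 0.003):
        self.x = reserve_x          # reserve of token X
        self.y = reserve_y          # reserve of token Y
        self.fee = fee              # e.g. 0.3% swap fee retained for LPs

    def swap_x_for_y(self, dx: float) -> float:
        """Sell dx of X, receive dy of Y, preserving x*y (after the fee)."""
        dx_after_fee = dx * (1 - self.fee)
        k = self.x * self.y
        dy = self.y - k / (self.x + dx_after_fee)
        self.x += dx                # the fee portion stays in the pool
        self.y -= dy
        return dy

pool = ConstantProductPool(1_000.0, 1_000.0)
out = pool.swap_x_for_y(10.0)
print(f"received {out:.4f} Y for 10 X")  # less than 10 due to fee + slippage
```

The output per trade shrinks as the trade grows relative to the reserves, which is the price-impact behavior LPs are compensated for.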
SocialFi
How it works: Like traditional content platforms, SocialFi lets individuals create content, publish it through the platform for distribution, and attract followers, while users browse for the content and services they need. The difference is that published content, creator-follower interactions, and account data are all recorded in a decentralized way via smart contracts, returning ownership of the information to each account. For a SocialFi platform, the more people choose it for creating and sharing, the more it earns from providing those services; its profit is the fees users pay for interactions minus the cost of storing account and transaction data.
Current state: Although the UAW (Unique Active Wallets) of leading SocialFi projects can seemingly rival DeFi, this usually stems from airdrop expectations and is highly unsustainable. friend.tech, for instance, saw its UAW fall below 1,000 once the hype passed, a pattern also visible when comparing DeFi and SocialFi projects ranked fifth and below. The root cause is that SocialFi's high fees and inefficiency strip it of the social utility it should have, reducing it to a pure speculation venue.

UAW comparison of leading SocialFi and DeFi projects on Layer 1 and Layer 2. Image source: DAppRadar
GameFi
How it works: GameFi works much like SocialFi, with games as the application. The mainstream revenue model today is selling in-game items.
Current state: To earn more, a project must attract more players, and today only two things do that. One is genuine fun, which drives users to buy items for access or a better experience. The other is profit expectations: users believe they can resell items at a higher price. The first model resembles Steam, where the project earns real money and users enjoy the game. In the second model, if both users and the project profit only from incoming newcomers, then once new inflows can no longer absorb the project's item issuance, the project spirals into selling, falling expectations, and more selling, unable to sustain returns and carrying Ponzi characteristics. Because of blockchain fee and throughput limits, today's GameFi rarely reaches the user experience the first model requires, so the second model dominates.
3.3.3 The Cancun Upgrade's Impact on DApps
Performance gains: After the upgrade, a single block can carry more transaction data, which for a DApp means more state changes. Assuming an average expansion of 8 blobs, DApp processing speed could reach roughly ten times its current level.
Lower costs: Data storage is a fixed expense for DApp teams; whether on Layer 1 or Layer 2, DApps directly or indirectly use Ethereum to record account state. After the upgrade, each DApp transaction can be stored as blob data, greatly reducing operating costs.
Expanded functionality: Constrained by Ethereum's high storage costs, teams have deliberately minimized on-chain data, so many Web2 experiences could not migrate to DApps. SocialFi cannot support Twitter-style video creation, or can only do so without the data enjoying Ethereum-grade security at the base layer; GameFi interactions tend to be primitive and dull because every state change must be recorded on-chain. The Cancun upgrade gives teams far more room to experiment in these areas.
3.3.4 DApp Tracks under the Cancun Upgrade
DeFi: The storage-cost reduction matters relatively little for DeFi, which only records the current state of user assets in a contract (staked, borrowed, or otherwise), far less data than the other two DApp types. But the TPS increase greatly helps high-frequency arbitrage and leveraged positions that must open and close quickly. And while the storage savings are invisible in a single token swap, they accumulate into substantial fee savings across leveraged and arbitrage strategies.
SocialFi: SocialFi gets the most direct performance boost. The upgrade raises smart contracts' capacity to process and store large volumes of data, enabling a user experience closer to Web2, while basic interactions such as posting, commenting, and liking become cheaper, attracting genuinely social, long-term participants.
GameFi: For the asset-on-chain games of the last bull market, the effect resembles DeFi: modest storage savings, but higher TPS enables high-frequency interaction, better real-time play, and more complex mechanics that improve playability. Fully on-chain games are affected far more directly. With all game logic, state, and data stored on-chain, the upgrade sharply cuts operating and interaction costs, and initial deployment costs fall as well, lowering the barrier to building such games and encouraging more of them.
3.3.5 DApp Opportunities under the Cancun Upgrade
Dark Forest: Since Q3 2023, fully on-chain games have taken off, perhaps because of doubts that traditional asset-on-chain games are decentralized enough, or simply because the traditional GameFi narrative ran out of steam. But for fully on-chain games on Ethereum, 15 TPS and CALLDATA storage at 16 gas per byte cap their ceiling severely. The Cancun upgrade improves both constraints, and combined with the steady development of related projects in the second half of 2023, it could be a major tailwind for this track. Given the head effect, Dark Forest is one of the few fully on-chain games to have emerged from the last bull market; it has a solid community base and has not yet issued a token. If the team moves in that direction around the upgrade, it should perform well.
4. Conclusion
The Cancun upgrade will bring Ethereum higher TPS and lower storage fees, but also sharply increased storage pressure. The tracks most obviously affected are DA and Layer 2. In DA, surging storage pressure is a major tailwind for Ethereum-dedicated DA, and projects such as EthStorage deserve attention. By contrast, DA projects whose underlying storage does not use Ethereum at all lack the support of the Ethereum developer community; opportunities exist, but individual projects warrant extra caution. Since most ZK Layer 2s have yet to introduce tokens, and Arbitrum has already strengthened notably on Cancun-upgrade expectations, Arb and its ecosystem projects will hold an edge over other Layer 2s through the upgrade, provided Arbitrum avoids any major blow-up. With speculators pouring in, dYdX may also find an opening around the upgrade. Finally, Rollups have a natural advantage in storing Layer 2 transaction history, making them a strong choice for providing historical-data access services.
Taking a longer view, the Cancun upgrade creates the conditions for DApp development and performance. Web3 projects will inevitably approach Web2 in functionality and responsiveness, moving Ethereum closer to its goal of becoming the world computer, and any team building pragmatically is worth a long-term investment. In the recent market rally, ETH has lagged BTC: Bitcoin has recovered to nearly two-thirds of its last-cycle high while Ethereum has not yet reclaimed half of its own. The Cancun upgrade may reverse that trend and spark a catch-up rally for ETH. After all, as one of the few chains that stays profitable with a deflationary token, Ethereum does appear undervalued at this stage.
Kernel Ventures is a crypto venture capital fund driven by a research and development community, with more than 70 early-stage investments focused on infrastructure, middleware, and DApps, especially ZK, Rollup, DEX, and modular blockchains, as well as the verticals that will onboard the next billions of crypto users, such as account abstraction, data availability, and scalability. For the past seven years, we have been committed to supporting core developer communities and university blockchain associations around the world.
References
Ethereum EIPs, Core: https://eips.ethereum.org/core
EthStorage official website: https://eth-store.w3eth.io/#/
EIP-1153: Transient storage opcodes: https://eips.ethereum.org/EIPS/eip-1153
EIP-4788: Beacon block root in the EVM: https://eips.ethereum.org/EIPS/eip-4788
EIP-5656: MCOPY - Memory copying instruction: https://eips.ethereum.org/EIPS/eip-5656
EIP-6780: SELFDESTRUCT only in same transaction: https://eips.ethereum.org/EIPS/eip-6780
How do ZK-rollups work: https://ethereum.org/zh/developers/docs/scaling/zk-rollups#how-do-zk-rollups-work
Optimistic Rollups: https://ethereum.org/developers/docs/scaling/optimistic-rollups
zk, zkVM, zkEVM and their future: https://foresightnews.pro/article/detail/11802
Rebuilding and breakthrough: the present and future of fully on-chain games: https://foresightnews.pro/article/detail/39608
An analysis of the economic model behind Axie Infinity: https://www.tuoluo.cn/article/detail-10066131.html
Binance News
Bounce Brand to Launch SatoshiVM Native Token $SAVM on January 19
According to TechFlow, Bounce Brand announced it will launch $SAVM, the native token of the Bitcoin ZK Rollup Layer 2 solution SatoshiVM, on Bounce Launchpad on January 19. The token will use Bounce's new initial LP revenue issuance model.
The New Narrative of Inscription — Under the Support of Different Ecosystems
Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
This article delves into the development trends of Bitcoin inscription and the characteristics of various protocols.
Analyzing protocols on the Bitcoin chain such as Ordinals, BRC20, Atomicals, RGB, and Pipe, comparing them with other PoW chains like Dogechain and Litecoin, as well as Ethereum's Ethscriptions and Evm.ink and Solana's SPL20 protocol. The comparison covers fees, divisibility, scalability, and user considerations, with particular emphasis on the RGB protocol's low fees and high scalability.
Examining market and product projections for the inscription ecosystem, highlighting the completeness of wallet-side infrastructure, the launch of a Bitcoin-chain AMM DEX, and the potential for additional functionality in the future, such as lending and derivatives. Unisat's open API interface opens the door to numerous tool projects.
In conclusion, this article provides a comprehensive exploration of the dynamics in the field of Bitcoin inscription, offering insights into the future development of inscription empowered by the ecosystem, providing readers with a thorough understanding and outlook.
Inscription Market Background
Market Overview
Since the introduction of the Bitcoin Ordinals protocol in January 2023, a wave of enthusiasm has swept through the Bitcoin chain with protocols like BRC20 and Ordinals assets, often referred to as the "world of retail investors." This is attributed to the Fair Launch model of inscriptions like BRC20, where chips are minted entirely by individual retail investors, with no institutions, project teams, or insider trading. The minting cost for Ordi was approximately $1 per inscription, but after its listing on the Gate.io exchange, the price surged to $20,000 per inscription. The staggering increase in value fueled the continued popularity of the BRC20 protocol, drawing in numerous Ordinals players and driving a continuous spike in gas fees on the Bitcoin chain. At its peak, the minimum confirmation fee rate reached 400 sat/vB, surpassing the highest levels of the past three years.
Using this as a starting point, this article will explore the inscription ecosystem across various chains, discussing the current state of the various protocols and anticipating the developmental trends of inscriptions under the empowerment of the ecosystem.
Data Overview
The 3-year Bitcoin block-fee-rate chart vividly illustrates sharp spikes in fees during May-June and November of this year. The surge reflects users' fervor for inscription protocols, not limited to BRC20 alone. Various protocols built on the Bitcoin network were introduced during this period, sparking a wave known as the "Bitcoin Summer."

Bitcoin rate in the past three years, image source: Mempool.space
The minting data for inscriptions shows that minting volume has stabilized, consistently holding at high levels.

Ordinals inscription minting volume, image source: Dune @dgtl_asserts
Track analysis
This article will categorize various chains and analyze the script protocols on each of them.
Bitcoin Chain
Ordinals / BRC20
On January 21, 2023, Bitcoin developer Casey Rodarmor introduced the Ordinals protocol, allowing metadata to be inscribed on the Bitcoin chain and assigned an inscription number. In March of the same year, Twitter user @domodata released the BRC20 protocol, evolving token minting into on-chain strings. On November 7, Binance listed the BRC20 flagship token $ORDI, triggering a significant surge with a nearly 100% daily increase.
As the first protocol in the inscription ecosystem, Ordinals has encountered several issues:
BRC20 supports only four-letter tickers, a significant limitation.
Minted names are susceptible to Sybil attacks, making minting transactions prone to frontrunning.
The Ordinals protocol leaves substantial redundant data on the Bitcoin network.
For example, after a BRC20 token is fully minted, the original inscriptions become invalid once token transfers are sent. This leaves significant data occupying the chain, one reason some early Bitcoin enthusiasts are reluctant to support Ordinals.
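To make the mechanics concrete: a BRC20 "token" is nothing more than a small JSON text payload inscribed onto a satoshi, which off-chain indexers then interpret. A minimal sketch of building such a payload, following the well-known BRC20 field layout (`p`, `op`, `tick`, `amt`):

```python
import json

# A BRC20 mint is a JSON text inscription; indexers, not the Bitcoin protocol,
# give it token semantics. Note the four-character ticker limit discussed above.

def brc20_mint(tick: str, amt: int) -> str:
    """Build the JSON payload for a BRC20 mint inscription."""
    if len(tick) != 4:
        raise ValueError("BRC20 tickers are limited to four characters")
    return json.dumps({"p": "brc-20", "op": "mint", "tick": tick, "amt": str(amt)})

print(brc20_mint("ordi", 1000))
# {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"}
```

Because validity lives entirely in the indexer's rules, every spent transfer inscription remains on-chain as dead weight, which is the data-redundancy complaint above.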
Atomicals
The Atomicals protocol's ARC20 uses one satoshi to represent the deployed token and eliminates the four-character restriction, allowing more diverse gameplay. A unique project within this framework is the "Realm", where each registered entity is a prefix text that ultimately holds pricing rights over all suffixes. In terms of basic functionality, a Realm can serve as a transfer and receiving address (a payment name), and it also has use cases such as building communities/DAOs, identity verification, and social profiles, aligning with our envisioned development of DID.

However, both ARC20 and $ATOM are still in the very early stages, and further development is required, including improvements in wallets and markets.

Realm minting volume, image source: Dune @sankin
Pipe
Casey, the founder of Ordinals, proposed a specific inscription implementation called Rune, designed for issuing fungible tokens (FTs). It allows token data to be inserted directly into the UTXO script, encompassing the token's ID, output, and quantity. Rune's implementation closely resembles ARC20, handing token transfers directly to the BTC mainnet; the distinction is that Rune includes the token quantity in the script data.
While Rune's concept is still in the ideation stage, the founder of #Trac developed the first functional protocol based on this idea, issuing PIPE tokens. Leveraging Casey's high profile, PIPE quickly gained momentum, capitalizing on the speculative fervor inherited from BRC20. Rune's legitimacy is relatively stronger compared to BRC20, but gaining acceptance within the BTC community remains challenging.
RGB

Lightning Network Capacity, Image Source: Mempool.space
With the Ordinals protocol elevating the ecosystem of the Bitcoin network, an increasing number of developers and projects are turning their attention to the Lightning Network due to its extremely low transaction fees and 40 million TPS (transactions per second).
RGB is a smart contract system built on BTC and the Lightning Network, representing a more thorough scaling solution, though progress has been slow due to its complexity. RGB condenses the state of a smart contract into a concise proof and commits that proof into a BTC UTXO output script. Users can verify this UTXO to inspect the contract's state. When the contract state is updated, a new UTXO is created to store the proof of the state change.
The full smart contract data lives off the BTC chain, maintained by dedicated RGB nodes that record the contract's complete data and handle the computational workload of transactions. Users verify the deterministic changes in contract state by scanning the BTC chain's UTXO set.
RGB can be viewed as BTC's Layer 2. This design leverages BTC's security to guarantee smart contracts. However, as the number of smart contracts increases, the demand for UTXO encapsulation data will also inevitably lead to significant redundancy in the BTC blockchain.
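The commit-to-UTXO pattern described above can be sketched conceptually. This is not the real RGB wire format or commitment scheme; it only illustrates the idea that each state change is reduced to a small hash chained to the previous one, so the on-chain footprint stays constant while full state lives with RGB nodes and clients:

```python
import hashlib

# Conceptual sketch (not RGB's actual format): each contract state change is
# hashed together with the previous commitment, and only the resulting 32-byte
# digest needs to be embedded in a Bitcoin UTXO script.

def commit_state(prev_commitment: bytes, state_update: bytes) -> bytes:
    """Chain a state update onto the previous commitment."""
    return hashlib.sha256(prev_commitment + state_update).digest()

genesis = hashlib.sha256(b"contract-genesis").digest()
c1 = commit_state(genesis, b"transfer 10 tokens to alice")
c2 = commit_state(c1, b"transfer 3 tokens to bob")
print(len(c2), c2.hex())  # always 32 bytes, regardless of contract size
```

Whatever the contract's complexity, the chain only ever sees a fixed-size digest, which is why the design scales yet still inherits BTC's security for state ordering.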
Since 2018, RGB has remained in development without speculative content. Tether's issuer, Tether Limited, is a significant supporter of RGB, aiming to issue a large amount of USDT on RGB.
In terms of products, the mainstream wallet currently in use is Bitmask, which supports Bitcoin and Lightning Network deposits, as well as assets of RGB-20 and RGB-21. Bitlight Labs is also developing the RGB network, with plans to build its own wallet system and write smart contracts for DEX (decentralized exchange). The project has acquired BitSwap (bitswap-bifi.github.io) and is preparing to integrate it into the RGB network.
RGB's biggest advantages lie in its low transaction fees and extremely high scalability. There was a time when smart contract development on the Bitcoin network was difficult and received little attention. However, with the Ordinals protocol raising the ecosystem's popularity, more developers are experimenting with smart contracts on the RGB network. These smart contracts are written in the Rust language, incompatible with Ethereum, leading to a higher learning curve and requiring further evaluation in terms of technology.
For more information on the technical aspects of the RGB protocol, Kernel Ventures’ previous articles have introduced it in detail. Article link: https://tokeninsight.com/en/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype
Other PoW Chains
During the heyday of inscriptions on the Bitcoin chain, as other PoW chains share the same origin and are also based on the UTXO spending model, Ordinals has been migrated to some leading PoW public chains. In this article, we will analyze the examples of Dogechain and Litecoin, which have high market acceptance and development completeness.
Dogechain:
The DRC-20 protocol on the Dogecoin chain is based on Ordinals and functions much like its Bitcoin counterpart. Thanks to low transaction fees and strong meme appeal, it has gained popularity.
Litecoin:
Similarly, the LTC-20 protocol on the Litecoin chain is based on Ordinals. It has received retweets and attention from the Litecoin team and founder Charlie Lee, giving it a "noble pedigree." The trading markets Unilit and Litescribe, along with the Litescribe wallet, show a relatively high level of completeness. The first token, $Lite, is already listed on the Gate exchange.
However, the protocol had issues before an index was introduced, and after the index launched an over-issuance bug emerged; it has since been fixed and the protocol is worth watching. The chart makes clear that gas fees on the Litecoin chain surged after the introduction of LTC-20.

Image source: Twitter @SatoshiLite

Litecoin rate in the past year, image source: Litecoinspace
Ethereum Chain
Ethscriptions
As of now, the trading platform Etch on the Ethscriptions protocol has reached a transaction volume of 10,500 ETH. The floor price of the first token, Eths, is $4,300. Those who entered at the start on June 18th, at an initial cost under $1, and never exited have now gained returns of over 6,000x.

Eths transaction data, image source: ETCH Market
On August 8th, Tom Lehman proposed a novel Ethereum scaling solution. Employing a technique similar to Ordinals that leverages calldata, it aims to achieve low gas costs on the Ethereum mainnet and expand the dimensionality of ecosystem applications.
At the core of Eths is the Ethscriptions Virtual Machine (ESC VM), which can be likened to the Ethereum Virtual Machine (EVM). The "Dumb Contracts" within the ESC VM enable Eths to break free from the limitations of inscriptions as NFT speculation, entering the realm of functionality and practicality. Eths has officially entered the competition in the base layer and L2 solutions arena.

Dumb Contracts running logic, picture source: Ethscriptions ESIP-4 proposal
"Eths represents another approach to Ethereum Layer 2. Unlike typical Layer 2 solutions that are separate chains and may have a backdoor, Eths conducts transactions on the Ethereum mainnet with gas fees as affordable as those on Layer 2. It enables various activities such as swapping, DeFi, and GameFi on the Eths platform. The key aspect is that it operates on the mainnet, making it secure and more decentralized than Layer 2," as excerpted from the Eths community.
However, articulating this new Layer 2 narrative is challenging. Token splitting is still in development, and current inscriptions remain non-fungible tokens (NFTs) that cannot yet be split into fungible tokens (FTs).
As of the latest information, FacetSwap (https://facetswap.com/) has introduced a splitting feature, though mainstream trading markets do not yet support split inscriptions; users can wait for future adaptations. Currently, split inscriptions can be used for activities like swapping and adding liquidity on FacetSwap. All operations are resolved through a virtual (non-existent) address, 0x000...Face7. Users embed messages in IDM and send the message's hexadecimal data to the address ending in Face7 to perform operations like approve and transfer. As this is still early, its trajectory remains to be seen.
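The message-to-sentinel-address pattern described above can be sketched as follows. The payload fields and the zero-padded sentinel value are illustrative assumptions, not the actual FacetSwap schema; the point is only that an operation is serialized, hex-encoded into transaction data sent to a non-existent address, and decoded off-chain by an indexer:

```python
import json

# Hedged sketch of the calldata-message pattern: serialize an operation as
# JSON, hex-encode it, and address it to the sentinel ending in "Face7".
# Field names and the exact address are illustrative, not the real schema.

SENTINEL = "0x" + "0" * 35 + "Face7"   # virtual (non-existent) 20-byte address

def encode_message(op: str, params: dict) -> str:
    """Hex-encode a JSON operation for inclusion as transaction data."""
    payload = json.dumps({"op": op, **params}, separators=(",", ":"))
    return "0x" + payload.encode("utf-8").hex()

data = encode_message("approve", {"spender": SENTINEL, "amount": "1000"})
print(data[:20], "...")  # an off-chain indexer decodes this back to JSON
```

Since the sentinel address has no known private key, the funds-free message can never be acted on by the chain itself; all semantics live in the indexer, just as with inscriptions.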
Other EVM Chains
Evm.ink
Evm.ink has migrated the Ethscriptions protocol standard to other EVM-compatible chains, enabling them to mint inscriptions as well, and builds indexes for those chains. Recently popular projects such as POLS and AVAL rely on Evm.ink, essentially the Ethscriptions standard, for index recognition.

POLS minting data, image source: Dune @satsx

AVAL minting data, image source: Dune @helium_1990
POLS and AVAL each have a total supply of 21 million inscriptions. POLS has over 80,000 holders, AVAL more than 23,000, and both minted out in roughly 2-3 days. This signals strong community interest in low-cost Layer 2 inscriptions, which offer a high return on investment: the low cost draws long-tail users over from the BTC and ETH chains, producing an overflow effect. Nor is the trend limited to these two chains; others such as Heco and Fantom have also seen gas fees surge, all tied to inscriptions.

Number of daily transactions on the EVM chain, image source: Kernel Ventures
Solana
SPL20
Solana inscriptions commenced on November 17th at 4 AM and were fully minted by 8 AM, with a total supply of 21,000. Unlike on other networks, the main body of the inscription is an NFT, and the index content is the actual inscription. NFTs can be created through any platform; the index decides inclusion based on the hash of the image or file. The second criterion is the embedded text: only inscriptions whose hash and embedded text both match are considered valid. Images are off-chain data and text is on-chain data; currently, major proxy platforms use IPFS, while others use AR.
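The two-part validity rule just described can be sketched as a simple indexer check. Field names here are illustrative assumptions about how an SPL20-style indexer might be structured, not the actual implementation: an inscription counts only if the hash of the off-chain image matches the on-chain hash and the embedded text matches the expected payload.

```python
import hashlib

# Conceptual sketch of an SPL20-style indexer check: both the file hash and
# the embedded text must match for the inscription to be counted as valid.

def is_valid_inscription(image_bytes: bytes, onchain_hash: str,
                         embedded_text: str, expected_text: str) -> bool:
    return (hashlib.sha256(image_bytes).hexdigest() == onchain_hash
            and embedded_text == expected_text)

img = b"<png bytes stored on IPFS or AR>"
h = hashlib.sha256(img).hexdigest()   # recorded on-chain at mint time
print(is_valid_inscription(img, h,
                           '{"p":"spl-20","op":"mint"}',
                           '{"p":"spl-20","op":"mint"}'))
```

Requiring both checks means neither a copied image with the wrong text nor matching text over a tampered file passes the index.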
Solana inscriptions share a significant limitation with Eths: they cannot be split. Without splitting, they essentially function as NFTs, lacking token-equivalent liquidity and operational convenience, let alone the vision of future DEX swaps.
The protocol's founder also founded TapPunk on the Tap protocol. The team behind the largest proxy platform, Libreplex (https://www.libreplex.io/), is very proactive: since launch it has developed rapidly, completing features such as hash indexing and changing inscription attributes (immutability), and it runs live coding and Q&A sessions on its official Discord. The trading market Tensor (https://www.tensor.trade/) has also been integrated, and development progress is swift.
The first inscription, $Sols, had a minting cost of approximately $5. On the secondary market it peaked at 14 SOL, with a floor price of 7.4 SOL (about $428). Daily trading volume exceeded 20,000 SOL, roughly $1.2 million, with active turnover.
Core comparison
Comparison of core protocols

Comparison of mainstream inscription protocols, Image source: Kernel Ventures
This chart compares several major inscription protocols based on four dimensions: fees, divisibility, scalability, and user base.
Fees: The RGB protocol stands out with the best fee rate, leveraging the Lightning Network for virtually zero-cost transactions.
Divisibility: Both Solana and the recent EVM protocols lack divisibility, with development expected in this area.
Scalability: RGB's smart contract functionality provides significant scalability. Solana's is still under discussion, but the team and the Solana Foundation have expressed support, suggesting it will not be lacking.
User Base: EVM chains, with their naturally low gas costs, attract a larger user base thanks to the lower trial-and-error cost for users. BRC20, as the first inscription token and first in orthodoxy, has accumulated a substantial user base.
Comparison of protocol token data

Protocol Token Comparison, Image source: Kernel Ventures
Analyzing the mainstream tokens of the various protocols, their combined market capitalization currently stands at around $600 million, excluding smaller-cap assets. Ordi alone accounts for 80% of that total, indicating significant room for other protocols to develop. Notably, protocols such as RGB are still being refined and have not issued tokens.
In terms of the number of holders, Pols and Ordi dominate, while other protocols have fewer holders. Eths and Solana inscriptions have not been split, so a comprehensive analysis of holder distribution is pending further developments.
Innovations and risk analysis
Currently, the primary use of inscriptions is Fair Launch, allowing users to fairly access opportunities to participate in projects. However, the development of the inscription space is not limited to fair launches.
Recent developments in the inscription space have shown significant dynamism and innovation. The growth of this sector is largely attributed to key technological advancements in Bitcoin, such as SegWit, Bech32 encoding, Taproot upgrade, and Schnorr signatures. These technologies not only enhance the transaction efficiency and scalability of the Bitcoin network but also increase its programmability.
For instance, in the RGB protocol, smart contracts built on the Lightning Network of Bitcoin exhibit not only extremely high transactions per second (40 million) but also benefit from being part of the largest blockchain ecosystem, Bitcoin.
Regarding risks, caution is advised, particularly with some Launchpads. Following the success of MUBI and TURT, Launchpads have proliferated, and some, like the recent Rug project Ordstater, may execute a Rug Pull directly after the Initial DEX Offering (IDO). Before engaging with any project, thoroughly read the whitepaper, research the team's background, and avoid blindly following KOLs out of FOMO.
Future deduction of inscription ecology
Market Deduction
Galaxy Research and Mining predicted that the Ordinals market would reach a $5 billion market value by 2025, with only about 260,000 inscriptions expected by then. The number of inscriptions has already reached 33 million, a 126-fold increase in just six months. $Ordi's market capitalization has reached $400 million and $Sats has reached $300 million, suggesting that predictions for the inscription market were significantly underestimated.
Product Deduction
Currently, BRC20 trading activity is concentrated on OKX and Unisat. The Web3 wallet OKX promoted this year provides a smooth experience for trading BRC20 assets, and this maturing wallet-side infrastructure shortens the entry path for retail investors into the new market. As protocols have multiplied, each has introduced its own trading markets and wallets, such as those for Atomicals, Dogechain, Litecoin, and more. However, the wallets currently on the market are all modifications built on Unisat's open-source foundation.
Comparing Bitcoin (POW) with Ethereum, one can analogize various protocols to different chains, with the fundamental difference lying in the Chain ID. Therefore, future products might involve Unisat integrating different protocols, allowing users to switch between protocols within the wallet as needed, similar to the chain-switching functionality in wallets like Metamask.

Comparison of wallets across protocols, Image source: Kernel Ventures
Track deduction
With funds continuously flowing into the inscription market, users are no longer satisfied with meme-driven speculation and are shifting their focus towards applications built on inscriptions. Unisat has brought innovation to BRC20 by introducing BRC20-Swap, allowing users to easily exchange BRC20 tokens similar to AMM DEX. As the first product enhancing liquidity in the Ordinals ecosystem, Unisat is poised to unlock the potential of the Bitcoin DeFi ecosystem, potentially leading to the development of additional features such as lending and derivatives. Recently, Unisat has also opened API interfaces, which is user-friendly for small developers, enabling them to call various functions, such as automated batch order scanning and monitoring inscriptions for automatic minting. This can give rise to numerous utility projects.
While transaction fees on the Bitcoin network are relatively high, Layer 2s like Stacks and RIF, despite lower fees, lack a user base and sufficient infrastructure. This makes Bitcoin's EVM a compelling narrative. For example, BEVM is an EVM-based Bitcoin-ecosystem Layer 2 whose native on-chain token is BTC. Users can use the official cross-chain bridge to move Bitcoin from the mainnet to BEVM. BEVM's EVM compatibility makes it easy to build the applications found on EVM chains, with low barriers for DeFi, swaps, and more to migrate from other chains.
However, there are several issues to consider with Bitcoin's EVM. Questions include whether the assets crossing over can maintain decentralization and immutability, the consensus problem of EVM chain nodes, and how to synchronize transactions to the Bitcoin network (or decentralized storage). Since the threshold for Ethereum layer 2 is relatively low, security may be compromised, making it a primary concern for anyone interested in Bitcoin EVM at the moment.

Image source: BEVM Bridge
Summary
This article delves into the development trends in the Bitcoin inscription domain and the characteristics of various protocols. By analyzing protocols such as Ordinals (BRC20), Atomicals, RGB, and Pipe on the Bitcoin chain, and comparing them with other PoW chains, Ethereum's Ethscriptions and Evm.ink, and Solana's SPL20 protocol, the differences in fees, divisibility, scalability, and user aspects are explored.
In the context of the inscription market, starting with the Ordinals protocol, a wave of inscription protocols like BRC20 has been referred to as the "world of retail investors." The analysis includes an overview of data such as Bitcoin block fees and the number of inscriptions forged by Ordinals, providing insights into the development trends in the inscription ecosystem.
In the track analysis, the core elements of mainstream inscription protocols, such as fees, divisibility, scalability, and user numbers, are compared to show their similarities and differences. Finally, through comparisons of protocol token data and core protocols, a comprehensive view of market value and user distribution across mainstream protocols is provided. The conclusion emphasizes innovation points and risk analysis, highlighting the vitality and innovation within the inscription domain.
Looking ahead, the inscription domain is expected to witness continuous technological innovation, driving the practical application of more complex functionalities. The market's robust development is anticipated to maintain steady growth, providing more opportunities for investors and participants. Meanwhile, it is expected that more creative projects and protocols will emerge, further enriching the inscription ecosystems of Bitcoin and other public chains. Miners' earnings may also increase as the inscription domain offers them new income opportunities.
Reference link
Bitcoin block-fee-rates (3 year): https://mempool.space/zh/graphs/mining/block-fee-rates#3y
ESIP-4: The Ethscriptions Virtual Machine: https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machine
A comprehensive scan of the inscriptions industry: https://www.theblockbeats.info/news/47753?search=1
Litecoin block-fee-rates (1 year): https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y
The New Narrative of Inscription — Under the Support of Different EcosystemsAuthor: Kernel Ventures Stanley Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua TLDR: This article delves into the development trends of Bitcoin inscription and the characteristics of various protocols. Analyzing protocols on the Bitcoin chain such as Ordinals, BRC20, Atomical, RGB, Pipe, comparing them with other PoW chains like Dogechain and Litecoin, as well as Ethereum chains Ethscriptions and Evm.ink, and Solana chain's SPL20 protocol. The comparison includes aspects such as fees, divisibility, scalability, and user considerations, with particular emphasis on the low fees and high scalability of the RGB protocol.Examining market and product projection for the inscription ecosystem, highlighting the completeness of infrastructure on the wallet side, the launch of Bitcoin chain AMM DEX, and the potential for additional functionalities in the future, such as lending and derivatives. Unisat's open API interface opens the door to numerous tool projects. In conclusion, this article provides a comprehensive exploration of the dynamics in the field of Bitcoin inscription, offering insights into the future development of inscription empowered by the ecosystem, providing readers with a thorough understanding and outlook. Inscription Market Background Market Overview Since the introduction of the Bitcoin Ordinals protocol in January 2023, a wave of enthusiasm has swept through the Bitcoin chain with protocols like BRC20 and Ordinals assets, often referred to as the "world of retail investors." This is attributed to the Fair Launch model of scripts like BRC20, where chips are entirely minted by individual retail investors, devoid of institutions, project teams, or insider trading. The minting cost for Ordi is approximately $1 per inscription, but after its listing on the Gate.io exchange, the price surged to $20,000 per inscription. 
The staggering increase in value fueled the continued popularity of the BRC20 protocol, drawing in numerous Ordinals players and leading to a continuous spike in Gas fees on the Bitcoin chain. At its peak, the minimum confirmation Gas even reached 400 s/vb, surpassing the highest Gas levels in the past three years. Using this as a starting point, this article will delve into the exploration of the script ecosystem on various chains, discussing the current state of various protocols and anticipating the developmental trends of scripts under the empowerment of the ecosystem. Data Overview The 3-year Bitcoin block-fee-rate chart vividly illustrates sharp spikes in fees during May-June and November of this year. This surge reflects the fervor of users towards script protocols, not just limited to the BRC20 protocol. Various protocols developed on the Bitcoin network were introduced during this period, sparking a wave known as "Bitcoin Summer." Bitcoin rate in the past three years, image source: Mempool.space From the casting data of Inscriptions, it is evident that the casting quantity has stabilized, consistently maintaining high levels. Ordinals inscription casting quantity, image source: Dune @dgtl_asserts Track analysis This article will categorize various chains and analyze the script protocols on each of them. Bitcoin Chain Ordinals / BRC20 On January 21, 2023, Bitcoin developer Casey Rodarmor introduced the Ordinals protocol, allowing metadata to be inscribed on the Bitcoin chain and assigned a script number. In March of the same year, Twitter user @domodata released the BRC20 protocol, evolving token minting into on-chain strings. On November 7, Binance listed the BRC20 flagship token $ORDI, triggering a significant surge with a nearly 100% daily increase. 
As the first protocol in the inscription ecosystem, Ordinals has encountered several issues:

BRC20 supports only four-letter tickers, imposing significant limitations.
Minted names are susceptible to Sybil attacks, making minting transactions prone to frontrunning.
The Ordinals protocol leaves substantial redundant data on the Bitcoin network. For example, once a BRC20 token has been fully minted, the original inscriptions become invalid as soon as transfer transactions are sent. The resulting data bloat is one reason some early Bitcoin enthusiasts are reluctant to support Ordinals.

Atomicals

The Atomicals protocol's ARC20 standard uses one satoshi to represent each unit of a deployed token and eliminates the four-character restriction, allowing for more diverse gameplay. A unique project within this framework is the "Realm," where each registered entity is a prefix text that ultimately holds pricing rights over all of its suffixes. In terms of basic functionality, a Realm can be used as a transfer and receiving address (a payment name), and it also supports use cases such as building communities/DAOs, identity verification, and social profiles, aligning closely with the envisioned development of DID.

However, both ARC20 and $ATOM are still at a very early stage, and further development is required, including improvements to wallets and marketplaces.

Realm minting volume, image source: Dune @sankin

Pipe

Casey, the founder of Ordinals, proposed an inscription scheme called Rune designed for issuing fungible tokens (FTs). It inserts token data directly into a UTXO script, encompassing the token's ID, output, and quantity. Rune's implementation is very similar to ARC20, handing token transfers directly over to the BTC mainnet; the distinction is that Rune includes the token quantity in the script data.
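Because ARC20 maps one satoshi to one token unit, a holder's balance is simply the sum of the satoshis in the UTXOs colored by that token, and transfers ride on ordinary BTC spends. A minimal sketch of that accounting (the data structures are illustrative, not the Atomicals indexer's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utxo:
    txid: str
    vout: int
    sats: int                 # satoshi amount of the output
    token: Optional[str]      # ARC20 ticker coloring this output, if any

def arc20_balance(utxos: list, ticker: str) -> int:
    """In ARC20, 1 satoshi == 1 token unit, so a token balance is the
    total satoshis held in outputs colored with that ticker."""
    return sum(u.sats for u in utxos if u.token == ticker)

wallet = [
    Utxo("aa..", 0, 5000, "atom"),
    Utxo("bb..", 1, 1200, "atom"),
    Utxo("cc..", 0, 9000, None),   # plain BTC, not a token
]
print(arc20_balance(wallet, "atom"))  # 6200
```

This is also why ARC20 transfers need no extra indexing logic for amounts: splitting or merging the colored satoshis in a normal transaction is the token transfer.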
While Rune was still at the conceptual stage, the founder of #Trac developed the first working protocol based on the idea and issued the PIPE token. Leveraging Casey's high profile, PIPE quickly gained momentum, capitalizing on the speculative fervor inherited from BRC20. Rune's legitimacy is relatively stronger than BRC20's, but gaining acceptance within the BTC community remains challenging.

RGB

Lightning Network capacity, image source: Mempool.space

With the Ordinals protocol elevating the Bitcoin network's ecosystem, an increasing number of developers and projects are turning their attention to the Lightning Network for its extremely low transaction fees and claimed capacity of up to 40 million TPS (transactions per second).

RGB is a smart contract system built on BTC and the Lightning Network, representing a more thorough scaling solution, though progress has been slow due to its complexity. RGB compresses the state of a smart contract into a concise proof and commits that proof into a BTC UTXO output script. Users can verify this UTXO to inspect the state of the contract; when the contract state is updated, a new UTXO is created to store the proof of the state change.

Only these proofs live on the BTC chain: dedicated RGB nodes record the complete contract data off-chain and handle the computational workload of transactions, while users verify the deterministic changes in contract state by scanning the UTXO set of the BTC chain.

RGB can be viewed as a BTC Layer 2. This design leverages BTC's security to anchor smart contracts; however, as the number of contracts grows, the demand for UTXO-embedded commitment data will inevitably add significant redundancy to the BTC blockchain.

Since 2018, RGB has remained in development without speculative content. Tether's issuer, Tether Limited, is a significant supporter of RGB, aiming to issue a large amount of USDT on RGB over BTC.
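The commit-and-verify pattern described above can be sketched as follows. This is a conceptual illustration of anchoring off-chain state to an on-chain commitment, not RGB's actual consensus rules or serialization format:

```python
import hashlib

def commit(state: bytes) -> str:
    """Hash a contract state transition into a 32-byte commitment
    that would be embedded in a BTC UTXO output script."""
    return hashlib.sha256(state).hexdigest()

def verify(state: bytes, onchain_commitment: str) -> bool:
    """A client holding the off-chain state data checks it against
    the commitment found on-chain (client-side validation)."""
    return commit(state) == onchain_commitment

# Hypothetical state transition kept off-chain by RGB nodes:
transition = b'{"contract":"usdt-demo","transfer":{"to":"utxo:ab..:0","amt":100}}'
c = commit(transition)           # only this digest goes into a Bitcoin output
assert verify(transition, c)     # anyone holding the data can validate it
assert not verify(b"tampered", c)
```

The design choice is that Bitcoin only guarantees the ordering and immutability of the commitments; the heavy contract data and computation stay off-chain, which is where both the low fees and the UTXO-redundancy concern come from.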
In terms of products, the mainstream wallet currently in use is Bitmask, which supports Bitcoin and Lightning Network deposits as well as RGB-20 and RGB-21 assets. Bitlight Labs is also building on the RGB network, with plans for its own wallet system and smart contracts for a DEX (decentralized exchange); the project has acquired BitSwap (bitswap-bifi.github.io) and is preparing to integrate it into RGB.

RGB's biggest advantages lie in its low transaction fees and extremely high scalability. Smart contract development on the Bitcoin network was once difficult and attracted little attention, but with the Ordinals protocol raising the ecosystem's profile, more developers are experimenting with smart contracts on RGB. These contracts are written in Rust and are incompatible with Ethereum, which raises the learning curve and means the technology still requires further evaluation.

For more on the technical aspects of the RGB protocol, Kernel Ventures' previous article covers it in detail: https://tokeninsight.com/en/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype

Other PoW Chains

During the heyday of inscriptions on the Bitcoin chain, Ordinals was migrated to some leading PoW public chains, which share the same lineage and are likewise based on the UTXO spending model. Here we look at Dogechain and Litecoin, which have high market acceptance and development completeness.

Dogechain: The Drc-20 protocol on the Dogecoin chain is based on Ordinals and functions much like its Bitcoin counterpart. Thanks to low transaction fees and strong meme appeal, it has gained popularity.

Litecoin: Similarly, the Ltc-20 protocol on the Litecoin chain is based on Ordinals. It has received retweets and attention from the official Litecoin team and founder Charlie Lee.
It can be considered to have a "noble pedigree." The trading markets Unilit and Litescribe, along with the Litescribe wallet, show a relatively high level of development completeness, and the first token, $Lite, is already listed on the Gate exchange. There were issues with the protocol before an index was introduced, and after the index launched, a bug causing excess issuance emerged; it has since been fixed, and the protocol is worth keeping an eye on. The chart shows that after the introduction of the LTC20 protocol, fees on the Litecoin chain surged.

Image source: Twitter @SatoshiLite

Litecoin fee rates over the past year, image source: Litecoinspace

Ethereum Chain

Ethscriptions

As of now, Etch, the trading platform for the Ethscriptions protocol, has reached a transaction volume of 10,500 ETH. The floor price of the first token, Eths, is $4,300; for those who minted at launch on June 18th at a cost of under $1 and never sold, the return now exceeds 6,000x.

Eths transaction data, image source: ETCH Market

Tom Lehman proposed this novel Ethereum scaling approach on August 8th. Employing a technique similar to Ordinals and leveraging calldata, it aims to make Ethereum mainnet gas costs affordable and broaden the design space for ecosystem applications.

At the core of Eths is the Ethscriptions Virtual Machine (ESC VM), which can be likened to the Ethereum Virtual Machine (EVM). The "Dumb Contracts" within the ESC VM allow Eths to break free from the limitations of inscriptions as NFT speculation and enter the realm of practical functionality. Eths has officially entered the competition among base-layer and L2 solutions.

Dumb Contracts running logic, image source: Ethscriptions ESIP-4 proposal

"Eths represents another approach to Ethereum Layer 2.
Unlike typical Layer 2 solutions, which are separate chains and may have a backdoor, Eths conducts transactions on the Ethereum mainnet with gas fees as affordable as those on Layer 2. It enables activities such as swapping, DeFi, and GameFi on the Eths platform. The key point is that it operates on the mainnet, making it secure and more decentralized than a Layer 2," as excerpted from the Eths community.

However, this new Layer 2 narrative is hard to sustain. Token splitting is still under development, and current inscriptions remain non-fungible tokens (NFTs) that cannot be split into fungible tokens (FTs). As of the latest information, FacetSwap (https://facetswap.com/) has introduced a splitting feature, but mainstream trading markets do not yet support split inscriptions, so users will have to wait for adaptation. Currently, split inscriptions can be used for activities like swapping and adding liquidity on FacetSwap. All operations are resolved by a virtual (non-existent) address, 0x000...Face7: users embed messages via IDM, sending the hexadecimal data of the message to the address ending in Face7 to perform operations like approve and transfer. As this is still early, its trajectory remains to be observed.

Other EVM Chains

Evm.ink

Evm.ink has migrated the Ethscriptions protocol standard to other EVM-compatible chains, enabling them to mint inscriptions as well, and it builds the indexes for those chains. Recently popular projects such as POLS and AVAL use Evm.ink (essentially the Ethscriptions standard) for index recognition.

POLS minting data, image source: Dune @satsx

AVAL minting data, image source: Dune @helium_1990

POLS and AVAL each have a total supply of 21 million inscriptions. POLS has over 80,000 holders, while AVAL has more than 23,000; both minted out in around 2-3 days.
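Under the Ethscriptions standard (which Evm.ink reuses on other EVM chains), an inscription is created by sending a transaction whose calldata is the hex encoding of a data URI; indexers recover and deduplicate the content off-chain. A minimal sketch of building such calldata (the token fields mirror the JSON convention used by Eths-family tokens; no real RPC call is made):

```python
import json

def ethscription_calldata(mimetype: str, body: str) -> str:
    """Ethscriptions encode a data URI as UTF-8 hex in tx calldata."""
    uri = f"data:{mimetype},{body}"
    return "0x" + uri.encode("utf-8").hex()

# A fungible-style mint payload, as used by Eths-family tokens:
payload = json.dumps(
    {"p": "erc-20", "op": "mint", "tick": "eths", "id": "1", "amt": "1000"},
    separators=(",", ":"),
)
calldata = ethscription_calldata("", payload)

# An indexer recognizes the inscription by decoding calldata back to the URI:
assert bytes.fromhex(calldata[2:]).decode("utf-8") == f"data:,{payload}"
```

Because the payload rides in calldata rather than contract storage, the gas cost is a fraction of an equivalent ERC-20 interaction, which is the cost advantage the "mainnet-as-L2" narrative rests on.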
This indicates significant community interest in low-cost inscriptions, as they offer a high potential return on investment. The low cost draws in long-tail users from the BTC and ETH chains, producing overflow. Nor is the trend limited to these two chains: others such as Heco and Fantom have also seen gas fees surge, all related to inscriptions.

Number of daily transactions on EVM chains, image source: Kernel Ventures

Solana

SPL20

Solana inscription minting commenced on November 17th at 4 AM and was completed by 8 AM, with a total supply of 21,000 inscriptions. Unlike on other networks, the main body of the inscription is an NFT, and the Index Content is the actual inscription. NFTs can be created through any platform, and the index determines inclusion based on the hash of the image or file. The second criterion is the embedded text: only inscriptions whose hash and embedded text both match are considered valid. Images are off-chain data and text is on-chain data; the major proxy platforms currently use IPFS, while others use Arweave.

Solana inscriptions share a significant limitation with Eths: they cannot be split. Without splitting they essentially function as NFTs, lacking token-equivalent liquidity and operational convenience, let alone the vision of future DEX swaps.

The protocol's founder is also the founder of TapPunk on the Tap protocol. The team behind the largest proxy platform, Libreplex (https://www.libreplex.io/), is very proactive: since launch it has made rapid progress, completing features such as hash indexing and making inscription attributes immutable, and it holds live coding and Q&A sessions on its official Discord. The trading market Tensor (https://www.tensor.trade/) has also been integrated, and development progress is swift. The first inscription, $Sols, had a minting cost of approximately $5.
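The two-part validity rule described above (the file hash and the embedded text must both match the index) can be sketched as follows. The field names and hash choice are illustrative, not the actual SPL20 indexer schema:

```python
import hashlib

def is_valid_spl20(image_bytes: bytes, embedded_text: str,
                   index_hash: str, index_text: str) -> bool:
    """An SPL20 inscription is counted by the index only if the hash of
    its off-chain image AND its on-chain embedded text both match."""
    return (hashlib.sha256(image_bytes).hexdigest() == index_hash
            and embedded_text == index_text)

# Hypothetical inscription: image stored on IPFS, text stored on-chain.
img = b"<png bytes fetched from IPFS>"
expected_hash = hashlib.sha256(img).hexdigest()
text = '{"p":"spl-20","op":"mint","tick":"sols"}'

assert is_valid_spl20(img, text, expected_hash, text)
assert not is_valid_spl20(img, "wrong text", expected_hash, text)
```

This split between off-chain images and on-chain text is what makes the indexer the ultimate arbiter of supply, much as with BRC20 on Bitcoin.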
In the secondary market it reached a peak price of 14 SOL, with a floor price of 7.4 SOL (about $428). Daily trading volume exceeded 20,000 SOL, roughly $1.2 million, with active turnover.

Core Comparison

Comparison of Core Protocols

Comparison of mainstream inscription protocols, image source: Kernel Ventures

The chart compares the major inscription protocols along four dimensions: fees, divisibility, scalability, and user base.

Fees: The RGB protocol stands out with the best fee rate, leveraging the Lightning Network for virtually zero-cost transactions.
Divisibility: Both Solana inscriptions and the recent EVM protocols lack divisibility, with development expected in this area.
Scalability: RGB's smart contract functionality provides significant scalability. Solana's is still under discussion, but the team and the Solana Foundation have expressed support, suggesting it will not be lacking.
User Base: EVM chains, with their naturally low gas costs, attract the largest user base thanks to the low trial-and-error cost. BRC20, as the first inscription token standard and the most "orthodox," has accumulated a substantial user base.

Comparison of Protocol Token Data

Protocol token comparison, image source: Kernel Ventures

Looking at the mainstream tokens of the various protocols, their combined market capitalization is currently around $600 million, excluding smaller-cap tokens, and Ordi alone accounts for 80% of that total, indicating significant room for other protocols to develop. Notably, protocols like RGB are still being refined and have not issued tokens. By holder count, Pols and Ordi dominate while other protocols have far fewer holders; Eths and Solana inscriptions have not been split, so a full analysis of their holder distribution awaits further developments.
Innovations and Risk Analysis

Currently, the primary use of inscriptions is the Fair Launch, giving users equitable access to participate in projects, but the space is not limited to fair launches. Recent developments have shown significant dynamism and innovation, driven largely by key technological advances in Bitcoin such as SegWit, Bech32 encoding, the Taproot upgrade, and Schnorr signatures. These not only improve the transaction efficiency and scalability of the Bitcoin network but also increase its programmability. The RGB protocol, for instance, builds smart contracts on Bitcoin's Lightning Network, offering both extremely high throughput (a claimed 40 million TPS) and the backing of the largest blockchain ecosystem, Bitcoin.

Regarding risks, caution is advised, particularly with some launchpads. Following the success of MUBI and TURT, launchpads have proliferated, and as the recent rug project Ordstater showed, some platforms may execute a rug pull directly after the Initial DEX Offering (IDO). Before engaging with any project, read the whitepaper thoroughly, research the team's background, and avoid blindly following KOLs out of FOMO.

Future Deduction of the Inscription Ecosystem

Market Deduction

Galaxy Research and Mining predicted that by 2025 the market value of the Ordinals market would reach $5 billion, estimating only 260,000 inscriptions by then. The number of inscriptions has already reached 33 million, a 126-fold increase in just six months; the market capitalization of $Ordi has reached $400 million, and $Sats $300 million. This suggests the predictions significantly underestimated the entire inscription market.

Product Deduction

Currently, BRC20 trading activity is concentrated on OKX and Unisat.
The Web3 wallet promoted by OKX this year provides a favorable experience for trading BRC20 assets, and the maturing wallet-side infrastructure further smooths and shortens the entry path for retail investors into this new market. As new protocols have emerged, each has introduced its own trading markets and wallets: Atomicals, Dogechain, Litecoin, and more. However, the wallets currently on the market are all modifications of Unisat, built on its open-source foundation. Comparing Bitcoin (PoW) with Ethereum, the various protocols can be likened to different chains, with the fundamental difference lying in the Chain ID. Future products might therefore see Unisat integrate multiple protocols, letting users switch between them within the wallet as needed, much like the chain-switching functionality in wallets such as Metamask.

Comparison of wallets across protocols, image source: Kernel Ventures

Track Deduction

With funds continuously flowing into the inscription market, users are no longer satisfied with meme-driven speculation and are shifting their focus to applications built on inscriptions. Unisat has brought innovation to BRC20 with BRC20-Swap, letting users exchange BRC20 tokens much as on an AMM DEX. As the first product to enhance liquidity in the Ordinals ecosystem, Unisat is poised to unlock the potential of Bitcoin DeFi, potentially leading to further features such as lending and derivatives. Unisat has also opened its API, which is friendly to small developers, enabling functions such as automated batch order scanning and monitoring inscriptions for automatic minting. This can give rise to numerous utility projects.
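BRC20-Swap follows the familiar AMM model, so the pricing it implies can be sketched with the standard constant-product formula. This is a generic x·y = k illustration with an assumed 0.3% fee, not Unisat's actual pool parameters:

```python
def amm_swap_out(x_reserve: float, y_reserve: float,
                 dx: float, fee: float = 0.003) -> float:
    """Constant-product AMM: given dx of token X paid in, return the
    amount of token Y paid out, keeping x * y = k after fees."""
    dx_after_fee = dx * (1 - fee)
    k = x_reserve * y_reserve
    new_x = x_reserve + dx_after_fee
    return y_reserve - k / new_x

# Hypothetical pool of 10,000 ORDI against 5 BTC:
out = amm_swap_out(10_000, 5, 100)
print(f"{out:.6f} BTC for 100 ORDI")  # slightly less than the spot 0.05 BTC
```

Note how the output is a little below the spot price (100 × 5/10,000 = 0.05 BTC) because of the fee and the price impact of moving along the curve; this slippage is inherent to the AMM design, on Bitcoin just as on Ethereum.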
While transaction fees on the Bitcoin network are relatively high, Layer 2s like Stacks and RIF have lower fees but lack a user base and sufficient infrastructure. This makes a "Bitcoin EVM" a compelling narrative. BEVM, for example, is an EVM-based project providing a Bitcoin-ecosystem Layer 2 whose native on-chain token is BTC: users can use the official cross-chain bridge to move Bitcoin from the mainnet to BEVM. BEVM's EVM compatibility makes it easy to build the kinds of applications found on EVM chains, with low barriers for DeFi, swaps, and more to migrate from other chains.

However, several issues deserve consideration with a Bitcoin EVM: whether the bridged assets can remain decentralized and immutable, how the EVM chain's nodes reach consensus, and how transactions are synchronized back to the Bitcoin network (or to decentralized storage). Since the threshold for launching an Ethereum-style Layer 2 is relatively low, security may be compromised, making it the primary concern for anyone interested in a Bitcoin EVM at the moment.

Image source: BEVM Bridge

Summary

This article has delved into the development trends in the Bitcoin inscription domain and the characteristics of the various protocols. By analyzing protocols such as Ordinals (BRC20), Atomicals, RGB, and Pipe on the Bitcoin chain, and comparing them with other PoW chains, Ethereum's Ethscriptions and Evm.ink, and Solana's SPL20 protocol, it has explored the differences in fees, divisibility, scalability, and user bases. On the market side, starting from the Ordinals protocol, the wave of inscription protocols like BRC20 has been dubbed the "world of retail investors"; data such as Bitcoin block fees and the number of Ordinals inscriptions minted provide insight into the ecosystem's development trends.
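The bridge model described above, locking BTC on the mainnet and minting an equivalent balance on the EVM chain, can be sketched as a simple ledger with a peg invariant. This is a conceptual lock-and-mint illustration, not BEVM's actual bridge contract:

```python
class LockAndMintBridge:
    """Conceptual BTC -> EVM-chain bridge: locked mainnet BTC must always
    equal the total supply minted on the Layer 2 (the peg invariant)."""

    def __init__(self):
        self.locked_btc = 0.0   # BTC held at the bridge's mainnet address
        self.l2_balances = {}   # address -> bridged BTC balance on the L2

    def deposit(self, l2_addr: str, amount: float) -> None:
        """User locks BTC on mainnet; bridge mints the same amount on L2."""
        self.locked_btc += amount
        self.l2_balances[l2_addr] = self.l2_balances.get(l2_addr, 0.0) + amount

    def withdraw(self, l2_addr: str, amount: float) -> None:
        """User burns L2 BTC; bridge releases the mainnet BTC."""
        if self.l2_balances.get(l2_addr, 0.0) < amount:
            raise ValueError("insufficient bridged balance")
        self.l2_balances[l2_addr] -= amount
        self.locked_btc -= amount

    def peg_intact(self) -> bool:
        return abs(self.locked_btc - sum(self.l2_balances.values())) < 1e-12

bridge = LockAndMintBridge()
bridge.deposit("0xabc", 1.5)
bridge.withdraw("0xabc", 0.5)
assert bridge.peg_intact() and bridge.locked_btc == 1.0
```

The decentralization questions raised above amount to asking who controls this ledger: if a small multisig can move `locked_btc` without a corresponding burn, the peg invariant, and users' funds, depend entirely on trusting that operator.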
In the track analysis, the core elements of the mainstream inscription protocols (fees, divisibility, scalability, and user numbers) were compared to show their similarities and differences, and the comparison of protocol token data provided a comprehensive picture of market value and holder distribution across the mainstream protocols. The innovation and risk analysis highlighted the vitality of the inscription domain alongside its hazards.

Looking ahead, the inscription domain is expected to see continued technological innovation driving the practical application of more complex functionality. The market's robust development should sustain steady growth, providing more opportunities for investors and participants, while more creative projects and protocols emerge to further enrich the inscription ecosystems of Bitcoin and other public chains. Miners' earnings may also increase as inscriptions offer them new sources of income.

Reference Links

Bitcoin block fee rates (3 years): https://mempool.space/zh/graphs/mining/block-fee-rates#3y
ESIP-4: The Ethscriptions Virtual Machine: https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machine
A comprehensive scan of the inscriptions industry: https://www.theblockbeats.info/news/47753?search=1
Litecoin block fee rates (1 year): https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y

The New Narrative of Inscription — Under the Support of Different Ecosystems

Author: Kernel Ventures Stanley
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua

TLDR:
This article delves into the development trends of Bitcoin inscription and the characteristics of various protocols.
Analyzing protocols on the Bitcoin chain such as Ordinals, BRC20, Atomical, RGB, Pipe, comparing them with other PoW chains like Dogechain and Litecoin, as well as Ethereum chains Ethscriptions and Evm.ink, and Solana chain's SPL20 protocol. The comparison includes aspects such as fees, divisibility, scalability, and user considerations, with particular emphasis on the low fees and high scalability of the RGB protocol.Examining market and product projection for the inscription ecosystem, highlighting the completeness of infrastructure on the wallet side, the launch of Bitcoin chain AMM DEX, and the potential for additional functionalities in the future, such as lending and derivatives. Unisat's open API interface opens the door to numerous tool projects.
In conclusion, this article provides a comprehensive exploration of the dynamics in the field of Bitcoin inscription, offering insights into the future development of inscription empowered by the ecosystem, providing readers with a thorough understanding and outlook.
Inscription Market Background
Market Overview
Since the introduction of the Bitcoin Ordinals protocol in January 2023, a wave of enthusiasm has swept through the Bitcoin chain with protocols like BRC20 and Ordinals assets, often referred to as the "world of retail investors." This is attributed to the Fair Launch model of scripts like BRC20, where chips are entirely minted by individual retail investors, devoid of institutions, project teams, or insider trading. The minting cost for Ordi is approximately $1 per inscription, but after its listing on the Gate.io exchange, the price surged to $20,000 per inscription. The staggering increase in value fueled the continued popularity of the BRC20 protocol, drawing in numerous Ordinals players and leading to a continuous spike in Gas fees on the Bitcoin chain. At its peak, the minimum confirmation Gas even reached 400 s/vb, surpassing the highest Gas levels in the past three years.
Using this as a starting point, this article will delve into the exploration of the script ecosystem on various chains, discussing the current state of various protocols and anticipating the developmental trends of scripts under the empowerment of the ecosystem.
Data Overview
The 3-year Bitcoin block-fee-rate chart vividly illustrates sharp spikes in fees during May-June and November of this year. This surge reflects the fervor of users towards script protocols, not just limited to the BRC20 protocol. Various protocols developed on the Bitcoin network were introduced during this period, sparking a wave known as "Bitcoin Summer."

Bitcoin rate in the past three years, image source: Mempool.space
From the casting data of Inscriptions, it is evident that the casting quantity has stabilized, consistently maintaining high levels.

Ordinals inscription casting quantity, image source: Dune @dgtl_asserts
Track analysis
This article will categorize various chains and analyze the script protocols on each of them.
Bitcoin Chain
Ordinals / BRC20
On January 21, 2023, Bitcoin developer Casey Rodarmor introduced the Ordinals protocol, allowing metadata to be inscribed on the Bitcoin chain and assigned a script number. In March of the same year, Twitter user @domodata released the BRC20 protocol, evolving token minting into on-chain strings. On November 7, Binance listed the BRC20 flagship token $ORDI, triggering a significant surge with a nearly 100% daily increase.
As the first protocol in the inscription ecosystem, Ordinals has encountered several issues:
BRC20 supports only four-letter tokens, imposing significant limitations.The casting names are susceptible to Sybil attacks, making casting transactions prone to frontrunning.The Ordinals protocol results in substantial redundant data on the Bitcoin network.
For example, after the BRC20 token minted out, the original inscriptions will become invalid once token transactions are sent. This causes significant data occupation, a reason why some early Bitcoin enthusiasts are reluctant to support Ordinals.
Atomicals
The Atomical protocol's ARC20 utilizes one satoshi to represent the deployed token and eliminates the four-character restriction, allowing for more diverse gameplay. A unique project within this framework is the "Realm", where each registered entity is a prefix text and ultimately holds pricing rights for all suffixes. In terms of basic functionality, the Realm can be used as a transfer and receipt address (payment name), and also it has various use cases such as building communities/DAOs, identity verification, social profiles, aligning seamlessly with our envisioned development of DID.

However, both ARC20 and $ATOM are still in the very early stages, and further development is required, including improvements in wallets and markets.

Realm casting quantity, image source: Dune @sankin
Pipe
Casey, the founder of Ordinals, proposed a specific inscription implementation called Rune designed for issuing FT (fungible tokens). This method allows the direct insertion of token data into the UTXO script, encompassing the token's ID, output, and quantity. Rune's implementation is very similar to ARC20, handing over token transfers directly to the BTC mainnet. The distinction lies in Rune including the token quantity in the script data.
While Rune's concept is still in the ideation stage, the founder of #Trac developed the first functional protocol based on this idea, issuing PIPE tokens. Leveraging Casey's high profile, PIPE quickly gained momentum, capitalizing on the speculative fervor inherited from BRC20. Rune's legitimacy is relatively stronger compared to BRC20, but gaining acceptance within the BTC community remains challenging.
RGB

Lightning Network Capacity, Image Source: Mempool.space
With the Ordinals protocol elevating the ecosystem of the Bitcoin network, an increasing number of developers and projects are turning their attention to the Lightning Network due to its extremely low transaction fees and 40 million TPS (transactions per second).
RGB is an intelligent contract system based on BTC and the Lightning Network, representing a more ultimate scaling solution. However, progress has been slow due to its complexity. RGB transforms the state of a smart contract into a concise proof, engraving this proof into the BTC UTXO output script. Users can verify this UTXO to inspect the state of the smart contract. When the smart contract state is updated, a new UTXO is created to store the proof of this state change.
All smart contract data is entirely on the BTC chain, operated by dedicated RGB nodes that record the complete data of the smart contract and handle the computational workload of transactions. Users verify the deterministic changes in contract status by scanning the entire UTXO of the BTC chain.
RGB can be viewed as BTC's Layer 2. This design leverages BTC's security to guarantee smart contracts. However, as the number of smart contracts increases, the demand for UTXO encapsulation data will also inevitably lead to significant redundancy in the BTC blockchain.
Since 2018, RGB has remained in the development stage without speculative content. Tether's issuing company, Tether Limited, is a significant supporter of RGB, aiming to issue a large amount of USDT on the BTC RGB.
In terms of products, the mainstream wallet currently in use is Bitmask, which supports Bitcoin and Lightning Network deposits, as well as assets of RGB-20 and RGB-21. Bitlight Labs is also developing the RGB network, with plans to build its own wallet system and write smart contracts for DEX (decentralized exchange). The project has acquired BitSwap (bitswap-bifi.github.io) and is preparing to integrate it into the RGB network.
RGB's biggest advantages lie in its low transaction fees and extremely high scalability. There was a time when smart contract development on the Bitcoin network was difficult and received little attention. However, with the Ordinals protocol raising the ecosystem's popularity, more developers are experimenting with smart contracts on the RGB network. These smart contracts are written in the Rust language, incompatible with Ethereum, leading to a higher learning curve and requiring further evaluation in terms of technology.
For more information on the technical aspects of the RGB protocol, Kernel Ventures’ previous articles have introduced it in detail. Article link: https://tokeninsight.com/en/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype
Other POW Chain
During the heyday of inscriptions on the Bitcoin chain, as other PoW chains share the same origin and are also based on the UTXO spending model, Ordinals has been migrated to some leading PoW public chains. In this article, we will analyze the examples of Dogechain and Litecoin, which have high market acceptance and development completeness.
Dogechain:
The Drc-20 protocol on the Dogecoin chain is based on Ordinals and functions similarly to the Bitcoin chain. However, due to its low transaction fees and strong meme appeal, it has gained popularity.
Litecoin:
Similarly, the Ltc-20 protocol on the Litecoin chain is based on Ordinals. This protocol has received retweets and attention from the Litecoin official team and its founder, Charlie Lee. It can be considered as having a "noble pedigree." The trading markets Unilit and Litescribe, along with the wallet Litescribe, show a relatively high level of development completeness. The first token, $Lite, is already listed on the Gate exchange.
However, there were issues with the protocol before the index was introduced. After the index was launched, a bug causing increased issuance emerged, but it has since been fixed and is worth keeping an eye on. From the graph, it is evident that after the introduction of the LTC20 protocol, gas fees on the Litecoin chain surged.

Image source: Twitter @SatoshiLite

Litecoin rate in the past year, image source: Litecoinspace
Ethereum Chain
Ethscriptions
As of now, the trading platform Etch on the Ethscriptions protocol has achieved a transaction volume of 10,500 ETH. The floor price of the first token, Eths, is $4,300. For those who stayed in from the beginning and did not exit, the initial investment cost on June 18th was less than 1U. Those who held on have now gained returns of over 6,000 times their initial investment.

Eths transaction data, image source: ETCH Market
Tom Lehman proposed a novel Ethereum scaling solution on August 8th. Employing a technology similar to Ordinals, leveraging Calldata expansion, this solution aims to achieve cost-effectiveness in Ethereum mainnet gas fees and enhance the dimensionality of ecosystem applications.
At the core of Eths is the Ethscriptions Virtual Machine (ESC VM), which can be likened to the Ethereum Virtual Machine (EVM). The "Dumb Contracts" within the ESC VM free Eths from being mere NFT-speculation inscriptions and give it real functionality and practicality, officially entering the competition among base layers and L2 solutions.

Dumb Contracts running logic, picture source: Ethscriptions ESIP-4 proposal
"Eths represents another approach to Ethereum Layer 2. Unlike typical Layer 2 solutions that are separate chains and may have a backdoor, Eths conducts transactions on the Ethereum mainnet with gas fees as affordable as those on Layer 2. It enables various activities such as swapping, DeFi, and GameFi on the Eths platform. The key aspect is that it operates on the mainnet, making it secure and more decentralized than Layer 2," as excerpted from the Eths community.
However, articulating this new Layer 2 narrative is challenging. First, token splitting is still under development: current inscriptions remain non-fungible tokens (NFTs) that cannot be split into fungible tokens (FTs).
As of the latest information available, FacetSwap (https://facetswap.com/) has introduced a splitting feature, though mainstream trading markets do not yet support split inscriptions; users can wait for future adaptations. Currently, split inscriptions can be used for operations such as swapping and adding liquidity on FacetSwap. All operations are resolved through a virtual (non-existent) address, 0x000...Face7: users embed a message in the IDM and send the message's hexadecimal data to the address ending in Face7 to perform operations such as approve and transfer. As this is still early days, its development trajectory remains to be observed.
Other EVM Chains
Evm.ink
Evm.ink has migrated the Ethscriptions protocol standard to other EVM-compatible chains, enabling those chains to mint inscriptions as well, and it builds the indexes for them. Recently popular projects such as POLS and AVAL use Evm.ink, which is essentially the Ethscriptions standard, for index recognition.

POLS casting data, image source: Dune @satsx

AVAL casting data, image source: Dune @helium_1990
POLS and AVAL both have a total supply of 21 million inscriptions; POLS has over 80,000 holders and AVAL more than 23,000, and both minted out in roughly 2-3 days. This indicates significant community interest in low-cost Layer 2 inscriptions, whose low cost offers a high return on investment, so long-tail users from the BTC and ETH chains are spilling over. The trend is not limited to these two chains: Heco, Fantom, and others have also seen gas-fee surges, all related to inscriptions.

Number of daily transactions on the EVM chain, image source: Kernel Ventures
Solana
SPL20
Solana inscriptions commenced at 4 AM on November 17th and finished minting by 8 AM, with a total supply of 21,000. Unlike other networks, the main body of the inscription is an NFT, and the Index Content is the actual inscription. NFTs can be created on any platform; the index decides inclusion based on the hash of the image or file together with the embedded text, and only inscriptions whose hash and embedded text both match are counted as valid. The image is off-chain data and the text is on-chain data; most proxy minting platforms currently store images on IPFS, while others use Arweave (AR).
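The two-part validity check described above can be sketched as follows; the exact hash algorithm and index field names are not confirmed here, so SHA-256 and the names below are assumptions for illustration.

```python
import hashlib

def is_valid_inscription(file_bytes: bytes, embedded_text: str,
                         index_hash: str, index_text: str) -> bool:
    """An inscription counts only if BOTH the file hash and the
    embedded text match what the index has recorded."""
    file_ok = hashlib.sha256(file_bytes).hexdigest() == index_hash
    text_ok = embedded_text == index_text
    return file_ok and text_ok

image = b"<png bytes>"  # off-chain data, e.g. pinned on IPFS or Arweave
text = '{"p":"spl-20","op":"mint","tick":"sols","amt":"1"}'  # on-chain data
expected = hashlib.sha256(image).hexdigest()
assert is_valid_inscription(image, text, expected, text)
assert not is_valid_inscription(b"tampered", text, expected, text)
```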
Solana inscriptions share a significant limitation with Eths: they cannot be split. Without splitting, they essentially function as NFTs, lacking token-grade liquidity and operational convenience, let alone the vision of future DEX swaps.
The protocol's founder is also the founder of TapPunk on the Tap protocol. The team behind the largest proxy minting platform, Libreplex (https://www.libreplex.io/), is very proactive: since launch it has made rapid progress, completing features such as hash indexing and changing inscription attributes (immutability), and it holds live coding and Q&A sessions on its official Discord. The trading market Tensor (https://www.tensor.trade/) has also been integrated, and development is moving swiftly.
The first inscription, $Sols, had a minting cost of approximately $5. On the secondary market it reached a peak price of 14 SOL, with a floor price of 7.4 SOL (about $428) as of writing, and daily trading volume exceeding 20,000 SOL (about $1.2 million), with active turnover.
Core comparison
Comparison of core protocols

Comparison of mainstream inscription protocols, Image source: Kernel Ventures
This chart compares several major inscription protocols based on four dimensions: fees, divisibility, scalability, and user base.
Fees: The RGB protocol stands out with the best fee rate, leveraging the Lightning Network for virtually zero-cost transactions.
Divisibility: Both the Solana and the recent EVM protocols lack divisibility, with development expected later.
Scalability: The RGB protocol's smart-contract functionality gives it significant scalability. Solana's scalability is still under discussion, but the team and the Solana Foundation have expressed support, suggesting it will not be poor.
User Base: EVM chains, with their naturally low gas costs, attract more users because the trial-and-error cost is lower. BRC20, the first inscription token and first in orthodoxy, has accumulated a substantial user base.
Comparison of protocol token data

Protocol Token Comparison, Image source: Kernel Ventures
Analyzing the mainstream tokens of the various protocols, their combined market capitalization is currently around $600 million, excluding smaller-cap tokens. Ordi alone accounts for 80% of that total, which leaves significant room for other protocols to develop; notably, protocols like RGB are still being refined and have not yet issued tokens.
In terms of the number of holders, Pols and Ordi dominate, while other protocols have fewer holders. Eths and Solana inscriptions have not been split, so a comprehensive analysis of holder distribution is pending further developments.
Innovations and risk analysis
Currently, the primary use of inscriptions is Fair Launch, allowing users to fairly access opportunities to participate in projects. However, the development of the inscription space is not limited to fair launches.
Recent developments in the inscription space have shown significant dynamism and innovation. The growth of this sector is largely attributed to key technological advancements in Bitcoin, such as SegWit, Bech32 encoding, Taproot upgrade, and Schnorr signatures. These technologies not only enhance the transaction efficiency and scalability of the Bitcoin network but also increase its programmability.
For instance, smart contracts built on Bitcoin's Lightning Network under the RGB protocol would not only offer extremely high throughput (a claimed 40 million TPS) but also sit within Bitcoin, the largest blockchain ecosystem.
Regarding risks, caution is advised with some launchpads, as in the recent rug pull of Ordstater. Following the success of MUBI and TURT, launchpads have proliferated, and some platforms execute a rug pull right after the Initial DEX Offering (IDO). Before engaging in any project, read the whitepaper thoroughly, research the team's background, and avoid blindly following KOLs out of FOMO.
Future projections for the inscription ecosystem
Market projection
Galaxy Research and Mining predicted that by 2025 the market value of the Ordinals market would reach $5 billion; when that prediction was made, there were only about 260,000 inscriptions. The number has since reached 33 million, a 126-fold increase in just six months, while the market capitalization of $Ordi has reached $400 million and that of $Sats $300 million. This suggests the prediction significantly underestimated the entire inscription market.
Product projection
Currently, BRC20 trading activity is concentrated on OKX and Unisat. The Web3 wallet OKX promoted this year provides a good experience for trading BRC20 assets, and this more complete wallet-side infrastructure further smooths and shortens the entry path for retail investors into the new market. As various protocols have emerged, each has introduced its own trading markets and wallets, such as those for Atomicals, Dogechain, and Litecoin. However, the wallets currently on the market are all modified versions of Unisat, built on its open-source codebase.
Comparing Bitcoin (PoW) with Ethereum, one can liken the various protocols to different chains, with the fundamental difference lying only in the Chain ID. Future products might therefore involve Unisat integrating different protocols, letting users switch between them within the wallet as needed, similar to the chain-switching functionality in wallets like MetaMask.

Comparison of wallets across protocols, Image source: Kernel Ventures
Sector projection
With funds continuously flowing into the inscription market, users are no longer satisfied with meme-driven speculation and are shifting their focus to applications built on inscriptions. Unisat has brought innovation to BRC20 with BRC20-Swap, which lets users exchange BRC20 tokens as easily as on an AMM DEX. As the first product to improve liquidity in the Ordinals ecosystem, it is poised to unlock the potential of Bitcoin DeFi, and more features such as lending and derivatives may follow. Unisat has also opened API interfaces, which is friendly to small developers: functions such as automated batch order scanning, or monitoring inscriptions and minting automatically, can support numerous utility projects.
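As a sketch of the utility tooling such APIs enable, the snippet below shows only the decision logic of a hypothetical auto-minting bot. UniSat's actual endpoints and response schema are deliberately not reproduced; `fetch_status` stands in for whatever indexer call supplies supply and fee data.

```python
def should_mint(minted: int, max_supply: int,
                fee_rate: float, max_fee_rate: float) -> bool:
    """Mint only while supply remains and fees stay under our ceiling."""
    return minted < max_supply and fee_rate <= max_fee_rate

def run_once(fetch_status, mint, max_fee_rate: float = 60.0) -> bool:
    """One polling step. fetch_status() returns a dict with 'minted',
    'max', and 'fee_rate'; mint() submits the mint transaction."""
    s = fetch_status()
    if should_mint(s["minted"], s["max"], s["fee_rate"], max_fee_rate):
        mint()
        return True
    return False

# Figures below are made up for illustration.
assert should_mint(18_000_000, 21_000_000, 45.0, 60.0)
assert not should_mint(21_000_000, 21_000_000, 45.0, 60.0)  # minted out
assert not should_mint(18_000_000, 21_000_000, 90.0, 60.0)  # fees too high
```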
While transaction fees on the Bitcoin network are relatively high, Layer2s like Stacks and RIF, despite lower fees, lack a user base and sufficient infrastructure. This makes a Bitcoin EVM a compelling narrative. BEVM, for example, is an EVM-based Bitcoin-ecosystem Layer2 whose on-chain native token is BTC. Users can move Bitcoin from the mainnet to BEVM through the official cross-chain bridge, and BEVM's EVM compatibility keeps the barrier to building applications low, so DeFi, swaps, and more can easily migrate from other chains.
However, a Bitcoin EVM raises several issues: whether bridged assets can remain decentralized and non-inflatable, how the EVM chain's sequencer nodes reach consensus, and how transactions are synchronized back to the Bitcoin network (or to decentralized storage). Since the threshold for launching an Ethereum-style Layer 2 is relatively low, security may be compromised, which should be the primary concern for anyone interested in Bitcoin EVMs at the moment.

Image source: BEVM Bridge
Summary
This article delves into the development trends of the Bitcoin inscription domain and the characteristics of its various protocols. By analyzing protocols on the Bitcoin chain such as Ordinals (BRC20), Atomicals, RGB, and Pipe, and comparing them with other PoW chains, Ethereum's Ethscriptions and Evm.ink, and Solana's SPL20 protocol, the differences in fees, divisibility, scalability, and user base are explored.
In the context of the inscription market, starting with the Ordinals protocol, a wave of inscription protocols like BRC20 has been referred to as the "world of retail investors." The analysis includes an overview of data such as Bitcoin block fees and the number of inscriptions forged by Ordinals, providing insights into the development trends in the inscription ecosystem.
In the sector analysis, the core elements of mainstream inscription protocols, such as fees, divisibility, scalability, and user numbers, are compared to show their similarities and differences. Finally, through the protocol token data and core protocol comparisons, a comprehensive analysis of market value and user distribution across mainstream protocols is provided. The conclusion emphasizes innovation points and risk analysis, highlighting the vitality and innovation within the inscription domain.
Looking ahead, the inscription domain is expected to witness continuous technological innovation, driving the practical application of more complex functionalities. The market's robust development is anticipated to maintain steady growth, providing more opportunities for investors and participants. Meanwhile, it is expected that more creative projects and protocols will emerge, further enriching the inscription ecosystems of Bitcoin and other public chains. Miners' earnings may also increase as the inscription domain offers them new income opportunities.
Reference links
Bitcoin block-fee-rates (3 years): https://mempool.space/zh/graphs/mining/block-fee-rates#3y
ESIP-4: The Ethscriptions Virtual Machine: https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machine
A comprehensive scan of the inscriptions industry: https://www.theblockbeats.info/news/47753?search=1
Litecoin block-fee-rates (1 year): https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y
Kernel Ventures: The New Inscription Narrative — Can Ecosystem-Empowered Inscriptions Carve Out a New Track?
Author: Kernel Ventures Stanley
Reviewer(s): Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
This article explores development trends in the Bitcoin inscription space and the characteristics of its various protocols.
It compares protocols on the Bitcoin chain such as Ordinals, BRC20, Atomicals, RGB, and Pipe with other PoW chains such as Dogechain and Litecoin, as well as Ethereum's Ethscriptions and Evm.ink and Solana's SPL20 protocol, across fees, divisibility, scalability, and user base, highlighting in particular the RGB protocol's low fees and high scalability. It also offers market and product projections for the inscription ecosystem: with wallet-side infrastructure maturing and AMM DEXs launching on the Bitcoin chain, more features such as lending and derivatives may emerge, and UniSat's open API can give rise to a large number of utility projects.
Overall, the article offers a detailed look at the dynamics of the Bitcoin inscription space and an outlook on how inscriptions may develop with ecosystem support.
Background of the Inscription Market
Market Background
Since the Bitcoin Ordinals protocol appeared in January 2023, BRC20 and Ordinals assets have set off a wave on the Bitcoin chain that some call "the world of retail investors": because inscriptions such as BRC20 follow a Fair Launch model, all tokens are minted by retail users themselves, with no institutions, no project team, and no insider allocations. $Ordi, for instance, cost about $1 per mint and rose to $20,000 after listing on the Gate exchange. This dramatic rise kept fueling the BRC20 protocol's popularity and drew many Ordinals players into BRC20, pushing Bitcoin gas fees steadily higher; at the peak, the minimum confirmation fee reached 400 sat/vB, the highest in three years.
Starting from this point, this article examines the inscription ecosystems of various protocols across chains and looks ahead to how inscriptions may develop with ecosystem support.
Data Overview
The three-year Bitcoin block fee-rate chart shows fees spiking in May-June and again in November of this year, reflecting users' enthusiasm for inscription protocols. It was not only BRC20: various protocols built on the Bitcoin network launched in this period, bringing a wave dubbed the "Bitcoin Summer."

Bitcoin fee rates over the past three years, image source: Mempool.space
Inscription minting data show that the number of mints has stabilized at a relatively high level.

Number of Ordinals inscriptions minted, image source: Dune @dgtl_asserts
Sector Analysis
This article analyzes inscription protocols chain by chain.
Bitcoin Chain
Ordinals / BRC20 Protocol
On January 21, 2023, Bitcoin developer Casey Rodarmor launched the Ordinals protocol, which inscribes metadata onto the Bitcoin chain and assigns each inscription a number. In March of the same year, Twitter user @domodata released the BRC20 protocol, which turns token minting into putting strings on-chain. On November 7, Binance listed the leading BRC20 token $ORDI, driving it up nearly 100% in a single day.
As the first protocol of the inscription ecosystem, Ordinals also has many problems:
BRC20 only supports four-character tickers, which is quite limiting
Mint names can be sniped, and minting transactions are easily front-run
The Ordinals protocol creates a great deal of redundant data on the Bitcoin network
For example, after a BRC20 mint concludes, tokens are sent via transfer inscriptions, and the original mint inscriptions become invalid, taking up a great deal of space. This is one reason some early Bitcoin purists did not want to support Ordinals.
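Concretely, a BRC20 token's entire lifecycle consists of three kinds of JSON text inscribed on-chain, from which indexers derive balances. The values below mirror the well-known $ordi deployment parameters:

```python
import json

deploy = {"p": "brc-20", "op": "deploy", "tick": "ordi",
          "max": "21000000", "lim": "1000"}   # create the ticker
mint = {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"}
transfer = {"p": "brc-20", "op": "transfer", "tick": "ordi", "amt": "500"}

# Each operation is inscribed as plain JSON text. Once a transfer
# inscription is spent, the earlier mint inscription is dead weight;
# this is the redundancy problem described above.
for op in (deploy, mint, transfer):
    payload = json.dumps(op)
    assert json.loads(payload)["tick"] == "ordi"
    assert len(op["tick"]) == 4   # the four-character ticker limitation
```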
Atomicals Protocol
The Atomicals protocol's ARC20 uses a single satoshi to represent each deployed token and removes the four-character limit, allowing more varied designs. A unique project is Realm: each registered realm is a prefix string, ultimately giving its holder pricing power over all suffixes under it. As a basic function, a realm can serve as a receiving address (a payment name); in extended usage, it covers scenarios such as community or DAO building, identity verification, and social profiles, fitting well with visions for the development of DID.
However, ARC20 and $ATOM are still very early, and their wallets and marketplaces have yet to mature.

Number of Realms minted, image source: Dune @sankin
Pipe Protocol
Ordinals founder Casey proposed Rune, an inscription scheme designed specifically for issuing FTs: token data, including the token's ID, output, and amount, is written directly into a UTXO's script. Rune's implementation is very similar to ARC20, handing token transfers directly to the BTC mainnet; the difference is that Rune writes the token amount into the script data.
Rune was only a concept; the founder of #Trac wrote the first usable protocol based on it and issued PIPE. Thanks to Casey's prominence, PIPE caught the speculative enthusiasm carried over from BRC20 and quickly completed its first wave of hype. Rune's orthodoxy is stronger than BRC20's, but acceptance by the BTC community remains difficult.
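The Rune idea of writing the token's ID, output index, and amount directly into script data can be sketched as a simple serialization. This byte layout is illustrative only, not the actual Rune or PIPE wire format:

```python
def encode_rune(token_id: int, output: int, amount: int) -> bytes:
    """Pack (id, output index, amount) into bytes that would sit in a
    UTXO's script data, e.g. after an OP_RETURN."""
    return (b"R" + token_id.to_bytes(4, "big")
            + output.to_bytes(2, "big") + amount.to_bytes(8, "big"))

def decode_rune(payload: bytes):
    """Reverse of encode_rune: recover (id, output index, amount)."""
    assert payload[:1] == b"R", "not a rune payload"
    return (int.from_bytes(payload[1:5], "big"),
            int.from_bytes(payload[5:7], "big"),
            int.from_bytes(payload[7:15], "big"))

assert decode_rune(encode_rune(7, 1, 21_000_000)) == (7, 1, 21_000_000)
```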
RGB Protocol

Lightning Network capacity, image source: Mempool.space
As the Ordinals protocol raised the profile of the Bitcoin network's ecosystem, more and more developers and project teams turned their attention to the Lightning Network for its extremely low fees and claimed 40 million TPS.
RGB is a smart-contract system built on BTC and the Lightning Network, arguably the more ultimate scaling approach, though its complexity has made progress slow. RGB condenses a smart contract's state into a short proof and inscribes that proof into the output script of a BTC UTXO. Users can check the contract's state by verifying this UTXO; each state update creates a new UTXO storing the proof of that state change.
All contract data live entirely off the BTC chain and are run by dedicated RGB nodes, which record the contracts' full data and handle the transaction computation. Users verify the finality of contract state changes by scanning the UTXOs of the entire BTC chain.
RGB can be viewed as an L2 of BTC. The benefit of this design is that BTC's security guarantees the smart contracts, but as the number of contracts grows, so does the demand for encapsulating data in UTXOs, which will inevitably create substantial redundancy on the BTC blockchain.
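A minimal sketch of this proof chain, assuming a plain SHA-256 commitment (real RGB uses more elaborate client-side validation and cryptographic commitments):

```python
import hashlib
import json

def commit(prev_commitment: str, transition: dict) -> str:
    """Hash the previous commitment together with a serialized state
    transition; the digest is what would be embedded in the output
    script of a new Bitcoin UTXO."""
    payload = prev_commitment + json.dumps(transition, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Illustrative contract history: an issuance followed by a transfer.
genesis = commit("", {"op": "issue", "asset": "USDT-RGB", "amount": 1000})
step2 = commit(genesis, {"op": "transfer", "to": "utxo:abcd:0", "amount": 250})

# A verifier replays the off-chain transitions and checks each digest
# against the commitment found in the corresponding UTXO.
assert commit("", {"op": "issue", "asset": "USDT-RGB", "amount": 1000}) == genesis
assert step2 != genesis
```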
RGB has been in development since 2018 with nothing to speculate on. Tether, the issuer of USDT, is a major backer of RGB and has long said it intends to reissue USDT at scale on BTC RGB.
On the product side, the mainstream wallet is currently Bitmask, which supports Bitcoin and Lightning Network deposits as well as RGB-20 and RGB-21 assets, and has the most users on the market. Bitlight Labs is also developing on the RGB network, aiming to build its own wallet system, write its own smart contracts, and run a DEX; it has already acquired BitSwap (bitswap-bifi.github.io) and plans to connect it to the RGB network.
The RGB protocol's greatest advantages are its low fees and very high scalability. Smart-contract development on the Bitcoin network was once difficult and ignored, but as Ordinals raised the ecosystem's profile, more developers will try smart contracts on the RGB network. These contracts are written in Rust and are not Ethereum-compatible, so the learning curve is steep and the technology remains to be evaluated.
For more on the technical side of the RGB protocol, see Kernel Ventures' earlier article: https://tokeninsight.com/zh/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype
Kernel Ventures: 铭文新叙事 — 在生态赋能下的铭文是否能跑出一条新赛道?作者:Kernel Ventures Stanley 审稿:Kernel Ventures Mandy, Kernel Ventures Joshua TLDR: 本文深入探讨了比特币铭文领域的发展趋势和各个协议的特点。 分析 Ordinals、BRC20、Atomical、RGB、Pipe 等比特币链上的协议与其他 Pow 链,如 Dogechain 和 Litecoin,以及以太坊链 Ethscriptions 和 Evm.ink、Solana 链 SPL20 协议,在费率、可拆分性、可扩展性、用户方面进行对比,尤其突出了 RGB 协议的低费率和高扩展性。对铭文生态进行市场与产品推演,钱包端的基础设施的完备,比特币链 AMM DEX 的推出,未来可能会出现更多功能比如借贷和衍生品。 UniSat 开放 API 接口,可以产生非常多的工具类项目。 总体而言,本文详实探讨了比特币铭文领域的动态,展望了在生态赋能下的铭文发展情况,为读者提供了全面的了解和展望。 铭文市场背景 市场背景 自2023年1月份,比特币 Ordinals 协议出现后,BRC20,Ordinals 资产等在比特币链上掀起一波浪潮,有人称之为“散户的天下”,因为 BRC20 等铭文的 Fair Launch 模式,筹码全部由散户自行铸造,没有机构,没有项目方,也没有老鼠仓。其中 Ordi 铸造成本约为1刀一张,上线 Gate 交易所之后,涨至20000刀一张,夸张的涨幅使得 BRC20 协议热度持续攀升,吸引了众多 Ordinals 玩家涌入 BRC20 之中,也造成比特币链上 Gas 持续走高,最高峰时的最低确认 Gas 甚至达到了400 s/vb,突破三年内的最高 Gas 。 本文将以此为开端,展开对于目前各个链上,各种协议的铭文生态进行探讨,并展望生态赋能下铭文的发展动向。 数据概览 3年期比特币区块费率图可以直观地看出,在今年5-6月和今年11月,费率直线飙升,这也体现了用户对于铭文协议的热情,不光是 BRC20 协议,各种基于比特币网络开发的协议均在此阶段发行,带来了一波 “Bitcoin Summer” 。 近三年比特币费率,图片来源:Mempool.space 从 Inscriptions 的铸造数据来看,铸造数量趋于稳定,保持在较高的数量。 Ordinals 铭文铸造数量,图片来源:Dune @dgtl_asserts 赛道解析 本文将以各条链为分类,对其链上的铭文协议进行分析。 比特币链 Ordinals / BRC20 协议 2023 年 1 月 21 日,比特币开发者 Casey Rodarmor 推出了 Ordinals 协议,该协议可以把元数据铭刻至比特币链上,并赋予其铭文编号。同年3月,推特用户 @domodata 发布 BRC20 协议,该协议把代币的铸造演化为字符串作的上链,11月7日,币安上线 BRC20 龙头 $ORDI ,推动 $ORDI 大幅上涨,单日涨幅接近100%。 作为铭文生态的第一个协议,Ordinals 也存在许多问题: BRC20 只支持四字代币,局限性较大铸造名称会被智子攻击,铸造交易容易被抢跑Ordinals 协议会对比特币网络造成极大冗余数据 举例说明:BRC20 的铸造铭文,在结束铸造之后,可通过转账铭文发送代币,原有铸造铭文则变为无效铭文,极大程度造成数据占用,这也是一些比特币早期极客不希望支持 Ordinals 的原因。 Atomical 原子协议 Atomical 协议的 ARC20 使用一个聪来代表被部署的代币,且取消了四字符的限制,玩法更多样化。独特项目:Realm 领域,Realm 每个注册的都是前缀的文本,最后拥有所有后缀的定价权。在基础功能上,领域可以用作转账收款地址(支付名);在拓展用法上,有构建社区 / DAO、身份验证、社交档案等多种使用场景,也完全契合我们对 DID 的发展构想。 但是,ARC20 和 $ATOM 还非常早期,还需要等待钱包、市场的完善。 Realm铸造数量,图片来源:Dune @sankin Pipe协议 Ordinals 创始人 Casey 提出过一种专门用于发行 FT 的铭文实现方式即 Rune,可直接在 UTXO 的脚本中写入 Token 数据,这包含了 Token 的 ID、输出与数量。Rune 的实现与 ARC20 非常相似,将 token 转账直接交给 BTC 主网处理。区别在于,Rune 在脚本数据中写入了 Token 数量。 Rune 的想法仅是个构想,#Trac 的创始人基于此编写了第一个可用协议,并发行了 PIPE。由于 Casey 
较高的知名度,PIPE承接了 BRC20 延续而来的炒作热情,快速地完成了第一波炒作。Rune 的正统性相较 BRC20 更强,但想要被 BTC 社区接受依然艰难。 RGB协议 闪电网络容量,图片来源:Mempool.space 随着 Ordinals 协议把比特币网络的生态抬高,越来越多的开发者和项目方关注到了闪电网络,因为其极为低廉的手续费和4000万的 TPS 。 RGB 是基于 BTC 和闪电网络的智能合约系统,属于是较为终极的扩容方式,但也因为其复杂程度而进展缓慢。RGB 将一个智能合约的状态转化为一个简短的证明,将证明刻入 BTC UTXO 的输出脚本中。用户可以通过验证这个 UTXO 来检查智能合约的状态。智能合约状态更新,就创建一个新的 UTXO 存储这个状态变更证明。 智能合约所有数据完全在 BTC 链下,由专用 RGB 节点运行,RGB 节点本身记录着智能合约的完整数据,并处理交易的计算量。用户通过扫描整条 BTC 链的 UTXO,来验证合约状态变化的确定性。 可以把 RGB 看作是 BTC 的 L2,这种设计的好处是利用了 BTC 的安全性来担保智能合约,但随着智能合约数量的增加,对 UTXO 封装数据的需求也会增加,最终也会不可避免地给 BTC 区块链造成大量冗余。 2018 年至今 RGB 仍处于开发阶段,没有可炒作的内容,USDT 的发行方泰达公司是 RGB 的重要推动者,他们一直说要把 USDT 重新大量地发行在 BTC RGB 上。 产品方面,目前主流钱包使用 Bitmask,该钱包支持比特币和闪电网络充值,支持 RGB-20、RGB-21 格式的资产,也是目前市面上用户最多的钱包。Bitlight Labs 目前也正在对 RGB 网络进行开发,该项目的目标是自己构建一套钱包体系,且自行编写智能合约,做 DEX 。目前该项目已收购 BitSwap(bitswap-bifi.github.io),准备将其接入 RGB 网络。 RGB 协议的最大优势在于其低廉的手续费以及极高的扩展性。曾几何时,比特币网络的智能合约开发困难,无人问津,但是随着 Ordinals 协议把比特币网络的生态热度抬高,越来越多的开发者会试用 RGB 网络上的智能合约,该智能合约为 Rust 语言编写,并不与以太坊兼容,所以学习成本较高,在技术层面有待后期评测。 关于 RGB 协议在技术层面的更多信息,Kernel Ventures 往期的文章已做过详细介绍,文章链接:https://tokeninsight.com/zh/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype 其他POW链 在比特币链上的铭文大红大紫时,其他 Pow 链因为同源同根,也是基于 UTXO 的转账花费模型,所以 Ordinals 被迁移至一些头部 Pow 公链上,本文以市场接受度,开发完整度较高的 Dogechain 和 Litecoin 为例,进行分析。 Dogechain Drc-20 协议是狗狗币链上基于 Ordinals 的协议,其功能类似于比特币链,但是由于其低廉的转账费用,以及极强的 meme 属性,受到大家欢迎。 Litecoin LTC20 协议同理,是莱特币链上基于 Ordinals 的协议。该协议获得莱特币官方以及创始人李启威的转推和关注,可以说是“出身名门”,交易市场 Unilit、Litescribe ;钱包 Litescribe 的开发完成度也较高,且目前第一个代币 $Lite 已上线 Gate 交易所。 但是协议之前无索引,在索引推出之后,出现增发 bug ,目前已修复,值得关注。从图中也可以直观看出,LTC20 协议推出之后,莱特币链上 Gas 费暴涨。 图片来源:推特 李启威@SatoshiLite 近一年莱特币费率,图片来源:Litecoinspace 以太坊链 Ethscriptions 截止目前,Ethscriptions 交易平台 Etch 成交额已达10,500 ETH,首个代币 Eths 地板 4300刀,在 6 月 18 日时的打新成本不到 1U,那些从始至终未离场的人已经有了 6000 倍以上的收益。 Eths成交数据,图片来源:ETCH Market Tom Lehman 在8月8日提出的一种新型以太坊扩容解决方案。采用类 Ordinals 的铭文技术,利用 Calldata 拓展,实现以太坊主网 Gas Fee 的廉价化和生态应用的高维化,这便是 Ethscriptions 。 Eths 的核心是 Ethscriptions Virtual Machine (ESC 
VM),可以类比 EVM (以太坊虚拟机)。ESC VM 中的智能合约 “Dumb Contracts” 哑巴合约,使 Eths 脱离了铭文作为 NFT 炒作的限制,跨入了功能性和实用性的领域,并正式进入基础层与 L2 解决方案竞争。 Dumb Contracts运行逻辑,图片来源:Ethscriptions ESIP-4提案 “Eths 是以太坊二层的另一个思路,二层是单独的链,而且可以关后门。Eths 在以太坊主网上面交易,Gas 费跟二层一样便宜,主网上面的 Swap,Defi,Gamefi 都可以在 Eths 上面实现。最主要的是他在主网上面运行,是不可以关后门的,比二层更安全更去中心化”,摘自 Eths 社区。 而这个 L2 新叙事其实并不好写,首先代币的拆分仍在进行阶段,目前的铭文仍旧是 NFT,无法拆分为 FT 。 截至发文的最新消息,FacetSwap 官网(https://facetswap.com/)已上线拆分功能,但是体验之后发现目前主流交易市场等并不支持拆分后的铭文,可以等待后期适配。目前拆分后的铭文可以在 Factswap 中进行Swap,Add Liquidity 等等,所有操作均由一虚拟地址(不存在的地址)0x000....Face7 来解析,用户只需把消息内置在 IDM 中,发送该消息的十六进制数据到尾号 Face7 的地址,便可以进行 approve、transfer 等操作,由于较早期,所以有待后期观察其发展动向。 其他EVM链 Evm.ink Evm.ink 把 Ethscriptions 的协议标准迁移至了其他的 EVM 链上,使得其他链也可以进行铭文的铸造,构建其他 EVM 链的索引。最近较火的 POLS、AVAL 等,都是使用的 Evm.ink 也就是 Ethscriptions 的标准进行索引识别。 POLS铸造数据,图片来源:Dune @satsx AVAL铸造数据,图片来源:Dune @helium_1990 POLS 和 AVAL 的总量都是2100万张,POLS 持币地址80000+,AVAL 持币地址23000+,铸造进度在2-3天左右,可以看出目前大家对于低成本的二层(Layer2)铭文较有参与兴趣,因为其成本低回报率高,所以BTC、ETH 链的一些长尾用户均会出现外溢。不光光是这两条链,Heco 链,Fantom 链等均出现 Gas 费飙升的情况,均与铭文有关。 EVM 链每日交易笔数,图片来源:Kernel Ventures Solana链 SPL20 Solana 铭文在11.17日凌晨4点开始,8点完成铸造,总量21000张。与其他网络不同,铭文主体是 NFT ,Index Content 才是铭文。NFT 可以通过任何平台进行创建,索引会通过该图片(文件)的哈希,判断是否收录。第二点是内置文本,只有哈希和内置文本都符合的才会记为有效铭文。图像是链下数据,文本是链上数据,目前主要的代打平台使用的是 IPFS,其他有代打平台使用 AR。 Solana 铭文最大的缺陷和 Eths一样,拆分。无法拆分的话,本质就是 NFT ,无法有代币相当的流动性以及操作便捷度,更不用展望未来 Dex Swap 的愿景。 协议创始人是 Tap 协议上 TapPunk 的创始人。目前最大代打平台 Liberplex(https://www.libreplex.io/)的团队做事非常积极,自推出以来,团队开发进度迅速,已经完成了哈希索引,铭文属性(不可变性)更改功能等操作,且会在官方 Discord 直播写代码,直播答疑。交易市场 Tensor(https://www.tensor.trade/)也已经对接好,目前开发进度飞速。 第一个铭文 $Sols 铸造成本约5刀,二级市场最高价 14SOL ,截至撰文,地板价 7.4SOL ,折合428刀,单日成交量突破 20,000 SOL,折合约120万美金,成交量换手率均活跃。 核心对比 核心协议对比 主流铭文协议对比,图片来源:Kernel Ventures 本图以费率、可拆分、可扩展性、用户人数四个维度,对目前几大主流的铭文协议进行对比。可以从图中直观地看出,RGB 协议的费率最优,基于闪电网络的0费率,使得交易几乎无成本。 在可拆分方面,近期的 Solana 以及 EVM 的协议均无法拆分,等待后续开发。在可扩展性方面,RGB 协议的智能合约功能为其带来了极大的可扩展性,Solana 目前扩展性有待探讨,但是团队和 Solana Foundation 都表示支持,个人认为扩展性不会太差。在用户方面,EVM 链由于其天然的低 Gas 属性,用户的试错成本更小,所以用户较多。BRC20 
是首个铭文代币,在正统性方面排名第一,所以用户存量也非常多。 协议代币数据对比 协议代币对比,图片来源:Kernel Ventures 从各协议的主流代币分析,可以看出目前主流代币的市值在6亿美金左右,未纳入其他小市值币种。且 Ordi 占总市值的80%,所以其他协议的发展空间巨大,且 RGB 等协议还正在完善中,未发行代币。 从持币人数来看,Pols 和 Ordi 占据主导,其他协议持币人数较少,且 Eths & Solana 铭文并未拆分,所以等待后期发展后的情况再进行持币分析。 创新点与风险分析 目前铭文最大的用途便是 Fair Launch ,用户可以公平地获取项目参与的机会,但是铭文赛道的发展不可能只停滞在公平发射。 近期铭文赛道的发展呈现出显著的活力和创新,这一赛道的成长主要得益于比特币的关键技术进步如 SegWit、Bech32 编码、Taproot 升级和 Schnorr 签名,这些技术不仅提升了比特币网络的交易效率和可扩展性,还增加了其可编程性。 比如说 RGB 协议中的智能合约,如果在比特币的闪电网络上构建智能合约,不仅 TPS 极高(4000万),且背靠比特币这一最大的区块链生态。 在风险方面,需要注意一些 Launchpad ,比如说前阵子 Rug 的 Ordstater,由于 MUBI 和 TURT 的成功,市面上涌现出很多 Launchpad ,一些平台在 IDO 之后,直接 Rug Pull 。参与项目之前请仔细阅读白皮书,研究背景,切勿 fomo 盲目跟随 KOL 冲项目。 铭文生态的未来推演 市场推演 Galaxy Research and Mining 预测,到 2025 年,Ordinals 市场的市值将达到 50 亿美元,当时的铭文数量仅为 26 万个,而现在铭文的数量已经达到 3300 万个,仅仅半年时间,增长了 126 倍,而且 $Ordi 的市值也已经到达 4 亿美金,$Sats 的市值也已经有 3 亿美金。由此可见,对整个铭文市场的预测还是被远远低估了。 产品推演 目前 BRC20 的交易活动主要集中在 OKX 和 UniSat。今年 OKX主推的 Web3 钱包在 BRC20 类资产交易上的良好体验,钱包端基础设施的完备,就进一步平滑和缩短了“大妈系投机者”进场的路径,让他们得以顺利进入这个新市场。随着其他各类协议的涌现,不同协议都纷纷出现了不同的交易市场和自己的交易钱包,比如 Atomicals、Dogechain、Litecoin 等等。但是目前市面上的钱包均是UniSat的改版,在 UniSat 的开源基础上进行修改。我们把比特币(POW)与以太坊进行对比,可以把各种协议比作各种链,其本质的差别只是在于 Chain ID,所以说未来的产品可能是 UniSat 接入不同协议,需要时可以在不同协议之间切换,直接在钱包中操作即可,类似于小狐狸钱包的切换链。 各协议钱包对比,图片来源:Kernel Ventures 赛道推演 随着资金不断地涌入铭文市场,用户不再满足于 meme 的炒作,开始将目光转向基于铭文的应用。UniSat 也为 BRC20 带来创新,通过 BRC20-Swap,用户可以像 AMM DEX 一样轻松交换 BRC20 代币,作为第一个提高 Ordinals 生态流动性的产品,它有望将比特币 DeFi 生态系统的潜力释放出来,未来可能会出现更多功能比如借贷和衍生品。近期 UniSat 还开放了 API 接口,这对于小型开发者来说非常友好,可以调用很多功能,比如自动批量扫单,监控铭文并自动 mint,由此可以产生非常多的工具类项目。 比特币网络的手续费较为昂贵,对于 Stacks、RIF 这一些比特币二层来说,虽然手续费降低了,但是没有用户基数,基础设施不够完备。由此比特币的 EVM 便是一个很好的叙事。 比如 BEVM ,该项目是基于以太坊网络的比特币生态 Layer2 ,链上 Native Token 也是 BTC ,用户可以通过官方跨链桥,把比特币通过主网跨至 BEVM ,BEVM 的 EVM 兼容性,使得其在 EVM 链上应用构建门槛极低,Defi、Swap 等均可以从其他链迁移。 但是比特币的 EVM 需要考虑的问题还有很多,比如说跨链过去的资产是否能保证去中心化、不可增发等等,EVM 链的节点 sequence 共识问题,以及怎么把 Txns 同步至比特币网络(或去中心化存储),因为以太坊二层的门槛较低,所以随之而来安全性也会降低,这应该是目前所有关注比特币 EVM 该考虑的首要问题。 图片来源:BEVM Bridge 总结 本文深入研究了比特币铭文领域的发展趋势和各个协议的特点。通过分析 
Ordinals(BRC20)、Atomical、RGB、Pipe 等比特币链上的协议与其他 Pow 链,以及以太坊链 Ethscriptions 和 Evm.ink、Solana 链 SPL20 协议,对比了它们在费率、可拆分性、可扩展性和用户方面的不同之处,得出结论:目前可拆分依旧是瓶颈,其他比特币链上协议中RGB发展较为全面和具有前景。 在铭文市场背景方面,以 Ordinals 协议为开端,BRC20 等铭文协议掀起了一波浪潮,被戏称为“散户的天下”。分析了比特币区块费率图和 Ordinals 铭文铸造数量等数据概览,窥见了铭文生态的发展趋势。 在赛道解析方面,通过对比主流铭文协议的核心要素,如费率、可拆分性、可扩展性和用户人数,展现了它们之间的异同。最后,通过协议代币数据对比和核心协议对比,对各主流协议的市值和用户分布进行了综合分析。总结时提到了创新点与风险分析,强调了铭文领域的活力和创新。 展望未来,铭文领域有望持续见证技术的不断创新,推动更多复杂功能的实际应用。市场的蓬勃发展预计将保持稳健增长,为投资者和参与者提供更多机会。与此同时,预计将涌现更多富有创意的项目和协议,进一步丰富比特币和其他公链的铭文生态系统。矿工的收益也可能随之增长,因为铭文领域为他们提供了崭新的收入契机。 Kernel Ventures是一个由研究和开发社区驱动的加密风险投资基金,拥有超过70个早期投资,专注于基础设施、中间件、dApps,尤其是ZK、Rollup、DEX、模块化区块链,以及将搭载未来数十亿加密用户的垂直领域,如账户抽象、数据可用性、可扩展性等。在过去的七年里,我们一直致力于支持世界各地的核心开发社区和大学区块链协会的发展。 参考资料: Bitcoin block-fee-rates (3 year):https://mempool.space/zh/graphs/mining/block-fee-rates#3yESIP-4: The Ethscriptions Virtual Machine:https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machineA comprehensive scan of the inscriptions industry:https://www.theblockbeats.info/news/47753?search=1Litecoin block-fee-rates (1 year):https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y

Kernel Ventures: 铭文新叙事 — 在生态赋能下的铭文是否能跑出一条新赛道?

作者:Kernel Ventures Stanley
审稿:Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
本文深入探讨了比特币铭文领域的发展趋势和各个协议的特点。
分析 Ordinals、BRC20、Atomical、RGB、Pipe 等比特币链上的协议与其他 Pow 链,如 Dogechain 和 Litecoin,以及以太坊链 Ethscriptions 和 Evm.ink、Solana 链 SPL20 协议,在费率、可拆分性、可扩展性、用户方面进行对比,尤其突出了 RGB 协议的低费率和高扩展性。对铭文生态进行市场与产品推演,钱包端的基础设施的完备,比特币链 AMM DEX 的推出,未来可能会出现更多功能比如借贷和衍生品。 UniSat 开放 API 接口,可以产生非常多的工具类项目。
总体而言,本文详实探讨了比特币铭文领域的动态,展望了在生态赋能下的铭文发展情况,为读者提供了全面的了解和展望。
铭文市场背景
市场背景
自 2023 年 1 月比特币 Ordinals 协议出现后,BRC20、Ordinals 资产等在比特币链上掀起一波浪潮,有人称之为“散户的天下”:因为 BRC20 等铭文采用 Fair Launch 模式,筹码全部由散户自行铸造,没有机构,没有项目方,也没有老鼠仓。其中 Ordi 铸造成本约为 1 刀一张,上线 Gate 交易所之后涨至 20000 刀一张,夸张的涨幅使得 BRC20 协议热度持续攀升,吸引了众多 Ordinals 玩家涌入 BRC20 之中,也造成比特币链上 Gas 持续走高,最高峰时的最低确认费率甚至达到了 400 sat/vB,突破三年内的最高水平。
本文将以此为开端,展开对于目前各个链上,各种协议的铭文生态进行探讨,并展望生态赋能下铭文的发展动向。
数据概览
3年期比特币区块费率图可以直观地看出,在今年5-6月和今年11月,费率直线飙升,这也体现了用户对于铭文协议的热情,不光是 BRC20 协议,各种基于比特币网络开发的协议均在此阶段发行,带来了一波 “Bitcoin Summer” 。

近三年比特币费率,图片来源:Mempool.space
从 Inscriptions 的铸造数据来看,铸造数量趋于稳定,保持在较高的数量。

Ordinals 铭文铸造数量,图片来源:Dune @dgtl_asserts
赛道解析
本文将以各条链为分类,对其链上的铭文协议进行分析。
比特币链
Ordinals / BRC20 协议
2023 年 1 月 21 日,比特币开发者 Casey Rodarmor 推出了 Ordinals 协议,该协议可以把元数据铭刻至比特币链上,并赋予其铭文编号。同年 3 月,推特用户 @domodata 发布 BRC20 协议,该协议把代币的铸造演化为字符串的上链操作。11 月 7 日,币安上线 BRC20 龙头 $ORDI,推动 $ORDI 大幅上涨,单日涨幅接近 100%。
作为铭文生态的第一个协议,Ordinals 也存在许多问题:
BRC20 只支持四字代币,局限性较大;铸造名称会被智子攻击,铸造交易容易被抢跑;Ordinals 协议会对比特币网络造成极大的冗余数据
举例说明:BRC20 的铸造铭文,在结束铸造之后,可通过转账铭文发送代币,原有铸造铭文则变为无效铭文,极大程度造成数据占用,这也是一些比特币早期极客不希望支持 Ordinals 的原因。
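上面提到的铸造铭文与转账铭文,本质上都是刻在聪上的 JSON 文本,由链下索引器按约定字段解析。下面用 Python 列出公开的 BRC20 三类操作示例(tick 固定为四个字符):

```python
import json

# BRC20 铭文就是一段按约定格式书写的 JSON 文本,
# deploy / mint / transfer 三类操作分别对应部署、铸造与转账
deploy = {"p": "brc-20", "op": "deploy", "tick": "ordi", "max": "21000000", "lim": "1000"}
mint = {"p": "brc-20", "op": "mint", "tick": "ordi", "amt": "1000"}
transfer = {"p": "brc-20", "op": "transfer", "tick": "ordi", "amt": "100"}

# 铭刻时即把这类 JSON 序列化后写入比特币交易的见证数据
inscription = json.dumps(mint, separators=(",", ":"))
```

正因为铸造铭文在转账后即失效却仍留在链上,这类 JSON 文本才会不断累积为上文所说的冗余数据。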
Atomical 原子协议
Atomical 协议的 ARC20 使用一个聪来代表被部署的代币,且取消了四字符的限制,玩法更多样化。独特项目:Realm 领域。Realm 注册的每个名称都是一个前缀文本,注册者最终拥有其所有后缀的定价权。在基础功能上,领域可以用作转账收款地址(支付名);在拓展用法上,有构建社区 / DAO、身份验证、社交档案等多种使用场景,也完全契合我们对 DID 的发展构想。
但是,ARC20 和 $ATOM 还非常早期,还需要等待钱包、市场的完善。

Realm铸造数量,图片来源:Dune @sankin
Pipe协议
Ordinals 创始人 Casey 提出过一种专门用于发行 FT 的铭文实现方式即 Rune,可直接在 UTXO 的脚本中写入 Token 数据,这包含了 Token 的 ID、输出与数量。Rune 的实现与 ARC20 非常相似,将 token 转账直接交给 BTC 主网处理。区别在于,Rune 在脚本数据中写入了 Token 数量。
Rune 的想法仅是个构想,#Trac 的创始人基于此编写了第一个可用协议,并发行了 PIPE。由于 Casey 较高的知名度,PIPE承接了 BRC20 延续而来的炒作热情,快速地完成了第一波炒作。Rune 的正统性相较 BRC20 更强,但想要被 BTC 社区接受依然艰难。
RGB协议

闪电网络容量,图片来源:Mempool.space
随着 Ordinals 协议把比特币网络的生态抬高,越来越多的开发者和项目方关注到了闪电网络,因为其极为低廉的手续费和4000万的 TPS 。
RGB 是基于 BTC 和闪电网络的智能合约系统,属于是较为终极的扩容方式,但也因为其复杂程度而进展缓慢。RGB 将一个智能合约的状态转化为一个简短的证明,将证明刻入 BTC UTXO 的输出脚本中。用户可以通过验证这个 UTXO 来检查智能合约的状态。智能合约状态更新,就创建一个新的 UTXO 存储这个状态变更证明。
智能合约所有数据完全在 BTC 链下,由专用 RGB 节点运行,RGB 节点本身记录着智能合约的完整数据,并处理交易的计算量。用户通过扫描整条 BTC 链的 UTXO,来验证合约状态变化的确定性。
可以把 RGB 看作是 BTC 的 L2,这种设计的好处是利用了 BTC 的安全性来担保智能合约,但随着智能合约数量的增加,对 UTXO 封装数据的需求也会增加,最终也会不可避免地给 BTC 区块链造成大量冗余。
2018 年至今 RGB 仍处于开发阶段,没有可炒作的内容,USDT 的发行方泰达公司是 RGB 的重要推动者,他们一直说要把 USDT 重新大量地发行在 BTC RGB 上。
产品方面,目前主流钱包使用 Bitmask,该钱包支持比特币和闪电网络充值,支持 RGB-20、RGB-21 格式的资产,也是目前市面上用户最多的钱包。Bitlight Labs 目前也正在对 RGB 网络进行开发,该项目的目标是自己构建一套钱包体系,且自行编写智能合约,做 DEX 。目前该项目已收购 BitSwap(bitswap-bifi.github.io),准备将其接入 RGB 网络。
RGB 协议的最大优势在于其低廉的手续费以及极高的扩展性。曾几何时,比特币网络的智能合约开发困难,无人问津,但是随着 Ordinals 协议把比特币网络的生态热度抬高,越来越多的开发者会试用 RGB 网络上的智能合约,该智能合约为 Rust 语言编写,并不与以太坊兼容,所以学习成本较高,在技术层面有待后期评测。
关于 RGB 协议在技术层面的更多信息,Kernel Ventures 往期的文章已做过详细介绍,文章链接:https://tokeninsight.com/zh/research/market-analysis/a-brief-overview-on-rgb-can-rgb-replicate-the-ordinals-hype
其他POW链
在比特币链上的铭文大红大紫时,其他 Pow 链因为同源同根,也是基于 UTXO 的转账花费模型,所以 Ordinals 被迁移至一些头部 Pow 公链上,本文以市场接受度,开发完整度较高的 Dogechain 和 Litecoin 为例,进行分析。
Dogechain
Drc-20 协议是狗狗币链上基于 Ordinals 的协议,其功能类似于比特币链,但是由于其低廉的转账费用,以及极强的 meme 属性,受到大家欢迎。
Litecoin
LTC20 协议同理,是莱特币链上基于 Ordinals 的协议。该协议获得莱特币官方以及创始人李启威的转推和关注,可以说是“出身名门”,交易市场 Unilit、Litescribe ;钱包 Litescribe 的开发完成度也较高,且目前第一个代币 $Lite 已上线 Gate 交易所。
但该协议此前没有索引,索引推出之后出现过增发 bug,目前已修复,值得关注。从图中也可以直观看出,LTC20 协议推出之后,莱特币链上 Gas 费暴涨。

图片来源:推特 李启威@SatoshiLite

近一年莱特币费率,图片来源:Litecoinspace
以太坊链
Ethscriptions
截止目前,Ethscriptions 交易平台 Etch 成交额已达10,500 ETH,首个代币 Eths 地板 4300刀,在 6 月 18 日时的打新成本不到 1U,那些从始至终未离场的人已经有了 6000 倍以上的收益。

Eths成交数据,图片来源:ETCH Market
Tom Lehman 在 8 月 8 日提出了一种新型以太坊扩容解决方案:采用类 Ordinals 的铭文技术,利用 Calldata 存储数据,实现以太坊主网 Gas Fee 的廉价化和生态应用的多样化,这便是 Ethscriptions。
Eths 的核心是 Ethscriptions Virtual Machine(ESC VM),可以类比 EVM(以太坊虚拟机)。ESC VM 中的智能合约被称为 “Dumb Contracts”(哑合约),它使 Eths 脱离了铭文作为 NFT 炒作的限制,跨入了功能性和实用性的领域,并正式进入基础层与 L2 解决方案竞争。

Dumb Contracts运行逻辑,图片来源:Ethscriptions ESIP-4提案
“Eths 是以太坊二层的另一个思路,二层是单独的链,而且可以关后门。Eths 在以太坊主网上面交易,Gas 费跟二层一样便宜,主网上面的 Swap,Defi,Gamefi 都可以在 Eths 上面实现。最主要的是他在主网上面运行,是不可以关后门的,比二层更安全更去中心化”,摘自 Eths 社区。
而这个 L2 新叙事其实并不好写,首先代币的拆分仍在进行阶段,目前的铭文仍旧是 NFT,无法拆分为 FT 。
截至发文的最新消息,FacetSwap 官网(https://facetswap.com/)已上线拆分功能,但体验之后发现目前主流交易市场并不支持拆分后的铭文,可以等待后期适配。目前拆分后的铭文可以在 FacetSwap 中进行 Swap、Add Liquidity 等操作,所有操作均由一个虚拟地址(不存在的地址)0x000....Face7 来解析,用户只需把消息内置在 IDM 中,发送该消息的十六进制数据到尾号 Face7 的地址,便可以进行 approve、transfer 等操作。由于项目较早期,有待后期观察其发展动向。
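上述“把消息编码为十六进制数据发往虚拟地址”的流程,可以用 Python 粗略示意如下(消息格式与地址写法均为本文演示用的假设,并非 FacetSwap 的实际规范):

```python
import json

# 演示用的虚拟地址:35 个 0 加上尾号 Face7,该地址并无私钥,仅作为索引器识别的标记
VIRTUAL_ADDR = "0x" + "0" * 35 + "Face7"

def encode_op(op: str, params: dict) -> str:
    """把操作与参数序列化为 JSON,再转为 0x 开头的十六进制 calldata(格式为假设)。"""
    payload = json.dumps({"op": op, **params}, sort_keys=True)
    return "0x" + payload.encode("utf-8").hex()

def decode_op(calldata: str) -> dict:
    """索引器一侧:还原 calldata 中内置的消息并据此执行 approve、transfer 等逻辑。"""
    return json.loads(bytes.fromhex(calldata[2:]).decode("utf-8"))

# 用户把消息的十六进制数据发送到尾号 Face7 的地址
tx = {"to": VIRTUAL_ADDR, "data": encode_op("approve", {"tick": "eths", "amt": "1000"})}
```

关键设计在于:操作语义完全由链下索引器解释,主网只负责廉价地记录这段数据。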
其他EVM链
Evm.ink
Evm.ink 把 Ethscriptions 的协议标准迁移至了其他的 EVM 链上,使得其他链也可以进行铭文的铸造,并构建其他 EVM 链的索引。最近较火的 POLS、AVAL 等,都是使用 Evm.ink(也就是 Ethscriptions)的标准进行索引识别。

POLS铸造数据,图片来源:Dune @satsx

AVAL铸造数据,图片来源:Dune @helium_1990
POLS 和 AVAL 的总量都是 2100 万张,POLS 持币地址 80000+,AVAL 持币地址 23000+,铸造进度在 2-3 天左右。可以看出,目前大家对于低成本的 Layer2 铭文较有参与兴趣,因为其成本低、回报率高,所以 BTC、ETH 链的一些长尾用户均会出现外溢。不仅仅是这两条链,Heco 链、Fantom 链等均出现 Gas 费飙升的情况,均与铭文有关。

EVM 链每日交易笔数,图片来源:Kernel Ventures
Solana链
SPL20
Solana 铭文在 11 月 17 日凌晨 4 点开始,8 点完成铸造,总量 21000 张。与其他网络不同,铭文主体是 NFT,Index Content 才是铭文。NFT 可以通过任何平台进行创建,索引会通过该图片(文件)的哈希,判断是否收录。第二点是内置文本,只有哈希和内置文本都符合的才会记为有效铭文。图像是链下数据,文本是链上数据,目前主要的代打平台使用的是 IPFS,其他有代打平台使用 AR。
Solana 铭文最大的缺陷和 Eths一样,拆分。无法拆分的话,本质就是 NFT ,无法有代币相当的流动性以及操作便捷度,更不用展望未来 Dex Swap 的愿景。
协议创始人是 Tap 协议上 TapPunk 的创始人。目前最大代打平台 Libreplex(https://www.libreplex.io/)的团队做事非常积极,自推出以来,团队开发进度迅速,已经完成了哈希索引,铭文属性(不可变性)更改功能等操作,且会在官方 Discord 直播写代码,直播答疑。交易市场 Tensor(https://www.tensor.trade/)也已经对接好,目前开发进度飞速。
第一个铭文 $Sols 铸造成本约5刀,二级市场最高价 14SOL ,截至撰文,地板价 7.4SOL ,折合428刀,单日成交量突破 20,000 SOL,折合约120万美金,成交量换手率均活跃。
核心对比
核心协议对比

主流铭文协议对比,图片来源:Kernel Ventures
本图以费率、可拆分、可扩展性、用户人数四个维度,对目前几大主流的铭文协议进行对比。可以从图中直观地看出,RGB 协议的费率最优,基于闪电网络的0费率,使得交易几乎无成本。
在可拆分方面,近期的 Solana 以及 EVM 的协议均无法拆分,等待后续开发。在可扩展性方面,RGB 协议的智能合约功能为其带来了极大的可扩展性,Solana 目前扩展性有待探讨,但是团队和 Solana Foundation 都表示支持,个人认为扩展性不会太差。在用户方面,EVM 链由于其天然的低 Gas 属性,用户的试错成本更小,所以用户较多。BRC20 是首个铭文代币,在正统性方面排名第一,所以用户存量也非常多。
协议代币数据对比

协议代币对比,图片来源:Kernel Ventures
从各协议的主流代币分析,可以看出目前主流代币的市值在6亿美金左右,未纳入其他小市值币种。且 Ordi 占总市值的80%,所以其他协议的发展空间巨大,且 RGB 等协议还正在完善中,未发行代币。
从持币人数来看,Pols 和 Ordi 占据主导,其他协议持币人数较少,且 Eths & Solana 铭文并未拆分,所以等待后期发展后的情况再进行持币分析。
创新点与风险分析
目前铭文最大的用途便是 Fair Launch ,用户可以公平地获取项目参与的机会,但是铭文赛道的发展不可能只停滞在公平发射。
近期铭文赛道的发展呈现出显著的活力和创新,这一赛道的成长主要得益于比特币的关键技术进步如 SegWit、Bech32 编码、Taproot 升级和 Schnorr 签名,这些技术不仅提升了比特币网络的交易效率和可扩展性,还增加了其可编程性。
比如说 RGB 协议中的智能合约,如果在比特币的闪电网络上构建智能合约,不仅 TPS 极高(4000万),且背靠比特币这一最大的区块链生态。
在风险方面,需要注意一些 Launchpad ,比如说前阵子 Rug 的 Ordstater,由于 MUBI 和 TURT 的成功,市面上涌现出很多 Launchpad ,一些平台在 IDO 之后,直接 Rug Pull 。参与项目之前请仔细阅读白皮书,研究背景,切勿 fomo 盲目跟随 KOL 冲项目。
铭文生态的未来推演
市场推演
Galaxy Research and Mining 预测,到 2025 年,Ordinals 市场的市值将达到 50 亿美元,当时的铭文数量仅为 26 万个,而现在铭文的数量已经达到 3300 万个,仅仅半年时间,增长了 126 倍,而且 $Ordi 的市值也已经到达 4 亿美金,$Sats 的市值也已经有 3 亿美金。由此可见,对整个铭文市场的预测还是被远远低估了。
产品推演
目前 BRC20 的交易活动主要集中在 OKX 和 UniSat。今年 OKX 主推的 Web3 钱包在 BRC20 类资产交易上的良好体验,以及钱包端基础设施的完备,进一步平滑和缩短了“大妈系投机者”进场的路径,让他们得以顺利进入这个新市场。随着其他各类协议的涌现,不同协议纷纷出现了不同的交易市场和自己的交易钱包,比如 Atomicals、Dogechain、Litecoin 等等。但目前市面上的钱包均是 UniSat 的改版,在 UniSat 的开源基础上进行修改。我们把比特币(POW)与以太坊进行对比,可以把各种协议比作各种链,其本质的差别只在于 Chain ID,所以说未来的产品可能是 UniSat 接入不同协议,需要时可以在不同协议之间切换,直接在钱包中操作即可,类似于小狐狸钱包的切换链。

各协议钱包对比,图片来源:Kernel Ventures
赛道推演
随着资金不断地涌入铭文市场,用户不再满足于 meme 的炒作,开始将目光转向基于铭文的应用。UniSat 也为 BRC20 带来创新,通过 BRC20-Swap,用户可以像 AMM DEX 一样轻松交换 BRC20 代币,作为第一个提高 Ordinals 生态流动性的产品,它有望将比特币 DeFi 生态系统的潜力释放出来,未来可能会出现更多功能比如借贷和衍生品。近期 UniSat 还开放了 API 接口,这对于小型开发者来说非常友好,可以调用很多功能,比如自动批量扫单,监控铭文并自动 mint,由此可以产生非常多的工具类项目。
比特币网络的手续费较为昂贵,对于 Stacks、RIF 这一些比特币二层来说,虽然手续费降低了,但是没有用户基数,基础设施不够完备。由此比特币的 EVM 便是一个很好的叙事。
比如 BEVM ,该项目是基于以太坊网络的比特币生态 Layer2 ,链上 Native Token 也是 BTC ,用户可以通过官方跨链桥,把比特币通过主网跨至 BEVM ,BEVM 的 EVM 兼容性,使得其在 EVM 链上应用构建门槛极低,Defi、Swap 等均可以从其他链迁移。
但是比特币的 EVM 需要考虑的问题还有很多,比如说跨链过去的资产是否能保证去中心化、不可增发等等,EVM 链的节点 sequence 共识问题,以及怎么把 Txns 同步至比特币网络(或去中心化存储),因为以太坊二层的门槛较低,所以随之而来安全性也会降低,这应该是目前所有关注比特币 EVM 该考虑的首要问题。

图片来源:BEVM Bridge
总结
本文深入研究了比特币铭文领域的发展趋势和各个协议的特点。通过分析 Ordinals(BRC20)、Atomical、RGB、Pipe 等比特币链上的协议与其他 Pow 链,以及以太坊链 Ethscriptions 和 Evm.ink、Solana 链 SPL20 协议,对比了它们在费率、可拆分性、可扩展性和用户方面的不同之处,得出结论:目前可拆分依旧是瓶颈,其他比特币链上协议中RGB发展较为全面和具有前景。
在铭文市场背景方面,以 Ordinals 协议为开端,BRC20 等铭文协议掀起了一波浪潮,被戏称为“散户的天下”。分析了比特币区块费率图和 Ordinals 铭文铸造数量等数据概览,窥见了铭文生态的发展趋势。
在赛道解析方面,通过对比主流铭文协议的核心要素,如费率、可拆分性、可扩展性和用户人数,展现了它们之间的异同。最后,通过协议代币数据对比和核心协议对比,对各主流协议的市值和用户分布进行了综合分析。总结时提到了创新点与风险分析,强调了铭文领域的活力和创新。
展望未来,铭文领域有望持续见证技术的不断创新,推动更多复杂功能的实际应用。市场的蓬勃发展预计将保持稳健增长,为投资者和参与者提供更多机会。与此同时,预计将涌现更多富有创意的项目和协议,进一步丰富比特币和其他公链的铭文生态系统。矿工的收益也可能随之增长,因为铭文领域为他们提供了崭新的收入契机。
Kernel Ventures是一个由研究和开发社区驱动的加密风险投资基金,拥有超过70个早期投资,专注于基础设施、中间件、dApps,尤其是ZK、Rollup、DEX、模块化区块链,以及将搭载未来数十亿加密用户的垂直领域,如账户抽象、数据可用性、可扩展性等。在过去的七年里,我们一直致力于支持世界各地的核心开发社区和大学区块链协会的发展。
参考资料:
Bitcoin block-fee-rates (3 year): https://mempool.space/zh/graphs/mining/block-fee-rates#3y
ESIP-4: The Ethscriptions Virtual Machine: https://docs.ethscriptions.com/esips/esip-4-the-ethscriptions-virtual-machine
A comprehensive scan of the inscriptions industry: https://www.theblockbeats.info/news/47753?search=1
Litecoin block-fee-rates (1 year): https://litecoinspace.org/zh/graphs/mining/block-fee-rates#1y
Kernel Ventures:一文探讨 DA 和历史数据层设计
作者:Kernel Ventures Jerry Luo
审稿:Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
早期公链要求全网节点保持数据一致性,以确保安全与去中心化。然而,随着区块链生态的发展,存储压力不断增大,导致节点运营出现中心化的趋势。现阶段 Layer1 急需解决 TPS 增长带来的存储成本问题。
面对这一问题,开发者需要在兼顾安全性、存储成本、数据读取速度与 DA 层通用性的前提下,提出新的历史数据存储方案。
在解决这一问题的过程中,许多新技术与新思路涌现,包括 Sharding、DAS、Verkle Tree、DA 中间组件等。它们分别从减少数据冗余与提高数据校验效率等途径出发,尝试优化 DA 层的存储方案。
现阶段的 DA 方案从数据存储位置出发大体分为两类,分别是主链 DA 与第三方 DA。主链 DA 分别从定期清理数据与对数据分片存储的角度出发,以减小节点存储压力;而第三方 DA 均旨在服务于存储需求,对大量数据有合理的解决方案,因而主要是在单链兼容性与多链兼容性之间进行 trade-off,提出了主链专用 DA、模块化 DA、存储公链 DA 三种解决方案。
支付型公链对历史数据安全有极高的要求,适合使用主链作为 DA 层;不过对于运行了很长时间、有大量矿工在运行网络的公链,采取不涉及共识层又兼顾安全性的第三方 DA 会更加合适;而综合性公链更适合使用数据容量更大、成本更低又兼顾安全性的主链专用 DA 存储。但考虑到跨链需求,模块化 DA 也是不错的选项。
总体上来说,区块链正在朝减少数据冗余以及多链分工的方向发展。
1. 背景
区块链作为分布式账本,需要在所有节点上都对历史数据作一份存储,以确保数据存储的安全与足够去中心化。由于每一次状态变动的正确性都与上一个状态(交易来源)有关,为了确保交易的正确性,一条区块链原则上应当存储从第一笔交易产生到当下的所有历史记录。以以太坊为例,即便按照平均每个区块 20 kb 的大小估计,当前以太坊区块的总大小也已达到 370 GB,而一个全节点除了区块本身,还要对状态和交易收据进行记录。算上这部分,单个节点的存储总量已超过 1 TB,这使得节点的运营向少数人集中。

以太坊最新区块高度,图片来源:Etherscan
而最近的以太坊坎昆升级旨在将以太坊的 TPS 提高到 1000 附近,届时以太坊每年的存储增长都会超过现在的存储量之和。而在最近火热的各种高性能公链中,上万 TPS 的交易速度更是可能带来日均数百 GB 的数据新增。全网节点共同数据冗余的方式明显无法适应这样的存储压力,Layer1 必须找到一种合适的方案以兼顾 TPS 的增长与节点的存储成本。
2. DA 性能指标
2.1 安全性
区块链相对于数据库或者链表存储结构而言,其不可篡改性来自于可以通过历史数据对新产生的数据进行校验,因而确保其历史数据的安全性是 DA 层存储中首先要考虑的问题。对于区块链系统数据安全性的评判,我们往往从数据的冗余数量和数据可用性的校验方式进行分析
冗余数量:对于区块链系统中的数据冗余,其主要可以起到以下作用:首先,网络中冗余数量越多,当验证者需要查看某个历史区块中的账户状态以对当下某笔交易进行验证时,可以得到最多的样本进行参考,从中选取被大多数节点记载的数据。而在传统的数据库中,由于只在某个节点以键值对的形式存储数据,更改历史数据只需在单一节点进行,攻击成本极低。理论上说,冗余数量越多,数据的可信程度越高;同时,存储的节点越多,数据相应越不容易丢失。这点也可以对比存储 Web2 游戏的中心化服务器,一旦后台服务器全部关闭,就会出现彻底闭服的情况。但是这个数量也并非越多越好,因为每一份冗余都会带来额外的存储空间,过多的数据冗余会给系统带来过大的存储压力。好的 DA 层应该选择一种合适的冗余方式,在安全性和存储效率中取得平衡。
数据可用性校验:冗余数量保证了网络中对数据有足够多的记录,但要使用的数据还需对其准确性和完整性进行校验。现阶段区块链中常用的校验方式是密码学的承诺算法,即保留一个很小的、由交易数据混合得到的密码学承诺供全网记录。要检验某条历史数据的真实性时,需要通过该数据还原密码学承诺,检验还原得到的密码学承诺是否与全网的记录一致,如果一致则验证通过。常用的密码学校验算法有 Merkle Root 和 Verkle Root。高安全性的数据可用性验证算法只需要很少的校验数据,可以快速地对历史数据进行校验。
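上文提到的 Merkle Root 承诺与校验流程,可以用 Python 给出一个极简示意(假设哈希函数为 SHA-256,仅为演示,并非任何公链的实际实现):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """自底向上两两哈希;某层节点数为奇数时复制最后一个,最终得到供全网记录的承诺。"""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf: bytes, proof, root: bytes) -> bool:
    """沿路径混合兄弟哈希,检验还原出的承诺是否与全网记录的 Merkle Root 一致。"""
    node = h(leaf)
    for sibling, sibling_on_left in proof:
        node = h(sibling + node) if sibling_on_left else h(node + sibling)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
# tx1 的证明:左侧兄弟 h(tx0),以及右侧子树哈希 h(h(tx2)+h(tx3))
proof = [(h(b"tx0"), True), (h(h(b"tx2") + h(b"tx3")), False)]
```

可以看到,校验只需承诺(root)、待验数据与一条对数长度的路径,而不需要全部历史数据。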
2.2 存储成本
在确保了基础安全性的前提下,DA 层下一个需要实现的核心目标便是降本增效。首先是降低存储成本,即在不考虑硬件性能差异的情况下,降低存储单位大小数据造成的内存占用。现阶段区块链中降低存储成本的方式主要是采取分片技术,以及使用奖励式存储,在确保数据被有效存储的基础上降低数据备份数量。但从以上改进方式不难看出,存储成本与数据安全性存在博弈关系,降低存储占用往往意味着安全性的下降,因而一个优秀的 DA 层需要实现存储成本与数据安全性之间的平衡。此外,如果 DA 层是一条单独的公链,还需要通过尽量减少数据交换所经历的中间过程来降低成本:每一次中转过程都需要留下索引数据以供后续查询时调用,越长的调用过程就会留下越多的索引数据,从而增加存储成本。最后,数据的存储成本直接与数据的持久性挂钩。一般情况下,数据的存储成本越高,公链越难以对数据进行持久化存储。
2.3 数据读取速度
实现了降本,下一步便是增效,也就是当需要使用数据时,将其迅速从 DA 层中调用出来的能力。这个过程涉及两个步骤:首先是搜寻存储数据的节点,这主要针对未实现全网数据一致性的公链而言,如果公链实现了全网节点的数据同步,便可以忽略这一过程的时间消耗。其次,现阶段主流的区块链系统,包括 Bitcoin、Ethereum、Filecoin 在内,节点的存储方式均为 Leveldb 数据库。在 Leveldb 中,数据以三种方式存储:即时写入的数据会存储在 Memtable 类型文件中,当 Memtable 存储满后,则会将文件类型从 Memtable 改为 Immutable Memtable。这两种类型的文件均存储在内存中,但 Immutable Memtable 文件无法再做更改,只能从中读取数据。IPFS 网络中使用的热存储就是将数据存储在了这个部分,当要调用时就可以快速从内存读取,但是一个普通节点的内存往往只有 GB 级别,很容易就会写满,并且当节点出现宕机等异常情况后,内存中的数据便会永久丢失。如果希望数据持久存储,则需要以 SST 文件的形式存储到固态硬盘(SSD)中,但读取数据时需要先将数据读到内存,因而大大降低了数据索引速度。最后,对于采取分片存储的系统,其数据还原时需要向多个节点发送数据请求并进行还原,这个过程也会降低数据的读取速度。

Leveldb 数据存储方式,图片来源:Leveldb-handbook
2.4 DA 层通用性
随着 DeFi 的发展,以及 CEX 的种种问题,用户对于去中心化资产跨链交易的要求也不断增长。而无论是采取哈希锁定,公证人还是中继链的跨链机制,都避免不了对两条链上历史数据的同时确定。这个问题的关键在于两条链上数据的分离,不同的去中心化系统中无法实现直接沟通。因而现阶段通过改变 DA 层存储方式提出了一种解决方案,既将多条公链的历史数据存储在同一条可信的公链上,验证的时候只需要在这条公链上调用数据即可。这需要 DA 层能够与不同类型的公链建立安全的通信方式,也就是 DA 层具有较好的通用性。
3. DA 相关技术探索
3.1 Sharding
传统的分布式系统中,一份文件不会以完整的形式存储在某一个节点上,而是将原始数据分成多个 Blocks 后在每一个节点中存储一个 Block。并且 Block 往往不会仅存储在一个节点上,而是会在其他节点上留有适当的备份,现有主流分布式系统中,这个备份数量通常设置为 2。这种 Sharding 机制可以减少单个节点的存储压力,将系统的总容量扩展为各个节点存储量的总和,同时又通过适当的数据冗余确保存储的安全性。区块链中采取的 Sharding 方案大体与之类似,但在具体细节上会存在不同。首先是由于区块链中默认各个节点是不可信的,实现 Sharding 的过程中需要足够大的数据量备份以供后续数据真实性的判断,所以这个节点的备份数量需要远超过 2。理想情况下,在采用这种方案存储的区块链系统中,如果验证节点总数为 T,分片数量为 N,那么备份数量应该为 T/N。其次是对 Block 的存储过程,传统分布式系统中节点较少,因而往往是一个节点适配多个数据块,首先是通过一致性哈希算法将数据映射到哈希环上去,然后每个节点存储某个范围内编号的数据块,并且可以接受某个节点在某次存储中并没有分配存储任务。而在区块链上,每个节点是否分配到 Block 不再是随机事件而是必然事件,每个节点都会随机抽取一个 Block 进行存储,这一过程通过将带有区块原始数据与节点自身信息的数据哈希后的结果对分片数取余完成。假设每份数据被分为了 N 个 Blocks,每个节点的实际存储大小仅为原来的 1/N。通过适当设置 N,可以实现增长的 TPS 和节点存储压力的平衡。

Sharding 后的数据存储方式,图片来源:Kernel Ventures
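上述“对区块数据与节点自身信息做哈希、再对分片数取余”的分配过程,可以用 Python 粗略示意如下(节点标识与分片数均为演示用的假设):

```python
import hashlib

def assigned_block(block_root: bytes, node_id: bytes, num_shards: int) -> int:
    """对 区块数据摘要 + 节点自身信息 做哈希,再对分片数取余,
    得到该节点必须存储的 Block 编号(必然事件,而非可有可无的随机分配)。"""
    digest = hashlib.sha256(block_root + node_id).digest()
    return int.from_bytes(digest, "big") % num_shards

# 假设全网 T = 1000 个验证节点、数据分为 N = 10 个 Block:
# 每个 Block 期望约有 T/N = 100 份备份,单节点存储压力降为原来的 1/N
N = 10
counts = [0] * N
for i in range(1000):
    counts[assigned_block(b"block-data-root", f"node-{i}".encode(), N)] += 1
```

由于哈希结果近似均匀,每个 Block 的备份数都会落在 T/N 附近,这正是“扩容 N 倍、备份 T/N 份”平衡的来源。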
3.2 DAS(Data Availability Sampling)
DAS 技术是基于 Sharding 在存储方式上的进一步优化。在 Sharding 过程中,由于节点简单的随机存储,可能会出现某个 Block 丢失的情况。其次,对于分片后的数据,还原过程中如何确认数据的真实性与完整性也非常重要。在 DAS 中,通过 Erasure Code(纠删码)与 KZG 多项式承诺对这两个问题进行了解决。
Erasure Code:考虑到以太坊庞大的验证节点数量,某个 Block 没有被任何节点存储的概率几乎为 0,但理论上仍然存在这种极端情况发生的可能。为了减轻这一可能造成的存储缺失的威胁,此方案下往往不直接将原始数据切分为 Block 进行存储,而是先将原始数据映射到一个 n 阶多项式的系数上,然后在多项式上取 2n 个点,并让节点从中随机选择一个进行存储。对于这个 n 阶多项式,只需要 n+1 个点便可以进行还原,因而只需要有一半的 Block 被节点选中,我们便可以实现对原始数据的还原。通过 Erasure Code,提高了数据存储的安全程度与网络对数据的恢复能力。
KZG 多项式承诺:数据存储中非常重要的一环便是对数据真实性的检验。在没有采用 Erasure Code 的网络中,校验环节可以采用多样的方法;但如果引入了上文的 Erasure Code 以提高数据安全性,那么比较合适的方法是使用 KZG 多项式承诺。KZG 多项式承诺可以直接以多项式的形式对单个 Block 内容进行校验,从而省去将多项式还原为二进制数据的过程。验证的形式总体与 Merkle Tree 类似,但不需要具体的 Path 节点数据,只需要 KZG Root 与 Block 数据便可对其真伪进行验证。
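上面“将数据映射为 n 阶多项式的系数、取 2n 个点、任意 n+1 个点即可还原”的纠删码思想,可以在一个小素数域上用拉格朗日插值粗略示意(域与各项参数均为演示假设,并非以太坊的实际实现):

```python
P = 2**31 - 1  # 演示用的素数域;真实实现会使用专门构造的有限域

def eval_poly(coeffs, x):
    """把原始数据视为多项式系数,在点 x 处求值。"""
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

def encode(coeffs, num_points):
    """在多项式上取若干个点作为编码结果,分发给不同节点存储。"""
    return [(x, eval_poly(coeffs, x)) for x in range(1, num_points + 1)]

def interpolate(points, x):
    """拉格朗日插值:任意 n+1 个点即可还原 n 阶多项式在任意位置的取值。"""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # 费马小定理求逆元
    return total

data = [7, 21, 11]        # n = 2 阶多项式的 3 个系数,即原始数据
points = encode(data, 4)   # 扩展为 2n = 4 个点分发存储
survivors = points[1:]     # 即使丢失一个点,剩下的 n+1 = 3 个点仍可完整还原
```

丢失的点与原始系数都能从任意 n+1 个幸存点插值恢复,这就是“只需一半数据即可还原”的来源。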
3.3 DA 层数据校验方式
数据校验即确保从节点中调用的数据未被篡改且没有出现丢失。为了尽可能减少校验过程中所需要的数据量以及计算成本,DA 层现阶段采用树结构作为主流的校验方式。最简单的形式便是使用 Merkle Tree 进行校验,以完全二叉树的形式记录,只需要保留一个 Merkle Root 以及节点路径上另一侧子树的哈希值便可以进行校验,校验的时间复杂度为 O(logN) 级别(如果 logN 不加底数,默认为 log2(N))。虽然已经极大简化了校验过程,但验证过程的数据量总体还是随着数据的增加而增长。为了解决验证量增加的问题,现阶段提出了另一种验证方式:Verkle Tree。Verkle Tree 中每个节点除了存储 value,还会附带一个 Vector Commitment,通过原始节点的值和这个承诺性证明就可以快速对数据真实性进行验证,而不需要调用其他姐妹节点的值。这使得每次验证的计算次数只和 Verkle Tree 的深度有关,是一个固定的常数,从而大大加快了验证速度。但是 Vector Commitment 的计算需要同一层所有姐妹节点的参与,这大大增加了写入数据与更改数据的成本。而对于历史数据这类做永久性存储、不能篡改的数据,只有读而没有写的需求,Verkle Tree 就显得极为合适了。此外,Merkle Tree 与 Verkle Tree 本身还有 K-ary 形式下的变体,其具体实现机制相似,只是改变了每个节点下子树的数量,具体性能的对比可见下表。

数据校验方式时间性能对比,图片来源:Verkle Trees
3.4 通用 DA 中间件
区块链生态的不断扩大,随之带来公链数量的不断增加。由于各条公链在各自领域的优势与不可替代性,短时间内 Layer1 公链几无可能走向统一。但是随着 DeFi 的发展,以及 CEX 的种种问题,用户对于去中心化跨链交易资产的要求也不断增长。因此,可以消除跨链数据交互中的安全问题的 DA 层多链数据存储得到了越来越多的关注。但是要接受来自不同公链的历史数据,需要 DA 层提供数据流标准化存储与验证的去中心化协议,比如基于 Arweave 的存储中间件 kvye ,采取主动从链上抓取数据的方式,可以将所有链上的数据以标准的形式存储至 Arweave,以最小化数据传输过程的差异性。相对来说,专门为某条公链提供 DA 层数据存储的 Layer2 通过内部共享节点的方式进行数据交互,虽然降低了交互的成本并提高了安全性,但是具有比较大的局限性,仅能向特定公链提供服务。
4. DA 层存储方案
4.1 主链 DA
4.1.1 类 DankSharding
这类存储方案暂时还没有确定的名称,而其中最突出的代表就是以太坊上的 DankSharding,因而本文中使用类 DankSharding 代称这一类方案。这类方案主要使用了上述的两种 DA 存储技术:Sharding 和 DAS。首先通过 Sharding 将数据分成合适的份数,然后再让每个节点以 DAS 的形式抽取一个数据 Block 进行存储。对于全网节点足够多的情况,我们可以取一个较大的分片数 N,这样每个节点的存储压力只有原来的 1/N,从而实现整体存储空间的 N 倍扩容。同时,为了防止某个 Block 没有被任一节点存储的极端情况,DankSharding 对数据使用 Erasure Code 进行了编码,只需要一半的数据就可以进行完整还原。最后是对数据的检验过程,使用了 Verkle 树的结构与多项式承诺,实现了快速校验。
4.1.2 短期存储
对于主链的 DA,一种最为简单的数据处理方式就是对历史数据进行短期存储。本质上来说,区块链所起的是一个公示账本的作用,在全网共同见证的前提下实现对账本内容的更改,而并没有永久化存储的需求。以 Solana 为例,虽然其历史数据被同步到了 Arweave 上,但是主网节点只保留了近两日的交易数据。基于账户记录的公链上,每一时刻的历史数据保留了区块链上账户最终的状态,便足以为下一时刻的更改提供验证依据。而对于这个时间段之前数据有特殊需求的项目方,可以自己在其他去中心化公链上或者交由可信第三方进行存储。也就是说对于数据有额外需求的人,需要对历史数据存储进行付费。
4.2 第三方 DA
4.2.1 主链专用 DA:EthStorage
主链专用 DA:DA 层最重要的就是数据传输的安全性,这一点上安全性最高的便是主链的 DA。但是主链存储受到存储空间的限制以及资源的竞争,因而当网络数据量增长较快时,如果要实现对数据的长期存储,第三方 DA 会是一个更好的选择。第三方 DA 如果与主网有更高的兼容性,可以实现节点的共用,数据交互过程中也会具有更高的安全性,因而在考虑安全性的前提下,主链专用 DA 会存在巨大优势。以以太坊为例,主链专用 DA 的一个基本要求是可以与 EVM 兼容,保证和以太坊数据与合约间的互操作性,代表性的项目有 Topia、EthStorage 等。其中 EthStorage 是兼容性方面目前开发最完善的,因为除了 EVM 层面的兼容,其还专门设置了相关接口与 Remix、Hardhat 等以太坊开发工具对接,实现以太坊开发工具层面的兼容。
EthStorage:EthStorage 是一条独立于以太坊的公链,但其上运行的节点是以太坊节点的超集,也就是运行 EthStorage 的节点也可以同时运行以太坊,通过以太坊上的操作码便可以直接对 EthStorage 进行操作。EthStorage 的存储模式中,仅在以太坊主网保留少量元数据以供索引,本质上是为以太坊创建了一个去中心化的数据库。现阶段的解决方案中,EthStorage 通过在以太坊主网上部署一份 EthStorage Contract 实现以太坊主网与 EthStorage 的交互。如果以太坊要存入数据,则需要调用合约中的 put() 函数,输入参数是两个字节变量 key、data,其中 data 表示要存入的数据,而 key 则是其在以太坊网络中的标识,可以将其看成类似于 IPFS 中 CID 的存在。在 (key, data) 数据对成功存储到 EthStorage 网络后,EthStorage 会生成一个 kvIdx 返回给以太坊主网,并与以太坊上的 key 对应,这个值对应了数据在 EthStorage 上的存储地址。这样,原来可能需要存储大量数据的问题现在就变为了存储一个单一的 (key, kvIdx) 对,从而大大降低了以太坊主网的存储成本。如果需要对之前存储的数据进行调用,则需要使用 EthStorage 中的 get() 函数,并输入 key 参数,通过以太坊存储的 kvIdx 便可在 EthStorage 上对数据进行快速查找。

EthStorage 合约,图片来源:Kernel Ventures
在节点具体存储数据的方式上,EthStorage 借鉴了 Arweave 的模式。首先是对于来自 ETH 的大量 (k,v)对进行了分片,每个 Sharding 包含固定数量个(k,v)数据对,其中每个(k,v)对的具体大小也存在一个限制,通过这种方式保证后续对于矿工存储奖励过程中的工作量大小的公平性。对于奖励的发放,需要先对节点是否存储数据进行验证。这个过程中,EthStorage 会把一个 Sharding(TB 级别大小)分成非常多的 chunk,并在以太坊主网保留一个 Merkle root 以做验证。接着需要矿工首先提供一个 nonce 来与 EthStorage 上前一个区块的哈希通过随机算法生成出几个 chunk 的地址,矿工需要提供这几个 chunk 的数据以证明其确实存储了整个 Sharding。但这个 nonce 不能随意选取,否则节点会选取出合适的 nonce 只对应其存储的 chunk 从而通过验证,所以这个 nonce 必须使得其所生成的 chunk 经过混合与哈希后可以使难度值满足网络要求,并且只有第一个提交 nonce 和随机访问证明的节点才可以获取奖励。
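上述“由 nonce 派生 chunk 下标、再对取样数据混合哈希以满足难度”的随机访问证明思路,可用 Python 粗略示意如下(哈希方式、取样数量与难度参数均为演示假设,并非 EthStorage 的实际实现):

```python
import hashlib

def sample_chunks(prev_hash: bytes, nonce: int, num_chunks: int, k: int):
    """由 前一区块哈希 + nonce 伪随机派生出 k 个 chunk 下标,矿工必须出示这些 chunk 的数据。"""
    out = []
    for i in range(k):
        d = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big") + bytes([i])).digest()
        out.append(int.from_bytes(d, "big") % num_chunks)
    return out

def meets_difficulty(prev_hash: bytes, nonce: int, chunks, difficulty_bits: int) -> bool:
    """把取样到的 chunk 数据混合哈希,检查最高 difficulty_bits 位是否为 0(POW 难度)。"""
    mix = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big") + b"".join(chunks)).digest()
    return int.from_bytes(mix, "big") >> (256 - difficulty_bits) == 0

# 矿工侧:本地存有整个 Sharding 的所有 chunk,遍历 nonce 直至满足难度
storage = [b"chunk-%d" % i for i in range(64)]
prev_hash = hashlib.sha256(b"prev block").digest()
found = None
for nonce in range(100_000):
    idxs = sample_chunks(prev_hash, nonce, len(storage), 4)
    if meets_difficulty(prev_hash, nonce, [storage[i] for i in idxs], 8):
        found = nonce
        break
```

由于 nonce 决定被抽查的 chunk,矿工无法只存部分数据再挑选对自己有利的 nonce,这正是强制其存储完整 Sharding 的机制。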
4.2.2 模块化 DA:Celestia
区块链模块:现阶段 Layer1 公链所需执行的事务主要分为以下四个部分:(1)设计网络底层逻辑,按照某种方式选取验证节点,写入区块并为网络维护者分配奖励;(2)打包处理交易并发布相关事务;(3)对将要上链的交易进行验证并确定最终状态;(4)对区块链上的历史数据进行存储与维护。根据所完成功能的不同,我们可以将区块链划分为四个模块,即共识层、执行层、结算层、数据可用性层(DA 层)。
模块化区块链设计:很长一段时间里,这四个模块都被整合在一条公链上,这样的区块链称为单体区块链。这种形式更加稳定并便于维护,但也给单条公链带来了巨大的压力。实际运行过程中,这四个模块互相约束,并竞争公链有限的计算与存储资源。例如,要提高执行层的处理速度,相应就会给数据可用性层带来更大的存储压力;要保证执行层的安全性,就需要更复杂的验证机制,但这会拖慢交易处理的速度。因此,公链的开发往往面临着这四个模块间的权衡。为了突破这一公链性能提升的瓶颈,开发者提出了模块化区块链的方案。模块化区块链的核心思想是将上述四个模块中的一个或几个剥离出来,交给一条单独的公链实现。这样该条公链就可以仅专注于交易速度或存储能力的提升,突破之前由于短板效应对区块链整体性能造成的限制。
模块化 DA:将 DA 层从区块链业务中剥离出来、单独交由一条公链负责的方法,被认为是面对 Layer1 日益增长的历史数据的一种可行解决方案。现阶段这方面的探索仍处于早期阶段,目前最具代表性的项目是 Celestia。在具体的存储方式上,Celestia 借鉴了 Danksharding 的存储方法,也是将数据分成多个 Block,由各个节点抽取一部分进行存储,并同时使用 KZG 多项式承诺对数据完整性进行验证。同时,Celestia 使用了先进的二维 RS 纠删码,通过 k*k 矩阵的形式改写原始数据,最终只需要 25% 的部分便可以对原始数据实现恢复。然而,数据分片存储本质上只是将全网节点的存储压力在总数据量上乘以了一个系数,节点的存储压力与数据量仍然保持线性增长。随着 Layer1 对交易速度的不断改进,节点的存储压力某天仍可能达到一个无法接受的临界点。为了解决这一问题,Celestia 引入了 IPLD 组件进行处理:k*k 矩阵中的数据并不直接存储在 Celestia 上,而是存储在 LL-IPFS 网络中,仅在节点中保留该数据在 IPFS 上的 CID 码。当用户请求某份历史数据时,节点会向 IPLD 组件发送对应 CID,通过该 CID 在 IPFS 上对原始数据进行调用;如果 IPFS 上存在该数据,则会经由 IPLD 组件和节点返回;如果不存在,则无法返回数据。

Celestia 数据读取方式,图片来源:Celestia Core
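上文“节点只保留 CID、按需向 IPFS 调取数据”的内容寻址思路,可以用一个极简的 Python 模型示意(仅为演示;真实 IPFS 的 CID 还包含编码方式与哈希类型等前缀):

```python
import hashlib

class MockIPFS:
    """内容寻址存储的玩具模型:CID 由内容哈希导出,节点侧只需记录 CID 而非数据本身。"""

    def __init__(self):
        self._store = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # 演示中直接以 SHA-256 十六进制作为 CID
        self._store[cid] = data
        return cid

    def get(self, cid: str):
        # 数据不存在时返回 None,对应正文中的"无法返回数据"
        return self._store.get(cid)

ipfs = MockIPFS()
cid = ipfs.put(b"k*k matrix share")  # 节点把矩阵数据交给 IPFS,仅记下返回的 CID
```

内容寻址意味着取回的数据只要能重算出同一个 CID,其完整性便自动得到校验。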
Celestia:以 Celestia 为例,我们可以窥见模块化区块链在解决以太坊存储问题中的落地应用。Rollup 节点会将打包并验证好的交易数据发送给 Celestia 并在 Celestia 上对数据进行存储,这个过程中 Celestia 只负责对数据进行存储,而不会有过多的感知,最后根据存储空间的大小,Rollup 节点会向 Celestia 支付相应的 TIA 代币作为存储费用。Celestia 中的存储利用了类似于 EIP4844 中的 DAS 和纠删码,但对 EIP4844 中的多项式纠删码进行了升级,使用了二维 RS 纠删码,将存储安全进行了再次升级,仅需 25% 的数据分片便可以对整个交易数据进行还原。Celestia 本质上只是一条存储成本低廉的 POS 公链,如果要用来解决以太坊的历史数据存储问题,还需要许多其他具体模块与 Celestia 进行配合。比如在 Rollup 方面,Celestia 官网上大力推荐的一种 Rollup 模式是 Sovereign Rollup:不同于 Layer2 上常见的、仅对交易进行计算和验证(也就是只完成执行层操作)的 Rollup,Sovereign Rollup 包含了整个执行和结算的过程,这最小化了 Celestia 上对交易的处理。在 Celestia 整体安全性弱于以太坊的情况下,这种措施可以最大程度提升整体交易过程的安全性。而在以太坊主网调用 Celestia 数据的安全性保障方面,当下最主流的方案是量子引力桥智能合约:对于 Celestia 上存储的数据,其会生成一个 Merkle Root(数据可用性证明)并保存在以太坊主网的量子引力桥合约上。以太坊每次调用 Celestia 上的历史数据时,都会将其哈希结果与该 Merkle Root 进行比较,如果相符才表示其确实是真实的历史数据。
4.2.3 存储公链 DA
主链 DA 在技术原理上向存储公链借鉴了类似 Sharding 的许多技术;而在第三方 DA 中,有些更是直接借助存储公链完成了部分存储任务,比如 Celestia 中具体的交易数据就放在了 LL-IPFS 网络上。第三方 DA 的方案中,除了搭建一条单独的公链解决 Layer1 的存储问题之外,一种更直接的方式是让存储公链和 Layer1 对接,存储 Layer1 上庞大的历史数据。对于高性能区块链来说,历史数据的体量更为庞大,在全速运行的情况下,高性能公链 Solana 的数据量大小接近 4 PB,完全超出了普通节点的存储范围。Solana 选择的解决方案是将历史数据存储在去中心化存储网络 Arweave 上,只在主网节点上保留 2 日的数据用于验证。为了确保存储过程的安全性,Solana 与 Arweave 专门设计了一个存储桥协议 Solar Bridge。经 Solana 节点验证后的数据会同步到 Arweave 上并返回相应的 tag,只需通过该 tag,Solana 节点便可以对 Solana 区块链任意时刻的历史数据进行查看。在 Arweave 上,不需要全网节点保持数据一致性并以此作为参与网络运行的门槛,而是采取了奖励存储的方式。首先,Arweave 并没有采用传统链结构构建区块,而更类似一种图的结构:在 Arweave 中,一个新的区块不仅会指向前一个区块,还会随机指向一个已生成的区块 Recall Block。Recall Block 的具体位置由前一区块与其区块高度的哈希结果决定,在前一区块被挖出之前,Recall Block 的位置是未知的。而在生成新区块的过程中,需要节点具有 Recall Block 的数据,以使用 POW 机制计算规定难度的哈希,只有最先计算出符合难度的哈希的矿工才可以获得奖励,这鼓励了矿工存储尽可能多的历史数据。同时,存储某个历史区块的节点越少,当 Recall Block 落在该区块时,矿工在生成符合难度的 nonce 时会有更少的竞争对手,这鼓励矿工存储网络中备份较少的区块。最后,为了保证节点在 Arweave 中对数据做永久性存储,Arweave 引入了 WildFire 的节点评分机制:节点间会倾向于与能较快提供更多历史数据的节点通信,而评分等级较低的节点往往无法第一时间获得最新的区块与交易数据,从而无法在 POW 的竞争中抢占先机。

Arweave 区块构建方式,图片来源:Arweave Yellow-Paper
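Recall Block“由前一区块哈希与区块高度共同决定、出块前不可预知”的机制,可用 Python 粗略示意如下(哈希方式与参数均为演示假设,并非 Arweave 的实际实现):

```python
import hashlib

def recall_index(prev_hash: bytes, height: int, num_blocks: int) -> int:
    """Recall Block 的位置 = hash(前一区块哈希 + 区块高度) 对历史区块数取余,
    在前一区块被挖出之前无法预知,促使矿工尽可能多地保存历史区块。"""
    d = hashlib.sha256(prev_hash + height.to_bytes(8, "big")).digest()
    return int.from_bytes(d, "big") % num_blocks

# 只存储了部分历史区块的矿工,仅当本轮 Recall Block 恰好落在其存储集合内时才能参与挖矿
stored = {0, 3, 7, 11}
prev_hash = hashlib.sha256(b"block-100").digest()
idx = recall_index(prev_hash, 101, 100)
can_mine = idx in stored
```

存储的历史区块越多,矿工能参与挖矿的轮次就越多,这正是 Arweave 用奖励替代全网一致性的核心设计。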
5. 综合对比
接下来,我们将从 DA 性能指标的四个维度出发,分别对 5 种存储方案的优劣进行比较。
安全性:数据安全问题最大的来源是数据传输过程中导致的遗失,以及来自不诚信节点的恶意篡改。跨链过程中,由于两条公链的独立性与状态不共享,是数据传输安全的重灾区;此外,现阶段需要专门 DA 层的 Layer1 往往有强大的共识群体,自身安全性会远高于普通存储公链,因而主链 DA 的方案具有更高的安全性。在确保了数据传输安全后,接下来就是要保证调用数据的安全。如果只考虑用来验证交易的短期历史数据,同一份数据在临时存储的网络中得到了全网共同的备份,而在类 DankSharding 的方案中,数据的平均备份数量只有全网节点数的 1/N。更多的数据冗余使得数据更不容易丢失,同时也可以在验证时提供更多的参考样本,因而临时存储相对会有更高的数据安全性。而在第三方 DA 的方案中,主链专用 DA 由于和主链共用节点,跨链过程中数据可以通过这些中继节点直接传输,因而也会有比其他第三方 DA 方案相对更高的安全性。
存储成本:对存储成本影响最大的因素是数据的冗余数量。在主链 DA 的短期存储方案中,使用全网节点数据同步的形式进行存储,任何一份新存储的数据都需要在全网节点中备份,具有最高的存储成本。高昂的存储成本反过来也决定了,在高 TPS 的网络中,该方式只适合做临时存储。其次是 Sharding 的存储方式,包括主链上的 Sharding 以及第三方 DA 中的 Sharding。由于主链往往有更多的节点,相应地一个 Block 也会有更多的备份,所以主链 Sharding 方案的成本更高。而存储成本最低的则是采取奖励存储方式的存储公链 DA,此方案下数据冗余的数量往往在一个固定的常数附近波动;同时,存储公链 DA 中还引入了动态调节机制,通过提高奖励吸引节点存储备份较少的数据,以确保数据安全。
数据读取速度:数据的读取速度主要受到数据在存储空间中的存储位置、数据索引路径以及数据在节点中的分布的影响。其中,数据在节点的存储位置对速度的影响更大,因为将数据存储在内存或 SSD 中可能导致读取速度相差数十倍。存储公链 DA 多采取 SSD 存储,因为该链上的负载不仅包括 DA 层的数据,还包括用户上传的视频、图片等高内存占用的个人数据,如果网络不使用 SSD 作为存储空间,难以承载巨大的存储压力并满足长期存储的需求。其次,对于使用内存态存储数据的第三方 DA 和主链 DA:第三方 DA 首先需要在主链中搜索相应的索引数据,然后将该索引数据跨链传输到第三方 DA,并通过存储桥返回数据;相比之下,主链 DA 可以直接从节点查询数据,因此具有更快的数据检索速度。最后,在主链 DA 内部,采用 Sharding 方式需要从多个节点调用 Block 并对原始数据进行还原,因此相对于不分片的短期存储方式而言,速度会较慢。
DA 层通用性:主链 DA 的通用性接近于零,因为不可能将存储空间不足的公链上的数据转移到另一条存储空间不足的公链上。在第三方 DA 中,方案的通用性与其同特定主链的兼容性是一对矛盾的指标。例如,对于专为某条主链设计的主链专用 DA 方案,其在节点类型和网络共识层面进行了大量改进以适配该公链,因而在与其他公链通信时,这些改进会起到巨大的阻碍作用。而在第三方 DA 内部,与模块化 DA 相比,存储公链 DA 在通用性方面表现更好:存储公链 DA 具有更庞大的开发者社区和更多的拓展设施,可以适应不同公链的情况;同时,存储公链 DA 对数据的获取方式更多是主动抓取,而不是被动接收来自其他公链传输的信息,因此它可以按自己的方式对数据进行编码,实现数据流的标准化存储,便于管理来自不同主链的数据信息,并提高存储效率。

存储方案性能比较,图片来源:Kernel Ventures
6. 总结
现阶段的区块链正在经历从 Crypto 向更具包容性的 Web3 转换的过程中,这个过程中带来的不仅是区块链上项目的丰富。为了在 Layer1 上容纳如此多项目的同时运行,同时保证 Gamefi 和 Socialfi 项目的体验,以以太坊为代表的 Layer1 采取了 Rollup 和 Blobs 等方式来提高 TPS。而新生区块链中,高性能区块链的数量也是不断增长。但是更高的 TPS 不仅意味着更高的性能,也意味着网络中更大的存储压力。对于海量的历史数据,现阶段提出了主链和基于第三方的多种 DA 方式,以适应链上存储压力的增长。改进方式各有利弊,在不同情境下有不同适用性。
以支付为主的区块链对于历史数据的安全性有着极高的要求,而不追求特别高的 TPS。如果这类公链还处于筹备阶段,可以采取类 DankSharding 的存储方式,在保证安全性的同时也可以实现存储容量的巨大提升。但如果是比特币这种已经成型并有大量节点的公链,在共识层贸然进行改进存在巨大风险,因而可以采取链外存储中安全性较高的主链专用 DA 来兼顾安全性与存储问题。但值得注意的是,区块链的功能并不是一成不变而是不断变化的。比如早期的以太坊的功能主要也局限于支付以及使用智能合约对资产和交易进行简单的自动化处理,但是随着区块链版图的不断拓展,以太坊上逐渐加入了各种 Socialfi 与 Defi 项目,使以太坊向着更加综合性的方向发展。而最近伴随着比特币上铭文生态的爆发,比特币网络的交易手续费自 8 月以来激增了近 20 倍,背后反映的是现阶段比特币网络的交易速度无法满足交易需求,交易者只能拉高手续费使交易尽快得到处理。现在,比特币社区需要做出一个 trade-off,是接受高昂的手续费以及缓慢的交易速度,还是降低网络安全性以提高交易速度但违背支付系统的初衷。如果比特币社区选择了后者,那么面对增长的数据压力,相应的存储方案也需要做出调整。

比特币主网交易费用波动,图片来源:OKLINK
而对于综合功能的公链,其对 TPS 有着更高的追求,历史数据的增长更加巨大,采取类 DankSharding 的方案长期来看难以适应 TPS 的快速增长。因此,较为合适的方式是将数据迁移到第三方 DA 进行存储。其中,主链专用 DA 具有最高的兼容性,如果只考虑单条公链的存储问题,可能更具优势。但是在 Layer1 公链百花齐放的今天,跨链资产转移与数据交互也成为区块链社区的普遍追求。如果考虑到整个区块链生态的长期发展,将不同公链的历史数据存储在同一条公链上可以消除许多数据交换与验证过程中的安全问题,因此,模块化 DA 和存储公链 DA 的方式可能是一个更好的选择。在通用性接近的前提下,模块化 DA 专注于提供区块链 DA 层的服务,引入了更精细化的索引数据管理历史数据,可以对不同公链数据进行一个合理归类,与存储公链相比具有更多优势。然而,上述方案并未考虑在已有公链上进行共识层调整的成本,这个过程具有极高的风险性,一旦出现问题可能会导致系统性的漏洞,使得公链失去社区共识。因此,如果是区块链扩容过程中的过渡方案,最简单的主链临时存储可能更合适。最后,以上讨论都基于实际运行过程中的性能出发,但如果某条公链的目标是发展自身生态,吸引更多项目方和参与者,也有可能会倾向于受到自身基金会扶持和资助的项目。比如在同等甚至总体性能略低于存储公链存储方案的情况下,以太坊社区也会倾向于 EthStorage 这类以太坊基金会支持的 Layer2 项目,以持续发展以太坊生态。
总而言之,当今区块链的功能越来越复杂,也带来了更大的存储空间需求。在 Layer1 验证节点足够多的情况下,历史数据并不需要全网所有节点共同备份,只需要备份数量达到某个数值后便可保证相对的安全性。与此同时,公链的分工也变得越来越细致,Layer1 负责共识和执行,Rollup 负责计算和验证,再使用单独的一条区块链进行数据存储。每个部分都可以专注于某一功能,不受其他部分性能的限制。然而,具体存储多少数量或让多少比例的节点存储历史数据才能实现安全性与效率的平衡,以及如何保证不同区块链之间的安全互操作,这是需要区块链开发者进行思考和不断完善的问题。对于投资者而言,可以关注以太坊上的主链专用 DA 项目,因为现阶段以太坊已有足够多的支持者,不需要再借助其他社区扩大自己的影响力。更多的需要是完善与发展自己的社区,吸引更多项目落地以太坊生态。但是对处于追赶者地位的公链,比如 Solana,Aptos 来说,单链本身没有那么完善的生态,因而可能更倾向于联合其他社区的力量,搭建一个庞大的跨链生态以扩大影响力。因而对于新兴的 Layer1 ,通用的第三方 DA 值得更多的关注。
Kernel Ventures是一个由研究和开发社区驱动的加密风险投资基金,拥有超过70个早期投资,专注于基础设施、中间件、dApps,尤其是ZK、Rollup、DEX、模块化区块链,以及将搭载未来数十亿加密用户的垂直领域,如账户抽象、数据可用性、可扩展性等。在过去的七年里,我们一直致力于支持世界各地的核心开发社区和大学区块链协会的发展。
参考文献
Celestia:模块化区块链的星辰大海: https://foresightnews.pro/article/detail/15497
DHT usage and future work: https://github.com/celestiaorg/celestia-node/issues/11
Celestia-core: https://github.com/celestiaorg/celestia-core
Solana labs: https://github.com/solana-labs/solana?source=post_page-----cf47a61a9274--------------------------------
Announcing The SOLAR Bridge: https://medium.com/solana-labs/announcing-the-solar-bridge-c90718a49fa2
leveldb-handbook: https://leveldb-handbook.readthedocs.io/zh/latest/sstable.html
Kuszmaul J. Verkle Trees, 2019: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf
Arweave 官网: https://www.arweave.org/
Arweave 黄皮书: https://www.arweave.org/yellow-paper.pdf
Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design
Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
In the early stages of blockchain, all nodes maintained a consistent copy of the data to ensure security and decentralization. However, as the blockchain ecosystem develops, storage pressure keeps increasing, leading to a trend of centralization in node operation. Layer1 therefore urgently needs to solve the storage cost problem brought by TPS growth.
Faced with this problem, developers should propose a solution that takes security, storage cost, data reading speed, and DA layer versatility fully into account.
In the process of solving this problem, many new technologies and ideas have emerged, including Sharding, DAS, Verkle Tree, and DA intermediate components. They try to optimize the storage scheme of the DA layer by reducing data redundancy and improving data validation efficiency.
Based on where the data is stored, DA solutions are broadly categorized into two types: main-chain DAs and third-party DAs. Main-chain DAs reduce the storage pressure on nodes through regular data cleansing and sharded data storage, while third-party DAs are designed to serve storage needs and have reasonable solutions for large amounts of data. Accordingly, third-party DAs mainly trade off between single-chain compatibility and multi-chain compatibility, yielding three kinds of solutions: main-chain-specific DAs, modularized DAs, and storage-public-chain DAs.
Payment-type public chains have very high requirements for historical data security and are thus suited to using the main chain as the DA layer. For public chains that have been running for a long time with a large number of miners, it is more suitable to adopt a third-party DA that does not involve consensus-layer changes while providing relatively high security. For comprehensive public chains, a main-chain-specific DA with larger data capacity, lower cost, and adequate security is more suitable; considering cross-chain demand, modular DA is also a good option.
Overall, blockchain is moving towards reducing data redundancy as well as a multi-chain division of labor.
1. Background
Blockchain, as a distributed ledger, needs to store a copy of the historical data on every node to ensure that data storage is secure and sufficiently decentralized. Since the correctness of each state change depends on the previous state (the source of the transaction), in order to ensure the correctness of transactions, a blockchain should in principle store the entire history of transactions from the first transaction to the current one. Taking Ethereum as an example, even assuming an average block size of 20 kb, the total size of current Ethereum blocks has reached 370 GB. A full node, in addition to the blocks themselves, also has to record states and transaction receipts. Including this part, the total storage of a single node has exceeded 1 TB, which gradually centralizes node operation.

Source: Etherscan
The recent Cancun upgrade of Ethereum aims to increase Ethereum's TPS to near 1000, at which point Ethereum's annual storage growth will exceed the sum of its current storage. In high-performance public chains, the transaction speed of tens of thousands of TPS may bring hundreds of GB of data addition per day. The common data redundancy of all nodes on the network obviously can not adapt to such storage pressure. So, Layer1 must find a suitable solution to balance the TPS growth and the storage cost of the nodes.
2. Performance Indicators of DA
2.1 Safety
Compared with a database or a linked list, blockchain's immutability comes from the fact that newly generated data can be verified against historical data; thus, ensuring the security of historical data is the first issue to consider in DA layer storage. To judge the data security of a blockchain system, we often analyze the amount of data redundancy and the method of checking data availability.
Amount of redundancy: Redundancy of data in a blockchain system mainly plays the following roles. First, the more redundancy in the network, the more samples a verifier can reference when it needs to check the account state in a historical block, which helps the node select the data recorded by the majority of nodes. In traditional databases, since data is stored as key-value pairs on a single node, changing historical data only needs to be done at that node, so the cost of an attack is very low; theoretically, the more redundancy there is, the more trustworthy the data will be. Moreover, the more nodes store the data, the less likely it is to be lost. This can be compared to the centralized servers that store Web2 games: once all the backend servers are shut down, the service closes completely. But more redundancy is not always better, because each copy requires additional storage space, and excessive redundancy puts too much storage pressure on the system. A good DA layer should choose a suitable redundancy scheme to strike a balance between security and storage efficiency.
Data availability checking: The amount of redundancy ensures enough records of the data in the network, but the data to be used must still be checked for accuracy and completeness. Current blockchains commonly use cryptographic commitment algorithms as the verification method: a small cryptographic commitment, obtained by mixing the transaction data, is kept for the whole network to record. To test the authenticity of historical data, one recovers the commitment from the data; if the recovered commitment is identical to the recorded one, the verification passes. Commonly used cryptographic verification algorithms include Merkle Root and Verkle Root. High-security data availability verification algorithms can quickly verify historical data with as little auxiliary data as possible.
2.2 Storage Cost
After ensuring basic security, the next goal of the DA layer is to reduce cost and increase efficiency. The first step is to reduce storage cost, i.e., the memory consumption caused by storing data of a unit size, regardless of differences in hardware performance. At this stage, the main ways to reduce storage cost in blockchain are to adopt sharding technology and to use reward-based storage, which lowers the number of data backups while keeping the data effectively stored. However, it is not difficult to see from these improvements that there is a trade-off between storage cost and data security: reducing storage occupancy often means a decrease in security. Therefore, an excellent DA layer needs to strike a balance between storage cost and data security. In addition, if the DA layer is a separate public chain, it also needs to reduce cost by minimizing the intermediate steps of data exchange, since every transit step leaves index data behind for subsequent retrieval; the longer the calling process, the more index data is left, increasing the storage cost. Finally, the cost of storing data is directly linked to the persistence of the data: in general, the higher the storage cost, the harder it is for the public chain to store data persistently.
2.3 Data Reading Speed
With cost reduction achieved, the next step is efficiency: the ability to quickly retrieve data from the DA layer when needed. This process involves two steps. The first is locating the nodes that store the data; this mainly concerns public chains that have not achieved network-wide data consistency, since if the chain synchronizes data across all nodes, the time cost of this step can be ignored. Second, in the mainstream blockchain systems at this stage, including Bitcoin, Ethereum, and Filecoin, nodes store data in a LevelDB database. In LevelDB, data is stored in three forms. Freshly written data is stored in Memtable files until the Memtable is full, at which point the file type changes from Memtable to Immutable Memtable. Both types reside in memory, but Immutable Memtable files are read-only. The hot storage used in the IPFS network keeps data in this part of the stack so that it can be read quickly from memory when called; however, an ordinary node has only gigabytes of such memory, it fills up easily, and when a node goes down, the data in memory is lost permanently. For persistent storage, data must be written as SST files to a solid-state drive (SSD), but reading it then requires loading it back into memory first, which greatly slows data indexing. Finally, in a system with sharded storage, restoring data requires sending requests to multiple nodes and reassembling the pieces, a process that further slows reading.

Source: Leveldb-handbook
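The tiered LevelDB read path described above can be sketched as a lookup that checks the writable Memtable first, then the frozen Immutable Memtables, and only then the SST files on disk. The following is a minimal toy model, not LevelDB's actual API; all class and method names are illustrative:

```python
class TieredStore:
    """Toy model of LevelDB's read path: Memtable -> Immutable Memtable -> SST files."""
    MEMTABLE_LIMIT = 2  # flush threshold, tiny for illustration

    def __init__(self):
        self.memtable = {}             # hot, writable, in memory
        self.immutable_memtables = []  # frozen, read-only, still in memory
        self.sst = {}                  # persistent "on-disk" storage (slowest path)

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.MEMTABLE_LIMIT:
            # Memtable is full: freeze it into a read-only Immutable Memtable.
            self.immutable_memtables.append(self.memtable)
            self.memtable = {}

    def flush_to_disk(self):
        # Background compaction: write frozen tables out as SST files.
        for table in self.immutable_memtables:
            self.sst.update(table)
        self.immutable_memtables = []

    def get(self, key):
        # Newest data first: memtable, then frozen tables (newest first), then disk.
        if key in self.memtable:
            return self.memtable[key]
        for table in reversed(self.immutable_memtables):
            if key in table:
                return table[key]
        return self.sst.get(key)  # simulated slow path: load from SSD into memory


store = TieredStore()
store.put("a", 1)
store.put("b", 2)   # hitting the limit freezes the Memtable
store.put("c", 3)
store.flush_to_disk()
```

The sketch also shows why a crash loses in-memory data: anything still in `memtable` or `immutable_memtables` that has not been flushed to `sst` would disappear with the process.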
2.4 DA Layer Generalization
With the development of DeFi and the various problems of CEXs, users' demand for cross-chain trading of decentralized assets keeps growing. Whether the cross-chain mechanism is hash locking, a notary scheme, or a relay chain, it cannot avoid simultaneously confirming historical data on both chains. The crux of the problem is that the data on the two chains is isolated: different decentralized systems cannot communicate directly. A solution has therefore been proposed that changes how the DA layer stores data: the historical data of multiple public chains is stored on a single trusted public chain, and verification only requires calling the data on that chain. This requires the DA layer to be able to establish secure communication with different types of public chains, which is to say, the DA layer must have good versatility.
3. Techniques Concerning DA
3.1 Sharding
In traditional distributed systems, a file is not stored on a single node in its complete form; instead, the original data is divided into multiple blocks and distributed across the nodes. A block is also usually not stored on only one node but leaves an appropriate number of backups on other nodes; in existing mainstream distributed systems, this number of backups is usually set to 2. This sharding mechanism reduces the storage pressure on individual nodes, expands the total capacity of the system to the sum of all nodes' storage capacity, and ensures storage security through appropriate data redundancy. The sharding schemes adopted in blockchains are broadly similar to those of traditional distributed systems, but differ in some details. First, since nodes in a blockchain are untrusted by default, sharding requires a sufficiently large number of data backups for later judgments about data authenticity, so the number of backups needs to be far more than 2. Ideally, in a blockchain that adopts this storage scheme, if the total number of validating nodes is T and the number of shards is N, the number of backups should be T/N. Second, regarding the storage procedure for a block: a traditional distributed system with few nodes often assigns multiple data blocks to one node. The data is first mapped onto a hash ring by a consistent-hashing algorithm, and each node then stores the blocks whose numbers fall within its assigned range on the ring; it is acceptable for a particular node to have no storage task in a given round. On a blockchain, by contrast, storing a block is no longer a random but an inevitable event for every node: each node randomly selects one block to store, with the selection made by hashing the data mixed with the node's own information and taking the result modulo the number of shards.
Assuming the data is divided into N blocks, each node actually stores only 1/N of it. By setting N appropriately, a balance can be struck between TPS growth and node storage pressure.
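The hash-modulo selection rule described above can be sketched in a few lines. This is an illustrative model, not any chain's actual assignment function; with T nodes and N shards, each shard ends up held by roughly T/N nodes:

```python
import hashlib

def assigned_shard(data: bytes, node_id: str, num_shards: int) -> int:
    """Each node stores the shard whose index is H(data || node_info) mod N,
    so storing a block is a deterministic duty per node, not an optional task."""
    digest = hashlib.sha256(data + node_id.encode()).digest()
    return int.from_bytes(digest, "big") % num_shards

# With 1000 nodes and 4 shards, each shard is backed up by about 250 nodes.
data = b"block payload"
N = 4
nodes = [f"node-{i}" for i in range(1000)]
counts = [0] * N
for node in nodes:
    counts[assigned_shard(data, node, N)] += 1
```

Because the node's own identity enters the hash, different nodes spread roughly evenly over the shards, while each individual node's assignment stays deterministic and verifiable by its peers.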

Source: Kernel Ventures
3.2 DAS (Data Availability Sampling)
DAS technology is a further optimization of sharded storage. In sharding, because nodes store blocks by simple random selection, a block may end up lost. Second, for sharded data, confirming its authenticity and integrity during reconstruction is also very important. In DAS, these two problems are solved by erasure codes and KZG polynomial commitments.
Erasure code: Given the large number of validating nodes in Ethereum, the event that some block is stored by no node at all is possible, though low-probability. To mitigate this threat of missing storage, instead of simply slicing the raw data into blocks, this scheme maps the raw data to the coefficients of a degree-n polynomial, takes 2n points on the polynomial, and lets nodes randomly choose one of them to store. For a degree-n polynomial, only n+1 points are needed for reconstruction, so the nodes only need to retain half of the blocks for the original data to be recoverable. The erasure code thus improves both the security of data storage and the network's ability to recover data.
KZG polynomial commitment: A very important aspect of data storage is verifying data authenticity. In networks that do not use erasure codes, various methods can be used for verification; but if erasure codes are introduced to improve data security, it is more appropriate to use the KZG polynomial commitment, which can verify the content of a single block directly in polynomial form, eliminating the need to reduce the polynomial back to binary data. The overall form of verification is similar to that of a Merkle tree, but it requires no specific path-node data: the KZG root and the block data alone suffice to verify the block's authenticity.
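The polynomial erasure-coding idea can be demonstrated end to end: treat the data as coefficients of a degree-n polynomial, publish 2n samples, discard half of them, and recover the data from the remainder by Lagrange interpolation. This toy sketch works over exact rationals for clarity; real systems operate over a finite field:

```python
from fractions import Fraction

def evaluate(coeffs, x):
    """Evaluate the polynomial with the given coefficients at x (Horner's rule)."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def encode(coeffs, num_points):
    """Treat the data as polynomial coefficients and publish num_points samples."""
    return [(x, evaluate(coeffs, x)) for x in range(1, num_points + 1)]

def recover(points, degree):
    """Lagrange-interpolate the coefficients back from any degree+1 samples."""
    points = points[: degree + 1]
    coeffs = [Fraction(0)] * (degree + 1)
    for i, (xi, yi) in enumerate(points):
        # Build the i-th Lagrange basis polynomial as a coefficient list.
        basis = [Fraction(1)]
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            basis = [Fraction(0)] + basis      # multiply basis by x ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]  # ... minus xj * basis
            denom *= xi - xj
        for k in range(len(basis)):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

data = [Fraction(7), Fraction(3), Fraction(5)]  # 3 coefficients -> degree-2 polynomial
samples = encode(data, 6)        # publish 2n = 6 samples
surviving = samples[::2]         # simulate losing every other sample
restored = recover(surviving, degree=2)
```

Any 3 of the 6 published samples determine the degree-2 polynomial uniquely, which is exactly why the network only needs half of the blocks to survive.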
3.3 Data Validation Method in DA
Data validation ensures that the data retrieved from a node is accurate and complete. To minimize the amount of data and the computational cost required during validation, the DA layer now uses tree structures as the mainstream validation method. The simplest form is the Merkle tree, which records data as a complete binary tree: only a Merkle root and the hash values of the subtrees on the other side of the node's path need to be kept to verify the data, giving a verification time complexity of O(log N) (log N here defaults to log2(N)). Although this greatly simplifies the validation process, the amount of data needed for validation still grows with the overall data volume. To solve the problem of growing validation volume, another validation method, the Verkle tree, has been proposed at this stage: each node in a Verkle tree stores not only its value but also a vector commitment, so the authenticity of the data can be quickly validated using the original node's value and its commitment proof, without calling the values of the sibling nodes. This makes the number of computations per verification depend only on the depth of the Verkle tree, a fixed constant, thus greatly accelerating verification. However, computing the vector commitment requires the participation of all sibling nodes in the same layer, which greatly increases the cost of writing and changing data. For data such as historical data, which is permanently stored, tamper-proof, and read-only, the Verkle tree is therefore extremely suitable. In addition, both Merkle and Verkle trees have K-ary variants; the mechanism is similar, with only the number of subtrees under each node changed, and the specific performance comparison can be seen in the table below.

Source: Verkle Trees
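The Merkle-path verification described above, one leaf plus O(log N) sibling hashes recomputed up to the root, can be sketched as follows (a minimal version assuming a power-of-two number of leaves; all function names are illustrative):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Build a Merkle root; assumes a power-of-two number of leaves."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hash on each level of the path from leaf `index` to the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        sibling = index ^ 1                      # the other child of this node's parent
        path.append((level[sibling], sibling % 2))  # (hash, 1 if sibling is right child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    """Recompute the root from one leaf plus its O(log N) sibling hashes."""
    acc = h(leaf)
    for sibling_hash, sibling_is_right in path:
        if sibling_is_right:
            acc = h(acc + sibling_hash)
        else:
            acc = h(sibling_hash + acc)
    return acc == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 2)
ok = verify(b"tx2", proof, root)
```

For 4 leaves the proof holds only 2 hashes; the logarithmic proof size is exactly the property the Verkle tree then improves on by making proof size independent of N.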
3.4 Generic DA Middleware
The continuous expansion of the blockchain ecosystem has brought an ever-increasing number of public chains. Because each public chain has advantages and irreplaceability in its own field, it is impossible for Layer1 chains to converge on a single chain in the short term. At the same time, with the development of DeFi and the problems of CEXs, users' demand for decentralized cross-chain asset trading keeps growing. DA-layer multi-chain data storage, which can eliminate the security problems of cross-chain data interaction, has therefore gained more and more attention. To accept historical data from different public chains, however, the DA layer needs to provide decentralized protocols for standardized storage and validation of the data flow. For example, kvye, a storage middleware based on Arweave, actively crawls data from major chains and stores data from all of them in a standardized form on Arweave, minimizing differences in the data transmission process. By comparison, a Layer2 that provides DA-layer storage specifically for one public chain interacts with it through internal shared nodes; although this lowers interaction cost and improves security, it is more limited and can serve only that particular chain.
4. Storage Methods of DA
4.1 Main Chain DA
4.1.1 DankSharding-like
There is no definitive name for this type of storage scheme, but its most prominent example is DankSharding on Ethereum, so in this paper we use the term DankSharding-like to refer to it. This type of scheme mainly uses the two DA storage techniques discussed above, sharding and DAS. First, the data is divided into an appropriate number of shares by sharding; then each node extracts one data block for storage in the DAS manner. If there are enough nodes in the whole network, a larger shard count N can be chosen, so that the storage pressure on each node is only 1/N of the original, achieving an N-fold expansion of overall storage space. At the same time, to prevent the extreme case where some block is stored by no node at all, DankSharding encodes the data with an erasure code, so that only half of the data is needed for complete restoration. Finally, the data is validated using a Verkle tree structure with polynomial commitments for fast verification.
4.1.2 Temporary Storage
For main-chain DA, one of the simplest ways to handle data is to store historical data only for a short period. Essentially, the blockchain acts as a public ledger: changes to the ledger are made in view of the entire network, and there is no need for permanent storage. In the case of Solana, for example, although its historical data is synchronized to Arweave, mainnet nodes retain only the transaction data of the last two days. On a public chain based on account records, the historical data at each moment retains the final state of the accounts on the blockchain, which is sufficient as a basis for verifying the changes of the next moment. Those with special needs for data older than this window can store it on other decentralized public chains or hand it over to a trusted third party. In other words, those with additional data needs pay for historical data storage.
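The Solana-style retention window can be modeled as a node that keeps only the most recent blocks locally and offloads everything older to an external archive (the archive here is a stand-in for a storage chain such as Arweave; all names are illustrative):

```python
from collections import deque

class PruningNode:
    """Keep only the last `window` blocks locally; older blocks go to an archive."""

    def __init__(self, window: int, archive: dict):
        self.window = window
        self.recent = deque()    # (height, block) pairs, newest last
        self.archive = archive   # stand-in for a decentralized storage network

    def append_block(self, height: int, block: str):
        self.recent.append((height, block))
        while len(self.recent) > self.window:
            old_height, old_block = self.recent.popleft()
            self.archive[old_height] = old_block  # offload before pruning locally

    def get_block(self, height: int):
        for stored_height, block in self.recent:
            if stored_height == height:
                return block
        # Anything outside the local window must be fetched from the archive.
        return self.archive.get(height)

archive = {}
node = PruningNode(window=2, archive=archive)
for height in range(5):
    node.append_block(height, f"block-{height}")
```

The local state stays constant-sized no matter how long the chain runs, which is exactly the trade: fast verification of recent changes on the node, and a slower external lookup for anyone who needs deep history.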
4.2 Third Party DA
4.2.1 DA for Main Chain: EthStorage
DA for main chain: The most important requirement for the DA layer is the security of data transmission, and the DA with the highest security is the main chain's own DA. However, main-chain storage is limited by storage space and resource competition, so when the network's data volume grows quickly, a third-party DA is a better choice for long-term data storage. The higher a third-party DA's compatibility with the main network, the more nodes it can share with it, and the more secure the data interaction process will be. Under the premise of security, a dedicated main-chain DA therefore has a huge advantage. Taking Ethereum as an example, a basic requirement for an Ethereum-dedicated DA is EVM compatibility, to ensure interoperability with Ethereum data and contracts; representative projects include Topia, EthStorage, and so on. Among them, EthStorage is the most well-developed in terms of compatibility: beyond EVM compatibility, it also provides interfaces to Ethereum development tools such as Remix and Hardhat.
EthStorage: EthStorage is a public chain independent of Ethereum, but the nodes running it are a superset of Ethereum nodes, meaning a node running EthStorage can run Ethereum at the same time. EthStorage can also be operated directly through opcodes on Ethereum. EthStorage's storage model retains only a small amount of metadata for indexing on the Ethereum mainnet, essentially creating a decentralized database for Ethereum. In the current solution, EthStorage deploys an EthStorage Contract on the Ethereum mainnet to handle the interaction between Ethereum and EthStorage.
To deposit data, Ethereum calls the put() function in the contract with two byte-array parameters, key and data, where data is the data to be stored and key is its identity on the Ethereum network, comparable to a CID in IPFS. After the (key, data) pair is successfully stored in the EthStorage network, EthStorage generates a kvldx and returns it to the Ethereum mainnet; this kvldx corresponds to the key on Ethereum and to the storage address of the data on EthStorage. The original problem of storing a large amount of data thus reduces to storing a single (key, kvldx) pair, which greatly lowers the storage cost of the Ethereum mainnet. To retrieve previously stored data, the get() function in EthStorage is called with the key parameter, and a quick lookup on EthStorage is performed using the kvldx stored on Ethereum.
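The put()/get() flow can be mocked as two cooperating key-value stores. The function names follow the text; everything else, in particular the kvldx format (a content hash is used as a stand-in) and the class names, is an assumption for illustration:

```python
import hashlib

class EthereumMainnet:
    """Holds only the small (key -> kvldx) index, never the payload itself."""
    def __init__(self):
        self.index = {}

class EthStorageNetwork:
    """Holds the actual payloads, addressed by kvldx."""
    def __init__(self):
        self.blobs = {}

    def store(self, key: bytes, data: bytes) -> bytes:
        # kvldx returned to Ethereum; its real format is EthStorage-internal,
        # so a content hash serves purely as a stand-in here.
        kvldx = hashlib.sha256(key + data).digest()
        self.blobs[kvldx] = data
        return kvldx

def put(mainnet, storage, key: bytes, data: bytes):
    """Deposit data: Ethereum keeps only the (key, kvldx) pair."""
    mainnet.index[key] = storage.store(key, data)

def get(mainnet, storage, key: bytes) -> bytes:
    """Retrieve data: look up kvldx on Ethereum, then fetch the blob from EthStorage."""
    kvldx = mainnet.index[key]
    return storage.blobs[kvldx]

mainnet = EthereumMainnet()
storage = EthStorageNetwork()
put(mainnet, storage, b"profile-42", b"a large payload ..." * 100)
```

Whatever the payload size, the mainnet-side cost stays one small fixed-size entry per key, which is the whole point of the indirection.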

Source: Kernel Ventures
In terms of how nodes store data, EthStorage borrows from the Arweave model. First, the large number of (key, value) pairs coming from Ethereum are sharded; each shard contains a fixed number of (key, value) pairs, and each pair has a size limit to ensure fairness of workload when storage rewards are issued to miners. To issue rewards, the network must first verify that a node actually stores the data. In this process, EthStorage divides a shard (TB-sized) into many chunks and keeps a Merkle root on the Ethereum mainnet for verification. A miner must then provide a nonce which, combined with the hash of the previous block on EthStorage, generates a few chunks through a random algorithm, and the miner must supply the data of these chunks to prove that it stores the whole shard. The nonce cannot be chosen arbitrarily, otherwise a node could pick a nonce corresponding only to the chunks it happens to store and pass verification; the nonce must be such that the hash of the generated chunks, after mixing, meets the network's difficulty requirement, and only the first node to submit a valid nonce and random-access proof receives the reward.
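The nonce-and-chunks game described above can be sketched as a miniature proof-of-storage loop: the nonce pseudo-randomly selects chunks, the miner must actually hold those chunks to answer, and the combined hash must meet a difficulty target. This is a simplified illustration, not EthStorage's real algorithm; chunk counts, difficulty, and hashing details are assumptions:

```python
import hashlib

def chunk_indices(prev_hash: bytes, nonce: int, num_chunks: int, k: int = 2):
    """Derive which chunks a miner must reveal from the previous block hash and nonce."""
    out = []
    for i in range(k):
        digest = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big") + bytes([i])).digest()
        out.append(int.from_bytes(digest, "big") % num_chunks)
    return out

def proof_hash(prev_hash: bytes, nonce: int, chunks):
    """Mix the selected chunk data into the hash so it cannot be faked without storage."""
    mix = prev_hash + nonce.to_bytes(8, "big") + b"".join(chunks)
    return hashlib.sha256(mix).digest()

def mine(prev_hash, stored_chunks, difficulty_zero_bytes=1, max_nonce=200000):
    """Search for a nonce whose selected chunks hash below the difficulty target.
    A miner can only respond if it actually stores the selected chunks."""
    for nonce in range(max_nonce):
        idxs = chunk_indices(prev_hash, nonce, len(stored_chunks))
        chunks = [stored_chunks[i] for i in idxs]
        digest = proof_hash(prev_hash, nonce, chunks)
        if digest[:difficulty_zero_bytes] == b"\x00" * difficulty_zero_bytes:
            return nonce, idxs, digest
    return None

shard = [f"chunk-{i}".encode() for i in range(16)]
prev_hash = hashlib.sha256(b"previous EthStorage block").digest()
result = mine(prev_hash, shard)
```

Because the chunk selection is re-derived from the nonce by every verifier, a miner who stores only part of the shard cannot grind for a nonce that avoids its missing chunks without also failing the difficulty check.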
4.2.2 Modularization DA: Celestia
Blockchain modules: The work performed by a Layer1 public chain can be divided into the following four parts: (1) designing the underlying logic of the network, selecting validating nodes in a certain way, writing blocks, and allocating rewards to network maintainers; (2) packaging and processing transactions and publishing related transactions; (3) validating transactions to be uploaded to the blockchain and determining the final state; (4) storing and maintaining historical data on the blockchain. According to these functions, we can divide the blockchain into four modules: the consensus layer, the execution layer, the settlement layer, and the data availability layer (DA layer).
Modular blockchain design: For a long time, these four modules were integrated into a single public chain; such a blockchain is called a monolithic blockchain. This form is more stable and easier to maintain, but it puts tremendous pressure on the single chain. In practice, the four modules constrain one another and compete for the chain's limited computation and storage resources. For example, increasing the processing speed of the execution layer puts more storage pressure on the data availability layer, and securing the execution layer requires a more complex verification mechanism that slows transaction processing. The development of a public chain therefore often faces trade-offs among these four modules. To break through this bottleneck in public chain performance, developers have proposed modular blockchains, whose core idea is to strip out one or several of the four modules above and hand them to a separate public chain for implementation.
In this way, each public chain can focus on improving transaction speed or storage capacity, breaking through the previous limits that the short-board effect imposed on overall blockchain performance.
Modular DA: Separating the DA layer from the rest of the blockchain's business and placing it on a separate public chain is considered a viable solution for Layer1's growing historical data. Exploration in this area is still at an early stage, and the most representative project is Celestia. Celestia uses sharded storage: the data is divided into multiple blocks, each node extracts a part for storage, and KZG polynomial commitments are used to verify data integrity. At the same time, Celestia uses two-dimensional RS erasure codes to rewrite the original data in the form of a k*k matrix, so that ultimately only 25% of the encoded data is needed for recovery. However, sharded data storage only divides the network-wide storage pressure by a constant factor; node storage pressure still grows linearly with total data volume. As Layer1 keeps improving transaction speed, node storage pressure may still someday reach an unacceptable threshold. To address this, Celestia introduces an IPLD component: instead of storing the k*k matrix directly on Celestia, the data is stored in the LL-IPFS network, with only the data's CID kept on the node. When a user requests a piece of historical data, the node sends the corresponding CID to the IPLD component, which calls the original data on IPFS; if the data exists on IPFS, it is returned via the IPLD component and the node, and if not, the data cannot be returned.

Source: Celestia Core
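The CID indirection described above can be sketched with a mock content-addressed store. This is a stand-in for IPFS/IPLD, not their real APIs: the store, the hashing choice, and the class names are all assumptions for illustration:

```python
import hashlib

class MockIPFS:
    """Content-addressed store: data is retrieved by the hash of its content (its CID)."""
    def __init__(self):
        self.objects = {}

    def add(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self.objects[cid] = data
        return cid

    def cat(self, cid: str):
        return self.objects.get(cid)  # None if the network no longer holds the data

class DANode:
    """Keeps only CIDs locally; the payload itself lives in the external network."""
    def __init__(self, ipfs: MockIPFS):
        self.ipfs = ipfs
        self.cids = []

    def publish(self, matrix_bytes: bytes):
        self.cids.append(self.ipfs.add(matrix_bytes))

    def fetch(self, i: int):
        # If the data exists on IPFS it is returned; otherwise nothing can be returned.
        return self.ipfs.cat(self.cids[i])

ipfs = MockIPFS()
node = DANode(ipfs)
node.publish(b"k*k erasure-coded matrix bytes")
```

The node's own storage grows only by one short CID per dataset, which is the constant-size footprint the IPLD component buys, at the cost that availability now depends on the external network still holding the payload.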
Celestia: Taking Celestia as an example, we can see how a modular blockchain is applied to Ethereum's storage problem. A Rollup node sends its packaged and verified transaction data to Celestia, and Celestia stores the data without interpreting it; in the end, according to the storage space used, the Rollup node pays the corresponding TIA tokens to Celestia as a storage fee. The storage in Celestia uses DAS and erasure codes similar to those in EIP-4844, but the one-dimensional polynomial erasure code of EIP-4844 is upgraded to a two-dimensional RS erasure code, raising storage security again: only 25% of the fragments are needed to recover the entire transaction data. Celestia is essentially a PoS public chain with low storage costs, and using it to solve Ethereum's historical data storage problem requires many other specific modules to work with it. For example, in terms of rollups, one of the rollup models highly recommended by Celestia's official website is the Sovereign Rollup. Unlike a common rollup on Layer2, which only computes and verifies transactions, completing just the execution layer, a Sovereign Rollup includes the entire execution and settlement process, minimizing the transaction processing done on Celestia itself; given that Celestia's overall security is weaker than Ethereum's, this maximizes the security of the overall transaction process. As for the security of the data Celestia serves to the Ethereum mainnet, the most mainstream solution is the Quantum Gravity Bridge smart contract.
For the data stored on Celestia, a Merkle root (data availability certificate) is generated and kept in the Quantum Gravity Bridge contract on the Ethereum mainnet. Each time Ethereum calls historical data from Celestia, it compares the hash result against this Merkle root; if they match, the data is indeed the genuine historical data.
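The bridge check reduces to recomputing a Merkle root over the retrieved data and comparing it with the root kept in the contract. A minimal sketch under simplifying assumptions (the real commitment scheme and contract interface are more involved; the stub class is hypothetical):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(chunks):
    """Root over a power-of-two number of data chunks."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

class QuantumGravityBridgeStub:
    """Stand-in for the bridge contract: it stores only the DA certificate (the root)."""
    def __init__(self, root: bytes):
        self.root = root

    def verify(self, chunks) -> bool:
        # Data served by the DA layer is accepted iff its recomputed root matches.
        return merkle_root(chunks) == self.root

batch = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]     # data posted to the DA layer
bridge = QuantumGravityBridgeStub(merkle_root(batch))
```

Any tampering with even one chunk changes the recomputed root, so Ethereum can trust data it never stored as long as it trusts the 32-byte certificate in the contract.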
4.2.3 Storage Chain DA
In terms of technical principles, main-chain DAs have borrowed many sharding-like techniques from storage public chains. Among third-party DAs, some even fulfill part of the storage task directly with the help of storage public chains; for example, the specific transaction data in Celestia is placed on the LL-IPFS network. In third-party DA solutions, besides building a separate public chain to solve Layer1's storage problem, a more direct way is to connect a storage public chain to Layer1 to store its huge historical data. For high-performance blockchains, the volume of historical data is even larger: at full speed, the data volume of the high-performance public chain Solana approaches 4 PB, completely beyond the storage range of ordinary nodes. The solution Solana chose is to store historical data on the decentralized storage network Arweave and retain only 2 days of data on mainnet nodes for verification. To ensure the security of the storage process, Solana and the Arweave chain designed a storage bridge protocol, Solar Bridge, which synchronizes validated data from Solana nodes to Arweave and returns the corresponding tag, with which Solana nodes can view the historical data of the Solana blockchain at any point in time. On Arweave, rather than requiring network-wide nodes to maintain data consistency as a precondition for participation, the network adopts a reward-based storage approach. Notably, Arweave does not build blocks in a traditional chain structure; its structure is more like a graph.
In Arweave, a new block points not only to the previous block but also to a randomly chosen, already-generated block called the recall block, whose exact position is determined by the hash of the previous block and its block height; the recall block's position is therefore unknown until the previous block has been mined. In the process of generating new blocks, nodes are required to hold the recall block's data in order to compute, via the PoW mechanism, a hash of the specified difficulty, and only the first miner to compute a hash meeting the difficulty is rewarded, which encourages miners to store as much historical data as possible. At the same time, the fewer nodes storing a particular historical block, the fewer competitors a node faces when generating a difficulty-compliant nonce, which encourages miners to store blocks with fewer backups in the network. Finally, to ensure that nodes store data permanently, Arweave introduces WildFire, a node-scoring mechanism: nodes prefer to communicate with peers that can provide historical data in larger amounts and faster, while low-rated nodes cannot obtain the latest block and transaction data promptly and thus cannot get a head start in the PoW competition.
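The recall-block selection can be sketched as a deterministic but unpredictable draw over past heights. This is a simplified model of the rule described above, not Arweave's exact algorithm; the hash construction and the modulo rule are assumptions:

```python
import hashlib

def recall_block_height(prev_block_hash: bytes, height: int) -> int:
    """The recall block's height is derived from the previous block's hash and the new
    block's height, so it cannot be known before that previous block is mined."""
    digest = hashlib.sha256(prev_block_hash + height.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % height

# A miner can only compete for block `height` if it stores the recall block's data.
chain = [hashlib.sha256(f"block-{i}".encode()).digest() for i in range(100)]
height = 100
recall = recall_block_height(chain[-1], height)
```

Because every past height is eligible with roughly equal probability, a miner that prunes old blocks gambles on sitting out whenever one of its missing blocks is drawn, which is the economic pressure toward full storage.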

Source: Arweave Yellow-Paper
5. Synthesized Comparison
We will compare the advantages and disadvantages of each of the five storage solutions in terms of the four dimensions of DA performance metrics.
Security: The biggest sources of data security problems are data loss during transmission and malicious tampering by dishonest nodes, and the cross-chain process is the hardest-hit area of transmission security, since the two public chains are independent and do not share state. In addition, a Layer1 that requires a dedicated DA layer at this stage usually has a strong consensus group, whose security is much higher than that of an ordinary storage public chain; the main-chain DA solution therefore has higher security. Once transmission security is ensured, the next step is the security of the data being called. Considering only the short-term historical data used to verify transactions, the same data is backed up by the whole network in a temporary-storage scheme, while the average number of backups in a DankSharding-like scheme is only 1/N of the network's node count. More redundancy makes data less prone to loss and provides more reference samples for verification, so temporary storage has higher data security. Among third-party DA schemes, a main-chain-dedicated DA shares public nodes with the main chain, so data can be transmitted through these relay nodes during cross-chain calls, giving it relatively higher security than the other third-party DA schemes.
Storage cost: The factor with the greatest impact on storage cost is the amount of data redundancy. The main chain's short-term storage scheme stores data via network-wide node synchronization, so any newly stored data must be backed up across all nodes, giving it the highest storage cost; this high cost in turn means that in a high-TPS network, the approach is suitable only for temporary storage. Next come the sharded storage methods, both sharding on the main chain and sharding in a third-party DA; because the main chain usually has more nodes, each block has more backups, so the main-chain sharding scheme costs more. The lowest storage cost belongs to storage public chain DAs that use reward-based storage, where the amount of data redundancy tends to fluctuate around a fixed constant. A storage public chain DA also introduces a dynamic adjustment mechanism, attracting nodes to store under-replicated data by raising the reward, thereby ensuring data security.
Data read speed: Data read speed is primarily affected by where the data sits in the storage hierarchy, the data index path, and the distribution of the data among nodes. Of these, where the data sits within a node has the greater impact, since keeping data in memory versus on SSD can produce a tens-of-times difference in read speed. Storage public chain DAs mostly use SSD storage, because the load on such a chain includes not only DA-layer data but also highly memory-hungry personal data such as user-uploaded videos and images; without SSDs as storage space, the network could hardly carry the huge storage pressure and meet the demand for long-term storage. Next, comparing third-party DAs with main-chain DAs that keep data in memory: a third-party DA must first search the main chain for the corresponding index data, transfer it cross-chain, and return the data via a storage bridge, while a main-chain DA can query data directly from its nodes and thus retrieves data faster. Finally, within main-chain DA, the sharding approach requires calling blocks from multiple nodes and reassembling the original data, so it is slower than un-sharded short-term storage.
DA layer universality: Main-chain DA universality is close to zero, since it is impossible to move data from a public chain with insufficient storage space to another chain that is equally short of space. In third-party DAs, a solution's generality and its compatibility with a particular main chain are contradictory metrics: a main-chain-dedicated DA solution makes many adaptations at the level of node types and network consensus for that specific chain, and these adaptations become major obstacles when communicating with other chains. Within third-party DAs, storage public chain DAs outperform modular DAs in generality: they have larger developer communities and more expansion facilities to adapt to different chains, and they can obtain data more actively, by crawling, rather than passively receiving information transmitted from other chains. A storage public chain can therefore encode data in its own way, achieve standardized storage of the data flow, facilitate the management of data from different main chains, and improve storage efficiency.

Source: Kernel Ventures
6. Conclusion
Blockchain is undergoing a conversion from Crypto to Web3, which brings an abundance of on-chain projects but also data storage problems. To accommodate the simultaneous operation of so many projects on Layer1 and ensure the experience of GameFi and SocialFi projects, Layer1s represented by Ethereum have adopted Rollups and Blobs to improve TPS. Meanwhile, the number of high-performance blockchains among newly launched chains keeps growing. But higher TPS means not only higher performance but also more storage pressure in the network. For the huge amount of historical data, multiple DA approaches, both main-chain and third-party based, have been proposed at this stage to adapt to the growth of on-chain storage pressure. Each improvement has its advantages and disadvantages and different applicability in different contexts. Payment-oriented blockchains, which have very high requirements for historical data security and do not pursue particularly high TPS, can, if still in the preparatory stage, adopt a DankSharding-like storage method, achieving a huge increase in storage capacity while ensuring security. For a public chain like Bitcoin, however, which is already established and has a large number of nodes, rashly improving the consensus layer carries huge risk, so it can adopt a dedicated main-chain DA with higher security for off-chain storage, balancing security and storage. It is worth noting, moreover, that the function of a blockchain changes over time. For example, in its early days Ethereum's functionality was limited to payments and simple automated processing of assets and transactions via smart contracts, but as the blockchain landscape expanded, various SocialFi and DeFi projects were added to Ethereum, pushing it in a more comprehensive direction.
With the recent explosion of the inscription ecosystem on Bitcoin, transaction fees on the Bitcoin network have surged nearly 20-fold since August, reflecting the fact that the network's transaction speed cannot meet transaction demand at this stage, so traders have to raise fees to get their transactions processed as quickly as possible. The Bitcoin community now faces a trade-off: accept high fees and slow transaction speeds, or increase transaction speeds at the cost of network security, defeating the purpose of the payment system in the first place. If the Bitcoin community chooses the latter, the storage solution will need to be adjusted in the face of growing data pressure.

Source: OKLINK
As for public chains with comprehensive functions, their pursuit of TPS is higher. With the enormous growth of historical data, it is difficult for a DankSharding-like solution to keep up with rapid TPS growth in the long run, so a more appropriate way is to migrate the data to a third-party DA for storage. Among these, main-chain-specific DAs have the highest compatibility and may be more advantageous if only the storage of a single public chain is considered. However, now that Layer1 public chains are blooming, cross-chain asset transfer and data interaction have become a common pursuit of the blockchain community. If we consider the long-term development of the whole blockchain ecosystem, storing historical data from different public chains on the same chain can eliminate many security problems in the process of data exchange and validation, so modular DAs and storage public-chain DAs may be a better choice. With comparable generality, a modular DA focuses on providing DA-layer services for blockchains, introduces more refined index data to manage historical data, and can reasonably categorize data from different public chains, giving it an advantage over storage public chains. However, the above proposal does not consider the cost of adjusting the consensus layer on an existing public chain, which is extremely risky: a tiny systemic loophole may make the public chain lose community consensus. Therefore, as a transitional solution in the process of blockchain transformation, temporary storage on the main chain may be more appropriate. Finally, all the above discussions are based on performance during actual operation, but if the goal of a certain public chain is to develop its ecosystem and attract more projects and participants, it may also favor projects supported and funded by its foundation.
For example, even if its overall performance is equal to or slightly lower than that of a storage public-chain solution, the Ethereum community will still favor EthStorage, a Layer2 project supported by the Ethereum Foundation, in order to continue developing the Ethereum ecosystem.
All in all, the increasing complexity of today's blockchains brings with it a greater need for storage space. With enough Layer1 validation nodes, historical data does not need to be backed up by every node in the network; security can be ensured once the number of backups exceeds a certain threshold. At the same time, the division of labor among public chains has become more and more detailed: Layer1 is responsible for consensus and execution, Rollups for computation and verification, and a separate blockchain for data storage. Each part can focus on a certain function without being limited by the performance of the others. However, how many nodes, or what proportion of them, should store historical data to strike a balance between security and efficiency, and how to ensure secure interoperability between different blockchains, are problems blockchain developers still need to consider. Investors can pay attention to main-chain-specific DA projects on Ethereum, because Ethereum already has enough supporters at this stage and does not need the power of other communities to expand its influence; it is more important to improve and develop its own community and attract more projects to the Ethereum ecosystem. For public chains that are catching up, such as Solana and Aptos, the single chain itself lacks such a complete ecosystem, so they may prefer to join forces with other communities to build a large cross-chain ecosystem to expand their influence. Therefore, for emerging Layer1s, a general-purpose third-party DA deserves more attention.
Kernel Ventures is a research- and dev-community-driven crypto VC fund with more than 70 early-stage investments, focusing on infrastructure, middleware, and dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto, such as Account Abstraction, Data Availability, Scalability, etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
Reference
Celestia: 模块化区块链的星辰大海: https://foresightnews.pro/article/detail/15497
DHT usage and future work: https://github.com/celestiaorg/celestia-node/issues/11
Celestia-core: https://github.com/celestiaorg/celestia-core
Solana labs: https://github.com/solana-labs/solana
Announcing The SOLAR Bridge: https://medium.com/solana-labs/announcing-the-solar-bridge-c90718a49fa2
leveldb-handbook: https://leveldb-handbook.readthedocs.io/zh/latest/sstable.html
Kuszmaul J. Verkle Trees, 2019: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf
Arweave Network: https://www.arweave.org/
Arweave Yellow-paper: https://www.arweave.org/yellow-paper.pdf
Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design
Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
In the early stage of blockchain, maintaining data consistency was considered extremely important to ensure security and decentralization. However, with the development of the blockchain ecosystem, storage pressure keeps increasing, leading to a trend of centralization in node operation. Such being the case, the storage cost problem brought by TPS growth in Layer1 urgently needs to be solved.
Faced with this problem, developers should propose a solution that fully takes security, storage cost, data reading speed, and DA-layer versatility into account.
In the process of solving this problem, many new technologies and ideas have emerged, including Sharding, DAS, Verkle Tree, DA intermediate components, and so on. They try to optimize the storage scheme of the DA layer by reducing data redundancy and improving data validation efficiency.
From the perspective of data storage location, DA solutions are broadly categorized into two types: main-chain DAs and third-party DAs. Main-chain DAs are designed around regular data cleansing and sliced data storage to reduce the storage pressure on nodes, while third-party DAs are designed to serve storage needs and offer reasonable solutions for large amounts of data. Accordingly, we mainly trade off between single-chain compatibility and multi-chain compatibility in third-party DAs, and propose three kinds of solutions: main-chain-specific DAs, modularized DAs, and storage public-chain DAs.
Payment-type public chains have very high requirements for historical data security and thus are suitable to use the main chain as the DA layer.
However, for public chains that have been running for a long time and have a large number of miners, it is more suitable to adopt a third-party DA with relatively high security that does not involve changes to the consensus layer. For comprehensive public chains, a main-chain-dedicated DA with larger data capacity, lower cost, and security is more suitable; considering cross-chain demand, however, modular DA is also a good option. Overall, blockchain is moving towards reducing data redundancy as well as multi-chain division of labor.
1. Background
Blockchain, as a distributed ledger, needs to keep a copy of the historical data on all nodes to ensure that data storage is secure and sufficiently decentralized. Since the correctness of each state change depends on the previous state (the source of the transaction), to guarantee transaction correctness, a blockchain should store the entire transaction history from the first transaction up to the current one. Taking Ethereum as an example, even at an average block size of 20 KB, the total size of Ethereum's current data has reached 370 GB. A full node must additionally record state and transaction receipts; including this part, the total storage of a single node has exceeded 1 TB, which is gradually centralizing node operation.
Source: Etherscan
The recent Cancun upgrade of Ethereum aims to raise Ethereum's TPS to near 1000, at which point Ethereum's annual storage growth would exceed the sum of its current storage. In high-performance public chains, a transaction speed of tens of thousands of TPS may add hundreds of GB of data per day. The common data redundancy of all nodes on the network obviously cannot adapt to such storage pressure.
So, Layer1 must find a suitable solution to balance TPS growth and the storage cost of nodes.
2. Performance Indicators of DA
2.1 Safety
Compared with a database or linked list, blockchain's immutability comes from the fact that newly generated data can be verified against historical data, so ensuring the security of historical data is the first issue to consider in DA-layer storage. To judge the data security of blockchain systems, we often analyze the amount of data redundancy and the method of checking data availability.
Amount of redundancy: Redundancy in a blockchain system plays the following roles. First, more redundancy in the network provides more samples for reference when a verifier needs to check the account state, helping the node select the data recorded by the majority of nodes, which has higher security. In traditional databases, since data is stored as key-value pairs on a single node, changing historical data only has to happen on that node, so the cost of an attack is low; theoretically, the more redundancy there is, the more trustworthy the data will be. Moreover, the more nodes that store the data, the less likely it is to be lost. This can be compared to the centralized servers that host Web2 games: once the backend servers are all shut down, the service closes completely. But more redundancy is not always better, because redundancy consumes extra storage space and puts too much storage pressure on the system. A good DA layer should choose a suitable redundancy scheme to strike a balance between security and storage efficiency.
Data availability checking: The amount of redundancy ensures enough records of the data in the network, but the data to be used must still be checked for accuracy and completeness.
Current blockchains commonly use cryptographic commitment algorithms as the verification method: only a small cryptographic commitment, derived from the mixed transaction data, is kept for the whole network to record. To test the authenticity of historical data, one tries to recover the commitment from that data; if the recovered commitment is identical to the original commitment, the verification passes. Commonly used cryptographic verification algorithms are Merkle Root and Verkle Root. A high-security data availability verification algorithm can quickly verify historical data with as little third-party data as possible.
2.2 Storage Cost
After ensuring basic security, the next goal of the DA layer is to reduce costs and increase efficiency. The first step is to reduce the storage cost, i.e., the memory consumption caused by storing data of unit size, regardless of differences in hardware performance. Nowadays, the main ways to reduce storage costs in blockchain are to adopt sharding technology and to use reward-based storage, lowering the number of data backups while preserving security. However, it is not difficult to see from these methods that there is a trade-off between storage cost and data security, and reducing storage occupancy often means a decrease in security. Therefore, an excellent DA layer needs to balance storage cost and data security. In addition, if the DA layer is a separate public chain, it also needs to reduce cost by minimizing the intermediate processes of data exchange, since every transit step leaves index data behind for subsequent retrieval: the longer the calling process, the more index data is left, increasing the storage cost. Finally, the cost of storing data is directly linked to the persistence of the data. In general, the higher the storage cost, the more difficult it is for the public chain to store data persistently.
2.3 Data Reading Speed
Having achieved cost reduction, the next step is efficiency, i.e., the ability to quickly recall data from the DA layer when needed. This process involves two steps. The first is searching for the nodes that store the data; this mainly applies to public chains that have not achieved network-wide data consistency, and if the chain has synchronized data across all nodes, the time consumed by this step can be ignored. Second, in the mainstream blockchain systems at this stage, including Bitcoin, Ethereum, and Filecoin, the node storage backend is the LevelDB database. In LevelDB, data is stored in three ways. Data written on the fly is stored in Memtable-type files until the Memtable is full, at which point the file type changes from Memtable to Immutable Memtable. Both types reside in memory, but Immutable Memtable files are read-only. The hot storage used in the IPFS network keeps data in this part, so that it can be quickly read from memory when called; but an average node only has GBs of available memory, which can easily be exhausted, and when a node goes down, the data in memory is lost permanently. If you want persistent data storage, you need to store the data as SST files on a solid-state disk (SSD), but when reading, the data must first be loaded into memory, which greatly reduces the speed of data indexing. Finally, for a system with sharded storage, data restoration requires sending data requests to multiple nodes and reassembling the data, a process that also slows down reading.
Source: Leveldb-handbook
2.4 DA Layer Generalization
With the development of DeFi and the various problems of CEXs, users' demand for cross-chain transactions of decentralized assets keeps growing. Whether the cross-chain mechanism is hash-locking, a notary, or a relay chain, the simultaneous determination of historical data on both chains cannot be avoided.
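The Memtable → Immutable Memtable → SST write path described in 2.3 can be sketched with a toy model. This is only an illustration of the mechanism, not the real LevelDB API: the class, the tiny flush threshold, and all method names are invented for the example.

```python
# Toy sketch of a LevelDB-style write path (illustrative only, not the real
# LevelDB API): writes land in an in-memory memtable; when it fills up it is
# frozen as an immutable memtable and flushed to a sorted SST "file".

MEMTABLE_LIMIT = 4  # entries per memtable before flush (tiny, for illustration)

class ToyLevelDB:
    def __init__(self):
        self.memtable = {}    # mutable, in memory: fastest reads
        self.immutable = []   # frozen, read-only memtables awaiting flush
        self.sstables = []    # sorted key/value lists on "disk": slowest reads

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= MEMTABLE_LIMIT:
            # memtable is full: freeze it, then flush it to a sorted SST
            self.immutable.append(self.memtable)
            self.memtable = {}
            frozen = self.immutable.pop(0)
            self.sstables.append(sorted(frozen.items()))

    def get(self, key):
        # read path: memtable -> immutable memtables -> SSTs (newest first)
        if key in self.memtable:
            return self.memtable[key]
        for table in reversed(self.immutable):
            if key in table:
                return table[key]
        for sst in reversed(self.sstables):
            for k, v in sst:
                if k == key:
                    return v
        return None

db = ToyLevelDB()
for i in range(6):
    db.put(f"block-{i}", f"data-{i}")
# the first four writes were flushed to an SST, the last two are still in memory
print(db.get("block-0"), db.get("block-5"))
```

Reading `block-0` here walks the SST list, mirroring why SST reads are slower than memtable hits in the real system.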
The key to this problem lies in the separation of data on the two chains: different decentralized systems cannot communicate directly. Therefore, a solution has been proposed that changes the storage method of the DA layer, storing the historical data of multiple public chains on the same trusted public chain, so that verification only needs to call data on that one chain. This requires the DA layer to be able to establish secure communication with different types of public chains; in other words, the DA layer needs good versatility.
3. Techniques Concerning DA
3.1 Sharding
In traditional distributed systems, a file is not stored in complete form on one node; rather, the original data is divided into multiple blocks, with each block stored on a different node. Moreover, a block is usually not stored on only one node but leaves appropriate backups on other nodes; in existing mainstream distributed systems, the number of backups is usually set to 2. This sharding mechanism reduces the storage pressure of individual nodes, expands the total capacity of the system to the sum of each node's storage capacity, and ensures storage security through appropriate data redundancy. The sharding scheme adopted in blockchain is broadly similar, but differs in some details. First, since nodes in a blockchain are untrusted by default, sharding requires a sufficiently large number of data backups for the subsequent judgment of data authenticity, so the number of backups must be far more than 2. Ideally, in a blockchain that adopts this storage scheme, if the total number of validation nodes is T and the number of shards is N, the number of backups should be T/N. Second, regarding the storage process for a block, a traditional distributed system with fewer nodes often assigns one node to multiple data blocks.
First, the data is mapped onto a hash ring by a consistent hashing algorithm, and each node then stores the blocks whose numbers fall in its assigned range of the ring; the system tolerates a single node having no storage task in a given round. On a blockchain, by contrast, storing a block is no longer a random but an inevitable event for every node. Each node randomly selects a block for storage, with the selection determined by taking the hash of the data mixed with the node's own information, modulo the number of shards. Assuming each piece of data is divided into N blocks, the actual storage size of each node is only 1/N. By setting N appropriately, a balance can be achieved between TPS growth and node storage pressure.
Source: Kernel Ventures
3.2 DAS (Data Availability Sampling)
DAS technology is a further optimization of the sharding-based storage method. During sharding, due to the simple random storage choices of nodes, a block may be lost. Second, for sharded data, how to confirm authenticity and integrity during restoration is also very important. In DAS, these two problems are solved by Eraser code and the KZG polynomial commitment.
Eraser code: Given the large number of validating nodes in Ethereum, the event that some block is stored by no node at all, though improbable, is still possible. To mitigate this threat of missing storage, instead of slicing the raw data directly into blocks, this scheme maps the raw data to the coefficients of a degree-n polynomial, then takes 2n points on the polynomial and lets nodes randomly choose one of them to store. For this degree-n polynomial, only n+1 points are needed for reconstruction, so only half of the blocks need to be collected from the nodes for us to reconstruct the original data.
The Eraser code thus improves the security of data storage and the network's ability to recover data.
KZG polynomial commitment: A very important aspect of data storage is the verification of data authenticity. In networks that do not use Eraser codes, various methods can be used; but if the Eraser code above is introduced to improve data security, the KZG polynomial commitment is more appropriate, since it can verify the content of a single block directly in polynomial form, eliminating the need to decode the polynomial back into binary data. The overall form of the verification is similar to that of a Merkle Tree, but it requires no specific path-node data: only the KZG Root and the block data are needed to verify the block's authenticity.
3.3 Data Validation Method in DA
Data validation ensures that data called from a node is accurate and complete. To minimize the amount of data and the computational cost required in validation, the DA layer now uses tree structures as the mainstream validation method. The simplest form is to use a Merkle Tree for verification, which records data in the form of a complete binary tree: only the Merkle Root and the hash values of the subtrees on the other side of the node's path need to be kept to verify the node, giving a verification time complexity of O(logN) (where logN defaults to log2(N)). Although the validation process is greatly simplified, the amount of data needed for validation in general still grows with the data.
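The O(logN) Merkle check described above can be sketched as follows. The helper names (`build_tree`, `make_proof`, `verify`) are invented for the example, and the sketch assumes a power-of-two number of leaves.

```python
# Minimal Merkle-proof sketch: a verifier holding only the Merkle root
# re-hashes a leaf together with the sibling hashes along its path, so
# log2(N) hashes suffice instead of the whole data set.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    # returns all levels: levels[0] = hashed leaves, levels[-1] = [root]
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def make_proof(levels, index):
    # collect the sibling hash at every level, plus which side our node is on
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1  # sibling of an even index is index+1, and vice versa
        proof.append((level[sibling], index % 2))
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
levels = build_tree(leaves)
root = levels[-1][0]
proof = make_proof(levels, 2)          # prove tx2 against the root
print(verify(root, b"tx2", proof))     # -> True
print(verify(root, b"txX", proof))     # -> False
```

With 4 leaves the proof holds only 2 sibling hashes, matching the O(logN) claim in the text.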
To address the growing validation volume, another validation method, the Verkle Tree, has been proposed at this stage. Each node in a Verkle Tree stores not only its value but also a Vector Commitment, which can quickly validate the authenticity of the data using the original node's value and the commitment proof, without calling the values of its sibling nodes. This makes the number of computations for each verification depend only on the depth of the Verkle Tree, a fixed constant, thus greatly accelerating verification. However, computing the Vector Commitment requires the participation of all sibling nodes in the same layer, which greatly increases the cost of writing and changing data. For data such as historical data, which is permanently stored, cannot be tampered with, and is only read but never written, the Verkle Tree is extremely suitable. In addition, both Merkle Trees and Verkle Trees have K-ary variants; the mechanism is similar, simply changing the number of subtrees under each node, and the performance comparison can be seen in the table below.
Source: Verkle Trees
3.4 Generic DA Middleware
The continuous expansion of the blockchain ecosystem has brought an increasing number of public chains. Due to the advantages and irreplaceability of each public chain in its respective field, Layer1 public chains are unlikely to be unified in a short time. However, with the development of DeFi and the problems of CEXs, users' demand for decentralized cross-chain trading of assets keeps growing. Therefore, multi-chain data storage in the DA layer, which can eliminate the security problems in cross-chain data interaction, has gained more and more attention.
However, to accept historical data from different public chains, the DA layer needs to provide decentralized protocols for standardized storage and validation of the data flow. For example, kvye, a storage middleware based on Arweave, actively crawls data from main chains and stores data from all chains on Arweave in a standardized form, minimizing differences in the data transmission process. By comparison, a Layer2 that provides DA-layer storage specifically for one public chain interacts with it through internally shared nodes; although this reduces the cost of interaction and improves security, it is more limited and can only serve that specific public chain.
4. Storage Methods of DA
4.1 Main Chain DA
4.1.1 DankSharding-like
There is no definitive name for this type of storage scheme, but its most prominent example is DankSharding on Ethereum, so in this paper we use the term DankSharding-like to refer to such schemes. This type of scheme mainly uses the two DA storage techniques mentioned above, sharding and DAS. First, the data is divided into an appropriate number of shares by sharding, and then each node extracts a data block for storage in the DAS manner. If there are enough nodes in the whole network, a larger shard count N can be chosen, so that the storage pressure of each node is only 1/N of the original, realizing an N-fold expansion of the overall storage space. Meanwhile, to prevent the extreme case in which a block is stored by no node, DankSharding encodes the data with an Eraser Code, so that only half of the data is needed for complete restoration. Finally, the data is verified using a Verkle Tree structure with polynomial commitments for fast checking.
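The "half of the encoded data suffices" property of the erasure code used here can be sketched over a small prime field. For simplicity this sketch uses the systematic variant, where the n data values are the polynomial's evaluations at x = 0..n-1 rather than its coefficients; the function names and the tiny field modulus are chosen for illustration (real systems use Reed-Solomon codes over much larger fields).

```python
# Polynomial erasure-code sketch: n data values determine a degree-(n-1)
# polynomial; publishing 2n evaluations lets any n surviving shares
# reconstruct the data by Lagrange interpolation over GF(P).

P = 257  # small prime field modulus, big enough for byte-sized values

def lagrange_eval(points, x_target):
    # evaluate, at x_target, the unique polynomial through `points` over GF(P)
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x_target - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

def extend(data):
    # data values are the polynomial's evaluations at x = 0..n-1;
    # extend to 2n shares by also evaluating at x = n..2n-1
    n = len(data)
    base = list(enumerate(data))
    return data + [lagrange_eval(base, x) for x in range(n, 2 * n)]

def recover(shares, n):
    # any n surviving (x, y) shares pin down the degree-(n-1) polynomial,
    # so the original data is just its evaluations at x = 0..n-1
    return [lagrange_eval(shares, x) for x in range(n)]

data = [17, 42, 9, 100]            # n = 4 original values
shares = extend(data)              # 2n = 8 shares handed out to nodes
survivors = [(1, shares[1]), (4, shares[4]), (6, shares[6]), (7, shares[7])]
print(recover(survivors, len(data)))  # -> [17, 42, 9, 100]
```

Any 4 of the 8 shares work; the choice of survivors above is arbitrary.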
4.1.2 Temporary Storage
For main-chain DA, one of the simplest ways to handle data is to store historical data only for a short period. Essentially, the blockchain acts as a public ledger, whose content is changed in the presence of the entire network, and permanent storage is not strictly required. In the case of Solana, for example, although its historical data is synchronized to Arweave, mainnet nodes only retain the transaction data of the last two days. On a public chain based on account records, each moment of historical data retains the final state of the accounts on the blockchain, which is sufficient to serve as the basis for verifying changes at the next moment. Those with special needs for older data can store it on other decentralized public chains or hand it over to a trusted third party. In other words, those with additional data needs pay for historical data storage.
4.2 Third-Party DA
4.2.1 DA for the Main Chain: EthStorage
The most important property of the DA layer is the security of data transmission, and the most secure DA is the main chain's own; but main-chain storage is limited by storage space and resource competition, so when the network's data volume grows fast, a third-party DA is a better choice for long-term data storage. If the third-party DA has higher compatibility with the main network, it can share nodes with it, making the data interaction process more secure. Therefore, under the premise of security, a DA dedicated to the main chain has a huge advantage. Taking Ethereum as an example, a basic requirement for a main-chain-dedicated DA is compatibility with the EVM, to ensure interoperability with Ethereum data and contracts; representative projects include Topia, EthStorage, and so on.
Among them, EthStorage is the most well-developed in terms of compatibility: in addition to EVM compatibility, it has also set up interfaces to Remix, Hardhat, and other Ethereum development tools.
EthStorage: EthStorage is a public chain independent of Ethereum, but the nodes running it are a superset of Ethereum nodes, meaning that a node running EthStorage can run Ethereum at the same time, and EthStorage can be operated directly through opcodes on Ethereum. EthStorage's storage model retains only a small amount of metadata on the Ethereum mainnet for indexing, essentially creating a decentralized database for Ethereum. In the current solution, EthStorage deploys an EthStorage Contract on the Ethereum mainnet to realize interaction between the mainnet and EthStorage. To deposit data, Ethereum calls the put() function in the contract with two bytes-typed parameters, key and data, where data is the data to be deposited and key is its identity on the Ethereum network, which can be regarded as similar to a CID in IPFS. After the (key, data) pair is successfully stored on the EthStorage network, EthStorage generates a kvldx, returned to the Ethereum mainnet, which corresponds to the key on Ethereum and to the storage address of the data on EthStorage. The original problem of storing a large amount of data is thus reduced to storing a single (key, kvldx) pair, which greatly reduces the storage cost of the Ethereum mainnet.
To call previously stored data, one uses the get() function in EthStorage with the key parameter, and a quick lookup of the data on EthStorage can then be performed via the kvldx stored on Ethereum.
Source: Kernel Ventures
In terms of how nodes store data, EthStorage borrows from the Arweave model. First, a large number of (key, value) pairs from Ethereum are grouped into shardings, each containing a fixed number of (key, value) pairs, with a size limit on each pair to ensure fairness of workload when rewarding miners for storage. To issue rewards, the network must first verify that a node actually stores the data. In this process, EthStorage divides a sharding (TB-scale in size) into many chunks and keeps a Merkle root on the Ethereum mainnet for verification. The miner must then provide a nonce that, mixed with the hash of the previous block on EthStorage, generates a few chunks through a random algorithm, and the miner must provide the data of those chunks to prove that it has stored the whole sharding. The nonce cannot be chosen arbitrarily, otherwise a node could pick a nonce that corresponds only to the chunks it happens to store and thus pass verification; so the nonce must be such that the generated chunks, after mixing and hashing, yield a difficulty value that meets the network's requirements, and only the first node to submit the nonce and the random-access proof receives the reward.
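The key → kvldx indirection of the put()/get() flow described above can be simulated in a few lines. Both "contracts" here are plain Python stand-ins: only the names put, get, key, and kvldx come from the text, and everything else is invented for the sketch.

```python
# Simplified simulation of the EthStorage put()/get() flow: the mainnet-side
# contract keeps only the small (key, kvldx) index, while the data itself
# lives on the EthStorage network. Illustrative only, not the real contract.

class EthStorageNetwork:
    """Stands in for the off-chain EthStorage network."""
    def __init__(self):
        self._slots = []

    def store(self, key, data):
        # returns kvldx: the storage address of the data on EthStorage
        self._slots.append((key, data))
        return len(self._slots) - 1

    def load(self, kvldx):
        return self._slots[kvldx][1]

class EthStorageContract:
    """Stands in for the contract on the Ethereum mainnet: it never holds
    the data, only the key -> kvldx index."""
    def __init__(self, network):
        self.network = network
        self.index = {}  # the only state kept on "mainnet"

    def put(self, key, data):
        self.index[key] = self.network.store(key, data)

    def get(self, key):
        return self.network.load(self.index[key])

net = EthStorageNetwork()
contract = EthStorageContract(net)
contract.put(b"blob-1", b"a large chunk of rollup history" * 100)
# mainnet state is one tiny index entry, regardless of the data's size
print(len(contract.index), len(contract.get(b"blob-1")))
```

However large the stored blob grows, the mainnet-side state stays a single (key, kvldx) entry, which is the cost reduction the text describes.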
4.2.2 Modularization DA: Celestia
Blockchain modules: The work performed by a Layer1 public chain is divided into the following four parts: (1) designing the underlying logic of the network, selecting validation nodes in a certain way, writing blocks, and allocating rewards to network maintainers; (2) packaging and processing transactions and publishing related transactions; (3) validating transactions to be uploaded to the blockchain and determining the final state; (4) storing and maintaining historical data on the blockchain. According to these functions, we can divide the blockchain into four modules: the consensus layer, the execution layer, the settlement layer, and the data availability layer (DA layer).
Modular blockchain design: For a long time, these four modules were integrated in a single public chain; such a blockchain is called a monolithic blockchain. This form is more stable and easier to maintain, but it also puts tremendous pressure on the single chain. In practice, the four modules constrain each other and compete for the chain's limited computational and storage resources. For example, increasing the processing speed of the execution layer brings more storage pressure to the data availability layer, while ensuring the security of the execution layer requires a more complex verification mechanism but slows down transaction processing. Therefore, the development of a public chain often faces trade-offs among these four modules. To break through this bottleneck in public chain performance, developers have proposed modular blockchain solutions. The core idea of the modular blockchain is to strip out one or several of the four modules above and hand them to a separate public chain.
In this way, each public chain can focus on improving transaction speed or storage capacity, breaking through the limits that the short-board effect previously imposed on overall blockchain performance.

Modular DA: separating the DA layer from the rest of the blockchain's business and placing it on a separate public chain is considered a viable solution for Layer1's growing historical data. Exploration in this area is still at an early stage, and the most representative project is Celestia. Celestia uses sharded storage: the data is divided into multiple blocks, each node extracts a part of them for storage, and KZG polynomial commitments are used to verify data integrity. At the same time, Celestia uses two-dimensional RS erasure codes to rewrite the original data in the form of a k*k matrix, so that only 25% of the encoded data is needed to recover the original. However, sharded data storage essentially only divides the network-wide storage pressure by a constant factor; the storage pressure on each node still grows linearly with data volume. As Layer1 keeps improving transaction speed, node storage pressure may still reach an unacceptable threshold someday. To address this, Celestia introduces an IPLD component. Instead of storing the k*k matrix directly on Celestia, the data is stored in the IPFS network, with only the CID of the data kept by the node. When a user requests a piece of historical data, the node sends the corresponding CID to the IPLD component, which calls the original data on IPFS. If the data exists on IPFS, it is returned via the IPLD component and the node; if not, the data cannot be returned.
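The CID-based indirection can be sketched in a few lines; the toy `cid` function and the in-memory `ipfs` dict below are stand-ins for the real network, since actual IPFS CIDs also carry multihash and codec prefixes.

```python
import hashlib

# Content-addressed store standing in for IPFS; the "node" keeps only CIDs.
ipfs = {}

def cid(data: bytes) -> str:
    # Toy CID: a bare SHA-256 digest of the content.
    return hashlib.sha256(data).hexdigest()

def store_offchain(data: bytes) -> str:
    # Put the data on the content-addressed network, return its identifier.
    c = cid(data)
    ipfs[c] = data
    return c

def fetch(c: str):
    # IPLD-style resolution: return the data if present, and verify that the
    # content actually matches its identifier before handing it back.
    data = ipfs.get(c)
    if data is not None and cid(data) != c:
        return None          # content does not match its identifier
    return data

# The node's index: CIDs only, no payloads.
node_index = [store_offchain(f"block-{i}".encode()) for i in range(4)]
```

The design point is that the node's state stays constant-sized per item (one CID) while the payload lives off-chain, and any returned payload is self-verifying against its identifier.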
Source: Celestia Core

Celestia: taking Celestia as an example, we can see how a modular blockchain applies to Ethereum's storage problem. A Rollup node sends its packaged and verified transaction data to Celestia, and Celestia stores the data without interpreting it; at the end of the process, the Rollup node pays TIA tokens to Celestia as a storage fee according to the size of the storage space. Storage on Celestia uses DAS and erasure coding similar to EIP-4844, but upgrades the one-dimensional polynomial erasure code in EIP-4844 to a two-dimensional RS erasure code, raising storage security again: only 25% of the fragments are needed to recover the entire transaction data. Celestia is essentially a PoS public chain with low storage costs, and if it is to serve as a solution to Ethereum's historical data storage problem, many other specific modules are needed to work with it. For example, on the rollup side, one of the rollup models recommended by Celestia's official website is the Sovereign Rollup. Unlike a common Layer2 rollup, which only computes and verifies transactions (completing just the execution layer), a Sovereign Rollup includes the entire execution and settlement process, which minimizes transaction processing on Celestia; when Celestia's overall security is weaker than Ethereum's, this maximizes the security of the overall transaction process. As for the security of the data Celestia serves to the Ethereum mainnet, the most mainstream solution is the Quantum Gravity Bridge smart contract.
For data stored on Celestia, a Merkle root (data availability attestation) is generated and kept in the Quantum Gravity Bridge contract on the Ethereum mainnet. Every time Ethereum calls historical data on Celestia, it compares the hash result with the Merkle root; a match means the data is indeed the real historical data.

4.2.3 Storage Chain DA

In terms of technical principles, main-chain DAs borrow many sharding-like techniques from storage public chains, and some third-party DAs even fulfill part of the storage task directly with the help of a storage public chain; for example, specific transaction data in Celestia is put on the IPFS network. Among third-party DA solutions, besides building a separate public chain to solve Layer1's storage problem, a more direct way is to connect a storage public chain to Layer1 and store Layer1's huge historical data there. For high-performance blockchains, the volume of historical data is even larger: at full speed, the data volume of the high-performance public chain Solana approaches 4 PB, completely beyond the storage range of ordinary nodes. Solana's chosen solution is to store historical data on the decentralized storage network Arweave and retain only 2 days of data on mainnet nodes for verification. To secure the storage process, Solana and Arweave designed a storage bridge protocol, Solar Bridge, which synchronizes validated data from Solana nodes to Arweave and returns the corresponding tag, allowing Solana nodes to view the historical data of the Solana blockchain at any point in time.
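Both bridge designs above come down to checking returned data against an on-chain Merkle commitment. A minimal inclusion-proof sketch follows; the hashing layout is an illustrative assumption, not Celestia's or Solana's actual scheme.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_levels(leaves):
    # All tree levels bottom-up; odd-sized levels duplicate their last node.
    levels = [[sha(l) for l in leaves]]
    while len(levels[-1]) > 1:
        lvl = list(levels[-1])
        if len(lvl) % 2:
            lvl.append(lvl[-1])
        levels.append([sha(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def inclusion_proof(leaves, index):
    # One sibling hash per level, plus a flag: True if the sibling is on the right.
    proof, i = [], index
    for lvl in merkle_levels(leaves)[:-1]:
        lvl = lvl + [lvl[-1]] if len(lvl) % 2 else lvl
        sib = i + 1 if i % 2 == 0 else i - 1
        proof.append((lvl[sib], i % 2 == 0))
        i //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    # Rebuild the path from leaf to root using only log2(n) sibling hashes.
    acc = sha(leaf)
    for sib, sib_on_right in proof:
        acc = sha(acc + sib) if sib_on_right else sha(sib + acc)
    return acc == root

txs = [f"celestia-data-{i}".encode() for i in range(8)]
bridge_root = merkle_levels(txs)[-1][0]   # what the bridge contract would store
proof = inclusion_proof(txs, 5)
```

The point of the inclusion proof is that the verifying chain needs only the root plus a logarithmic number of sibling hashes, not the whole stored batch.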
On Arweave, instead of requiring nodes across the network to maintain data consistency as a prerequisite for participation, the network adopts a storage-reward approach. First, Arweave does not use a traditional chain structure to build blocks, but something closer to a graph structure: a new block not only points to the previous block, but also randomly points to an earlier generated block, the recall block, whose exact location is determined by the hash of the previous block and its block height, so the recall block's location is unknown until the previous block is mined. In generating new blocks, nodes must hold the recall block's data in order to compute, under the PoW mechanism, a hash of the specified difficulty, and only the miner who first computes a hash meeting the difficulty is rewarded, which encourages miners to store as much historical data as possible. At the same time, the fewer nodes storing a particular historical block, the fewer competitors a node has when generating a difficulty-compliant nonce, which encourages miners to store blocks with fewer backups in the network. Finally, to ensure that nodes store data permanently, Arweave introduces the Wildfire node-scoring mechanism: nodes prefer to communicate with peers that can provide historical data more fully and quickly, while lower-rated nodes cannot obtain the latest block and transaction data first and thus lose their head start in the PoW competition.

Source: Arweave Yellow-Paper

5. Synthesized Comparison

We will compare the advantages and disadvantages of the five storage solutions along the four dimensions of DA performance metrics.
Safety: the biggest source of data security problems is data loss during transmission and malicious tampering by dishonest nodes, and the cross-chain process is the hardest-hit area of transmission security because the two public chains are independent and do not share state. In addition, a Layer1 that requires a specialized DA layer at this stage often has a strong consensus group, so its security will be much higher than that of an ordinary storage public chain; the main-chain DA solution therefore has higher security. Once transmission security is ensured, the next step is to secure the calling of data. Considering only the short-term historical data used to verify transactions: in a temporary-storage network, the same data is backed up by the whole network, whereas in a DankSharding-like scheme the average number of backups is only 1/N of the whole network's node count. More data redundancy makes data less likely to be lost and provides more reference samples for verification, so temporary storage offers higher data security. Among third-party DA schemes, because public nodes of the main chain are used, data can be transmitted directly through these relay nodes during cross-chaining, giving them relatively higher security than other DA schemes.

Storage cost: the factor with the greatest impact on storage cost is the amount of data redundancy. The main chain's short-term storage scheme, which stores data via network-wide node synchronization, requires every newly stored piece of data to be backed up by all nodes, and thus has the highest storage cost. This high cost in turn means that, in a high-TPS network, the approach is only suitable for temporary storage.
Next is sharded storage, including sharding on the main chain and sharding in third-party DAs. Because the main chain often has more nodes, and each block accordingly has more backups, the main-chain sharding scheme costs more. The lowest storage cost belongs to storage public-chain DAs that adopt reward-based storage, where the amount of data redundancy tends to fluctuate around a fixed constant; such DAs also introduce a dynamic adjustment mechanism, increasing rewards to attract nodes to store data with fewer backups and thereby ensure data security.

Data read speed: data reading speed is primarily affected by where the data sits in storage, the data index path, and the distribution of data among nodes. Of these, where data is stored within a node has the greater impact, since keeping data in memory versus on SSD can make a tens-of-times difference in read speed. Storage public-chain DAs mostly use SSD storage because their load includes not only DA-layer data but also storage-hungry personal data such as user-uploaded videos and images; without SSDs as storage space, the network could hardly carry the huge storage pressure and meet the demand for long-term storage. Second, comparing third-party DAs with main-chain DAs that hold data in memory: a third-party DA must first search the main chain for the corresponding index data, then transfer it cross-chain and return the data via the storage bridge, whereas a main-chain DA can query data directly from its nodes and thus retrieves data faster. Finally, within main-chain DAs, the sharding approach requires calling blocks from multiple nodes and restoring the original data.
It is therefore slower than short-term storage without sharding.

DA layer universality: the universality of main-chain DA is close to zero, since it is impossible to move data from a public chain with insufficient storage space onto another public chain that also lacks space. In third-party DAs, a solution's generality and its compatibility with a particular main chain are conflicting metrics. A main-chain-specific DA designed for one particular chain makes many improvements at the level of node types and network consensus to fit that chain, and those very improvements become huge obstacles when communicating with other public chains. Within third-party DAs, storage public-chain DAs outperform modular DAs in generality: they have larger developer communities and more expansion facilities to adapt to different public chains, and they can obtain data more actively, through packet capture, rather than passively receiving information transmitted from other chains. A storage public-chain DA can therefore encode data in its own way, standardize the storage of data flows, conveniently manage data from different main chains, and improve storage efficiency.

Source: Kernel Ventures

6. Conclusion

Blockchain is undergoing a conversion from Crypto to Web3, which brings an abundance of on-chain projects but also data storage problems. To accommodate so many projects running simultaneously on Layer1 and preserve the experience of GameFi and SocialFi projects, Layer1s represented by Ethereum have adopted Rollups and Blobs to improve TPS. Moreover, the number of high-performance blockchains among newly born chains keeps growing. But higher TPS means not only higher performance but also more storage pressure on the network.
For the huge amount of historical data, multiple DA approaches, both main-chain and third-party based, have been proposed at this stage to adapt to the growth of on-chain storage pressure. Each improvement has its advantages and disadvantages and different applicability in different contexts. Payment-oriented blockchains, which have very high requirements for historical data security and do not pursue particularly high TPS, can, if still in the preparatory stage, adopt a DankSharding-like storage method that realizes a huge increase in storage capacity while ensuring security. However, for a public chain like Bitcoin, which is already formed and runs a large number of nodes, rashly modifying the consensus layer carries huge risk; it can instead adopt a main-chain-specific DA with higher security among off-chain storage options to balance security and storage. It is worth noting, though, that the function of a blockchain changes over time. For example, Ethereum's functionality was initially limited to payments and simple automated processing of assets and transactions using smart contracts, but as the blockchain landscape has expanded, various SocialFi and DeFi projects have been added, pushing Ethereum in a more comprehensive direction. With the recent explosion of the inscription ecosystem on Bitcoin, transaction fees on the Bitcoin network have surged nearly 20-fold since August, reflecting that the network's transaction speed cannot meet transaction demand at this stage, so traders must raise fees to get transactions processed as quickly as possible. Now the Bitcoin community must make a trade-off: accept high fees and slow transactions, or reduce network security to increase transaction speed and thereby defeat the original purpose of the payment system.
If the Bitcoin community chooses the latter, the storage solution will need to be adjusted as data pressure increases.

Source: OKLINK

As for public chains with comprehensive functions, their pursuit of TPS is higher; with the enormous growth of historical data, a DankSharding-like solution can hardly keep up with rapid TPS growth in the long run. A more appropriate way is to migrate the data to a third-party DA for storage. Among these, a main-chain-specific DA has the highest compatibility and may be more advantageous if only a single public chain's storage is considered. But today, when Layer1 public chains are blooming, cross-chain asset transfer and data interaction have become a common pursuit of the blockchain community. Considering the long-term development of the whole ecosystem, storing historical data from different public chains on the same chain eliminates many security problems in data exchange and validation, so modular DAs and storage public-chain DAs may be the better choice. With comparable generality, a modular DA focuses on providing DA-layer services for blockchains, introduces more refined index data to manage historical data, and can reasonably categorize data from different public chains, giving it an advantage over storage public chains. However, the above proposals do not consider the cost of adjusting the consensus layer of an existing public chain, which is extremely risky: a tiny systemic loophole could make the chain lose community consensus. Therefore, as a transitional solution during blockchain transformation, temporary storage on the main chain may be more appropriate.
Finally, all the discussion above is based on performance during actual operation, but if a public chain's goal is to develop its ecosystem and attract more projects and participants, it may also favor projects supported and funded by its foundation. For example, even if its overall performance were equal to, or slightly below, that of a storage public-chain solution, the Ethereum community would still favor EthStorage, a Layer2 project supported by the Ethereum Foundation, in order to keep developing the Ethereum ecosystem. All in all, the increasing complexity of today's blockchains brings a greater need for storage space. With enough Layer1 validation nodes, historical data does not need to be backed up by every node in the network; security can be ensured once the number of backups passes a certain threshold. At the same time, the division of labor among public chains has become more and more detailed: Layer1 is responsible for consensus and execution, Rollups for computation and verification, and a separate blockchain for data storage, so that each part can focus on one function without being limited by the performance of the others. However, the specific amount of storage, or the proportion of nodes allowed to store historical data, needed to strike a balance between security and efficiency, as well as how to ensure secure interoperability between different blockchains, are problems blockchain developers still need to consider. Investors can pay attention to main-chain-specific DA projects on Ethereum, because Ethereum already has enough supporters at this stage and does not need the power of other communities to expand its influence; what matters more is improving and developing its own community to attract more projects into the Ethereum ecosystem.
However, for public chains that are catching up, such as Solana and Aptos, a single chain does not yet have such a complete ecosystem, so they may prefer to join forces with other communities and build a large cross-chain ecosystem to expand their influence. Therefore, for emerging Layer1s, a general-purpose third-party DA deserves more attention.

Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.

Reference

Celestia: the vast sea of stars of modular blockchains: https://foresightnews.pro/article/detail/15497
DHT usage and future work: https://github.com/celestiaorg/celestia-node/issues/11
Celestia-core: https://github.com/celestiaorg/celestia-core
Solana Labs: https://github.com/solana-labs/solana
Announcing The SOLAR Bridge: https://medium.com/solana-labs/announcing-the-solar-bridge-c90718a49fa2
leveldb-handbook: https://leveldb-handbook.readthedocs.io/zh/latest/sstable.html
Kuszmaul J. Verkle Trees, 2019: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf
Arweave Network: https://www.arweave.org/
Arweave Yellow Paper: https://www.arweave.org/yellow-paper.pdf

Kernel Ventures: Exploring Data Availability — In Relation to Historical Data Layer Design

Author: Kernel Ventures Jerry Luo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
In the early stages of blockchain, maintaining data consistency across all nodes was considered essential to security and decentralization. However, as the blockchain ecosystem has developed, storage pressure has kept increasing, pushing node operation toward centralization. The storage cost problem brought by TPS growth on Layer1 urgently needs to be solved.
Faced with this problem, developers should propose a solution that fully accounts for security, storage cost, data reading speed, and DA-layer versatility.
In solving this problem, many new technologies and ideas have emerged, including Sharding, DAS, Verkle Trees, and DA intermediate components, which attempt to optimize the DA layer's storage scheme by reducing data redundancy and improving data validation efficiency.
From the perspective of where data is stored, DA solutions fall broadly into two types: main-chain DAs and third-party DAs. Main-chain DAs reduce node storage pressure through regular data cleansing and sharded data storage, while third-party DAs are designed to serve storage needs and have reasonable solutions for large amounts of data. Accordingly, we mainly trade off single-chain compatibility against multi-chain compatibility in third-party DAs and propose three kinds of solutions: main-chain-specific DAs, modular DAs, and storage public-chain DAs.
Payment-type public chains have very high requirements for historical data security and are thus suited to using the main chain as the DA layer. However, for public chains that have run for a long time with a large number of miners, it is more suitable to adopt a relatively secure third-party DA that does not touch the consensus layer.
For comprehensive public chains, it is more suitable to use a main-chain-specific DA with larger data capacity, lower cost, and adequate security; considering cross-chain demand, however, modular DA is also a good option.
Overall, blockchain is moving toward reducing data redundancy and a multi-chain division of labor.
1. Background
Blockchain, as a distributed ledger, needs to keep a copy of historical data on all nodes to ensure that data storage is secure and sufficiently decentralized. Since the correctness of each state change depends on the previous state (the source of the transaction), a blockchain must store the complete transaction history, from the first transaction ever generated to the current one, in order to guarantee transaction correctness. Taking Ethereum as an example, even at an average block size of 20 KB, the total size of Ethereum's current data has reached 370 GB. Beyond the blocks themselves, a full node must also record state and transaction receipts; including these, the total storage of a single node exceeds 1 TB, which is gradually centralizing node operation.

Source: Etherscan
The recent Cancun upgrade of Ethereum aims to raise Ethereum's TPS toward 1,000, at which point Ethereum's annual storage growth will exceed the sum of all its current storage. On high-performance public chains, transaction speeds of tens of thousands of TPS can add hundreds of GB of data per day. The common data redundancy of all nodes in the network obviously cannot cope with such storage pressure, so Layer1 must find a suitable solution to balance TPS growth against node storage cost.
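A back-of-envelope sketch of that pressure; the average transaction size used here is an assumed figure, not a measured one.

```python
# Back-of-envelope on-chain data growth from sustained TPS.
TX_SIZE_BYTES = 250                 # assumption: ~250 bytes per transaction
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_growth_gb(tps: float) -> float:
    """GB of new transaction data per year at a sustained TPS."""
    return tps * TX_SIZE_BYTES * SECONDS_PER_YEAR / 1e9

eth_after_cancun = annual_growth_gb(1_000)   # order of ~7,900 GB per year
high_perf_chain = annual_growth_gb(20_000)   # roughly 432 GB added per day
```

Even under this conservative per-transaction size, a 1,000-TPS chain adds several TB per year, an order of magnitude beyond Ethereum's current 370 GB of block data, which is the motivation for the DA designs that follow.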
2. Performance Indicators of DA
2.1 Safety
Compared with a database or a linked list, a blockchain's immutability comes from the fact that newly generated data can be verified against historical data, so ensuring the security of historical data is the first issue to consider in DA-layer storage. To judge the data security of a blockchain system, we often analyze the amount of data redundancy and the method of checking data availability.
Amount of redundancy: redundancy in a blockchain system mainly plays the following roles. First, more redundancy in the network provides more reference samples when a verifier needs to check the account state, helping a node select the data recorded by the majority of nodes, which carries higher security. In a traditional database, data is stored as key-value pairs on a single node, so changing historical data happens at only one node and the cost of attack is low; theoretically, the more redundancy there is, the more trustworthy the data. Moreover, the more nodes store the data, the less likely it is to be lost; compare the centralized servers that store Web2 games, where shutting down all backend servers closes the service completely. But more redundancy is not always better, because redundancy consumes additional storage space and puts excessive storage pressure on the system. A good DA layer should choose a suitable redundancy scheme that balances security and storage efficiency.

Data availability checking: redundancy can ensure enough records of data in the network, but the data to be used must still be checked for accuracy and completeness. Current blockchains commonly use cryptographic commitment algorithms as the verification method: a small cryptographic commitment, obtained by mixing the transaction data, is kept on record by the whole network. To test the authenticity of historical data, one tries to recover the commitment from the data; if the recovered commitment is identical to the original, verification passes. Commonly used cryptographic commitment algorithms include the Merkle root and the Verkle root.
A high-security data availability verification algorithm can quickly verify historical data using as little third-party data as possible.
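The recover-and-compare pattern above can be sketched with a toy Merkle root; the exact hashing layout is an assumption for illustration, not any particular chain's format.

```python
import hashlib

def sha(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    # Mix the data down to one small commitment; odd levels duplicate the tail.
    level = [sha(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The network records only this 32-byte commitment, not the data itself.
history = [b"tx-a", b"tx-b", b"tx-c"]
commitment = merkle_root(history)

def check_availability(claimed, onchain):
    # Recover the commitment from the claimed data and compare.
    return merkle_root(claimed) == onchain
```

Any single changed byte in the claimed history produces a different root, so the 32-byte commitment suffices to police arbitrarily large data sets.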
2.2 Storage Cost
After basic security is ensured, the next goal of the DA layer is to cut costs and raise efficiency. The first step is to reduce storage cost, meaning the memory consumption caused by storing a unit of data, regardless of differences in hardware performance. At this stage, the main ways to reduce storage cost in blockchain are sharding technology and reward-based storage, which lower the number of data backups while maintaining security. It is not hard to see from these methods, however, that there is a trade-off between storage cost and data security: reducing storage occupancy often means reduced security, so an excellent DA layer must balance the two. In addition, if the DA layer is a separate public chain, it must also reduce cost by minimizing the intermediate steps of data exchange: every transit step leaves index data for subsequent retrieval, so the longer the calling process, the more index data is left and the higher the storage cost. Finally, the cost of storing data is directly linked to its persistence: in general, the higher the storage cost, the harder it is for a public chain to store data persistently.
2.3 Data Reading Speed
With costs reduced, the next step is efficiency: the ability to quickly recall data from the DA layer when needed. This process involves two steps. The first is to locate the nodes storing the data; this mainly concerns public chains that have not achieved network-wide data consistency, and if the chain has synchronized data across all nodes, this step's time cost can be ignored. Second, in the mainstream blockchain systems at this stage, including Bitcoin, Ethereum, and Filecoin, node storage uses the LevelDB database. In LevelDB, data is stored in three forms. Data written on the fly goes into Memtable files until the Memtable is full, at which point the file type changes from Memtable to Immutable Memtable. Both types live in memory, but Immutable Memtable files are read-only. The hot storage used in the IPFS network keeps data in this part, so it can be read quickly from memory when called, but an ordinary node has only GBs of memory, which is easily exhausted, and when a node goes down, the data in memory is lost permanently. Persistent storage requires keeping the data as SST files on a solid-state drive (SSD), but reading then requires loading the data into memory first, which greatly slows data indexing. Finally, in a system with sharded storage, data restoration requires sending requests to multiple nodes and reassembling the data, which also slows reading.

Source: Leveldb-handbook
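The memtable-then-SST read path described above can be caricatured in a few lines; the flush threshold and data layout are toy assumptions, not LevelDB's real on-disk format.

```python
# Toy LSM read path: recent writes live in an in-memory memtable; once full,
# the memtable is flushed to an immutable "SST" segment (here, a frozen dict).
MEMTABLE_LIMIT = 2

memtable = {}
sst_segments = []          # flushed segments, newest last

def put(key: str, value: str) -> None:
    global memtable
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:    # flush: memtable -> immutable SST
        sst_segments.append(dict(memtable))
        memtable = {}

def get(key: str):
    # Memory first (fast path), then SSTs newest-to-oldest (slow disk path).
    if key in memtable:
        return memtable[key]
    for seg in reversed(sst_segments):
        if key in seg:
            return seg[key]
    return None

put("a", "1"); put("b", "2")   # second put triggers a flush to an SST
put("a", "3")                  # newer memtable value shadows the SST copy
```

This ordering, memory before disk and newer segments before older ones, is exactly why in-memory hits are fast while SST lookups pay the extra load-into-memory cost the text describes.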
2.4 DA Layer Generalization
With the development of DeFi and the various problems of CEXs, users' demand for cross-chain transactions of decentralized assets is growing. Whether the cross-chain mechanism is hash-locking, a notary, or a relay chain, it cannot avoid determining historical data on both chains simultaneously. The crux of the problem is the separation of the two chains' data: different decentralized systems cannot communicate directly. A solution has therefore been proposed that changes the storage method of the DA layer: store the historical data of multiple public chains on one trusted public chain, so that verification only needs to call data on that chain. This requires the DA layer to be able to establish secure communication with different types of public chains, that is, to have good versatility.
3. Techniques Concerning DA
3.1 Sharding
In a traditional distributed system, a file is not stored in complete form on one node; rather, the original data is divided into multiple blocks, with one block stored on each node. A block is also usually not stored on only one node, but leaves appropriate backups on other nodes; in existing mainstream distributed systems, the number of backups is usually set to 2. This sharding mechanism reduces the storage pressure on individual nodes, expands the system's total capacity to the sum of the nodes' storage, and ensures storage security through appropriate data redundancy. The sharding scheme adopted in blockchain is broadly similar, but differs in some details. First, since nodes in a blockchain are untrusted by default, sharding requires a sufficiently large number of backups for later judgment of data authenticity, so the number of backups per block must be far more than 2. Ideally, in a blockchain storing data this way, if the total number of validating nodes is T and the number of shards is N, the number of backups should be T/N. Second, regarding how a block gets stored: a traditional distributed system with few nodes often adapts one node to multiple data blocks; the data is first mapped onto a hash ring by consistent hashing, then each node stores the range of numbered blocks the hash ring assigns it, and the system accepts that a particular node may have no storage task in a given round. On a blockchain, by contrast, storing a block is no longer a random but an inevitable event for each node: every node randomly selects a block to store, a process completed by hashing the data mixed with the node's own information modulo the number of shards.
Assuming each piece of data is divided into N blocks, each node actually stores only 1/N of the total. By choosing N appropriately, a balance can be struck between TPS growth and node storage pressure.
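As a toy illustration of the assignment rule just described (the node names, data identifiers, and hash choice here are all hypothetical), the following Python sketch hashes a data identifier together with each node's identity and takes the result modulo the shard count, so that with T nodes and N shards each shard ends up with roughly T/N backups:

```python
import hashlib

def assigned_shard(node_id: str, data_id: str, num_shards: int) -> int:
    # Each node derives its shard deterministically by hashing the data
    # identifier mixed with its own identity, modulo the shard count.
    digest = hashlib.sha256((data_id + node_id).encode()).digest()
    return int.from_bytes(digest, "big") % num_shards

def backup_counts(num_nodes: int, num_shards: int, data_id: str):
    # With T nodes and N shards, each shard receives roughly T/N backups.
    counts = [0] * num_shards
    for n in range(num_nodes):
        counts[assigned_shard(f"node-{n}", data_id, num_shards)] += 1
    return counts
```

Because every node is forced to pick exactly one shard, storage is no longer optional for any node, which is the key difference from consistent hashing in a traditional system.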

Source: Kernel Ventures
3.2 DAS (Data Availability Sampling)
DAS technology further optimizes sharding-based storage. In the sharding process, because nodes store blocks by simple random selection, a block may be lost. In addition, for sharded data it is crucial to confirm its authenticity and integrity during reconstruction. DAS solves these two problems with erasure codes and KZG polynomial commitments.
Erasure code: Given the large number of validation nodes in Ethereum, it is possible, if improbable, that some block is not stored by any node. To mitigate this risk of missing storage, instead of simply slicing the raw data into blocks, this scheme maps the raw data to the coefficients of a degree-n polynomial, then takes 2n points on the polynomial and lets each node randomly choose one of them to store. A degree-n polynomial can be reconstructed from any n+1 points, so the nodes only need to retain about half of the blocks for the original data to be recoverable. The erasure code thus improves both the security of data storage and the network's ability to recover the data. KZG polynomial commitment: A very important aspect of data storage is the verification of data authenticity. In networks that do not use erasure codes, various methods can be used; but once erasure coding is introduced, the KZG polynomial commitment is more suitable, because it can verify the content of a single block directly in polynomial form, without first reducing the polynomial back to binary data. The overall form of verification is similar to that of a Merkle tree, but it does not require specific path-node data: the KZG root and the block data alone suffice to verify the authenticity of the block.
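The polynomial trick can be made concrete in a few lines of Python. This is a simplified model using exact rational arithmetic rather than the finite-field arithmetic a real erasure code would use: three data values become the coefficients of a degree-2 polynomial, six evaluation points are published as shares, and any three of them recover a lost share by Lagrange interpolation:

```python
from fractions import Fraction

def encode(coeffs, num_points):
    # Evaluate the data polynomial at x = 1 .. num_points to produce shares
    return [(x, sum(c * x**i for i, c in enumerate(coeffs)))
            for x in range(1, num_points + 1)]

def interpolate(points, x_eval):
    # Lagrange interpolation: evaluate the unique polynomial passing
    # through `points` at x_eval; deg+1 shares recover any other share
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x_eval - xj, xi - xj)
        total += term
    return total

coeffs = [5, -3, 7]          # the raw data, as polynomial coefficients
shares = encode(coeffs, 6)   # publish 2n = 6 points for nodes to store
```

Losing half of the shares is harmless: any three surviving points pin down the degree-2 polynomial and hence the original coefficients.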
3.3 Data Validation Method in DA
Data validation ensures that the data retrieved from a node is accurate and complete. To minimize the amount of data and the computational cost required for validation, the DA layer now mainly uses tree structures. The simplest form is verification with a Merkle tree, which records data as a complete binary tree: a node can be verified by keeping only the Merkle root and the hash values of the sibling subtrees along its path, with a time complexity of O(logN) (logN defaults to log2(N)). Although this greatly simplifies validation, the amount of data needed per proof still grows as the data grows. To solve the problem of increasing validation volume, another validation method, the Verkle tree, has been proposed. Each node in a Verkle tree stores not only its value but also a vector commitment; the authenticity of data can be verified quickly using the original node's value and its commitment proof, without calling the values of sibling nodes. This makes the amount of computation per verification depend only on the depth of the Verkle tree, a fixed constant, thus greatly accelerating verification. However, computing the vector commitment requires the participation of all sibling nodes in the same layer, which greatly increases the cost of writing and changing data. For data such as historical data, which is permanently stored, cannot be tampered with, and can only be read but never written, the Verkle tree is extremely suitable. In addition, both Merkle trees and Verkle trees have K-ary variants, whose mechanisms are similar, simply changing the number of subtrees under each node; a performance comparison is shown in the table below.
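The Merkle half of the comparison is easy to make concrete. The sketch below (a minimal model, not any production implementation) builds a binary Merkle tree with SHA-256 and verifies a leaf against the root using only the O(logN) sibling hashes along its path:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect the sibling hash at every level of the leaf's path
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1           # sibling differs only in the lowest bit
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The proof size is one hash per tree level, which is exactly the logarithmic growth that the Verkle tree's constant-size commitment proofs are designed to avoid.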

Source: Verkle Trees
3.4 Generic DA Middleware
The continuous expansion of the blockchain ecosystem has brought an increasing number of public chains. Because each public chain has advantages and irreplaceability in its own field, Layer1 public chains will not be unified in the short term. However, with the development of DeFi and the problems of CEXs, users' demand for decentralized cross-chain asset trading is growing. Therefore, multi-chain data storage in the DA layer, which can eliminate the security problems of cross-chain data interaction, has attracted more and more attention. To accept historical data from different public chains, the DA layer needs to provide decentralized protocols for standardized storage and validation of the data flow. For example, kvye, a storage middleware based on Arweave, actively crawls data from the main chains and stores data from all chains on Arweave in a standardized form, minimizing the differences in the data-transmission process. By comparison, a Layer2 that provides DA-layer storage specifically for a single public chain interacts with it via internal shared nodes; although this reduces interaction costs and improves security, it has greater limitations and can serve only that specific public chain.
4. Storage Methods of DA
4.1 Main Chain DA
4.1.1 DankSharding-like
There is no definitive name for this type of storage scheme; the most prominent example is Danksharding on Ethereum, so in this paper we use the term DankSharding-like to refer to this class of schemes. It mainly uses the two DA storage techniques mentioned above, sharding and DAS. First, the data is divided into an appropriate number of shards; then each node extracts a block to store in the DAS fashion. If there are enough nodes in the whole network, a larger shard count N can be chosen, so that each node's storage pressure is only 1/N of the original, expanding the overall storage space N-fold. At the same time, to prevent the extreme case in which a block is not stored by any node, DankSharding encodes the data with an erasure code, so that only half of the data is required for complete restoration. Finally, the data is verified with a Verkle tree structure and polynomial commitments for fast checks.
4.1.2 Temporary Storage
For a main-chain DA, one of the simplest ways to handle data is to store historical data only for a short period. Essentially, the blockchain acts as a public ledger: changes to its content are made in the presence of the whole network, and permanent storage is not required. In the case of Solana, for example, although its historical data is synchronized to Arweave, the main-network nodes retain only the last two days of transaction data. On a public chain based on account records, the historical data at each moment retains the final state of the accounts on the blockchain, which is sufficient to verify the changes at the next moment. Those with special needs for earlier data can store it on other decentralized public chains or hand it over to a trusted third party; in other words, anyone with additional data needs must pay for historical data storage.
4.2 Third Party DA
4.2.1 DA for Main Chain: EthStorage
DA for Main Chain: The most important property of a DA layer is the security of data transmission, and the most secure DA is that of the main chain itself. However, main-chain storage is limited by storage space and resource competition, so when the network's data volume grows quickly, a third-party DA is a better choice for long-term data storage. The higher the compatibility between a third-party DA and the main network, the more node sharing it allows, and the more secure the data-interaction process; so, on the premise of security, a DA dedicated to the main chain has a huge advantage. Taking Ethereum as an example, a basic requirement for a main-chain-dedicated DA is EVM compatibility, to ensure interoperability with Ethereum data and contracts. Representative projects include Topia, EthStorage, and so on. Among them, EthStorage is the most developed in terms of compatibility: besides EVM compatibility, it also provides interfaces to Remix, Hardhat, and other Ethereum development tools. EthStorage: EthStorage is a public chain independent of Ethereum, but the nodes running it are a supergroup of Ethereum nodes, meaning a node running EthStorage can run Ethereum at the same time; moreover, EthStorage can be operated directly through opcodes on Ethereum. EthStorage's storage model retains only a small amount of metadata for indexing on the Ethereum mainnet, essentially creating a decentralized database for Ethereum. In the current solution, EthStorage deploys an EthStorage Contract on the Ethereum mainnet to realize the interaction between the two.
To deposit data, Ethereum calls the put() function in the contract with two bytes-typed parameters, key and data, where data is the data to be deposited and key is its identity in the Ethereum network, comparable to a CID in IPFS. After the (key, data) pair is successfully stored in the EthStorage network, EthStorage generates a kvIdx and returns it to the Ethereum mainnet; this value corresponds to the key on the Ethereum network and to the storage address of the data on EthStorage. The original problem of storing a large amount of data is thus reduced to storing a single (key, kvIdx) pair, which greatly reduces the storage cost on the Ethereum mainnet. To call previously stored data, the get() function in EthStorage is used with the key as input; the kvIdx stored on Ethereum then allows a fast lookup of the data on EthStorage.
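The indirection described above can be modeled in a few lines. The class below is a deliberately simplified, hypothetical sketch of the put()/get() flow (the real system involves an on-chain contract and a separate node network, and the slot-assignment rule here is invented), showing how only a small (key, kvIdx) pair remains on the main chain while the full payload lives off-chain:

```python
class EthStorageSketch:
    """Toy model of the key -> kvIdx indirection described above."""

    def __init__(self):
        self.main_chain_index = {}   # kept on Ethereum: key -> kvIdx (small)
        self.storage = []            # kept on EthStorage: kvIdx -> full data

    def put(self, key: bytes, data: bytes) -> int:
        kvidx = len(self.storage)    # storage slot assigned by EthStorage
        self.storage.append(data)
        self.main_chain_index[key] = kvidx   # only this pair stays on L1
        return kvidx

    def get(self, key: bytes) -> bytes:
        # Look up kvIdx on the main chain, then fetch the body off-chain
        return self.storage[self.main_chain_index[key]]
```

However large the stored blob is, the main-chain footprint per entry stays a constant-size index record.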

Source: Kernel Ventures
As for how nodes store data, EthStorage borrows from the Arweave model. First, the large number of (key, value) pairs from Ethereum are sharded; each shard contains a fixed number of (key, value) pairs, and each pair has a size limit to ensure fairness of workload when rewarding miners for storage. Before rewards are issued, it must be verified that a node actually stores the data. In this process, EthStorage divides each shard (of TB size) into many chunks and keeps a Merkle root on the Ethereum mainnet for verification. A miner then provides a nonce which, combined with the hash of the previous block on EthStorage, generates a few chunk indices by a random algorithm; by providing the data of those chunks, the miner proves that it stores the whole shard. The nonce cannot be chosen arbitrarily, however, otherwise a node could pick a nonce corresponding only to the chunks it happens to store and pass verification. Instead, the nonce must make the hash of the mixed chunk data meet the network's difficulty requirement, and only the first node to submit a valid nonce and the random-access proof receives the reward.
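The nonce-and-sampling loop can be sketched as follows. This is a toy model with made-up parameters (chunk counts, difficulty expressed as leading zero bits), but it captures the constraint: the nonce fixes which chunks are sampled, so a miner can only search over nonces whose sampled chunks it actually holds:

```python
import hashlib

def sample_chunks(prev_hash: bytes, nonce: int, num_chunks: int, k: int):
    """Pseudo-randomly select the k chunks the miner must produce."""
    seed = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big")).digest()
    return [int.from_bytes(hashlib.sha256(seed + i.to_bytes(2, "big")).digest(),
                           "big") % num_chunks
            for i in range(k)]

def meets_difficulty(prev_hash: bytes, nonce: int, chunk_data, bits: int) -> bool:
    """The hash of nonce and sampled chunk data must start with `bits` zero bits."""
    mix = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big")
                         + b"".join(chunk_data)).digest()
    return int.from_bytes(mix, "big") >> (256 - bits) == 0

def mine(prev_hash: bytes, stored_chunks: dict, num_chunks: int, k: int, bits: int):
    """Search nonces; raises KeyError if a sampled chunk is not stored."""
    nonce = 0
    while True:
        idxs = sample_chunks(prev_hash, nonce, num_chunks, k)
        data = [stored_chunks[i] for i in idxs]
        if meets_difficulty(prev_hash, nonce, data, bits):
            return nonce, idxs
        nonce += 1
```

A miner missing chunks cannot simply skip unfavorable nonces: each skipped nonce is one fewer lottery ticket in the difficulty race, so storing the full shard is the profitable strategy.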
4.2.2 Modularization DA: Celestia
Blockchain Module: The work performed on a Layer1 public chain can be divided into the following four parts: (1) designing the underlying logic of the network, selecting validation nodes in a certain way, writing blocks, and allocating rewards to network maintainers; (2) packaging and processing transactions and publishing them; (3) validating transactions to be uploaded to the blockchain and determining the final state; (4) storing and maintaining historical data on the blockchain. According to these functions, the blockchain can be divided into four modules: the consensus layer, the execution layer, the settlement layer, and the data availability layer (DA layer). Modular Blockchain design: For a long time, these four modules have been integrated on a single public chain; such a blockchain is called a monolithic blockchain. This form is more stable and easier to maintain, but it puts tremendous pressure on the single public chain. In practice, the four modules constrain each other and compete for the chain's limited computational and storage resources. For example, increasing the processing speed of the execution layer brings more storage pressure to the data availability layer; ensuring the security of the execution layer requires a more complex verification mechanism, which slows transaction processing. The development of a public chain therefore often faces trade-offs among these four modules. To break through this bottleneck in public-chain performance, developers have proposed modular blockchains, whose core idea is to strip out one or several of the four modules and hand them to a separate public chain.
In this way, each public chain can focus on improving transaction speed or storage capacity, breaking the limitation that a blockchain's overall performance is determined by its shortest board. Modular DA: Separating the DA layer from the rest of the blockchain's business and placing it on a separate public chain is considered a viable solution for Layer1's growing historical data. Exploration in this area is still at an early stage, and the most representative project is Celestia. It uses sharded storage: the data is divided into multiple blocks, each node extracts a part for storage, and KZG polynomial commitments verify data integrity. On top of this, Celestia uses two-dimensional RS erasure codes to rewrite the original data as a k*k matrix, so that only 25% of the matrix is required to recover the original data. However, sharded storage merely divides the whole network's storage pressure by a constant factor; node storage pressure still grows linearly with the data volume. As Layer1 keeps improving transaction speed, node storage pressure may one day reach an unacceptable threshold. To address this, Celestia introduces an IPLD component. Instead of storing the k*k matrix directly on Celestia, the data is stored on the LL-IPFS network, with only the CID of the data kept on the node. When a user requests a piece of historical data, the node sends the corresponding CID to the IPLD component, which calls the original data on IPFS. If the data exists on IPFS, it is returned via the IPLD component and the node; if not, the data cannot be returned.
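The CID indirection can be sketched as follows; `IPFSNetwork` and `DANode` are toy stand-ins for the external content-addressed storage network and a node with its IPLD-style retrieval component, not real APIs:

```python
import hashlib

def cid(data: bytes) -> str:
    # Toy content identifier: a hash of the data stands in for an IPFS CID
    return hashlib.sha256(data).hexdigest()

class IPFSNetwork:
    """Stand-in for the external content-addressed storage network."""
    def __init__(self):
        self.blobs = {}

    def add(self, data: bytes) -> str:
        c = cid(data)
        self.blobs[c] = data
        return c

    def get(self, c: str):
        return self.blobs.get(c)   # None if the data no longer exists

class DANode:
    """Keeps only CIDs locally; data bodies are fetched on demand."""
    def __init__(self, net: IPFSNetwork):
        self.net = net
        self.cids = []

    def store(self, data: bytes) -> None:
        self.cids.append(self.net.add(data))

    def fetch(self, i: int):
        return self.net.get(self.cids[i])
```

The node's local state grows only by one fixed-size CID per stored item, while the availability of the body depends entirely on the external network, which is exactly the trade-off the text describes.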

Source: Celestia Core
Celestia: Taking Celestia as an example, we can see how a modular blockchain is applied to Ethereum's storage problem. A Rollup node sends its packaged and verified transaction data to Celestia, which stores the data without needing to interpret it; according to the size of the storage space used, the Rollup node pays a corresponding amount of TIA tokens to Celestia as the storage fee. Storage on Celestia uses DAS and erasure coding similar to EIP-4844, but the one-dimensional polynomial erasure code of EIP-4844 is upgraded to a two-dimensional RS erasure code, raising storage security again: only 25% of the fractions are needed to recover the entire transaction data. Celestia is essentially a POS public chain with low storage costs, and if it is to serve as a solution to Ethereum's historical-data storage problem, many other modules must work with it. For example, in terms of rollups, one model highly recommended on Celestia's website is the Sovereign Rollup. Unlike a common Layer2 rollup, which only computes and verifies transactions, that is, only implements the execution layer, a Sovereign Rollup includes the entire execution and settlement process. This minimizes transaction processing on Celestia, which maximizes the overall security of the transaction process in a setting where Celestia's own security is weaker than Ethereum's. As for the security of the data Celestia returns to the Ethereum mainnet, the most mainstream solution is the Quantum Gravity Bridge smart contract: for the data stored on Celestia, a Merkle root (data availability certificate) is generated and kept in the Quantum Gravity Bridge contract on the Ethereum mainnet. Each time Ethereum calls historical data from Celestia, it compares the hash result with this Merkle root; if they match, the data is indeed the real historical data.
4.2.3 Storage Chain DA
In terms of technical principles, main-chain DAs have borrowed many sharding-like techniques from storage public chains, and some third-party DAs even fulfill part of the storage task directly with the help of a storage public chain; for example, the specific transaction data in Celestia is put on the LL-IPFS network. Among third-party DA solutions, besides building a separate public chain to solve Layer1's storage problem, a more direct way is to connect a storage public chain to Layer1 to hold its huge historical data. For high-performance blockchains, the volume of historical data is even larger: at full speed, the data volume of the high-performance public chain Solana approaches 4 PB, completely beyond the storage range of ordinary nodes. Solana's solution is to store historical data on the decentralized storage network Arweave, keeping only 2 days of data on the main-network nodes for verification. To ensure the security of this process, Solana and Arweave designed a storage-bridge protocol, Solar Bridge, which synchronizes validated data from Solana nodes to Arweave and returns a corresponding tag, with which Solana nodes can view the historical data of the Solana blockchain from any point in time. On Arweave, rather than requiring all nodes in the network to maintain data consistency as a prerequisite for participation, a reward-based storage approach is adopted. First of all, Arweave does not use a traditional chain structure to build blocks, but something closer to a graph structure.
In Arweave, a new block points not only to the previous block but also to a randomly chosen, already-generated block called the recall block, whose exact location is determined by the hash of the previous block and its block height; the recall block's location is therefore unknown until the previous block has been mined. When generating a new block, nodes must possess the recall block's data in order to use the POW mechanism to compute a hash of the specified difficulty, and only the first miner to compute a difficulty-compliant hash is rewarded, which encourages miners to store as much historical data as possible. At the same time, the fewer nodes store a particular historical block, the fewer competitors a node faces when generating a difficulty-compliant nonce, which encourages miners to store blocks with fewer backups in the network. Finally, to ensure that nodes store data permanently, Arweave introduces WildFire, a node-scoring mechanism: nodes prefer to communicate with peers that provide historical data more and faster, so nodes with lower ratings cannot obtain the latest block and transaction data first and thus fail to get a head start in the POW competition.
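The recall-block rule can be sketched in Python; the exact hashing formula below is an assumption for illustration, not Arweave's specification:

```python
import hashlib

def recall_index(prev_block_hash: bytes, height: int) -> int:
    """Index of the recall block, fixed only once the previous block exists."""
    digest = hashlib.sha256(prev_block_hash + height.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % height

def can_attempt_block(stored_heights, prev_block_hash: bytes, height: int) -> bool:
    """A miner may join the POW race only if it stores the recall block,
    rewarding nodes that keep more (and rarer) historical data."""
    return recall_index(prev_block_hash, height) in stored_heights
```

Because the index is unpredictable before the previous block is mined, a miner cannot know in advance which historical blocks it will need, so keeping all of them maximizes its chance of being eligible.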

Source: Arweave Yellow-Paper
5. Synthesized Comparison
We will compare the advantages and disadvantages of each of the five storage solutions in terms of the four dimensions of DA performance metrics.
Safety: The biggest sources of data-security problems are data loss during transmission and malicious tampering by dishonest nodes, and the cross-chain process is the hardest-hit area of transmission security, because the two public chains are independent and do not share state. In addition, a Layer1 that requires a specialized DA layer at this stage often has a strong consensus group, whose security is much higher than that of an ordinary storage public chain; the main-chain DA solution therefore has higher security. After securing data transmission, the next step is securing data calls. Considering only the short-term historical data used to verify transactions: in a temporary-storage network the same data is backed up by the whole network, while in a DankSharding-like scheme the average number of backups is only 1/N of the number of nodes in the whole network. More data redundancy makes data less likely to be lost and provides more reference samples for verification, so temporary storage has higher data security. Among third-party DA schemes, those that share public nodes with the main chain can transmit data directly through these relay nodes during cross-chain interaction, and thus have relatively higher security than the other third-party schemes.

Storage Cost: The factor with the greatest impact on storage cost is the amount of data redundancy. The short-term storage scheme of main-chain DA stores data by network-wide node synchronization, so any newly stored data is backed up by every node in the network, giving it the highest storage cost; this in turn means that in a high-TPS network this approach is only suitable for temporary storage. Next come the sharded storage methods, both main-chain sharding and sharding in a third-party DA; since the main chain usually has more nodes, and thus more backups per block, the main-chain sharding scheme costs more. The lowest storage cost belongs to the storage-public-chain DA with reward-based storage, where the amount of data redundancy fluctuates around a fixed constant; it additionally introduces a dynamic adjustment mechanism that attracts nodes to under-backed-up data by increasing the reward, so as to ensure data security.

Data Read Speed: Data read speed is primarily affected by where the data sits in the storage medium, the data index path, and the distribution of the data among nodes. Of these, where data is stored within a node has the greater impact, because keeping data in memory versus on an SSD can produce a tens-of-times difference in read speed. Storage-public-chain DAs mostly use SSD storage, because the load on such a chain includes not only DA-layer data but also storage-hungry personal data such as videos and images uploaded by users; without SSDs the network could hardly carry the huge storage pressure or meet the demand for long-term storage. For third-party DAs and main-chain DAs that keep data in memory state, a third-party DA must first look up the corresponding index data on the main chain, transfer it across chains, and return the data through the storage bridge, whereas the main-chain DA can query data directly from its nodes and thus reads faster. Finally, within main-chain DA, the sharding approach must call blocks from multiple nodes and restore the original data, so it is slower than unsharded short-term storage.

DA Layer Universality: Main-chain DA universality is close to zero, because it is impossible to move data from a public chain with insufficient storage space onto another public chain that is equally storage-constrained. In third-party DAs, the generality of a solution and its compatibility with a particular main chain are conflicting metrics: a main-chain-specific DA solution makes many adaptations at the level of node types and network consensus for that particular chain, and these adaptations become huge obstacles when communicating with other public chains. Within third-party DAs, storage-public-chain DAs outperform modular DAs in generality: they have larger developer communities and more expansion facilities to adapt to different public chains, and they can obtain data more actively, by crawling, rather than passively receiving information transmitted from other chains. They can therefore encode data in their own way, achieve standardized storage of the data flow, manage data from different main chains more conveniently, and improve storage efficiency.

Source: Kernel Ventures
6. Conclusion
Blockchain is undergoing a conversion from Crypto to Web3, which brings an abundance of on-chain projects but also data-storage problems. To accommodate the simultaneous operation of so many projects on Layer1 and preserve the experience of GameFi and SocialFi projects, Layer1s represented by Ethereum have adopted Rollups and Blobs to improve TPS, and the number of high-performance blockchains among the newly born chains keeps growing. But higher TPS means not only higher performance but also more storage pressure in the network. For the huge amount of historical data, multiple DA approaches, both main-chain and third-party based, have been proposed at this stage to adapt to the growth of on-chain storage pressure. Each has its advantages and disadvantages and different applicability in different contexts. Payment-oriented blockchains have extremely high requirements for the security of historical data and do not pursue particularly high TPS; if such a chain is still in the preparatory stage, it can adopt a DankSharding-like storage method, which achieves a huge increase in storage capacity while still ensuring security. For a public chain like Bitcoin, however, which has already taken shape and has a large number of nodes, rashly upgrading the consensus layer carries huge risk, so it can adopt a dedicated main-chain DA with higher security in off-chain storage to balance security and storage. It is worth noting that the functions of a blockchain change over time. For example, Ethereum's early functionality was limited to payments and simple automated processing of assets and transactions via smart contracts, but as the blockchain landscape expanded, various SocialFi and DeFi projects joined Ethereum and pushed it in a more comprehensive direction.
With the recent explosion of the inscription ecosystem on Bitcoin, transaction fees on the Bitcoin network have surged nearly 20-fold since August, reflecting that the network's transaction speed cannot meet transaction demand at this stage, and traders must raise fees to get transactions processed as quickly as possible. The Bitcoin community now faces a trade-off: accept high fees and slow transaction speeds, or reduce network security to increase transaction speed and thereby defeat the original purpose of the payment system. If the Bitcoin community chooses the latter, the storage solution will have to be adjusted under growing data pressure.

Source: OKLINK
As for public chains with comprehensive functions, their pursuit of TPS is higher; with the enormous growth of historical data, it is difficult for a DankSharding-like solution to accommodate rapid TPS growth in the long run. A more appropriate way is to migrate the data to a third-party DA for storage. Among these, a main-chain-specific DA has the highest compatibility and may be more advantageous if only the storage of a single public chain is considered. But today, when Layer1 public chains are flourishing, cross-chain asset transfer and data interaction have become a common pursuit of the blockchain community. Considering the long-term development of the whole blockchain ecosystem, storing the historical data of different public chains on the same chain eliminates many security problems in data exchange and validation, so modular DA and storage-public-chain DA may be better choices. With comparable generality, a modular DA focuses on providing DA-layer services for blockchains, introduces more refined index data to manage historical data, and can categorize data from different public chains reasonably, giving it an advantage over storage public chains. Note, however, that the above does not consider the cost of adjusting the consensus layer of an existing public chain, which is extremely risky: a tiny systemic loophole may cost the public chain its community consensus. Therefore, as a transitional solution during a blockchain's transformation, temporary storage on the main chain may be more appropriate. Finally, all of the above discussion is based on performance during actual operation; if a public chain's goal is to develop its ecosystem and attract more projects and participants, it may also favor projects supported and funded by its own foundation. For example, even if the overall performance is equal to, or slightly lower than, that of the storage-public-chain solution, the Ethereum community will still favor EthStorage, a Layer2 project supported by the Ethereum Foundation, in order to continue developing the Ethereum ecosystem.
All in all, the increasing complexity of today's blockchains brings a greater need for storage space. With enough Layer1 validation nodes, historical data does not need to be backed up by every node in the network: security can be ensured once the number of backups exceeds a certain threshold. At the same time, the division of labor among public chains is becoming more and more refined: Layer1 is responsible for consensus and execution, Rollups for computation and verification, and a separate blockchain for data storage; each part can focus on one function without being limited by the performance of the others. However, how many nodes, or what proportion of them, should store historical data to balance security and efficiency, and how to ensure secure interoperability between different blockchains, are problems blockchain developers must still consider. Investors may pay attention to main-chain-specific DA projects on Ethereum, because Ethereum already has enough supporters at this stage and does not need the power of other communities to expand its influence; what matters more is improving and developing its own community to attract more projects into the Ethereum ecosystem. For public chains that are catching up, such as Solana and Aptos, a single chain does not have such a complete ecosystem, so they may prefer to join forces with other communities and build a large cross-chain ecosystem to expand their influence. Therefore, for emerging Layer1s, general-purpose third-party DAs deserve more attention.
Kernel Ventures is a research & dev community-driven crypto VC fund with more than 70 early-stage investments, focusing on infrastructure, middleware, and dApps, especially ZK, Rollup, DEX, and modular blockchains, as well as verticals that will onboard the next billion users in crypto, such as account abstraction, data availability, and scalability. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
Reference
Celestia: The Stars and Seas of Modular Blockchains: https://foresightnews.pro/article/detail/15497
DHT usage and future work: https://github.com/celestiaorg/celestia-node/issues/11
Celestia-core: https://github.com/celestiaorg/celestia-core
Solana Labs: https://github.com/solana-labs/solana
Announcing The SOLAR Bridge: https://medium.com/solana-labs/announcing-the-solar-bridge-c90718a49fa2
leveldb-handbook: https://leveldb-handbook.readthedocs.io/zh/latest/sstable.html
Kuszmaul J. Verkle Trees, 2019: https://math.mit.edu/research/highschool/primes/materials/2018/Kuszmaul.pdf
Arweave Network: https://www.arweave.org/
Arweave Yellow Paper: https://www.arweave.org/yellow-paper.pdf

Kernel Ventures: Exploring DA and Historical Data Layer Design

Author: Kernel Ventures Jerry Luo
Reviewer(s): Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR:
Early public chains required every node in the network to keep a consistent copy of the data to ensure security and decentralization. As blockchain ecosystems have grown, however, storage pressure keeps rising and node operation is trending toward centralization; Layer1s urgently need to address the storage costs brought by TPS growth. To do so, developers must propose new historical data storage solutions that balance security, storage cost, data read speed, and DA layer generality. Many new technologies and ideas have emerged along the way, including sharding, DAS, Verkle trees, and DA middleware, which try to optimize DA-layer storage by reducing data redundancy and improving verification efficiency. Based on where the data is stored, current DA solutions fall into two broad classes: main-chain DA and third-party DA. Main-chain DA reduces node storage pressure by periodically pruning data or by sharding storage across nodes. Third-party DA is designed for storage from the ground up and offers reasonable solutions for large volumes of data; it mainly trades off single-chain compatibility against multi-chain compatibility, yielding three approaches: main chain-specific DA, modular DA, and storage public chain DA. Payment-oriented public chains have extremely high requirements for historical data security and are suited to using the main chain as the DA layer; for long-running chains with many miners, a third-party DA that preserves security without touching the consensus layer is more appropriate. General-purpose chains are better served by main chain-specific DA, which offers larger capacity and lower cost while maintaining security, though modular DA is also a good option given cross-chain demand. Overall, blockchains are moving toward less data redundancy and a multi-chain division of labor.
1. Background
As a distributed ledger, a blockchain keeps a copy of historical data on every node to ensure that storage is secure and sufficiently decentralized. Since the correctness of each state change depends on the previous state (the source of the transaction), a blockchain should in principle store every record from the first transaction to the present in order to validate new transactions. Taking Ethereum as an example, even at an estimated average block size of 20 KB, the total size of Ethereum blocks has reached 370 GB, and a full node must also record state and transaction receipts on top of the blocks themselves. Including these, the total storage of a single node exceeds 1 TB, concentrating node operation in the hands of a few.

Latest Ethereum block height. Source: Etherscan
Ethereum's recent Cancun upgrade aims to push TPS toward 1,000, at which point Ethereum's storage growth per year would exceed the sum of all its current storage. On the high-performance chains that have recently taken off, tens of thousands of TPS could add hundreds of gigabytes of data per day. Full network-wide data redundancy clearly cannot cope with such storage pressure; Layer1 must find a suitable scheme that balances TPS growth against node storage cost.
2. DA Performance Metrics
2.1 Security
Compared with a database or a linked-list storage structure, a blockchain's immutability comes from the ability to verify newly produced data against historical data, so guaranteeing the security of historical data is the first concern of DA-layer storage. To judge the data security of a blockchain system, we usually analyze the amount of data redundancy and the method used to verify data availability.
Amount of redundancy: redundancy in a blockchain system mainly serves the following purposes. First, the more redundancy in the network, the more samples a validator can consult when it needs to check the account state in some historical block to verify a transaction, letting it pick the data recorded by the majority of nodes. In a traditional database, by contrast, data is stored as key-value pairs on a single node, so changing historical data only requires touching that one node and the cost of attack is extremely low; in theory, the more redundancy, the more trustworthy the data. More storage nodes also mean the data is less likely to be lost, which can be compared with the centralized servers that store Web2 games: once the backend servers shut down, the game is gone for good. But more is not always better, since every replica adds storage overhead and too much redundancy puts excessive storage pressure on the system; a good DA layer should choose a redundancy scheme that balances security and storage efficiency.
Data availability verification: redundancy guarantees enough records of the data in the network, but data must still be checked for accuracy and integrity before use. Today's blockchains commonly use cryptographic commitments for verification: a small cryptographic commitment derived from the transaction data is kept on record by the whole network. To check the authenticity of a piece of historical data, one recomputes the commitment from that data and checks whether it matches the network-wide record; if it does, verification passes. Common commitment schemes include Merkle roots and Verkle roots. A highly secure data availability scheme needs only a small amount of verification data to quickly check historical data.
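The commitment check described above can be sketched in a few lines. This is a minimal illustration using a flat SHA-256 hash as the commitment; real systems use a Merkle or Verkle root instead, which additionally supports compact per-item proofs.

```python
import hashlib

def commit(tx_batch):
    """Flat SHA-256 commitment over a batch of transactions.
    Stands in for a Merkle/Verkle root, which would also allow per-item proofs."""
    h = hashlib.sha256()
    for tx in tx_batch:
        h.update(hashlib.sha256(tx).digest())
    return h.digest()

def verify(tx_batch, network_commitment):
    # Recompute the commitment from the claimed data and compare it
    # with the commitment recorded across the network.
    return commit(tx_batch) == network_commitment

txs = [b"alice->bob:10", b"bob->carol:4"]
root = commit(txs)                      # the small record the whole network keeps
assert verify(txs, root)                # untouched data passes
assert not verify([b"alice->bob:999", b"bob->carol:4"], root)  # tampering fails
```

Any change to the underlying data changes the recomputed commitment, so the network-wide record only needs to be a few dozen bytes.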
2.2 Storage Cost
With basic security guaranteed, the next core goal of the DA layer is cutting cost and improving efficiency, starting with reducing storage cost, i.e., the space consumed per unit of stored data, hardware differences aside. Current approaches mainly involve sharding the data, or using incentivized storage to reduce the number of backups while ensuring the data is effectively stored. It is easy to see from these approaches that storage cost trades off against data security: lower storage usage usually means lower security, so an excellent DA layer must balance the two. Moreover, if the DA layer is a separate chain, it must also cut cost by minimizing the intermediate hops data passes through: every relay leaves index data behind for later lookups, so a longer call path leaves more index data and raises the storage cost. Finally, storage cost is directly tied to data persistence: in general, the higher the storage cost, the harder it is for a chain to store data persistently.
2.3 Data Read Speed
After cutting cost comes improving efficiency: the ability to quickly retrieve data from the DA layer when it is needed. The process involves two steps. The first is locating the nodes that store the data, which mainly matters for chains that have not achieved network-wide data consistency; if the chain has full node synchronization, this step's time cost can be ignored. Second, mainstream blockchain systems today, including Bitcoin, Ethereum, and Filecoin, store node data in the LevelDB database. In LevelDB, data is stored in three forms. Freshly written data goes into Memtable files; when a Memtable fills up, it is converted into an Immutable Memtable. Both file types live in memory, but an Immutable Memtable can no longer be modified, only read. The hot storage used in the IPFS network keeps data in this layer so it can be read quickly from memory when called, but an ordinary node's memory is typically gigabyte-scale, fills up easily, and loses its contents permanently if the node crashes. For persistent storage, data must be written to solid-state drives (SSD) as SST files, but reads must then first load the data back into memory, which greatly slows data indexing. Finally, in systems using sharded storage, reconstructing the data requires sending requests to multiple nodes, which further slows reads.
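The memory-versus-disk tiering above can be sketched with a toy LSM-style store. This is an illustration of the read/write path only (mutable memtable, frozen immutable memtable, flushed SST-like sorted runs); sizes, file formats, and compaction are omitted and the names are only loosely modeled on LevelDB.

```python
class MiniLSM:
    """Toy sketch of LevelDB's write path: a mutable memtable that, once full,
    is frozen (immutable memtable) and flushed to an SST-like sorted run."""
    def __init__(self, memtable_limit=2):
        self.memtable = {}      # in-memory, fast read/write
        self.immutable = None   # frozen in memory, read-only
        self.sstables = []      # "on-disk" sorted key/value runs
        self.limit = memtable_limit

    def put(self, k, v):
        if len(self.memtable) >= self.limit:
            self.immutable = self.memtable                        # freeze
            self.sstables.append(sorted(self.immutable.items()))  # flush to "disk"
            self.memtable = {}
        self.memtable[k] = v

    def get(self, k):
        if k in self.memtable:                       # fastest: live memory
            return self.memtable[k]
        if self.immutable and k in self.immutable:   # frozen memory
            return self.immutable[k]
        for run in reversed(self.sstables):          # slowest: scan disk runs
            for kk, vv in run:
                if kk == k:
                    return vv
        return None

db = MiniLSM(memtable_limit=2)
for i, k in enumerate("abcde"):
    db.put(k, i)
assert db.get("a") == 0   # already flushed to an SST run, still readable
assert db.get("e") == 4   # still in the live memtable
```

Note how a read for an old key falls through memory and ends in a scan of the flushed runs, which is exactly why reads of persisted data are slower than reads of hot data.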

LevelDB data storage structure. Source: Leveldb-handbook
2.4 DA Layer Generality
With the growth of DeFi and the many problems of CEXs, user demand for decentralized cross-chain asset trading keeps rising. Whether the cross-chain mechanism is hash locking, a notary, or a relay chain, it cannot avoid simultaneously confirming historical data on both chains. The crux of the problem is the separation of data on the two chains: different decentralized systems cannot communicate directly. One proposed solution changes how the DA layer stores data: the historical data of multiple public chains is stored on one trusted chain, so verification only needs to read data from that one chain. This requires the DA layer to establish secure communication with different kinds of public chains, i.e., to have good generality.
3. Exploration of DA Technologies
3.1 Sharding
In a traditional distributed system, a file is not stored in full on any one node; instead, the original data is split into multiple blocks, with one block stored per node. A block is usually not kept on just one node either; appropriate backups are kept on other nodes, and mainstream distributed systems typically set this backup count to 2. This sharding mechanism reduces the storage pressure on individual nodes, expands the system's total capacity to the sum of all nodes' storage, and still secures the data through moderate redundancy. Blockchain sharding schemes are broadly similar but differ in the details. First, since blockchain nodes are untrusted by default, sharding needs a large enough number of backups for later judgments about data authenticity, so the backup count must be far greater than 2. Ideally, in a blockchain using this scheme with T validation nodes and N shards, the backup count should be T/N. Second is the block assignment process. A traditional distributed system has few nodes, so one node handles many data blocks: a consistent hashing algorithm maps data onto a hash ring, each node stores blocks whose IDs fall in a certain range, and it is acceptable for a node to receive no storage assignment in a given round. On a blockchain, however, whether each node is assigned a block is no longer random but certain: each node randomly draws one block to store, by hashing data that combines the block's original data with the node's own information and taking the result modulo the number of shards. Assuming the data is divided into N blocks, each node's actual storage is only 1/N of the original. By setting N appropriately, one can balance growing TPS against node storage pressure.
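The assignment rule described above, hash the block data together with the node's own identity and take the result modulo the shard count, can be sketched as follows. The inputs (`block_root`, `node_id`) are illustrative; real schemes fix exactly what goes into the hash.

```python
import hashlib

def assign_block(block_root, node_id, num_shards):
    """Each node deterministically draws one shard index by hashing the block's
    data root together with its own identity, mod the shard count."""
    digest = hashlib.sha256(block_root + node_id).digest()
    return int.from_bytes(digest, "big") % num_shards

# With T nodes and N shards, each shard ends up with roughly T/N replicas.
N = 4
nodes = [f"node-{i}".encode() for i in range(1000)]
counts = [0] * N
for node in nodes:
    counts[assign_block(b"block-root", node, N)] += 1
# counts: each shard holds roughly 1000/4 = 250 replicas
```

Because the draw is a deterministic hash, every node can recompute which shard any peer is responsible for, with no coordinator handing out assignments.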

Data storage after sharding. Source: Kernel Ventures
3.2 DAS (Data Availability Sampling)
DAS further optimizes storage on top of sharding. In sharding, because nodes store blocks by simple random selection, some block might end up lost. Also, for sharded data, confirming authenticity and integrity during reconstruction is very important. DAS solves these two problems with erasure codes and KZG polynomial commitments.
Erasure code: given Ethereum's huge number of validation nodes, the probability that a block is stored by no node at all is almost zero, but in theory this extreme case can still occur. To mitigate the threat of such missing storage, this scheme usually does not split the original data directly into blocks for storage; instead, the original data is first mapped onto the coefficients of a degree-n polynomial, 2n points are taken on the polynomial, and nodes randomly pick one of them to store. Since a degree-n polynomial can be reconstructed from any n+1 points, as long as half of the blocks are picked up by nodes, the original data can be recovered. Erasure coding thus raises both the security of data storage and the network's ability to recover data.
KZG polynomial commitment: checking data authenticity is a crucial part of data storage. In a network without erasure coding, verification can use various methods, but once erasure coding is introduced to improve data security as above, the KZG polynomial commitment is a better fit. A KZG commitment can verify the content of a single block directly in polynomial form, skipping the step of decoding the polynomial back into binary data. The verification is broadly similar to a Merkle tree, but no path node data is needed: the KZG root and the block data suffice to check authenticity.
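The polynomial trick above can be demonstrated concretely. This sketch works in the equivalent "evaluation" view of the article's coefficient description: k data chunks are read as values of a degree-(k-1) polynomial, 2k shares are published, and any k of them recover everything via Lagrange interpolation. A small Mersenne prime field is used for readability; production systems use much larger, pairing-friendly fields.

```python
P = 2**31 - 1  # small prime field for illustration only

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through `points` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def encode(chunks):
    """Interpret k chunks as P(0..k-1) and publish 2k shares P(0..2k-1)."""
    k = len(chunks)
    pts = list(enumerate(chunks))
    return [(x, lagrange_eval(pts, x)) for x in range(2 * k)]

def recover(shares, k):
    """Any k distinct shares reconstruct the original k chunks."""
    return [lagrange_eval(shares[:k], x) for x in range(k)]

data = [42, 7, 2024]                        # k = 3 original chunks
shares = encode(data)                       # 6 shares; any 3 suffice
subset = [shares[1], shares[4], shares[5]]  # simulate 3 lost shares
assert recover(subset, 3) == data
```

Doubling the share count this way means the network tolerates the loss of half of all blocks, which is exactly the "half of the blocks suffice" property cited above.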
3.3 DA Layer Data Verification Methods
Data verification ensures that data retrieved from nodes has been neither tampered with nor lost. To minimize the data and computation needed for verification, the DA layer currently uses tree structures as the mainstream verification method. The simplest form is the Merkle tree, recorded as a complete binary tree: verification needs only a Merkle root and the hashes of the sibling subtrees along the node's path, with time complexity on the order of O(log N) (log N without a base defaults to log2(N)). Although this greatly simplifies verification, the amount of verification data still grows with the data. To address this growing verification load, another method has been proposed: the Verkle tree. Each node in a Verkle tree carries a vector commitment in addition to its value, and authenticity can be verified quickly using the original node's value and this commitment proof, without calling the values of sibling nodes. This makes the number of computations per verification depend only on the Verkle tree's depth, a fixed constant, greatly accelerating verification. However, computing the vector commitment requires the participation of all sibling nodes on the same level, which greatly raises the cost of writing and modifying data. For data like historical data that is stored permanently, cannot be tampered with, and is only ever read, never written, the Verkle tree is an excellent fit. Both Merkle and Verkle trees also have K-ary variants with similar mechanisms, changing only the number of subtrees under each node; their performance comparison is shown in the table below.
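The O(log N) Merkle path verification described above can be made concrete: the proof for one leaf is just the sibling hash at each level, and the verifier hashes its way back up to the root.

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    """Build a binary Merkle tree; returns the list of levels,
    levels[0] = hashed leaves, levels[-1] = [root]."""
    level = [H(l) for l in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node if the level is odd
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect one sibling hash per level: an O(log N)-sized proof."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = index ^ 1
        proof.append((level[sib], sib < index))  # (hash, sibling-is-left?)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = H(leaf)
    for sib, sib_is_left in proof:
        node = H(sib + node) if sib_is_left else H(node + sib)
    return node == root

leaves = [b"tx0", b"tx1", b"tx2", b"tx3"]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(b"tx2", prove(levels, 2), root)        # genuine leaf passes
assert not verify(b"txX", prove(levels, 2), root)    # forged leaf fails
```

The contrast with a Verkle tree is visible here: this proof carries one sibling hash per level, whereas a Verkle proof replaces the sibling hashes with a constant-size commitment opening.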

Time performance comparison of verification methods. Source: Verkle Trees
3.4 General-Purpose DA Middleware
The continual expansion of the blockchain ecosystem keeps increasing the number of public chains. Given each chain's advantages and irreplaceability in its own domain, Layer1s are unlikely to converge any time soon. But with the growth of DeFi and the problems of CEXs, user demand for decentralized cross-chain asset trading keeps rising, so multi-chain DA-layer data storage, which can eliminate the security issues in cross-chain data interaction, has attracted growing attention. To accept historical data from different chains, however, the DA layer needs a decentralized protocol for standardized storage and verification of data streams. For example, kvye, a storage middleware based on Arweave, actively fetches data from chains and stores data from all chains on Arweave in a standard format, minimizing differences in the data transmission process. By comparison, a Layer2 that provides DA-layer storage for one specific chain exchanges data through internally shared nodes, which lowers interaction cost and raises security but is quite limited, since it can only serve that particular chain.
4. DA Layer Storage Solutions
4.1 Main-Chain DA
4.1.1 DankSharding-like Solutions
This class of storage solution has no settled name yet; its most prominent representative is DankSharding on Ethereum, so this article uses "DankSharding-like" for the class. These solutions mainly use the two DA storage technologies above, sharding and DAS: data is first split into a suitable number of shards, then each node draws one data block to store, DAS-style. With enough nodes in the network, a large shard count N can be chosen so that each node's storage pressure is only 1/N of the original, scaling overall storage capacity by a factor of N. To guard against the extreme case of some block not being stored anywhere, DankSharding encodes the data with an erasure code, so the data can be fully recovered from only half of it. Finally, the verification process uses a Verkle-tree structure with polynomial commitments to achieve fast checks.
4.1.2 Short-Term Storage
For main-chain DA, the simplest way to handle data is to store historical data only for the short term. In essence, a blockchain serves as a public ledger whose contents change under the witness of the whole network; permanent storage is not required. Solana, for example, syncs its historical data to Arweave while mainnet nodes keep only about two days of transaction data. On an account-based chain, the historical data at each moment preserves the final state of the chain's accounts, which is enough to validate the next moment's changes. Projects with special needs for older data can store it themselves on other decentralized chains or hand it to a trusted third party. In other words, anyone with extra data needs must pay for historical data storage.
4.2 Third-Party DA
4.2.1 Main Chain-Specific DA: EthStorage
Main chain-specific DA: what matters most for the DA layer is the security of data transmission, and on that count the most secure option is the main chain's own DA. But main-chain storage is limited by storage space and resource competition, so when network data grows quickly, a third-party DA is the better choice for long-term data storage. The more compatible a third-party DA is with the mainnet, the more it can share nodes with it, and the more secure data interaction becomes; so, with security in mind, a main chain-specific DA holds a great advantage. Taking Ethereum as an example, a basic requirement for a main chain-specific DA is EVM compatibility, guaranteeing interoperability with Ethereum data and contracts; representative projects include Topia and EthStorage. EthStorage is currently the most developed in terms of compatibility: beyond EVM-level compatibility, it provides dedicated interfaces to Ethereum developer tools such as Remix and Hardhat, achieving compatibility at the tooling level as well.
EthStorage: EthStorage is a chain independent of Ethereum, but its nodes are a supergroup of Ethereum nodes: a node running EthStorage can also run Ethereum, and EthStorage can be operated directly through opcodes on Ethereum. EthStorage's storage model keeps only a small amount of metadata on the Ethereum mainnet for indexing, essentially creating a decentralized database for Ethereum. In the current design, EthStorage implements the interaction between the Ethereum mainnet and EthStorage by deploying an EthStorage Contract on the mainnet. To store data, Ethereum calls the contract's put() function, whose inputs are two bytes variables, key and data, where data is the payload to store and key is its identifier on the Ethereum network, similar to a CID in IPFS. Once the (key, data) pair is successfully stored in the EthStorage network, EthStorage returns a kvldx to the Ethereum mainnet, corresponding to the key on Ethereum and to the data's storage address on EthStorage. What might have required storing a large payload on chain now reduces to storing a single (key, kvldx) pair, greatly lowering mainnet storage cost. Retrieving previously stored data uses EthStorage's get() function with the key parameter: the kvldx stored on Ethereum allows a fast lookup of the data on EthStorage.
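The put()/get() flow described above can be mocked to show the division of labor: the mainnet contract keeps only a tiny (key, kvldx) index while the bulk data lives off chain. This is an illustrative Python stand-in, not the real Solidity contract or EthStorage protocol; the class names and the choice of slot index as kvldx are assumptions.

```python
class MockEthStorageNetwork:
    """Stand-in for the EthStorage chain: holds bulk data, returns a
    storage index (kvldx). Purely illustrative."""
    def __init__(self):
        self._slots = []

    def store(self, data):
        self._slots.append(data)
        return len(self._slots) - 1   # kvldx modeled as a slot index here

    def load(self, kvldx):
        return self._slots[kvldx]

class MockEthStorageContract:
    """Stand-in for the mainnet contract: keeps only key -> kvldx."""
    def __init__(self, network):
        self.network = network
        self.index = {}               # the small on-chain footprint

    def put(self, key, data):
        self.index[key] = self.network.store(data)  # bulk write goes off chain

    def get(self, key):
        return self.network.load(self.index[key])

net = MockEthStorageNetwork()
contract = MockEthStorageContract(net)
contract.put(b"blob-1", b"x" * 100_000)   # large payload stays off chain
assert contract.get(b"blob-1") == b"x" * 100_000
assert len(contract.index) == 1           # mainnet stores one small pair
```

Whatever the payload size, the mainnet side of the mock only ever grows by one (key, kvldx) entry per put, which is the cost reduction the design is after.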

EthStorage contract. Source: Kernel Ventures
For how nodes actually store the data, EthStorage borrows from Arweave's model. The large number of (k, v) pairs coming from Ethereum are first sharded; each shard contains a fixed number of (k, v) pairs, and each pair's size is also capped, which keeps the workload behind later miner storage rewards fair. To issue rewards, the network must first verify that a node is actually storing the data. In this process, EthStorage splits a shard (terabyte-scale) into a great many chunks and keeps a Merkle root on the Ethereum mainnet for verification. A miner must first provide a nonce which, combined with the hash of the previous block on EthStorage, generates the addresses of several chunks through a random algorithm; the miner must supply the data of those chunks to prove that it really stores the whole shard. The nonce cannot be chosen freely, otherwise a node could pick a nonce mapping only to chunks it happens to store and pass verification; the nonce must be such that the chunks it generates, after mixing and hashing, produce a difficulty value meeting the network's requirement, and only the first node to submit a nonce with the random-access proof earns the reward.
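The nonce-and-sampling check above can be sketched roughly as follows. Everything here is an assumption for illustration (the sampling function, chunk count, k = 3 samples, and an 8-bit difficulty target); the real protocol fixes its own parameters and hash construction.

```python
import hashlib

def sample_indices(nonce, prev_hash, num_chunks, k=3):
    """Derive k pseudo-random chunk indices from (nonce, previous block hash),
    so the miner cannot choose which chunks it must reveal."""
    seed = hashlib.sha256(prev_hash + nonce.to_bytes(8, "big")).digest()
    return [int.from_bytes(hashlib.sha256(seed + bytes([i])).digest(), "big")
            % num_chunks for i in range(k)]

def proof_ok(nonce, prev_hash, chunks, sharding, difficulty_bits=8):
    """Miner passes only if (a) the revealed chunks match the sampled positions
    and (b) the hash over nonce + chunks meets the difficulty target."""
    idxs = sample_indices(nonce, prev_hash, len(sharding))
    if [sharding[i] for i in idxs] != chunks:
        return False
    mix = hashlib.sha256(nonce.to_bytes(8, "big") + b"".join(chunks)).digest()
    return int.from_bytes(mix, "big") >> (256 - difficulty_bits) == 0

sharding = [f"chunk-{i}".encode() for i in range(64)]   # the stored shard
prev_hash = hashlib.sha256(b"prev-block").digest()
nonce = 0
while True:                                             # grind nonces like a miner
    idxs = sample_indices(nonce, prev_hash, len(sharding))
    if proof_ok(nonce, prev_hash, [sharding[i] for i in idxs], sharding):
        break
    nonce += 1
```

Because each candidate nonce demands a fresh random set of chunks, grinding for a winning nonce is only feasible for a miner that can serve any chunk of the shard, which is the point of the scheme.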
4.2.2 Modular DA: Celestia
Blockchain modules: the work a Layer1 public chain must currently perform divides into four parts: (1) designing the network's underlying logic, selecting validation nodes in some way, writing blocks, and distributing rewards to network maintainers; (2) packaging and processing transactions and publishing the related transactions; (3) verifying the transactions to be put on chain and determining the final state; (4) storing and maintaining the chain's historical data. By function, a blockchain can accordingly be divided into four modules: the consensus layer, the execution layer, the settlement layer, and the data availability layer (DA layer).
Modular blockchain design: for a long time these four modules were integrated into a single chain, a design known as a monolithic blockchain. This form is more stable and easier to maintain, but it puts enormous pressure on a single chain: in practice the four modules constrain one another and compete for the chain's limited compute and storage resources. For example, raising the processing layer's speed puts greater storage pressure on the data availability layer, while securing the execution layer requires more complex verification mechanisms that slow transaction processing, so chain development constantly faces trade-offs among the four. To break through this bottleneck on chain performance, developers proposed the modular blockchain, whose core idea is to strip out one or more of the four modules and hand them to a separate chain. That chain can then focus solely on transaction speed or storage capacity, escaping the limits previously imposed on overall performance by the weakest module.
Modular DA: stripping the DA layer out of the blockchain's workload and entrusting it to a separate chain is considered a viable solution to Layer1's ever-growing historical data. Exploration here is still early, and the most representative project at present is Celestia. Celestia's storage borrows from Danksharding: data is split into multiple blocks, each node draws a portion to store, and KZG polynomial commitments verify data integrity. Celestia also uses advanced two-dimensional RS erasure coding, rewriting the original data into a k*k matrix so that only 25% of it is needed for recovery. Sharded storage, however, essentially only multiplies the network's total storage pressure by a coefficient; node storage pressure still grows linearly with the data volume. As Layer1s keep improving transaction speed, node storage pressure may someday hit an unacceptable threshold. To address this, Celestia introduces the IPLD component: the data of the k*k matrix is not stored on Celestia directly but in the LL-IPFS network, with only the data's IPFS CID kept on the node. When a user requests historical data, the node sends the corresponding CID to the IPLD component, which calls up the original data on IPFS. If the data exists on IPFS, it is returned via the IPLD component and the node; if not, no data can be returned.

How Celestia reads data. Source: Celestia Core
Celestia: through Celestia we can see how a modular blockchain is applied in practice to Ethereum's storage problem. A Rollup node sends its packaged and verified transaction data to Celestia, which stores the data without much awareness of its contents; at the end, the Rollup node pays Celestia a fee in tia tokens proportional to the storage used. Storage on Celestia uses DAS and erasure coding similar to EIP-4844, but upgrades EIP-4844's polynomial erasure coding to two-dimensional RS coding, raising storage security another notch: only 25% of the fractures are needed to recover all the transaction data. In essence, Celestia is just a POS chain with cheap storage; to actually solve Ethereum's historical data storage problem, many other modules must cooperate with it. On the Rollup side, for example, a Rollup mode strongly promoted on Celestia's website is the Sovereign Rollup. Unlike the common Rollups on Layer2, which only compute and verify transactions, i.e., perform the execution layer's work, a Sovereign Rollup encompasses the entire execution and settlement process, minimizing transaction handling on Celestia; given that Celestia's overall security is weaker than Ethereum's, this measure maximizes the security of the overall transaction process. As for guaranteeing the security of the Ethereum mainnet's data calls into Celestia, the mainstream solution today is the Quantum Gravity Bridge smart contract. For data stored on Celestia, a Merkle root (data availability proof) is generated and kept in the Quantum Gravity Bridge contract on the Ethereum mainnet; each time Ethereum calls historical data on Celestia, the data's hash result is compared against that Merkle root, and only a match confirms it is genuine historical data.
4.2.3 Storage Public Chain DA
At the level of technical principles, main-chain DA borrows many sharding-like techniques from storage public chains, and some third-party DAs use storage chains directly for part of their storage, e.g., Celestia places its actual transaction data on the LL-IPFS network. Among third-party DA solutions, besides building a dedicated chain to solve Layer1's storage problem, a more direct approach is to connect a storage public chain to Layer1 and have it hold Layer1's massive historical data. For high-performance blockchains, the volume of historical data is even larger: running at full speed, the high-performance chain Solana's data size approaches 4 PB, far beyond what ordinary nodes can store. Solana's chosen solution is to store historical data on the decentralized storage network Arweave, keeping only two days of data on mainnet nodes for verification. To secure the storage process, Solana and Arweave designed a dedicated storage bridge protocol, Solar Bridge. Data verified by Solana nodes is synced to Arweave, which returns a corresponding tag; with that tag alone, Solana nodes can view the chain's historical data from any point in time. On Arweave, network-wide data consistency is not required as an entry bar for participating in the network; instead, storage is rewarded. First, Arweave does not build blocks on a traditional chain structure but on something closer to a graph: a new block points not only to the previous block but also randomly to an already generated block, the Recall Block. The recall block's exact position is determined by hashing the previous block with the new block's height, and it is unknown until the previous block has been mined. When generating a new block, however, a node must possess the recall block's data in order to compute a hash of the prescribed difficulty under the POW mechanism, and only the first miner to find a qualifying hash is rewarded, encouraging miners to store as much historical data as possible. Meanwhile, the fewer nodes store a given historical block, the fewer competitors a node faces when generating a qualifying nonce, encouraging miners to store the network's under-replicated blocks. Finally, to ensure nodes store data in Arweave permanently, it introduces the WildFire node scoring mechanism: nodes prefer to communicate with peers that can quickly serve more historical data, so low-scoring nodes often cannot obtain the latest blocks and transactions first and lose their head start in the POW competition.
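The recall-block selection above can be sketched in one function. This is an illustrative reading of the rule (previous block's hash combined with the block height, reduced mod the height); the yellow paper defines the exact inputs and encoding.

```python
import hashlib

def recall_index(prev_block_hash, height):
    """Pick the recall block for the block at `height`: hash the previous
    block's hash together with the height and reduce mod height, so the
    target is unpredictable until the previous block exists."""
    digest = hashlib.sha256(prev_block_hash + height.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") % height

# A toy chain of 100 block hashes; the next block (height 100) must
# prove access to one unpredictable earlier block.
chain = [hashlib.sha256(str(i).encode()).digest() for i in range(100)]
idx = recall_index(chain[-1], len(chain))
assert 0 <= idx < len(chain)
```

Since the index only becomes known once the previous block is mined, a miner that wants to compete on every block has no better strategy than storing as much of the history as it can.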

Arweave block construction. Source: Arweave Yellow-Paper
5. Overall Comparison
Next, we compare the pros and cons of the five storage solutions along the four DA performance metrics.
Security: the biggest sources of data security problems are loss during transmission and malicious tampering by dishonest nodes, and cross-chain transfer, where two independent chains share no state, is the hardest-hit area of transmission security. Moreover, a Layer1 that needs a dedicated DA layer today usually has a strong consensus community and far higher intrinsic security than an ordinary storage chain, so main-chain DA solutions are more secure. Once transmission is secured, the next concern is the safety of the retrieved data. Considering only the short-term historical data used to verify transactions, the same data is backed up by the whole network under temporary storage, while under a DankSharding-like scheme the average number of backups is only 1/N of the node count; more redundancy makes data harder to lose and provides more reference samples during verification, so temporary storage offers relatively higher data security. Among third-party DAs, a main chain-specific DA shares nodes with the main chain, so cross-chain data can travel directly through these relay nodes, giving it relatively higher security than the other DA solutions.
Storage cost: the factor with the biggest impact on storage cost is the amount of data redundancy. Main-chain DA's short-term storage uses network-wide node data synchronization, meaning every newly stored piece of data is backed up across the whole network, giving it the highest storage cost; the high cost in turn dictates that, on high-TPS networks, this method only suits temporary storage. Next come the sharding approaches, both sharding on the main chain and sharding within third-party DAs; since the main chain usually has more nodes, a given block gets more backups, so main-chain sharding costs more. The cheapest is the storage public chain DA with incentivized storage, where the amount of data redundancy fluctuates around a fixed constant; it also adds a dynamic adjustment mechanism, raising rewards to attract nodes to store under-replicated data and keep it safe.
Data read speed: data read speed is mainly affected by where the data sits in the storage medium, the data's index path, and the data's distribution across nodes. Of these, where the data sits on the node matters more, since keeping data in memory versus on SSD can make read speeds differ by an order of magnitude or more. Storage public chain DAs mostly use SSD storage because their load includes not only DA-layer data but also memory-heavy personal data such as user-uploaded videos and images; without SSDs, the network could not bear the huge storage pressure or meet long-term storage needs. Between third-party DAs and main-chain DAs that hold data in memory, a third-party DA must first search the main chain for the relevant index data, transfer that index data cross-chain to the third-party DA, and return the data through a storage bridge, whereas a main-chain DA can query data directly from its nodes and therefore retrieves data faster. Finally, within main-chain DA, the sharding approach requires calling blocks from multiple nodes and reconstructing the original data, so it is slower than the unsharded short-term storage approach.
DA layer generality: main-chain DA generality is close to zero, since it is impossible to move data from one chain short on storage to another chain equally short on storage. In third-party DA, a solution's generality and its compatibility with a specific main chain are conflicting metrics: a main chain-specific DA designed for one chain makes heavy adaptations at the node-type and network-consensus levels to fit that chain, and those adaptations become major obstacles when communicating with other chains. Within third-party DA, storage public chain DA outperforms modular DA on generality: it has a larger developer community and more extension facilities and can adapt to the circumstances of different chains, and it acquires data more often by actively fetching rather than passively receiving transmissions from other chains. It can therefore encode data in its own way, standardize the storage of data streams, conveniently manage data from different main chains, and improve storage efficiency.

Performance comparison of storage solutions. Source: Kernel Ventures
6. Conclusion
Blockchain today is in transition from Crypto toward a more inclusive Web3, a transition that brings more than just a richer set of on-chain projects. To accommodate so many projects running concurrently on Layer1 while preserving the experience of GameFi and SocialFi projects, Layer1s such as Ethereum have adopted Rollups and Blobs to raise TPS, and among new blockchains the number of high-performance chains keeps growing. But higher TPS means not only higher performance but also greater storage pressure on the network. For the resulting mass of historical data, multiple DA approaches, main-chain and third-party based, have been proposed to cope with the growth of on-chain storage pressure. Each has its strengths and weaknesses and suits different scenarios.
Payment-focused blockchains demand extremely high historical data security without pursuing especially high TPS. If such a chain is still in preparation, it can adopt DankSharding-like storage, hugely expanding storage capacity while preserving security. But for an established chain with many nodes like Bitcoin, rashly modifying the consensus layer carries enormous risk, so the more secure off-chain option, main chain-specific DA, can balance security and storage. Note, though, that a blockchain's function is not fixed but ever-changing. Early Ethereum, for instance, was largely limited to payments and simple automated handling of assets and transactions via smart contracts, but as the blockchain landscape expanded, various SocialFi and DeFi projects joined Ethereum and pushed it in a more comprehensive direction. Recently, with the explosion of the inscription ecosystem on Bitcoin, the Bitcoin network's transaction fees have surged nearly 20x since August, reflecting that the network's transaction speed cannot meet demand and traders must bid up fees to get transactions processed promptly. Now the Bitcoin community must make a trade-off: accept high fees and slow transactions, or reduce network security to raise transaction speed and betray the original intent of the payment system. If the Bitcoin community chooses the latter, then in the face of growing data pressure, the storage solution will also need to adjust.

Bitcoin mainnet transaction fee fluctuation. Source: OKLINK
General-purpose chains pursue higher TPS, and their historical data grows even faster; DankSharding-like schemes are hard-pressed to keep up with rapid TPS growth over the long term. A more suitable approach is therefore to migrate the data to a third-party DA for storage. Among these, main chain-specific DA has the highest compatibility and may have the edge if only a single chain's storage problem is considered. But today, with Layer1s flourishing, cross-chain asset transfer and data interaction have also become common pursuits of the blockchain community. Considering the long-term development of the entire blockchain ecosystem, storing the historical data of different chains on one chain eliminates many security problems in data exchange and verification, so modular DA and storage public chain DA may be the better choices. With comparable generality, modular DA focuses on providing blockchain DA-layer services, introduces more refined index data for managing historical data, and can reasonably categorize different chains' data, holding an advantage over storage public chains. The above, however, ignores the cost of adjusting the consensus layer on an existing chain, an extremely risky process: one problem could open a systemic vulnerability that costs the chain its community consensus. So, as a transitional solution during blockchain scaling, the simplest main-chain temporary storage may be more suitable. Finally, the discussion above rests on performance in actual operation; if a chain's goal is to grow its own ecosystem and attract more projects and participants, it may favor projects backed and funded by its own foundation. For example, even with overall performance equal to or slightly below a storage public chain solution, the Ethereum community would still lean toward EthStorage, a Layer2 project backed by the Ethereum Foundation, in order to keep developing the Ethereum ecosystem.
All in all, today's blockchains are becoming ever more complex in function, bringing greater demands for storage space. With enough Layer1 validation nodes, historical data need not be backed up by every node in the network: once the backup count reaches a certain number, relative security is guaranteed. At the same time, the division of labor among chains grows ever finer: Layer1 handles consensus and execution, Rollups handle computation and verification, and a separate blockchain is used for data storage, each part focusing on one function without being limited by the others' performance. Yet how many copies, or what proportion of nodes, should store historical data to balance security and efficiency, and how to guarantee secure interoperability between different blockchains, remain questions for blockchain developers to ponder and refine. For investors, main chain-specific DA projects on Ethereum are worth watching: Ethereum already has enough supporters at this stage and does not need other communities to expand its influence; its greater need is to refine and grow its own community and attract more projects to land in the Ethereum ecosystem. But for chains in catch-up positions, such as Solana and Aptos, the single chain lacks such a complete ecosystem, so they may prefer to unite with other communities and build a vast cross-chain ecosystem to expand their influence. For emerging Layer1s, therefore, general-purpose third-party DAs deserve more attention.
Kernel Ventures: Empowering DApps with Off-Chain Computing Ability — ZK Coprocessors
Author: Kernel Ventures Turbo Guo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR: The ZK coprocessor is a solution for dApps to utilize off-chain computing resources. This article explores the existing solutions, various applications, and future development of coprocessors. The main topics covered are as follows:
- RISC Zero's zkVM is a ZK coprocessor solution that lets on-chain contracts call an off-chain zkVM to run specific Rust code and return the results on-chain, along with a zkp the contract can use to verify that the computation was correct.
- There are different solutions for ZK coprocessors. Besides zkVM, users can also write customized ZK circuits for their programs, or use pre-made frameworks to write circuits, thereby enabling contracts to utilize off-chain computing resources.
- ZK coprocessors can play a role in DeFi, such as offloading AMM calculations off-chain so the protocol captures MEV-like value, or enabling complex, computationally intensive AMM logic. They can also facilitate real-time interest rate calculations for lending protocols and make margin calculations transparent, among other things. zkAMM has two implementation approaches, one using zkVM and the other using zkOracle.
- ZK coprocessors also have other potential use cases, such as wallets performing identity verification off-chain, more complex computations for on-chain games, and reduced gas costs for DAO governance.
- The landscape for ZK coprocessors is still unsettled. Compared with users writing their own circuits, a project that serves as an "interface" to off-chain resources is more user-friendly; but which computation service providers are integrated behind that interface, whether traditional cloud providers or decentralized resource-sharing networks, is another important topic for discussion.
1. The Purpose and Application of ZK Coprocessors

Source: Kernel Ventures

The core of a ZK coprocessor is to move on-chain computation off-chain, using ZK proofs to guarantee the reliability of the off-chain computation, so that smart contracts can easily handle large amounts of computation while still verifying its correctness. This is similar to the idea of zkRollup, but a Rollup uses off-chain computing resources at the chain's protocol layer, whereas a ZK coprocessor lets dApps use off-chain resources.
Here we use RISC Zero to illustrate one way to implement a ZK coprocessor; there are many other approaches, introduced later in the article. RISC Zero has developed the Bonsai ZK coprocessor architecture, whose core is RISC Zero's zkVM. On the zkVM, developers can generate a zkp attesting that "a given piece of Rust code was executed correctly". With the zkVM in place, the process of implementing a ZK coprocessor is:
1. Developers send a request to Bonsai's relay contract, i.e., ask to run a given program in the zkVM.
2. The relay contract forwards the request to the off-chain request pool.
3. Bonsai executes the request in the off-chain zkVM, performing the large-scale computation, and generates a receipt for it.
4. These proofs, known as "receipts", are published back on-chain by Bonsai through the relay contract.

Source: RISC Zero

In Bonsai, the proven program is called the Guest Program, and the receipt is used to prove that the guest program has been executed correctly. The receipt includes a journal and a seal. Specifically, the journal carries the public output of the zkVM application, while the seal is used to prove the validity of the receipt, i.e., to prove that the guest program has been executed correctly. The seal itself is a zkSTARK generated by the prover. Verifying the receipt ensures that the journal is constructed using the correct circuit, etc.
Bonsai simplifies the process for developers to compile Rust code into zkVM bytecode, upload programs, execute them in the VM, and receive proof feedback, allowing developers to focus more on logical design. It enables not only partial contract logic but the entire contract logic to run off-chain. RISC Zero also utilizes continuations, breaking down the generation of a large proof into smaller parts, enabling proof generation for large programs without consuming excessive memory. In addition to RISC Zero, there are other projects like IronMill, =nil; Foundation, and Marlin that provide similar general solutions.
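The request/receipt flow above can be sketched in miniature. The following Python simulation is purely illustrative: the hash-based "seal" stands in for the real zkSTARK, and none of these names come from RISC Zero's actual API.

```python
import hashlib
import json

def seal_for(program: str, journal: dict) -> str:
    # Stand-in for the zkSTARK seal: in reality a succinct proof that
    # running `program` in the zkVM produced exactly this journal.
    blob = program + json.dumps(journal, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def offchain_prover(program: str, x: int) -> dict:
    # Plays the role of Bonsai's off-chain zkVM: run the guest program
    # and return a receipt = (journal, seal).
    journal = {"input": x, "output": x * x}  # public output of the guest program
    return {"journal": journal, "seal": seal_for(program, journal)}

def onchain_verify(program: str, receipt: dict) -> bool:
    # Plays the role of the on-chain verifier: check the seal against the
    # journal without re-running the computation itself.
    return receipt["seal"] == seal_for(program, receipt["journal"])

receipt = offchain_prover("square", 7)
assert onchain_verify("square", receipt)      # the contract accepts the result
receipt["journal"]["output"] = 50             # a tampered journal...
assert not onchain_verify("square", receipt)  # ...is rejected
```

The point of the sketch is the division of labor: the heavy work happens in `offchain_prover`, while `onchain_verify` only does a cheap check of the receipt.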
2. Application of ZK Coprocessors in DeFi
2.1 AMM - Bonsai as a Coprocessor
zkUniswap is an AMM that leverages off-chain computing resources. Its core feature is to offload part of the swap computation off-chain, using Bonsai. Users initiate a swap request on-chain. Bonsai's relay contract obtains the request, initiates off-chain computation, and upon completion, returns the computation result and proof to the EVM's callback function. If the proof is successfully verified, the swap is executed.
However, the swap is not completed in one go: the request and the execution are in separate transactions, which introduces a risk that the pool state changes between the submission of the request and the completion of the swap. Since verification is based on the pool state at the time the request was submitted, if the pool state changes while a request is pending, the verification becomes invalid.
To address this issue, developers have designed a pool lock. When a user initiates a request, all operations other than settling the swap are temporarily locked until off-chain computing successfully triggers the on-chain swap or the swap times out (the time limit will be preset). With a time limit in place, even if there are problems with the relay or zkp, the pool will not be locked indefinitely. The specific time limit might be a few minutes.
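The lock's behavior can be sketched as a few lines of state logic. This is a hypothetical simulation; the real mechanism lives inside the AMM contract, and the 180-second timeout is an assumed value:

```python
class PoolLock:
    """Sketch of zkUniswap's pool lock: while a swap request is pending,
    the pool rejects other operations, but the lock auto-expires after
    `timeout` seconds so a failed relay or zkp cannot freeze the pool."""

    def __init__(self, timeout: float = 180.0):
        self.timeout = timeout
        self.locked_at = None          # None means the pool is unlocked

    def acquire(self, now: float) -> bool:
        if self.locked_at is not None and now - self.locked_at < self.timeout:
            return False               # another swap request still holds the lock
        self.locked_at = now
        return True

    def release(self):
        self.locked_at = None          # called when the swap settles on-chain

lock = PoolLock(timeout=180.0)
assert lock.acquire(now=0.0)           # first request locks the pool
assert not lock.acquire(now=60.0)      # second request rejected while pending
assert lock.acquire(now=200.0)         # after the timeout the lock expires
```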
zkUniswap has a unique design to capture MEV, as developers aim to have the protocol benefit from MEV. Theoretically, zkAMMs also have MEV, as the first person to submit a swap can lock it and front-run others, leading to gas wars, and builders can still prioritize transaction sequencing. However, zkUniswap takes the MEV profits for itself using a method known as the Variable Rate Gradual Dutch Auction (VRGDA). This approach allows zkUniswap to extract MEV value for the protocol.
zkUniswap auctions off the pool lock itself at a gradually declining price. If locks sell quickly, the protocol infers that demand is high and automatically raises the price; if sales slow down, it lowers the price. This can become a new source of revenue: in effect, the protocol introduces a new mechanism for deciding transaction order, and the money spent competing for priority flows directly to the project through that mechanism, which is quite imaginative.
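A VRGDA prices each sale against a target schedule. The sketch below follows the general VRGDA form, price = target * (1 - decay)^(t - schedule(n)), with a linear schedule and made-up parameters rather than zkUniswap's actual ones:

```python
def vrgda_price(target_price, decay, t, n_sold, rate):
    # Time at which the n-th lock "should" have sold under a linear schedule.
    target_time = n_sold / rate
    # Ahead of schedule -> negative exponent -> price above target;
    # behind schedule  -> positive exponent -> price decays below target.
    return target_price * (1 - decay) ** (t - target_time)

on_schedule = vrgda_price(1.0, 0.1, t=10.0, n_sold=10, rate=1.0)
behind      = vrgda_price(1.0, 0.1, t=20.0, n_sold=10, rate=1.0)  # slow sales
ahead       = vrgda_price(1.0, 0.1, t=5.0,  n_sold=10, rate=1.0)  # fast sales

assert abs(on_schedule - 1.0) < 1e-9  # on schedule: pay exactly the target price
assert behind < on_schedule           # low demand  -> protocol lowers the price
assert ahead > on_schedule            # high demand -> protocol raises the price
```

Because bidders pay this curve rather than bidding up gas, the value of ordering priority accrues to the protocol instead of to block builders.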
2.2 AMM - zkOracle as a Coprocessor
Besides using zkVM, some have proposed using zkOracle to utilize off-chain computing resources. A zkOracle is an I/O oracle, handling both input and output. Generally there are two types of oracles: an input oracle processes (computes) off-chain data and puts it on-chain, while an output oracle processes (computes) on-chain data and provides it off-chain. The I/O oracle (zkOracle) first does the output and then the input, allowing the chain to utilize off-chain computing resources.
On the one hand, zkOracle uses on-chain data as a data source, and on the other hand, it uses ZK to ensure that the oracle nodes' computations are honest, thus achieving the function of a coprocessor. Therefore, the core computation of AMM can be placed within zkOracle, allowing for traditional AMM functionality while also enabling more complex and computationally intensive operations using zkOracle.
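The output-then-input pattern can be sketched as follows. The chain state, function names, and hash-based "proof" are all hypothetical stand-ins; a real zkOracle would attach an actual zkp over the node's computation:

```python
import hashlib

CHAIN_STATE = {"reserve_x": 1000.0, "reserve_y": 4000.0}  # mock on-chain data source

def output_phase():
    # Output oracle: read on-chain data and hand it to the off-chain node.
    return dict(CHAIN_STATE)

def offchain_compute(data):
    # The zkOracle node computes off-chain, e.g. the AMM spot price; the
    # hash here is only a stand-in for the zkp it would attach.
    price = data["reserve_y"] / data["reserve_x"]
    proof = hashlib.sha256(f"{sorted(data.items())}|{price}".encode()).hexdigest()
    return price, proof

def input_phase(price, proof, data):
    # Input oracle: the chain accepts the result only if the proof checks out.
    expected = hashlib.sha256(f"{sorted(data.items())}|{price}".encode()).hexdigest()
    return proof == expected

data = output_phase()
price, proof = offchain_compute(data)
assert price == 4.0 and input_phase(price, proof, data)
```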

Source: github fewwwww/zkAMM
2.3 Lending Rate Calculation, Margin Calculation, and Other Applications
Setting aside the implementation method, many functionalities can be achieved with ZK coprocessors. For example, lending protocols can adjust interest rates according to real-time borrowing conditions instead of preset parameters: raising the rate to attract supply when borrowing demand is strong, and lowering it when demand falls. This requires the lending protocol to obtain large amounts of on-chain data in real time and perform heavy computation to derive suitable parameters, which calls for off-chain computation (unless on-chain costs are extremely low).
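As a concrete illustration, a utilization-based rate curve of the kind such a protocol might recompute off-chain each block can be sketched as below. The curve shape and every parameter here are invented for illustration, not taken from any particular protocol:

```python
def borrow_rate(total_borrowed: float, total_supplied: float,
                base: float = 0.02, slope: float = 0.20,
                kink: float = 0.8, jump: float = 1.0) -> float:
    """Illustrative utilization-based rate curve: the rate climbs gently up
    to the `kink` utilization, then steeply, so strong borrowing demand
    attracts new supply and throttles further borrowing."""
    u = total_borrowed / total_supplied if total_supplied else 0.0
    if u <= kink:
        return base + slope * u
    return base + slope * kink + jump * (u - kink)

assert borrow_rate(0, 1000) == 0.02                     # idle pool: base rate
assert borrow_rate(400, 1000) == 0.02 + 0.20 * 0.4      # below the kink
assert borrow_rate(900, 1000) > borrow_rate(800, 1000)  # demand spike -> jump
```

The coprocessor's role would be to run this function over live pool data off-chain and return the new rate with a proof, so the contract never pays gas for the computation itself.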
Complex calculations such as determining margin balances, unrealized profits and losses, and liquidation amounts can also be offloaded to coprocessors. The advantage is that it makes these applications more transparent and verifiable: the logic of the margin engine is no longer a secret black box. Although the calculations are performed off-chain, users can fully trust the correctness of their execution. This approach also applies to options calculations.
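A minimal sketch of such a margin computation, written as the deterministic function a coprocessor guest program might run (the field names and formulas are illustrative, not any particular protocol's):

```python
def margin_report(collateral: float, positions: list, prices: dict) -> dict:
    """Because this logic is public and its execution is proven, the margin
    engine is no longer a black box: anyone can re-derive the numbers."""
    unrealized = sum(p["size"] * (prices[p["symbol"]] - p["entry"]) for p in positions)
    notional = sum(abs(p["size"]) * prices[p["symbol"]] for p in positions)
    equity = collateral + unrealized
    return {
        "unrealized_pnl": unrealized,
        "equity": equity,
        "margin_ratio": equity / notional if notional else float("inf"),
    }

report = margin_report(
    collateral=1000.0,
    positions=[{"symbol": "ETH", "size": 2.0, "entry": 2000.0}],  # long 2 ETH @ 2000
    prices={"ETH": 2100.0},
)
assert report["unrealized_pnl"] == 200.0  # 2 * (2100 - 2000)
assert report["equity"] == 1200.0
assert report["margin_ratio"] == 1200.0 / 4200.0
```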
3. Other Applications of ZK Coprocessors
3.1 Wallet - Using Bonsai as a Coprocessor
Bonfire Wallet uses zkVM to offload the computation of identity verification off-chain. The goal of this wallet is to let users create burner wallets using biometric information (fingerprints) or cryptographic hardware such as a YubiKey. Specifically, Bonfire Wallet uses WebAuthn, a common web authentication standard, to let users complete web identity verification directly with their devices, without a password. So in Bonfire Wallet, users generate a public key with WebAuthn (not an on-chain key; it is for WebAuthn) and then use it to create a wallet. Each burner wallet has an on-chain contract containing the WebAuthn public key, and the contract needs to verify the user's WebAuthn signature. This computation is large, so Bonsai is used to offload it: a zkVM guest program verifies the signature off-chain and produces a zkp for on-chain verification.

Source: Bonfire Wallet
3.2 On-Chain Data Retrieval - ZK Circuits Written by Users
Axiom is an application that does not use zkVM but adopts a different coprocessor solution. First, what Axiom aims to do: it leverages a ZK coprocessor to let contracts access historical on-chain information. Enabling contracts to read historical data is genuinely difficult, because smart contracts normally only see current on-chain state, and reading history on-chain is very expensive; as a result, it is hard for contracts to access valuable on-chain data such as past account balances or transaction records.

Source: Axiom demo
Axiom nodes access the required on-chain data and perform the specified computation off-chain, then generate a zero-knowledge proof for the computation, proving that the result is correctly calculated based on valid on-chain data. This proof is verified on-chain, ensuring that the contract can trust this result.
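As a toy illustration of this trust model, the sketch below uses a plain Merkle proof: an off-chain node supplies a historical value plus sibling hashes up to a commitment the contract already knows, and the on-chain side performs only the cheap verification. Axiom's real system proves such accesses inside ZK circuits against Ethereum block hashes, but the division of labor is similar:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    # Sibling hashes from leaf to root: the off-chain node's job.
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list) -> bool:
    # The cheap on-chain check: O(log n) hashes instead of storing history.
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

balances = [b"alice:10", b"bob:25", b"carol:7", b"dave:99"]  # historical state
root = merkle_root(balances)                                  # known to the contract
proof = merkle_proof(balances, 1)
assert verify(root, b"bob:25", 1, proof)                      # provable history
assert not verify(root, b"bob:999", 1, proof)                 # forged value fails
```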
To generate zkp for off-chain computation, it is necessary to compile programs into ZK circuits. Previously we also mentioned using zkVM for this, but Axiom suggested that there are many solutions for this, and it's necessary to balance performance, flexibility, and development experience:
- Customized circuits: developers build a circuit tailored to the program; performance is certainly the best, but development takes time.
- eDSL/DSL: developers still write their own circuits, but optional frameworks take care of the ZK-specific problems, balancing performance and development experience.
- zkVM: developers run ZK directly in an existing virtual machine, which is very convenient, but Axiom considers it inefficient.
Therefore, Axiom chose the second option, and provides users with a set of optimized ZK modules, allowing them to design their own circuits.
Projects similar to Axiom include Herodotus, which aims to be a middleware for cross-chain messaging. Since information processing is off-chain, it's reasonable to allow different chains to obtain processed data. Another project, Space and Time, uses a similar architecture to implement data indexing.
3.3 On-Chain Games, DAO Governance and Other Applications
In addition to the above, on-chain games and DAO governance can also use ZK coprocessors. RISC Zero believes that any computation requiring more than 250k gas would be cheaper with a ZK coprocessor, though how this figure was derived remains to be examined. DAO governance can benefit because it involves many participants and multiple contracts, which is computationally intensive; RISC Zero claims that using Bonsai can cut gas fees by 50%. ZKML is essentially the same idea as a ZK coprocessor, so Modulus Labs and Giza are also projects in this area, although the ZK coprocessor concept is broader.
It's worth mentioning that there are some auxiliary projects in the field of ZK coprocessors, such as ezkl, which provides compilers for ZK circuits, toolkits for deploying ZK, and tools for offloading on-chain computation off-chain.
4. Future Outlook
Coprocessors give on-chain applications external computational resources akin to the "cloud", providing large amounts of relatively cheap computation while the chain handles only the essential calculations. In practice, the zkVM itself can run in the cloud: a ZK coprocessor is essentially an architecture, a way of moving on-chain computation off-chain, and it places no restriction on who provides the off-chain computing resources.
Essentially, off-chain computing resources could be provided by traditional large cloud providers, by decentralized computing resource-sharing networks, or even by local devices. These three directions each have their own characteristics: traditional cloud providers can offer relatively mature off-chain computing solutions, decentralized computing resources may prove more robust in the future, and local computing also holds a lot of potential. Currently, however, many ZK coprocessor projects are at a closed-source, service-provider stage, because the upstream and downstream of this sector have not yet taken shape and services cannot yet be specialized and split among different projects. Two possible scenarios for the future are:
- Every link in the ZK coprocessor stack has a large number of projects competing with each other.
- A single project with an excellent service experience dominates most of the market.
From a developer's perspective, when using ZK coprocessors, they might only interact with a single "interface" project. This is similar to the reason why Amazon Web Services has a substantial market share, as developers tend to become accustomed to a specific deployment method. However, the question of which computing service providers (traditional cloud companies, decentralized resource sharing) are integrated behind this off-chain computational resource "interface" project is another topic worth discussing.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto such as Account Abstraction, Data Availability, Scalability, etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
References:
- A Guide to ZK Coprocessors for Scalability: https://www.risczero.com/news/a-guide-to-zk-coprocessors-for-scalability
- Defining zkOracle for Ethereum: https://ethresear.ch/t/defining-zkoracle-for-ethereum/15131
- zkUniswap: a first-of-its-kind zkAMM: https://ethresear.ch/t/zkuniswap-a-first-of-its-kind-zkamm/16839
- What is a ZK Coprocessor?: https://blog.axiom.xyz/what-is-a-zk-coprocessor/
- A Brief Intro to Coprocessors: https://crypto.mirror.xyz/BFqUfBNVZrqYau3Vz9WJ-BACw5FT3W30iUX3mPlKxtA
- Latest Applications Building on Hyper Oracle (Bonus: Things You Can Build Now): https://mirror.xyz/hyperoracleblog.eth/Tik3nBI9mw05Ql_aHKZqm4hNxfxaEQdDAKn7JKcx0xQ
- Bonfire Wallet: https://ethglobal.com/showcase/bonfire-wallet-n1dzp
However, the question of which computing service providers (traditional cloud companies, decentralized resource sharing) are integrated behind this off-chain computational resource "interface" project is another topic worth discussing. Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion of users in crypto such as Account Abstraction, Data Availability, Scalability and etc. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world. REFERENCE: A Guide to ZK Coprocessors for Scalability:https://www.risczero.com/news/a-guide-to-zk-coprocessors-for-scalabilityDefining zkOracle for Ethereum:https://ethresear.ch/t/defining-zkoracle-for-ethereum/15131zkUniswap: a first-of-its-kind zkAMM:https://ethresear.ch/t/zkuniswap-a-first-of-its-kind-zkamm/16839What is a ZK Coprocessor?:https://blog.axiom.xyz/what-is-a-zk-coprocessor/A Brief Intro to Coprocessors:https://crypto.mirror.xyz/BFqUfBNVZrqYau3Vz9WJ-BACw5FT3W30iUX3mPlKxtALatest Applications Building on Hyper Oracle (Bonus: Things You Can Build Now):https://mirror.xyz/hyperoracleblog.eth/Tik3nBI9mw05Ql_aHKZqm4hNxfxaEQdDAKn7JKcx0xQBonfire Wallet:https://ethglobal.com/showcase/bonfire-wallet-n1dzp

Kernel Ventures: Empowering DApps with Off-Chain Computing Ability — ZK Coprocessors

Author: Kernel Ventures Turbo Guo
Editor(s): Kernel Ventures Rose, Kernel Ventures Mandy, Kernel Ventures Joshua
TLDR: The ZK coprocessor is a solution for dApps to utilize off-chain computing resources. This article explores the existing solutions, various applications, and future development of coprocessors. The main topics covered are as follows:
- RISC Zero's zkVM is a ZK coprocessor solution that lets an on-chain contract call an off-chain zkVM to run specific Rust code and return the results to the chain, along with a zkp so the correctness of the computation can be verified on-chain.
- There are different solutions for ZK coprocessors. Besides zkVM, users can also write customized ZK circuits for their programs, or use pre-made frameworks to write circuits, thereby enabling contracts to utilize off-chain computing resources.
- ZK coprocessors can play a role in DeFi, such as offloading AMM calculations off-chain to capture value similar to MEV, or enabling complex and computationally intensive logic for AMMs. They can also facilitate real-time interest rate calculations for lending protocols and make margin calculations transparent, among other things. zkAMM has two implementation approaches, one using zkVM and the other using zkOracle.
- ZK coprocessors also have other potential use cases, such as wallets performing identity verification off-chain, more complex computations for on-chain games, and reduced gas costs for DAO governance.
- The landscape for ZK coprocessors is still uncertain, but compared to users writing their own circuits, a ready-made "interface" to off-chain resources is more user-friendly. Which computation service providers are integrated behind that interface, whether traditional cloud providers or decentralized resource-sharing networks, is another important topic for discussion.
1. The Purpose and Application of ZK Coprocessors

Source: Kernel Ventures

The core of a ZK coprocessor is to move on-chain computation off-chain, using ZK proofs to guarantee the reliability of the off-chain computation, so that smart contracts can handle large amounts of computation while still verifying its correctness. This is similar in spirit to zkRollups, but Rollups use off-chain computing resources at the protocol layer of the chain, while ZK coprocessors let individual dApps utilize off-chain resources.
Take RISC Zero as an example of one ZK coprocessor solution. RISC Zero has developed the Bonsai ZK coprocessor architecture, whose core is RISC Zero's zkVM. With the zkVM, developers can generate a zkp that "a certain piece of Rust code was executed correctly". The specific process of implementing a ZK coprocessor with the zkVM is:
1. Developers send a request to Bonsai's relay contract, i.e., ask it to run the developer's required program in the zkVM.
2. The relay contract forwards the request to the off-chain request pool.
3. Bonsai executes the request in the off-chain zkVM, performs the large-scale computation, and generates a receipt.
4. These proofs, known as "receipts", are published back on-chain by Bonsai through the relay contract.
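The request lifecycle above can be modeled in a few lines. This is a toy sketch, not Bonsai's actual API: `RelayContract`, `offchain_executor`, and `run_guest` are hypothetical names, and a plain hash stands in for the real zkSTARK receipt seal.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Receipt:
    journal: bytes  # public output of the guest program
    seal: str       # stand-in for the zkSTARK proving correct execution

@dataclass
class RelayContract:
    """Toy on-chain relay: queues requests, accepts receipts back."""
    pending: list = field(default_factory=list)
    results: dict = field(default_factory=dict)

    def request(self, request_id: str, program: bytes, inputs: bytes):
        self.pending.append((request_id, program, inputs))

    def callback(self, request_id: str, receipt: Receipt):
        # A real verifier contract would check the STARK; here we recompute a tag.
        expected = hashlib.sha256(receipt.journal).hexdigest()
        assert receipt.seal == expected, "invalid receipt"
        self.results[request_id] = receipt.journal

def run_guest(program: bytes, inputs: bytes) -> bytes:
    # Stand-in for zkVM execution of the compiled guest program.
    return hashlib.sha256(program + inputs).digest()

def offchain_executor(relay: RelayContract):
    """Toy Bonsai: drain the request pool, run each program, post receipts."""
    while relay.pending:
        request_id, program, inputs = relay.pending.pop(0)
        journal = run_guest(program, inputs)        # heavy computation off-chain
        seal = hashlib.sha256(journal).hexdigest()  # stand-in for proof generation
        relay.callback(request_id, Receipt(journal, seal))
```

In the real system the callback's check is a STARK verification inside an on-chain verifier contract, not a hash comparison; the sketch only shows the round trip of request, off-chain execution, and proven result.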

Source: RISC Zero

In Bonsai, the proven program is called the Guest Program, and the receipt is used to prove that the guest program was executed correctly. The receipt includes a journal and a seal. The journal carries the public output of the zkVM application, while the seal proves the validity of the receipt, i.e., that the guest program was executed correctly. The seal itself is a zkSTARK generated by the prover. Verifying the receipt ensures, among other things, that the journal was constructed by the correct circuit.
Bonsai simplifies the process for developers to compile Rust code into zkVM bytecode, upload programs, execute them in the VM, and receive proof feedback, allowing developers to focus more on logical design. It enables not only partial contract logic but the entire contract logic to run off-chain. RISC Zero also utilizes continuations, breaking down the generation of a large proof into smaller parts, enabling proof generation for large programs without consuming excessive memory. In addition to RISC Zero, there are other projects like IronMill, =nil; Foundation, and Marlin that provide similar general solutions.
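The idea behind continuations can be illustrated with a toy model that assumes nothing about RISC Zero's actual implementation: a long execution is split into segments, each segment emits its own small proof (a hash here, a zkSTARK in reality), and the verifier checks the chain of state commitments rather than one giant proof.

```python
import hashlib

def prove_segment(state: bytes, step) -> tuple:
    """Execute one segment and emit a proof stand-in linking old and new state."""
    new_state = step(state)
    proof = hashlib.sha256(state + new_state).hexdigest()
    return new_state, proof

def prove_in_segments(initial: bytes, steps) -> tuple:
    """Continuations: prove a long run as a chain of small segment proofs,
    so no single proof has to hold the whole execution trace in memory."""
    state, proofs = initial, []
    for step in steps:
        state, proof = prove_segment(state, step)
        proofs.append(proof)
    return state, proofs

def verify_chain(initial: bytes, steps, proofs) -> bool:
    """Check the chain of commitments segment by segment.
    (A real verifier checks the zkSTARKs instead of re-executing.)"""
    if len(proofs) != len(steps):
        return False
    state = initial
    for step, proof in zip(steps, proofs):
        new_state = step(state)
        if proof != hashlib.sha256(state + new_state).hexdigest():
            return False
        state = new_state
    return True
```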
2. Application of ZK Coprocessors in DeFi
2.1 AMM - Bonsai as a Coprocessor
zkUniswap is an AMM that leverages off-chain computing resources. Its core feature is to offload part of the swap computation off-chain, using Bonsai. Users initiate a swap request on-chain. Bonsai's relay contract obtains the request, initiates off-chain computation, and upon completion, returns the computation result and proof to the EVM's callback function. If the proof is successfully verified, the swap is executed.
However, the swap is not completed in one go. The request and execution processes are in different transactions, which brings certain risks. That is, between the submission of the request and the completion of the swap, the state of the pool may change. As the verification is based on the state of the pool at the time of request submission, if a request is still pending, and the pool's state changes, then the verification will be invalid. This is an important consideration in the design and security of such systems.
To address this issue, developers have designed a pool lock. When a user initiates a request, all operations other than settling the swap are temporarily locked until off-chain computing successfully triggers the on-chain swap or the swap times out (the time limit will be preset). With a time limit in place, even if there are problems with the relay or zkp, the pool will not be locked indefinitely. The specific time limit might be a few minutes.
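The lock-with-timeout behavior might be sketched as follows. This is a minimal model: `PoolLock` is a hypothetical name, and the 180-second default is an assumption, since the article only says the limit is "a few minutes".

```python
class PoolLock:
    """Toy swap lock: the pool blocks other operations until the pending
    swap settles or the lock times out."""

    def __init__(self, timeout_seconds: float = 180.0):  # assumed "a few minutes"
        self.timeout = timeout_seconds
        self.locked_at = None

    def is_locked(self, now: float) -> bool:
        # An expired lock counts as unlocked, so a stuck zkp or relay
        # can never freeze the pool indefinitely.
        return self.locked_at is not None and now - self.locked_at < self.timeout

    def lock(self, now: float):
        if self.is_locked(now):
            raise RuntimeError("pool already locked by a pending swap")
        self.locked_at = now

    def settle(self, now: float):
        if not self.is_locked(now):
            raise RuntimeError("no active lock: swap expired or never requested")
        self.locked_at = None
```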
zkUniswap has a unique design to capture MEV, as the developers aim to have the protocol itself benefit from it. Theoretically, zkAMMs also have MEV: the first person to submit a swap can lock the pool and front-run others, leading to gas wars, and builders can still prioritize transaction sequencing. zkUniswap instead captures these MEV profits for the protocol using a Variable Rate Gradual Dutch Auction (VRGDA).
zkUniswap's concept is quite interesting. It involves lowering the price of locked assets in an auction, and if the locked assets are sold quickly, the protocol recognizes high demand and raises the price automatically. If the sale of locked assets slows down, the protocol lowers the price. This innovative approach could potentially become a new source of revenue. Essentially, the protocol introduces a unique mechanism for prioritizing transactions, and the competition for pricing benefits the project directly through this mechanism.
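A minimal version of the VRGDA pricing idea, following the published VRGDA formula with a linear issuance schedule; the parameter names are illustrative and not taken from zkUniswap's actual code.

```python
def vrgda_price(p0: float, decay: float, t: float, n_sold: int, rate: float) -> float:
    """Variable Rate GDA: price rises when sales run ahead of schedule.

    p0:     target price
    decay:  per-unit-time price decay, 0 < decay < 1
    t:      time elapsed since launch
    n_sold: units (here: pool locks) sold so far
    rate:   target sales per unit time (linear schedule)
    """
    target_time = n_sold / rate  # when the n-th sale "should" happen
    # Behind schedule (t > target_time): price decays below p0.
    # Ahead of schedule (t < target_time): the exponent is negative, price rises.
    return p0 * (1 - decay) ** (t - target_time)
```

With `p0 = 100` and `decay = 0.5`, selling exactly on schedule yields the target price, selling one period early doubles it, and selling one period late halves it, which is precisely the demand-tracking behavior described above.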
2.2 AMM - zkOracle as a Coprocessor
Besides using zkVM, some have proposed using a zkOracle to tap off-chain computing resources. It is worth noting that the zkOracle here is an I/O (input/output) oracle that handles both input and output. Generally, there are two types of oracles: the input oracle processes (computes) off-chain data and puts it on-chain, while the output oracle processes (computes) on-chain data and provides it off-chain. The I/O oracle (zkOracle) first does the output, then the input, allowing the chain to utilize off-chain computing resources.
On the one hand, zkOracle uses on-chain data as a data source, and on the other hand, it uses ZK to ensure that the oracle nodes' computations are honest, thus achieving the function of a coprocessor. Therefore, the core computation of AMM can be placed within zkOracle, allowing for traditional AMM functionality while also enabling more complex and computationally intensive operations using zkOracle.
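The output-then-input flow could be sketched like this toy model. Function names are hypothetical, `chain_state` is a dict standing in for on-chain storage, and a hash stands in for the ZK proof that the oracle node computed honestly.

```python
import hashlib

def output_oracle(chain_state: dict, keys: list) -> list:
    """Output leg: read on-chain data and hand it to off-chain computation."""
    return [chain_state[k] for k in keys]

def offchain_compute(values: list) -> tuple:
    """Heavy off-chain computation, plus a hash standing in for the ZK
    proof that the oracle node was honest."""
    result = sum(values)  # e.g. an AMM invariant or TWAP in a real zkAMM
    proof = hashlib.sha256(repr((values, result)).encode()).hexdigest()
    return result, proof

def input_oracle(chain_state: dict, values: list, result: int, proof: str):
    """Input leg: verify the proof stand-in, then write the result on-chain."""
    expected = hashlib.sha256(repr((values, result)).encode()).hexdigest()
    assert proof == expected, "oracle computation not proven honest"
    chain_state["amm_result"] = result
```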

Source: github fewwwww/zkAMM
2.3 Lending Rate Calculation, Margin Calculation, and Other Applications
Setting aside the implementation method, many functionalities become possible with the addition of ZK coprocessors. For example, lending protocols can adjust interest rates according to real-time parameters instead of pre-defined conditions: raising the rate to attract supply when borrowing demand is strong, and lowering it when demand decreases. This requires the lending protocol to obtain a large amount of on-chain data in real time, preprocess it, and calculate the parameters off-chain (unless the on-chain cost is extremely low).
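As a concrete illustration of such a rate calculation, here is a standard kinked utilization curve of the kind a lending protocol might compute off-chain over freshly aggregated on-chain data. The parameters are illustrative, not taken from any specific protocol.

```python
def borrow_rate(utilization: float, base: float = 0.02,
                slope1: float = 0.10, slope2: float = 1.0,
                kink: float = 0.80) -> float:
    """Kinked rate curve: the rate climbs gently up to the kink, then
    steeply, pulling in supply exactly when borrowing demand is strong.

    utilization: borrowed / supplied, in [0, 1]
    """
    if utilization <= kink:
        return base + slope1 * utilization
    # Past the kink, the much steeper slope2 takes over.
    return base + slope1 * kink + slope2 * (utilization - kink)
```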
Complex calculations, such as determining margin balances and unrealized profit and loss, can also be executed with coprocessors. The advantage of using coprocessors is that it makes these applications more transparent and verifiable: the logic of the margin engine is no longer a secret black box. Although the calculations are performed off-chain, users can fully trust the correctness of their execution. This approach also applies to options calculations.
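The margin arithmetic itself is simple; what the coprocessor adds is verifiability at scale across many positions. A minimal sketch of the kind of calculation being offloaded (function names are illustrative):

```python
def unrealized_pnl(position_size: float, entry_price: float,
                   mark_price: float) -> float:
    """Positive size = long, negative size = short."""
    return position_size * (mark_price - entry_price)

def margin_balance(collateral: float, positions: list) -> float:
    """Collateral plus unrealized PnL across all open positions.

    positions: list of (size, entry_price, mark_price) tuples.
    """
    return collateral + sum(unrealized_pnl(*p) for p in positions)
```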
3. Other Applications of ZK Coprocessors
3.1 Wallet - Using Bonsai as a Coprocessor
Bonfire Wallet uses zkVM to offload identity-verification computation off-chain. The goal of this wallet is to let users create burner wallets with biometric information (fingerprints) or a hardware security key (YubiKey). Specifically, Bonfire Wallet uses WebAuthn, a common web authentication standard, so that users can complete web identity verification directly with their devices, without a password. In Bonfire Wallet, users generate a public key with WebAuthn (not an on-chain key, but one for WebAuthn) and then use it to create a wallet. Each burner wallet has an on-chain contract containing the WebAuthn public key, and the contract needs to verify the user's WebAuthn signature. This computation is large, so Bonsai is used to offload it: a zkVM guest program verifies the signature off-chain and produces a zkp for on-chain verification.
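The division of labor might look like this toy model. Real WebAuthn uses public-key signatures (e.g. ECDSA P-256); an HMAC stands in here so the sketch stays stdlib-only, and the "seal" is a hash rather than a real zkp. All names are hypothetical.

```python
import hashlib
import hmac

def guest_verify(shared_key: bytes, challenge: bytes, signature: bytes) -> tuple:
    """Off-chain guest program: do the expensive signature check and emit
    a public journal bit plus a proof stand-in ("seal")."""
    ok = hmac.compare_digest(
        signature, hmac.new(shared_key, challenge, hashlib.sha256).digest())
    journal = b"\x01" if ok else b"\x00"
    seal = hashlib.sha256(journal + challenge).hexdigest()
    return ok, seal

def onchain_contract_accepts(challenge: bytes, ok: bool, seal: str) -> bool:
    """Cheap on-chain check: verify the proof stand-in, never the signature."""
    journal = b"\x01" if ok else b"\x00"
    return ok and seal == hashlib.sha256(journal + challenge).hexdigest()
```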

Source: Bonfire Wallet
3.2 On-Chain Data Retrieval - ZK Circuits Written by Users
Axiom is an application that does not use a zkVM but takes a different coprocessor approach. Let's first introduce what Axiom aims to do: it leverages a ZK coprocessor to allow contracts to access historical on-chain information. In reality, enabling contracts to read historical data is quite challenging, because smart contracts are typically limited to real-time on-chain data, and reading history is very expensive. This makes it hard for contracts to access valuable on-chain data such as historical account balances or transaction records.

Source: Axiom demo
Axiom nodes access the required on-chain data and perform the specified computation off-chain, then generate a zero-knowledge proof for the computation, proving that the result is correctly calculated based on valid on-chain data. This proof is verified on-chain, ensuring that the contract can trust this result.
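The kind of statement being proven, "this value really appears in historical chain data", can be illustrated with a plain Merkle inclusion proof. This is a simplified stand-in for the Merkle-Patricia proofs and ZK circuits a system like Axiom actually uses.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Collect (sibling_hash, node_is_right_child) pairs from leaf to root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(root: bytes, leaf: bytes, proof: list) -> bool:
    """Cheap check: hash up the path and compare against the committed root."""
    node = h(leaf)
    for sibling, is_right_child in proof:
        node = h(sibling + node) if is_right_child else h(node + sibling)
    return node == root
```

The contract only ever stores the root and runs `verify_inclusion`; the node that assembled the proof did all the data retrieval off-chain, which is the coprocessor pattern in miniature.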
To generate zkp for off-chain computation, it is necessary to compile programs into ZK circuits. Previously we also mentioned using zkVM for this, but Axiom suggested that there are many solutions for this, and it's necessary to balance performance, flexibility, and development experience:
- Customized circuits: if developers customize circuits for their programs, performance will certainly be the best, but development takes time.
- eDSL/DSL: developers still write their own circuits, but optional frameworks help them solve ZK-related problems, balancing performance and development experience.
- zkVM: developers run ZK directly on an existing virtual machine, which is very convenient, but Axiom believes it is inefficient.
Therefore, Axiom chose the second option, and provides users with a set of optimized ZK modules, allowing them to design their own circuits.
Projects similar to Axiom include Herodotus, which aims to be a middleware for cross-chain messaging. Since information processing is off-chain, it's reasonable to allow different chains to obtain processed data. Another project, Space and Time, uses a similar architecture to implement data indexing.
3.3 On-Chain Games, DAO Governance and Other Applications
In addition to the above, on-chain games and DAO governance can also use ZK coprocessors. RISC Zero estimates that any computation requiring more than 250k gas would be cheaper with a ZK coprocessor, though how this figure is derived remains to be investigated. DAO governance is a natural fit because it involves many participants and multiple contracts, which is computationally intensive; RISC Zero claims that using Bonsai can reduce its gas fees by 50%. Many ZKML projects, such as Modulus Labs and Giza, use the same solution as ZK coprocessors, but the concept of the ZK coprocessor is broader.
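The 250k-gas rule of thumb amounts to a simple break-even comparison: offloading wins once the gas saved on computation exceeds the fixed cost of verifying a proof on-chain. The figures below are illustrative, not measured.

```python
def offchain_saving(onchain_gas: int, gas_price_gwei: float,
                    verify_gas: int = 250_000,
                    proving_cost_eth: float = 0.0) -> float:
    """ETH saved by moving a computation off-chain.

    onchain_gas:      gas the computation would burn if run on-chain
    verify_gas:       fixed gas to verify the proof on-chain (assumed ~250k)
    proving_cost_eth: off-chain proving cost, expressed in ETH for comparison

    Positive result -> the coprocessor route is cheaper.
    """
    onchain_eth = onchain_gas * gas_price_gwei * 1e-9
    offchain_eth = verify_gas * gas_price_gwei * 1e-9 + proving_cost_eth
    return onchain_eth - offchain_eth
```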
It's worth mentioning that there are some auxiliary projects in the field of ZK coprocessors, such as ezkl, which provides compilers for ZK circuits, toolkits for deploying ZK, and tools for offloading on-chain computation off-chain.
4. Future Outlook
Coprocessors provide on-chain applications with external computational resources akin to the "cloud", offering cheap and abundant computation, while on-chain processing focuses on the essential calculations. In practice, a zkVM can also run on the cloud. Essentially, the ZK coprocessor is an architectural approach that moves on-chain computation off-chain, with a virtually unlimited supply of off-chain computational resources.
These off-chain computing resources can come from traditional cloud providers, decentralized computing-resource sharing networks, or even local devices. Each direction has its own characteristics: traditional cloud providers can offer relatively mature off-chain computing solutions, future decentralized computing resources may prove more robust, and local computing also holds a lot of potential. Currently, however, many ZK coprocessor projects are at a closed-source service-provider stage, because the ecosystem for these services has not fully formed and the specialization of services among projects is yet to be defined. Two possible future scenarios are:
- Every part of the ZK coprocessor stack has a large number of projects competing with each other.
- A single project with an excellent service experience comes to dominate the market.
From a developer's perspective, when using ZK coprocessors, they might only interact with a single "interface" project. This is similar to the reason why Amazon Web Services has a substantial market share, as developers tend to become accustomed to a specific deployment method. However, the question of which computing service providers (traditional cloud companies, decentralized resource sharing) are integrated behind this off-chain computational resource "interface" project is another topic worth discussing.
Kernel Ventures is a research & dev community driven crypto VC fund with more than 70 early stage investments, focusing on infrastructure, middleware, and dApps, especially ZK, Rollup, DEX, Modular Blockchain, and verticals that will onboard the next billion users in crypto, such as Account Abstraction, Data Availability, and Scalability. For the past seven years, we have committed ourselves to supporting the growth of core dev communities and University Blockchain Associations across the world.
REFERENCE:
- A Guide to ZK Coprocessors for Scalability: https://www.risczero.com/news/a-guide-to-zk-coprocessors-for-scalability
- Defining zkOracle for Ethereum: https://ethresear.ch/t/defining-zkoracle-for-ethereum/15131
- zkUniswap: a first-of-its-kind zkAMM: https://ethresear.ch/t/zkuniswap-a-first-of-its-kind-zkamm/16839
- What is a ZK Coprocessor?: https://blog.axiom.xyz/what-is-a-zk-coprocessor/
- A Brief Intro to Coprocessors: https://crypto.mirror.xyz/BFqUfBNVZrqYau3Vz9WJ-BACw5FT3W30iUX3mPlKxtA
- Latest Applications Building on Hyper Oracle (Bonus: Things You Can Build Now): https://mirror.xyz/hyperoracleblog.eth/Tik3nBI9mw05Ql_aHKZqm4hNxfxaEQdDAKn7JKcx0xQ
- Bonfire Wallet: https://ethglobal.com/showcase/bonfire-wallet-n1dzp