Original author: Faust, Geek web3

 

From the summer of 2023 to the present, Bitcoin Layer 2 has been a highlight of the entire Web3 space. Although this field emerged much later than Ethereum Layer 2, the unique appeal of PoW and the smooth launch of spot ETFs have allowed Bitcoin, which need not worry about the risk of being classified as a security, to attract tens of billions of dollars of capital attention to the derivative Layer 2 track in just half a year.

In the Bitcoin Layer 2 track, Merlin, with a TVL in the billions of dollars, is undoubtedly the largest and most closely watched project. With clear staking incentives and considerable yields, Merlin rose rapidly within a few months, building an ecosystem whose growth even surpassed Blast's. As Merlin becomes increasingly popular, discussion of its technical solution has become a topic of concern for more and more people.

In this article, Geek Web3 focuses on Merlin Chain's technical solution, interpreting its publicly available documents and protocol design ideas. We aim to help more people understand Merlin's general workflow and gain a clearer picture of its security model, so that everyone can grasp, in a more intuitive way, how this leading Bitcoin Layer 2 works.

Merlin’s Decentralized Oracle Network: An Open Off-Chain DAC Committee

For every Layer 2, whether on Ethereum or Bitcoin, DA and data publishing costs are among the most pressing issues to resolve. Since the Bitcoin network was never designed to support high data throughput, how to make use of its limited DA space has become a problem that tests the imagination of Layer 2 teams.

One conclusion is obvious: if a Layer 2 publishes raw, unprocessed transaction data directly to Bitcoin blocks, it can achieve neither high throughput nor low fees. The two mainstream solutions are to either compress the data as much as possible before uploading it to Bitcoin blocks, or publish the data off the Bitcoin chain entirely.

Among the Layer 2s taking the first approach, the best known may be Citrea, which intends to upload the state changes (state diff) of Layer 2 over a period of time, that is, the resulting state changes across multiple accounts, together with the corresponding ZK proofs, to the Bitcoin chain. Anyone can then download the state diff and ZKP from the Bitcoin mainnet and track Citrea's state changes. This method can compress the size of on-chain data by more than 90%.
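To make the state-diff idea concrete, here is a minimal sketch in Python. Account states are modeled as simple balance maps; the function names and the balance-only model are illustrative assumptions, not Citrea's actual format:

```python
# Minimal sketch of a state diff: instead of publishing every transaction,
# only the net change to each touched account is published on-chain.
def state_diff(prev: dict, new: dict) -> dict:
    # Keep only accounts whose value changed (or that are newly created).
    return {acct: bal for acct, bal in new.items() if prev.get(acct) != bal}

prev_state = {"alice": 100, "bob": 50, "carol": 10}
# Many transactions later, only the net results matter:
new_state = {"alice": 70, "bob": 80, "carol": 10}

diff = state_diff(prev_state, new_state)
assert diff == {"alice": 70, "bob": 80}  # carol is unchanged, so omitted
```

However many transactions occurred in the interval, the published diff is bounded by the number of touched accounts, which is the source of the compression described above.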

Although this greatly compresses data size, the bottleneck remains obvious. If a large number of accounts change state in a short period, Layer 2 must upload all of those account changes to the Bitcoin chain, so the final data publishing cost cannot be pushed very low. The same effect can be seen in many Ethereum ZK Rollups.

Many Bitcoin Layer 2 projects simply take the second path: use a DA solution off the Bitcoin chain, either by building a DA layer themselves or by using Celestia, EigenDA, etc. B^Square, BitLayer, and the protagonist of this article, Merlin, all adopt this off-chain DA scaling solution.

In a previous Geek Web3 article, "Analysis of the new B^2 technology roadmap: the necessity of an off-chain DA and verification layer for Bitcoin", we mentioned that B^2 directly imitated Celestia and built an off-chain DA network that supports data sampling, named B^2 Hub. "DA data" such as transaction data or state diffs is stored off the Bitcoin chain, and only the datahash / merkle root is uploaded to the Bitcoin mainnet.

This essentially treats Bitcoin as a trustless bulletin board: anyone can read the datahash from the Bitcoin chain. When you obtain the DA data from an off-chain data provider, you can check whether it corresponds to the on-chain datahash, that is, whether hash(data1) == datahash1. If the two match, the off-chain data provider has given you the correct data.
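The datahash check described above can be sketched in a few lines of Python. SHA-256 and the function name are illustrative assumptions; the hash function actually used on-chain may differ:

```python
import hashlib

def verify_da_data(offchain_data: bytes, onchain_datahash: str) -> bool:
    # Recompute the hash of the data served by the off-chain provider
    # and compare it with the datahash read from the Bitcoin chain.
    return hashlib.sha256(offchain_data).hexdigest() == onchain_datahash

# A data provider hands us a batch; the chain stores only its hash.
batch = b"tx1|tx2|tx3"
datahash = hashlib.sha256(batch).hexdigest()

assert verify_da_data(batch, datahash)                # matching data passes
assert not verify_da_data(b"tx1|tx2|FAKE", datahash)  # tampered data fails
```

The check is cheap for the reader: the chain stores only a constant-size hash, while the heavy data lives off-chain.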

The above process ensures that the data provided by off-chain nodes is anchored to "clues" on Layer 1, preventing the DA layer from maliciously serving false data. But there is one important attack scenario: what if the source of the data, the Sequencer, never published the data corresponding to the datahash at all, sent only the datahash to the Bitcoin chain, and deliberately withheld the underlying data so that no one could read it?

Similar scenarios include, but are not limited to: publishing only the ZK-Proof and StateRoot while withholding the corresponding DA data (state diff or transaction data). Although people can verify the ZKProof and confirm that the computation from Prev_Stateroot to New_Stateroot is valid, they do not know which accounts' states have changed. In this case, although users' assets are safe, no one can determine the actual state of the network: which transactions have been packaged on-chain, and which contract states have been updated. At that point, the Layer 2 is effectively down.

This is "data withholding". Dankrad of the Ethereum Foundation briefly discussed a similar issue on Twitter in August 2023, though his comments mainly targeted something called a "DAC".

Many Ethereum Layer 2s that use off-chain DA solutions set up several nodes with special permissions to form a Data Availability Committee (DAC). This DAC committee acts as a guarantor, attesting to the outside world that the Sequencer has indeed published complete DA data (transaction data or state diffs) off-chain. The DAC nodes then collectively generate a multi-signature. As long as the multi-signature meets the threshold requirement (for example, 2/4), the relevant contracts on Layer 1 assume that the Sequencer has passed the DAC committee's inspection and truthfully published complete DA data off-chain.
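A minimal sketch of the Layer 1 side of such a threshold check, with the member set and names being hypothetical (real implementations verify cryptographic signatures rather than bare identities):

```python
# Toy model of a DAC-style threshold check: the Layer 1 contract accepts
# a datahash only if enough committee members have signed off on it.
DAC_MEMBERS = {"node_a", "node_b", "node_c", "node_d"}
THRESHOLD = 2  # e.g. a 2/4 threshold

def meets_threshold(signers: set) -> bool:
    valid = signers & DAC_MEMBERS  # ignore signatures from unknown nodes
    return len(valid) >= THRESHOLD

assert meets_threshold({"node_a", "node_c"})       # 2 valid signers: accepted
assert not meets_threshold({"node_a", "outsider"}) # only 1 valid signer: rejected
```

The trust assumption is visible in the code: whoever controls THRESHOLD members can attest to anything, which is exactly the collusion risk discussed below.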

The DAC committees of Ethereum Layer 2s basically follow the PoA model: only a few nodes that have passed KYC or been officially designated are allowed to join, which has made "DAC" synonymous with "centralization" and "consortium chain". Moreover, in some Ethereum Layer 2s that adopt the DAC model, the sequencer sends DA data only to the DAC member nodes and rarely uploads it anywhere else. Anyone who wants DA data must obtain permission from the DAC committee, which is essentially no different from a consortium chain.

There is no doubt that DACs should be decentralized. Layer 2 does not need to upload DA data directly to Layer 1, but access to the DAC committee should be open to the outside world to prevent a small group from colluding maliciously. (For a discussion of DAC misbehavior scenarios, see Dankrad's earlier remarks on Twitter.)

Celestia's Blobstream essentially replaces a centralized DAC with Celestia itself. An Ethereum L2 sequencer can publish DA data to the Celestia chain; if 2/3 of Celestia's nodes sign it, the Layer-2-specific contract deployed on Ethereum accepts that the sequencer has truthfully published the DA data, effectively making the Celestia nodes the guarantors. Given that Celestia has hundreds of validator nodes, we can consider this large DAC relatively decentralized.

The DA solution adopted by Merlin is actually quite similar to Celestia's Blobstream: both open up DAC access through PoS to make it decentralized. Anyone who stakes enough assets can run a DAC node. In Merlin's documentation, these DAC nodes are called Oracles, and it states that staking of BTC, MERL, and even BRC-20 tokens will be supported for a flexible staking mechanism, along with proxy staking similar to Lido. (The Oracle's PoS staking protocol is essentially one of Merlin's next core narratives, and the staking yields offered are relatively high.)

Here we briefly describe the workflow of Merlin (picture below):

  • After receiving a large number of transaction requests, the Sequencer aggregates them and generates data batches, which are then passed to the Prover node and the Oracle node (decentralized DAC).

  • Merlin’s Prover node is decentralized and uses Lumoz’s Prover as a Service. After receiving multiple data batches, the Prover pool will generate the corresponding zero-knowledge proof, and then the ZKP will be sent to the Oracle node for verification.

  • The Oracle node verifies whether the ZK Proof sent by Lumoz's ZK mining pool corresponds to the data batch sent by the Sequencer. If the two correspond and contain no other errors, verification passes. During this process, the decentralized Oracle nodes generate a multi-signature through threshold signatures, declaring to the outside world that the Sequencer has sent the DA data completely and that the corresponding ZKP is valid and has passed the Oracle nodes' verification.

  • The Sequencer collects the multi-signature results from the Oracle nodes. When the number of signatures meets the threshold requirement, it sends the signature information to the Bitcoin chain, along with the datahash of the DA data (data batch), for the outside world to read and confirm.

The Oracle nodes apply special processing to the computation that verifies the ZK Proof, generate a Commitment, and send it to the Bitcoin chain, allowing anyone to challenge the "commitment". The process here is basically the same as BitVM's fraud proof protocol. If a challenge succeeds, the Oracle node that issued the Commitment suffers an economic penalty. Of course, the data the Oracle publishes to the Bitcoin chain also includes the hash of the current Layer 2 state, the StateRoot, as well as the ZKP itself, all of which must be published on the Bitcoin chain for external inspection.

A few details need explaining. First, the Merlin roadmap mentions that in the future, Oracles will back up DA data to Celestia. That way, Oracle nodes can properly prune local historical data without having to store it forever. At the same time, the Commitment generated by the Oracle Network is actually the root of a Merkle Tree. Disclosing the root alone is not enough: the complete data set corresponding to the Commitment must also be made public. This requires a third-party DA platform, which could be Celestia, EigenDA, or another DA layer.
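A minimal Merkle-root sketch illustrates why the full data set, and not just the root, must be published. SHA-256 and the tree construction here are illustrative assumptions; Merlin's actual commitment scheme may differ:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    # Hash the leaves, then hash pairs upward level by level,
    # duplicating the last node when a level has an odd length.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The root alone reveals nothing about the data; the complete leaf set
# must be published (e.g. to Celestia or EigenDA) so anyone can
# recompute the root and check it against the on-chain Commitment.
steps = [b"step1", b"step2", b"step3"]
root = merkle_root(steps)
assert merkle_root(steps) == root  # deterministic: same data, same root
assert merkle_root([b"step1", b"tampered", b"step3"]) != root
```

Anyone holding the published leaves can recompute the root and detect tampering, but without the leaves the on-chain root is unverifiable — the same data-withholding concern discussed earlier.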

Security Model Analysis: Optimistic ZKRollup + Cobo’s MPC Service

We have briefly described Merlin's workflow above, and you should now have a grasp of its basic structure. It is not difficult to see that Merlin, B^Square, BitLayer, and Citrea all follow the same security model: optimistic ZK-Rollup.

At first reading, this term may strike many Ethereum enthusiasts as strange. What is an "optimistic ZK-Rollup"? In the Ethereum community's understanding, the "theoretical model" of a ZK Rollup rests entirely on the reliability of cryptographic computation, with no trust assumptions needed. The word "optimistic" introduces trust assumptions: most of the time, people should optimistically assume the Rollup is error-free and reliable, and when an error does occur, the Rollup operator can be punished via fraud proof. This is the origin of the name Optimistic Rollup, also known as OP Rollup.

For the Ethereum ecosystem, the home turf of Rollups, an optimistic ZK-Rollup may seem a bit out of place, but it fits the current reality of Bitcoin Layer 2. Due to technical limitations, ZK proofs cannot be fully verified on the Bitcoin chain; only a particular step of the ZKP verification computation can be verified, under special circumstances. Under this premise, the Bitcoin chain can really only support fraud proof protocols: people can point out, during off-chain verification, that a particular computation step of the ZKP is wrong, and challenge it via fraud proof. Of course, this cannot compare with Ethereum-style ZK Rollups, but it is already the most reliable and secure model that Bitcoin Layer 2 can currently achieve.
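The idea of challenging a single computation step can be sketched with a toy model. Here `f` is a hypothetical stand-in for one step of the ZKP verification computation, and the trace is the claimed list of intermediate states; this is only an illustration of the principle, not BitVM's actual protocol:

```python
def f(state: int) -> int:
    # Stand-in for one deterministic step of the verification computation.
    return state + 1

def challenge_step(trace: list, i: int) -> bool:
    # A challenger points at step i; the chain re-executes only that one
    # step instead of the whole computation, and checks the claimed output.
    return f(trace[i]) == trace[i + 1]

honest_trace = [0, 1, 2, 3]
bad_trace = [0, 1, 5, 6]  # the transition 1 -> 5 is wrong

assert challenge_step(honest_trace, 1)   # honest step survives the challenge
assert not challenge_step(bad_trace, 1)  # fraud detected at the disputed step
```

The key property is that verifying one step is cheap enough for the Bitcoin chain even when verifying the whole proof is not, which is what makes fraud proofs viable there.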

Under the optimistic ZK-Rollup scheme above, suppose the Layer 2 network has N parties with the authority to initiate challenges. As long as one of these N challengers is honest and reliable, able to detect errors and initiate fraud proofs at any time, Layer 2's state transitions are safe. Of course, a more complete optimistic Rollup also needs its withdrawal bridge to be protected by the fraud proof protocol. At present, however, almost no Bitcoin Layer 2 can achieve this, so they must rely on multi-signature/MPC, and the choice of multi-signature/MPC solution becomes a question closely tied to Layer 2 security.

Merlin chose Cobo's MPC service for its bridge and adopted measures such as cold/hot wallet separation. Bridged assets are jointly managed by Cobo and Merlin Chain, and any withdrawal must be jointly handled by Cobo's and Merlin Chain's MPC participants. In essence, the reliability of the withdrawal bridge is guaranteed by institutional credit. Of course, this is only a stopgap at the current stage; as the project matures, the withdrawal bridge can be replaced by an "optimistic bridge" with a 1/N trust assumption by introducing BitVM and a fraud proof protocol, though this will be difficult to implement (almost all official Layer 2 bridges currently rely on multi-signatures).

Overall, Merlin has introduced a PoS-based DAC, a BitVM-based optimistic ZK-Rollup, and an MPC asset custody solution from Cobo: it solves the DA problem by opening up DAC permissions, secures state transitions by introducing BitVM and a fraud proof protocol, and secures the withdrawal bridge by using the MPC services of Cobo, a well-known asset custody platform.

Two-step verification ZKP submission scheme based on Lumoz

Earlier, we reviewed Merlin's security model and introduced the concept of the optimistic ZK-Rollup. One important item on Merlin's technical roadmap is the decentralized Prover. As is well known, the Prover is a core role in the ZK-Rollup architecture, responsible for generating ZKProofs for the batches released by the Sequencer. Zero-knowledge proof generation is very hardware-intensive and a genuinely hard problem.

To speed up ZK proof generation, parallelizing and splitting the task is a basic step. Parallelization means dividing the ZK proof generation task into parts, each completed by a different Prover, with an Aggregator finally combining the multiple proofs into a whole.

To accelerate ZK proof generation, Merlin will adopt Lumoz's Prover-as-a-Service solution, which gathers a large number of hardware devices into a mining pool, assigns computing tasks to different devices, and distributes the corresponding incentives, somewhat like PoW mining.
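The split-and-aggregate idea can be sketched as follows, with simple hashes standing in for real partial proofs and proof aggregation. This is only an illustration of the parallelization pattern, not Lumoz's actual scheme:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def prove_chunk(chunk: bytes) -> bytes:
    # Stand-in for the expensive ZK proving of one sub-task.
    return hashlib.sha256(chunk).digest()

def aggregate(partial_proofs: list) -> bytes:
    # Stand-in for the Aggregator combining partial proofs into one proof.
    return hashlib.sha256(b"".join(partial_proofs)).digest()

# Split the batch into sub-tasks and prove them in parallel.
batch = [b"sub-task-%d" % i for i in range(4)]
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(prove_chunk, batch))
final_proof = aggregate(partials)

# Parallel and sequential proving yield the same aggregated result.
assert final_proof == aggregate([prove_chunk(c) for c in batch])
```

Because each sub-task is independent, throughput scales with the number of devices in the pool, which is exactly what a mining-pool-style incentive structure rewards.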

This decentralized Prover design has a well-known attack scenario, commonly called a front-running attack: suppose an Aggregator has constructed a ZKP and broadcasts it hoping for a reward. After other Aggregators see the content of the ZKP, they publish the same content ahead of it, claiming they generated the ZKP first. How can this be solved?

Perhaps the most instinctive solution is to assign each Aggregator a designated task number: for example, only Aggregator A may take Task 1, and no one else is rewarded for completing Task 1. But this approach cannot withstand single-point risk: if Aggregator A fails or goes offline, Task 1 gets stuck. Moreover, assigning tasks to single entities cannot improve efficiency through competitive incentives, so it is not a good solution.

Polygon zkEVM once proposed a method called Proof of Efficiency in a blog post, arguing that different Aggregators should be driven to compete, with incentives distributed first-come, first-served: the Aggregator that first submits a ZK-Proof on-chain gets the reward. However, the post did not address how to solve the front-running (MEV) problem.

Lumoz adopts a two-step verified ZK proof submission method. After an Aggregator generates a ZK proof, it does not send out the complete content at first, but publishes only the hash of the ZKP; more precisely, it publishes hash(ZKP + Aggregator Address). Thus, even if others see the hash value, they do not know the corresponding ZKP content and cannot directly front-run it.

If someone simply copies the entire hash and publishes it first, it is meaningless, because the hash embeds the address of a specific Aggregator X. Even if Aggregator A publishes that hash first, when the hash's preimage is revealed, everyone will see that the Aggregator address it contains is X's, not A's.
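The commit-reveal logic above can be sketched in a few lines of Python. SHA-256 and the function names are illustrative assumptions, not Lumoz's actual implementation:

```python
import hashlib

def commit(zkp: bytes, aggregator_addr: str) -> str:
    # Step 1: publish only hash(ZKP + aggregator address), hiding the proof.
    return hashlib.sha256(zkp + aggregator_addr.encode()).hexdigest()

def reveal_is_valid(zkp: bytes, claimed_addr: str, published_hash: str) -> bool:
    # Step 2: when the preimage is revealed, anyone can check that the
    # address baked into the commitment matches the claimed author.
    return commit(zkp, claimed_addr) == published_hash

proof = b"zkp-bytes"
h_x = commit(proof, "aggregator_X")

# Aggregator X's reveal checks out.
assert reveal_is_valid(proof, "aggregator_X", h_x)
# A front-runner who copied the hash cannot claim authorship: the
# commitment binds the proof to X's address, not A's.
assert not reveal_is_valid(proof, "aggregator_A", h_x)
```

Because the address is hashed together with the proof, copying the commitment transfers no credit, while the hidden preimage prevents copying the proof itself.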

Through this two-step verified ZKP submission scheme, Merlin (Lumoz) can solve the front-running problem in ZKP submission, thereby achieving highly competitive incentives for zero-knowledge proof generation and increasing the speed at which ZKPs are produced.

Merlin’s Phantom: Multi-chain interoperability

According to Merlin's technical roadmap, it will also support interoperability between Merlin and other EVM chains. The implementation path is basically the same as Zetachain's earlier approach. If Merlin is the source chain and another EVM chain is the target chain, when Merlin nodes detect a cross-chain interoperability request issued by a user, they trigger the subsequent workflow on the target chain.

For example, an EOA account controlled by the Merlin network can be deployed on Polygon. When a user issues a cross-chain interoperability instruction on Merlin Chain, the Merlin network first parses its content and generates the transaction data to be executed on the target chain. The Oracle Network then applies MPC signing to produce a digital signature for the transaction. After that, Merlin's Relayer node broadcasts the transaction on Polygon, completing the subsequent operations via Merlin's assets in the EOA account on the target chain.

When the user's requested operation completes, the corresponding assets are forwarded directly to the user's address on the target chain, and in theory they can also be transferred straight to Merlin Chain. This solution has some obvious advantages: it avoids the fee overhead incurred by cross-chain bridge contracts in traditional asset bridging, and the security of cross-chain operations is guaranteed directly by Merlin's Oracle Network, without relying on external infrastructure. As long as users trust Merlin Chain, they can assume such cross-chain interoperability is unproblematic.
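The parse / sign / relay flow described above can be sketched as follows. All function names are hypothetical, and a simple hash stands in for the Oracle Network's MPC signature:

```python
import hashlib

def parse_instruction(instr: dict) -> dict:
    # Build the transaction to be executed on the target chain.
    return {"chain": instr["target_chain"], "to": instr["recipient"],
            "amount": instr["amount"]}

def oracle_mpc_sign(tx: dict) -> str:
    # Stand-in for the Oracle Network's MPC signature over the tx content.
    return hashlib.sha256(repr(sorted(tx.items())).encode()).hexdigest()

def relay(tx: dict, sig: str) -> bool:
    # The Relayer broadcasts only if the signature matches the tx content.
    return oracle_mpc_sign(tx) == sig

instr = {"target_chain": "polygon", "recipient": "0xuser", "amount": 5}
tx = parse_instruction(instr)
sig = oracle_mpc_sign(tx)

assert relay(tx, sig)                          # valid signed tx is relayed
assert not relay({**tx, "amount": 500}, sig)   # tampered tx is rejected
```

The sketch shows why no bridge contract is needed on the path: correctness rests entirely on the Oracle Network's signature binding the executed transaction to the user's instruction.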

Summary

In this article, we have briefly interpreted Merlin Chain's overall technical solution, which we believe will help more people understand Merlin's general workflow and gain a clearer view of its security model. Given the current boom in the Bitcoin ecosystem, we believe this kind of technical popularization is valuable and needed by the general public. We will follow Merlin, BitLayer, B^Square, and other projects over the long term and conduct deeper analysis of their technical solutions. Stay tuned!