Interviewer: Nickqiao & Faust, Geek Web3

Interviewee: Ye Zhang, co-founder of Scroll

Editor: Faust & Jomosis

On June 17th, with the help of Scroll's DevRel Vincent Jin, Geek Web3 and BTCEden were fortunate to invite Scroll co-founder Ye Zhang to answer many questions about Scroll and zkEVM.

During the interview, the two sides covered many technical topics as well as some interesting stories about Scroll and its grand vision of empowering the real economy in Asia, Africa and Latin America. This article is a transcript of the interview, running more than 10,000 words and covering at least 15 topics:

  • Application space of ZK in traditional fields

  • Differences in engineering difficulty between zkEVM and zkVM

  • Difficulties encountered by Scroll in implementing zkEVM

  • Scroll’s improvements to Zcash’s halo2 proof system

  • How Scroll collaborated with the Ethereum PSE group

  • How to ensure your circuits are safe and reliable at the code audit level

  • Scroll's brief plan for the future new version of zkEVM and proof system

  • Scroll's Multi Prover design and the construction method of its Prover generation network (zk mining pool), etc.

In addition, Ye Zhang talked about Scroll's grand vision of taking root in Africa, Turkey, Southeast Asia and other regions with underdeveloped financial systems, and creating real economic scenarios that let people there "go from virtual to real". This may be one of the best resources for understanding Scroll, and we recommend reading it in full.

1.Faust: Professor Ye Zhang, what do you think about applications of ZK outside of Rollups? Many people are accustomed to thinking that ZK is mainly used in mixers, private transfers, or ZK Rollups and ZK bridges, but outside of Web3 there are still many applications of ZK in traditional industries. In which directions do you think ZK is most likely to be adopted in the future?

Ye Zhang: This is a good question. People working on ZK in traditional industries began exploring various ZK scenarios five or six years ago. The scenarios where ZK is used in blockchain are actually only a small slice. This is why Vitalik thinks that in ten years, ZK's application scenarios will be as significant as blockchain itself.

I think ZK will be very useful in scenarios that require trust assumptions. Suppose you need to handle some heavy computing tasks. If you rent a server on AWS and run your own tasks to get the results, it is equivalent to doing the computation on a device you control, and you have to pay for the server rental, which is often not cheap.

But if we adopt a compute-outsourcing model, many people can take on your computing tasks with their idle equipment or resources, and the cost you pay may be lower than renting a server yourself. But there is a trust issue: you don't know whether the results others return to you are correct. Suppose you hand a very troublesome computation to me and pay me for it. Half an hour later I give you some random result, and you have no way to believe that the result is valid, because I could have made it up.

But if I can prove to you that the result I deliver is correct, then you can rest assured and dare to outsource more computing tasks to me. ZK can make many unreliable data sources reliable, which is a very powerful capability. Through ZK, you can make efficient use of untrusted but very cheap third-party computing resources.
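This outsourced-computation idea in fact predates ZK. Freivalds' classic randomized check, sketched below in Python, lets a client verify a claimed matrix product far more cheaply than recomputing it. It is neither zero-knowledge nor succinct, but it shows the core economics the interview describes: verification can be much cheaper than execution.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A @ B without recomputing the
    product: each round costs three O(n^2) matrix-vector products instead
    of the O(n^3) multiplication, and a wrong C survives a round with
    probability at most 1/2."""
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # the claimed result is definitely wrong
    return True  # correct with probability >= 1 - 2**-rounds

# The "server" returns C as the claimed product of A and B.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds_check(A, B, [[19, 22], [43, 50]]))  # True
print(freivalds_check(A, B, [[19, 22], [43, 51]]))  # False w.h.p.
```

A ZK proof generalizes this idea to arbitrary programs and adds privacy, at the proving costs discussed later in the interview.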

I think this scenario is very meaningful and can give rise to a business model similar to compute outsourcing. In academic literature it is called verifiable computation, which means making a computation trustworthy. In addition, ZK can be applied to databases. Suppose running a database locally is too expensive and you decide to outsource it. Someone happens to have spare database resources, and you store your data there. You will worry that they might change the data you host, or whether the result of a SQL query is correct.

In that case, you can ask the other party to generate a proof. If this can be done, you can outsource data storage too and still get a trustworthy result. This, together with the trusted computing mentioned above, forms a large category of application scenarios.

There are many other application examples. I remember a paper about Verifiable ASICs: when producing chips, you can bake the ZK algorithm into the chip itself, so that when you run a program on it, the result carries a proof by default. In this way, many things can be delegated to any device while still producing credible results.

There is also something a bit further out, called Photo Proof. You often don't know whether a picture has been Photoshopped, but we can use ZK to prove that a photo has not been tampered with. For example, the camera software can automatically generate a digital signature; after you take the photo, the signature is equivalent to a stamp on it. If someone takes your photo and does a "secondary creation" in Photoshop, we can verify the signature and detect that the photo has been modified.

We can introduce ZK here. When you make minor changes to the original image, you can use a ZK proof to show others that you only performed simple operations such as rotation and translation, and did not tamper with the photo's original content. That is, you can prove the fine-tuned image is essentially the same as the original, and that you have not tampered with its core content for "secondary creation".

This scenario can also be extended to video and audio. With ZK, you don't have to tell the other party what changes you made to the original video, but you can prove that you did not tamper with the core content of the original and that you only made some harmless adjustments. In addition, there are many interesting applications that ZK can play a role in.

At present, I think the reason ZK's application scenarios have not been widely adopted is that its cost is too high. Existing ZK proof generation schemes cannot generate proofs for arbitrary computation in real time, because the cost of ZK is generally 100 to 1000 times that of the original computation, and those numbers are already on the low end.

So imagine that a computing task originally takes 1 hour, and you now generate a ZK Proof for it. The overhead may be 100 times, which means it takes 100 hours to generate the proof. Although you can use GPU or ASIC to shorten this time, it still costs a huge amount of computing overhead. If you ask me to calculate something very troublesome and generate a ZK Proof for it, I can refuse to do so because it will consume 100 times more computing resources, which is not cost-effective in the end. So for a one-to-one scenario, generating a one-time ZK proof is very expensive.

However, this is also why ZK is very well suited to blockchain, because blockchain performs redundant computation and has many one-to-many scenarios. Different nodes in a blockchain network perform the same computing tasks: if there are 10,000 nodes, the same task is executed 10,000 times. But if you complete the task off-chain and generate a ZK proof, the 10,000 nodes only verify the proof without rerunning the task, so there is no need to replay the original computation 10,000 times. It is equivalent to replacing the redundant computation of 10,000 nodes with a one-time proving cost. From an overall perspective, it lets everyone save resources.
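A back-of-envelope calculation (with made-up numbers that match the ratios mentioned above) shows why the one-to-many setting flips the economics:

```python
# Sketch with assumed numbers: when does proving pay off?
nodes = 10_000          # validators that would otherwise re-execute the task
exec_cost = 1.0         # cost of running the task once (arbitrary units)
prove_overhead = 100    # proving costs ~100x the original execution
verify_cost = 0.001     # verifying a succinct proof is near-free

naive_total = nodes * exec_cost                            # everyone re-executes
zk_total = prove_overhead * exec_cost + nodes * verify_cost  # prove once, verify everywhere

print(naive_total)  # 10000.0
print(zk_total)     # 110.0
```

Under these assumed numbers, proving pays off as soon as the number of verifiers exceeds the proving overhead, which is exactly the blockchain setting.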

Therefore, the more decentralized a chain is, the better suited it is for ZK, because anyone can verify a ZKP at almost zero cost. As long as a certain cost is paid once to generate the proof, everyone else is freed from re-execution. This is why the public verifiability of blockchain fits ZK so well: blockchain is full of one-to-many scenarios.

One more point not mentioned above: the heavyweight ZK proofs used in blockchain are all non-interactive, meaning you give me something, I give you a proof, and that's it, because you cannot interact with the chain repeatedly. But there is a more efficient, lower-cost way to generate proofs: interactive proofs. You send me a challenge, I respond, you challenge again, and I respond again. Through multiple rounds of interaction between the two parties, it is possible to reduce ZK's computational overhead. If this approach works, it could solve proof generation for large ZK application scenarios.
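A minimal illustration of this interactive pattern: instead of comparing two large polynomials coefficient by coefficient, the verifier sends one random challenge and the two sides compare evaluations. This is the Schwartz-Zippel idea underlying most such protocols; the sketch below is a toy, not any production protocol.

```python
import random

P = 2**61 - 1  # a large prime modulus (toy field)

def eval_poly(coeffs, x):
    """Evaluate a polynomial (low-to-high coefficients) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def interactive_equality_check(claimed, actual):
    """One round of interaction: the verifier sends a random challenge r,
    and evaluations at r are compared instead of all coefficients.
    Distinct polynomials of degree d agree at a random r with probability
    at most d/P (Schwartz-Zippel)."""
    r = random.randrange(P)  # verifier's challenge
    return eval_poly(claimed, r) == eval_poly(actual, r)

# (x + 1)(x + 2) = x^2 + 3x + 2, so the honest claim is [2, 3, 1].
actual = [2, 3, 1]
print(interactive_equality_check([2, 3, 1], actual))  # True
print(interactive_equality_check([2, 3, 7], actual))  # False w.h.p.
```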

Nickqiao: What do you think about the development prospects of zkML, which is the combination of ZK and machine learning?

Ye Zhang: zkML is also a very interesting direction: applying ZK to machine learning. But I think it still lacks a killer application. It is generally believed that as the performance of ZK systems improves, they will be able to support ML-scale applications in the future. At present, zkML efficiency can support applications around the level of GPT-2. It is technically feasible, but it can only cover ML inference. In the end, I think everyone is still exploring the application scenarios here, that is, what kinds of things require you to prove that an inference process is correct, which is quite tricky.

2.Nickqiao: I would like to ask Professor Zhang Ye, how big is the difference in engineering implementation difficulty between zkEVM and zkVM?

Ye Zhang: First of all, whether it is a zkEVM or a zkVM, the essence is generating customized ZK circuits for the opcodes/instruction set of some virtual machine. The difficulty of implementing a zkEVM depends on how you implement it. When we started the project, because ZK was not yet that efficient, the most efficient approach was to write a corresponding circuit for each EVM opcode and then combine those circuits into a customized zkEVM.

However, the engineering difficulty of this approach is very high, much higher than that of a zkVM. After all, the EVM instruction set has more than 100 opcodes, and each opcode needs a custom circuit before they are all combined. Whenever an EIP adds new functionality to the EVM, such as EIP-4844, the zkEVM has to add corresponding support. In the end you have to write very long circuits and go through long audits, so the development difficulty and workload are much greater than for a zkVM.

By contrast, a zkVM defines its own instruction set/opcodes. Its custom instruction set can be very simple and ZK-friendly. Once you build a zkVM, you don't have to frequently change the underlying code to support upgrades and precompiles. Therefore, the main workload of a zkVM, and the difficulty of subsequent upgrades and maintenance, sits in the compiler, that is, the step of compiling smart contracts into zkVM opcodes, which is very different from a zkEVM.

Therefore, from the perspective of engineering difficulty, I think a zkVM is easier to implement than a customized zkEVM, but if you want to run the EVM on a zkVM, the overall performance is much lower than a customized zkEVM, because the latter is specially tailored. That said, prover efficiency has leaped by at least 3~5 times or even 5~10 times in the past two years, and zkVM efficiency is improving accordingly. Running the EVM on a zkVM has been slowly getting faster. In the future, the zkVM's performance disadvantage may be outweighed by its advantage in ease of development and maintenance. After all, for both zkVM and zkEVM, the biggest bottleneck besides performance is development difficulty: you need a very strong engineering team to maintain such a complex system.

3.Nickqiao: Can you tell us whether Scroll encountered any technical difficulties during its implementation on zkEVM and how you solved them?

Ye Zhang: The biggest challenge along the way was the sheer uncertainty when the project first launched. When we started, basically no one else was working on zkEVM; we were the first team to explore taking zkEVM from impossible to possible. On the theoretical level, a feasible framework was basically settled in the project's first 6 months. In the subsequent implementation, the engineering workload of zkEVM was very large, and there were some deeply technical challenges, such as how to dynamically support different precompiles and how to aggregate opcodes more efficiently, which involve many engineering problems.

Moreover, we were the first, and for a while the only, team to support the EC pairing precompile. Circuits like pairing are very difficult to implement and involve many complex mathematical problems; they demand a high level of cryptographic/mathematical skill and engineering ability from whoever writes the circuit.

Then in the later development, we also need to consider the long-term maintainability of the technology stack and at what time point to upgrade to the next generation zkEVM 2.0. We have a dedicated research team that has been studying such solutions, such as supporting EVM through zkVM, and we also have related papers to discuss this aspect.

In summary, I think the previous difficulty was to turn the zkEVM from impossible to possible, and the challenges faced were mainly in engineering implementation and optimization. In the next stage, the greater difficulty is when and through what specific methods to switch to a more efficient ZK proof system, how we can transition the current code base to the next generation of zkEVM, and what new features the next generation of zkEVM can provide us. There is a lot of room for exploration.

4.Nickqiao: It sounds like Scroll has considered switching to another ZK proof system. As far as I know, Scroll currently uses an algorithm based on PLONK+Lookup. Is this algorithm currently the most suitable for implementing zkEVM? What proof system does Scroll plan to switch to in the future?

Ye Zhang: First, let me briefly answer the question about PLONK and Lookup. Currently this combination is still the most suitable for implementing a zkEVM or zkVM; most implementations are built on some specific variant of PLONK plus Lookup. Generally speaking, when PLONK is mentioned, it mostly refers to using PLONK's arithmetization, that is, its way of expressing a circuit arithmetically, to write the circuits.

Lookup is a technique used when writing circuits, a type of constraint. So when we say PLONK + Lookup, we mean using PLONK's constraint format when writing zkEVM or zkVM circuits. This is currently the most common approach.
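As a toy illustration of what a lookup constraint asserts, the sketch below checks a logUp-style rational identity at a single random point. Real lookup arguments commit to these sums inside the proof system rather than evaluating them in the clear; this is only the relation being proven.

```python
import random
from collections import Counter

P = 2**61 - 1  # toy prime field (a Mersenne prime)

def field_inv(a):
    return pow(a, P - 2, P)  # inverse via Fermat's little theorem

def logup_style_check(witness, table):
    """Toy version of the logUp lookup identity:
        sum_i 1/(x - w_i)  ==  sum_t m_t / (x - t)
    at a random point x, where m_t counts how often table entry t occurs
    in the witness. The identity holds (as rational functions) iff every
    witness value appears in the table."""
    multiplicities = Counter(witness)
    x = random.randrange(1, P)  # verifier's random challenge
    lhs = sum(field_inv(x - w) for w in witness) % P
    rhs = sum(multiplicities[t] * field_inv(x - t) for t in table) % P
    return lhs == rhs

BYTE_TABLE = list(range(256))  # e.g. a byte-range table
print(logup_style_check([7, 7, 200, 255], BYTE_TABLE))  # True: all bytes
print(logup_style_check([7, 300], BYTE_TABLE))          # False w.h.p.
```

Range checks like this one (proving every cell is a byte) are among the most common uses of lookups in zkEVM circuits.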

On the backend, the boundary between PLONK and STARK has become blurred. They just use different polynomial commitment schemes but are otherwise very similar; even a STARK + Lookup combination looks much like PLONK + Lookup if you only consider the algorithms. The differences mainly show up in prover efficiency, proof size, and so on. Of course, on the frontend, PLONK + Lookup is still the most suitable for implementing a zkEVM.

Regarding the second question, what proof system does Scroll plan to switch to in the future? Because Scroll's goal is to always keep its technology and chain framework at the top of the zk field, we will definitely use the latest technologies. We have always prioritized security and stability, so we will not switch our ZK proof system too aggressively. We may first use some Multi Prover as a transition, and explore and progress step by step to complete the upgrade and iteration of the next version. Anyway, we must ensure that this is a smooth transition process.

But for now, it is still too early to switch to a new proof system. This is actually the development direction for the next stage, such as the next 6 months to 1 year.

5.Nickqiao: Does Scroll have any unique innovations based on the current proof system based on PLONK and Lookup?

Ye Zhang: Currently, halo2 is running on the Scroll mainnet. halo2 originated with the Zcash team, who first built a backend that supports Lookup and a flexible circuit format. Then, together with Ethereum's PSE team, we modified halo2, changing its polynomial commitment scheme from IPA to KZG and reducing the proof size, so that ZK proofs can be verified more efficiently on Ethereum.

Then we did a lot of work on GPU hardware acceleration, which makes ZKP generation 5 to 10 times faster than on a CPU. In general, we replaced halo2's original polynomial commitment scheme with one that is easier to verify, made many prover optimizations, and put a lot of effort into engineering.

6. Nickqiao: So Scroll is now working with the Ethereum PSE team to maintain the KZG version of Halo2. Can you tell us how you work with the PSE team?

Ye Zhang: Before we started the Scroll project, we already knew some engineers from the PSE team, and we talked to them and said we wanted to do zkEVM. We estimated that the efficiency was ok. It just so happened that at the same time point, they also wanted to do the same thing, and we hit it off.

So, we met people from the Ethereum community and Ethereum Research who wanted to work on zkEVM. We all wanted to productize zkEVM and serve Ethereum, so we naturally started an open source cooperation model. This cooperation model is more like an open source community rather than a commercial company. For example, we would call each other once a week to synchronize progress and discuss problems encountered.

We maintained this code in an open source way. From improving halo2 to implementing zkEVM, there were many exploration processes in between, and we helped each other review the code. You can see from the code contributions on Github that PSE wrote half and Scroll wrote half. Later, we completed the code audit and implemented a version of the code that was truly productized and running on the mainnet. In summary, our cooperation model with Ethereum PSE is more like the path of an open source community, which is a spontaneous form.

7.Nickqiao: You just mentioned that writing zkEVM circuits requires very high mathematics and cryptography skills. Given this, there are probably very few people who can understand zkEVM. So how does Scroll ensure the correctness of circuit writing and reduce bugs?

Ye Zhang: Because our code is open source, basically every PR is reviewed by our people, some people from Ethereum, and some community members, and there is a fairly strict review process. At the same time, we spent a lot of money on circuit audits, more than 1 million US dollars, engaging the most professional cryptography and circuit auditing firms in the industry, such as Trail of Bits and Zellic. We also brought in OpenZeppelin to audit the smart contract part of our chain. Basically, everything security-related has mobilized top-end audit resources. We also have a dedicated internal security team doing testing to continuously improve Scroll's security.

Nickqiao: In addition to this auditing method, are there any more mathematically rigorous methods such as formal verification?

Ye Zhang: We actually looked at Formal Verification a long time ago, and Ethereum has recently been thinking about how to formally verify zkEVM as well, which is a very good direction. But for now it is still too early to do a complete formal verification of a zkEVM; we can only start with some small modules, because formal verification has a real cost. For example, to formally verify a piece of code, you first have to write a spec for it, and writing a spec is not easy; it takes a long time to get all of this right.

So I think we have not yet reached the stage of complete formal verification of zkEVM, but we will continue to actively explore how to do formal proof of zkEVM with external partners including Ethereum.

At present, the best way is still manual audit, because even if you have a spec and formal verification, if the spec is written incorrectly, you will still have problems. So I think it is best to first conduct a manual audit, and then ensure the stability of the current Scroll code through open source and vulnerability bounties.

However, for the next generation of zkEVM, figuring out how to do formal verification, how to design a zkEVM whose spec is easier to write, and how to prove its security through formal verification are the ultimate goals for Ethereum. That is, once a zkEVM is formally verified, they can adopt it on the Ethereum mainnet with complete confidence.

8.Nickqiao: Regarding halo2 adopted by Scroll, if it is to support new proof systems such as STARK, will the development cost be very high? Can a plug-in system be implemented to support multiple proof systems at the same time?

Ye Zhang: halo2 is a very modular ZK proof system; you can swap out its field, its polynomial commitment scheme, and so on. As long as you change the polynomial commitment from KZG to FRI, you can basically get a STARK-style version of halo2. People have indeed done this, so halo2 supporting STARK is entirely feasible.

Then, in actual implementation, you will find that if you pursue extreme efficiency, the more modular the framework, the more efficiency problems it tends to cause, because you trade customization for modularity, and that has a cost. One question we keep watching is whether the future direction should be a modular framework or a highly customized one, especially since we have a ZK team strong enough to maintain an independent proof system and make the zkEVM more efficient. These questions require trade-offs, but as far as halo2 is concerned, it can support FRI.

9.Nickqiao: What is the main iteration direction of Scroll in ZK? Is it to optimize the current algorithm, add some new features, etc.?

Ye Zhang: The core work of our engineering team is to double current prover performance and make our EVM compatibility the best available. In the next version upgrade we will continue to hold our position as the most EVM-compatible ZK Rollup; right now, the other zkEVMs are probably not as compatible as ours.

So on one hand the Scroll engineering team continues to optimize the prover and compatibility and to reduce costs. We have also invested a lot of manpower, about half our engineering force, into researching the next generation of zkEVM, aiming for minute-level or even second-level ZK proof generation and a more efficient prover.

At the same time, we are exploring the new zkEVM execution layer. Our nodes used go-ethereum before, but now there is a Rust version of Ethereum client Reth with better performance. So we are studying how to better combine the next generation zkEVM with the Reth client to improve the performance of the entire chain. We will examine what kind of implementation and transition form is best for zkEVM around the new execution layer.

10.Nickqiao: So for the diverse proof systems Scroll is considering supporting, is it necessary to implement multiple Verifier contracts on-chain, for example to do cross-validation?

Ye Zhang: I think these are two questions. First, is it necessary to build a modular proof system and a variety of provers? I think it makes sense, because we have been an open-source project from the start. The more general you make an open-source framework, the more people will be attracted to help build it out, and the more your community will grow. Later, in project development or tooling, you can naturally draw on those outside contributions. So I think it is very meaningful to build a ZK proof framework that not only Scroll itself can use, but others can too.

The second question is cross-validation on the mainnet, which is actually orthogonal to whether the proof system itself is diversified and whether it supports STARK or PLONK. Generally, few projects verify the same zkEVM with PLONK and then again with STARK. This is rare because it would not improve security much while making the prover pay a higher cost, so this kind of cross-validation generally does not happen.

We are actually working on something called Multi Prover, where two sets of Provers can prove the same Block together, but the two Proofs will be aggregated together off-chain and then put on-chain for verification. Therefore, there will be no cross-verification between STARK and SNARK on-chain. Our multi-Prover solution is to ensure that when one set of Prover code has a problem, the other set of code can provide a backup, and one system can run normally if there is a bug in the other system, so this is another topic from cross-verification.

11.Nickqiao: Scroll's Multi Prover, what are the differences in the proof procedures run by each Prover?

Ye Zhang: First, assume I have a normal zkEVM written in halo2 and a normal prover generating ZKPs that are then verified on-chain. But there is a problem: zkEVM is very complex and may have bugs. If there is a bug, a hacker, or even the project team, could exploit it to generate a proof and ultimately take everyone's money, which is clearly unacceptable.

The core idea of Multi Prover was actually first proposed by Vitalik at the Bogota event. The idea is that if a zkEVM may have bugs, you can run different types of provers at the same time, such as a TEE-based SGX prover (Scroll currently uses this), an OP-style prover, or a zkVM running the EVM. All of these provers must prove the validity of the same L2 block.

Assume there are 3 different types of prover. You can finalize Layer2's state on Ethereum only when all 3 proofs they generate are verified, or at least 2 of the 3. Multi Prover ensures that when one prover fails, the other two can take over. In the end, the stability of the whole prover system is very good, which improves the security of the ZK Rollup. Of course, this also introduces downsides, such as higher overall prover operating costs. We have a dedicated blog post introducing these concepts.
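The 2-of-3 finalization rule described here reduces to a simple quorum check. The sketch below is an assumed illustration of that logic in Python, not Scroll's actual contract code:

```python
def can_finalize(proof_results, threshold=2):
    """proof_results maps a prover system's name to whether its proof
    verified; the block finalizes only if at least `threshold` independent
    prover systems produced a verifying proof."""
    passing = sum(1 for ok in proof_results.values() if ok)
    return passing >= threshold

# One prover system has a bug, but the other two agree: still finalizable.
print(can_finalize({"halo2": True, "sgx": True, "zkvm": False}))   # True
# Only one system verifies: the block cannot finalize.
print(can_finalize({"halo2": True, "sgx": False, "zkvm": False}))  # False
```

The threshold is the safety/liveness dial: requiring all 3 maximizes safety against a single buggy prover, while 2-of-3 keeps the chain live when one system is down.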

12.Nickqiao: Now regarding Scroll’s ZK proof generation, how is its proof generation network (ZK mining pool) built? Is it self-built, or will some calculations be outsourced to a third party such as Cysic?

Ye Zhang: For us, at the moment, the whole design is actually quite simple. We want to let more GPU holders or miners participate in our proof network (ZK mining pool), but for now Scroll's Prover Market is still operated by ourselves. We cooperate with some third parties that have GPU clusters to run provers, but this is for mainnet stability, because once your prover set is decentralized, many problems arise.

For example, if your incentive mechanism is not well designed and no one generates proofs for you, network performance suffers. In the early stage we chose a relatively centralized approach, but the whole interface and framework are designed so that switching to a decentralized mode is very easy; people can use our technical framework to build a decentralized prover network and add incentives.

But for now, for Scroll's stability, our proof generation network remains centralized. In the future we will decentralize the prover network on a larger scale so that everyone can run their own prover node. We are also cooperating with third-party platforms such as Cysic and the Snarkify network, so that if someone wants to launch their own Layer2 with our technology stack, they can connect to a third-party Prover Market and call that provider's proving service directly.

13.Nickqiao: Does Scroll have any investment or output in ZK hardware acceleration?

Ye Zhang: This is actually what I mentioned before. The two big directions Scroll initially worked on were: first, taking zkEVM from impossible to possible; and second, the ZK hardware acceleration whose improved efficiency is what made that possible.

I actually started working on ZK hardware acceleration three years before starting Scroll, and we have papers on ASIC and GPU hardware acceleration. We are very familiar with ZK hardware, from chips to GPUs, from both academic and practical perspectives, and have very strong credentials.

However, Scroll will focus on GPU hardware acceleration, because we do not have the resources to specialize in FPGA or hardware, nor do we have specialized experience in tape-out. Therefore, we will choose to cooperate with hardware companies such as Cysic. They specialize in hardware, and we will focus on the software-oriented field of GPU acceleration. Our own team will optimize GPU hardware acceleration, and then open source the results. External partners can make specialized chips such as ASIC, and we will also frequently discuss and exchange problems encountered by each other.

14.Nickqiao: You just mentioned that Scroll will switch to other proof systems in the future. Can you tell us more about some new proof systems, such as Nova or other algorithms? What are their advantages?

Ye Zhang: Yes. One direction we are exploring internally is using smaller fields that can be combined with our current proof system, for example libraries like Plonky3, which can quickly implement operations over small fields. That is one option: how do we switch from our original large field to a small field? That is one path.

We are also looking internally at directions such as a proof system called GKR, whose prover runs in linear time with much lower overhead than other systems, but there is currently no mature engineering implementation. If we want to pursue this, we need to invest more manpower and resources.

But GKR's advantage is that it is very efficient at handling repeated computation. For example, if a signature check is computed 1,000 times, GKR can generate proofs for that very efficiently; the ZK bridge Polyhedra uses GKR to prove signatures. The EVM also has many repeated computation steps, so GKR can better reduce the cost of generating ZK proofs.

There is another benefit: GKR genuinely requires much less commitment work than other systems. For example, if you use a method like PLONK or STARK to prove the computation of a Keccak hash, you need to commit to and compute over all the intermediate variables produced during the entire Keccak computation.

With GKR, you only need to commit to the short input layer; the relations between layers express all the intermediate values, so you do not have to commit to them, which greatly reduces computation cost. The sum-check protocol behind GKR is also adopted by Jolt, by various lookup arguments, and by other popular new frameworks, so this direction has great potential and we are studying it seriously.
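The sum-check protocol mentioned here is compact enough to sketch. Below is a toy Python version for a multilinear polynomial given by its evaluations on the boolean hypercube; in a real protocol the verifier's final evaluation of f would go through a polynomial commitment rather than the prover's own folded table, since this sketch plays both roles:

```python
import random

P = 2**61 - 1  # toy prime field (a Mersenne prime)

def sumcheck(evals):
    """Run sum-check for a multilinear polynomial f given by its 2^n
    evaluations on the boolean hypercube. The prover convinces the
    verifier that the claimed sum equals sum(evals) while the verifier
    does only O(1) field work per round plus one final evaluation of f."""
    claim = sum(evals) % P
    table = [e % P for e in evals]
    transcript = []
    while len(table) > 1:
        half = len(table) // 2
        low, high = table[:half], table[half:]
        g0, g1 = sum(low) % P, sum(high) % P    # prover sends g_i(0), g_i(1)
        assert (g0 + g1) % P == claim           # verifier's round check
        r = random.randrange(P)                 # verifier's random challenge
        claim = (g0 + r * ((g1 - g0) % P)) % P  # new claim is g_i(r)
        table = [(lo + r * ((hi - lo) % P)) % P for lo, hi in zip(low, high)]
        transcript.append((g0, g1, r))
    assert claim == table[0]  # final check: claim equals f(r_1, ..., r_n)
    return transcript

# f on {0,1}^2 with f(00)=3, f(01)=1, f(10)=4, f(11)=1; claimed sum is 9.
print(len(sumcheck([3, 1, 4, 1])))  # 2: one round per variable
```

The key property is exactly what the interview highlights: the prover never commits to the intermediate folding tables, only the round polynomials cross the wire.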

Finally, there is Nova, which you mentioned. Nova was very popular a few months ago because it is also well suited to repeated computation. For example, if you originally want to prove 100 tasks, the cost is 100 units. Nova's approach is to fold these 100 instances one by one, taking a random linear combination of each pair, until they combine into one final instance to be proved. Proving that final instance then establishes that all 100 original tasks were valid. In this way, the original proof cost of 100 can be compressed dramatically.
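The pairwise random-linear-combination step can be illustrated on toy linear constraints. Real Nova folds relaxed R1CS instances and must track a cross-term for the quadratic part; this assumed sketch uses purely linear claims, where no cross-term is needed:

```python
import random

P = 2**61 - 1  # toy prime field

def matvec(A, z):
    """Compute A*z over the field."""
    return [sum(a * x for a, x in zip(row, z)) % P for row in A]

def fold(inst1, inst2, r):
    """Fold two claims 'A z1 = t1' and 'A z2 = t2' into the single claim
    'A (z1 + r z2) = t1 + r t2'. If either original claim is false, the
    folded claim is false except with negligible probability over r."""
    (z1, t1), (z2, t2) = inst1, inst2
    z = [(a + r * b) % P for a, b in zip(z1, z2)]
    t = [(a + r * b) % P for a, b in zip(t1, t2)]
    return z, t

A = [[1, 2], [3, 4]]
# Five honest claims, folded pairwise into one accumulator; one final
# check then stands in for checking all five.
claims = [([i, i + 1], matvec(A, [i, i + 1])) for i in range(5)]
acc = claims[0]
for inst in claims[1:]:
    acc = fold(acc, inst, random.randrange(P))
z, t = acc
print(matvec(A, z) == t)  # True
```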

Subsequent work such as HyperNova extends Nova to constraint systems other than R1CS and to other circuit formats, and adds support for lookups and other features, so that a VM can be better proved by folding schemes like Nova. But at present, the productization of Nova and GKR is not mature enough, and there is no good, efficient library for Nova on the market.

And because Nova is a folding scheme, its design philosophy differs from other proof systems. I think it is still immature and is just a potential alternative. From a production-readiness perspective, the most common ZK proof systems simply reached the market earlier, but it is hard to say which will be best in the long run.

15.Faust: Finally, I want to talk about the issue of values. I remember that you said last year that after a trip to Africa, you felt that blockchain is most likely to be widely adopted in economically backward places like Africa. Can you talk about your thoughts in this regard?

Ye Zhang: I have always had a very strong belief that blockchain has real application space in economically underdeveloped countries. You can see that there are many scams in the blockchain industry, which has shaken many people's confidence in it: why work in an industry with no actual value? Is blockchain really useful? Beyond speculation or gambling, are there more practical application scenarios?

I think you can feel the potential of blockchain more if you go to some economically underdeveloped places, especially Africa, because people living in China or Western countries have very sound monetary and financial systems. For example, Chinese people are happy to use WeChat and Alipay, and the RMB is relatively stable, so you don't need blockchain to make payments.

But in places like Africa, they really have demand for blockchain and on-chain stablecoins, because their currency inflation is genuinely severe. For example, the inflation rate in some African countries can reach 20% every six months. In that case, grocery prices rise 20% every half year, and over a full year may rise even more. Their currency keeps depreciating severely, and so do their assets, so many people want to hold US dollars, stablecoins, or the stable currencies of other developed countries.

However, it is difficult for Africans to open bank accounts in developed countries, so stablecoins are a hard necessity for them. Even without blockchain, they would still want to hold things like US dollars, and for Africans, the most practical way to hold US dollars is to hold stablecoins. Every time they get their salary, they can go to Binance to quickly exchange it for USDT or USDC, and withdraw it when needed. This creates real demand for blockchain and practical applications.

In fact, after you go to Africa, you will clearly feel that Binance has been operating there for a long time and has done a very solid job. Many Africans genuinely rely on stablecoins; people would rather trust exchanges than their own monetary system, because many Africans may not be able to get loans locally. Suppose you want to borrow 100 yuan: the bank will impose various procedures and conditions, and in the end you may still not get the loan, whereas exchanges or other on-chain platforms are much more flexible. So I think in Africa, people have more practical applications and needs for blockchain.

Of course, most people don't know what I'm talking about, because most people who use Twitter don't care about this, or won't see it on Twitter. There are many underdeveloped regions like Africa, and also countries where Binance has a large user base, such as Turkey, some Southeast Asian countries, and Argentina. You will find that people in these regions use exchanges very frequently. So I think Binance's case has proved that people in these places have a very strong demand for blockchain.

So I think it is really necessary to develop markets and communities in these regions. We have a dedicated team in Turkey, where we have a very large community, and we will gradually expand to the regions mentioned above, such as Africa, Southeast Asia, and Argentina. And I think that among all Layer2s, Scroll is the most likely to successfully take root in these countries, because our team culture is quite diverse. Although the three founders are Chinese, our team includes people from at least 20 to 30 countries. We have only 70 to 80 people in total, but there are basically at least two or three people from each region, so the overall culture is very diverse. By comparison, the other Layer2s you can think of, such as OP, Base, and Arbitrum, are basically dominated by Westerners and completely Westernized.

In summary, we hope to build infrastructure with real application scenarios for people in places like Africa, where the economy is underdeveloped and there is genuine demand for blockchain, a bit like "encircling the cities from the countryside", slowly building toward mass adoption. My trip to Africa touched me deeply, but for now Scroll is still a bit expensive for some of these users, so I hope to further reduce costs, by ten times or more, and then bring users onto the blockchain through other means as well.

In fact, there is another example that I haven't mentioned before, which may be a bit inappropriate: Tron. People may have a poor impression of it, but it is true that many people in economically underdeveloped countries are using it, because HTX's earlier exchange strategy and various other marketing strategies gradually gave Tron a real network effect of its own. If a chain in the Ethereum ecosystem could bring these users into the Ethereum ecosystem, that would be a very big achievement and something very positive for this industry. I think that would be very meaningful.

Many Ethereum Layer2s are now fiercely competing over TVL figures: yours is 600 million US dollars, ours is 700 million, theirs is 1 billion. But compared to these numbers, the more striking news would be Tether suddenly announcing that it has issued another 1 billion USDT on some Layer2 or some chain. If a chain grows organically to the point where it no longer needs to capture user demand through airdrop expectations, that is a relatively successful state, at least one I would be satisfied with: real user demand grows to the point that more and more people actually use your chain in daily life.

Finally, I want to add a side note: there will be many activities in the Scroll ecosystem in the future. I hope everyone will follow our progress and participate more in our DeFi ecosystem.