Author: Hill
Uniswap v4 was recently announced. Although it is not yet fully functional, we hope the community will use it to explore unprecedented possibilities. Since there will be no shortage of articles covering Uniswap v4's enormous impact on DeFi, this article instead explores how Uniswap v4 inspires a new type of blockchain infrastructure: the coprocessor.
Introduction to Uniswap v4
As stated in its white paper, Uniswap v4 has 4 major improvements:
Hooks: Hooks are externally deployed contracts that execute developer-defined logic at specified points during pool execution. These hooks enable integrators to create concentrated liquidity pools with flexible and customizable execution.
Singleton: Uniswap v4 adopts a singleton design pattern, in which all pools are managed by a single contract, reducing pool deployment costs by 99%.
Flash Accounting: Each operation updates an internal net balance, also known as a delta, with external transfers only occurring at the end of the lock. Flash Accounting simplifies complex pool operations such as atomic swaps and additions (a minimal sketch follows below).
Native ETH: Supports native ETH in trading pairs, removing the need to wrap ETH into WETH.
Most of the gas savings come from the last three improvements, but the most exciting new feature is undoubtedly the one this article opened with: hooks.
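To make flash accounting concrete, here is a minimal TypeScript sketch of the idea, with all names invented for illustration (Uniswap v4 itself implements this in Solidity): operations inside a lock only accumulate signed per-token deltas, and tokens actually move just once, at settlement.

```typescript
// Minimal flash-accounting sketch: operations accumulate signed deltas
// per token, and only the net amounts are transferred when the lock ends.
type Token = string;

class FlashAccounting {
  private deltas = new Map<Token, bigint>(); // positive = pool is owed / receives

  // Record an operation's effect without moving any tokens.
  addDelta(token: Token, amount: bigint): void {
    this.deltas.set(token, (this.deltas.get(token) ?? 0n) + amount);
  }

  // At the end of the lock, settle only the net balance of each token.
  settle(transfer: (token: Token, net: bigint) => void): void {
    for (const [token, net] of this.deltas) {
      if (net !== 0n) transfer(token, net);
    }
    this.deltas.clear();
  }
}

// A swap followed by a liquidity add inside one lock: the intermediate
// USDC movements cancel out, so only the net ETH transfer happens.
const lock = new FlashAccounting();
lock.addDelta("ETH", 1_000000000000000000n); // pool receives 1 ETH
lock.addDelta("USDC", -1800_000000n);        // pool pays out 1800 USDC
lock.addDelta("USDC", 1800_000000n);         // 1800 USDC added back as liquidity
lock.settle((token, net) => console.log(`net transfer ${token}: ${net}`));
```

Because the intermediate movements net out, a multi-step operation settles with far fewer external transfers than executing each step naively.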
Hooks make liquidity pools more complex and powerful
Uniswap v4's major enhancements revolve around the programmability unlocked by hooks. This feature allows liquidity pools to be more complex and powerful, making them more flexible and customizable than ever before. Compared to Uniswap v3's concentrated liquidity (a net upgrade over Uniswap v2), Uniswap v4's hooks open up a far wider range of possibilities for how liquidity pools can operate.
This release could be considered a net upgrade over Uniswap v3, but that is not guaranteed in practice. A Uniswap v3 pool was always an upgrade over a Uniswap v2 pool, because the "worst" thing an LP could do in Uniswap v3 was concentrate liquidity across the entire price range, which behaves exactly like Uniswap v2. In Uniswap v4, however, the programmability of a liquidity pool may not translate into a good trading or liquidity-provision experience: bugs may occur, and new attack vectors will emerge. Because hooks change so much about how liquidity pools operate, developers looking to take advantage of them must proceed with caution. They need to thoroughly understand how their design choices affect pool functionality and what risks they pose to liquidity providers.
The introduction of hooks in Uniswap v4 represents a significant shift in the way code is executed on the blockchain. Traditionally, blockchain code runs in a predetermined, sequential manner. Hooks allow a more flexible execution order, guaranteeing that certain code runs before or after other code. In effect, this pushes complex computation to the edges of the execution flow rather than cramming it all into a single call stack.
Essentially, hooks enable more complex calculations to be performed outside of the native Uniswap contract. In Uniswap v2 and v3, similar effects could only be achieved through manual calculations outside of Uniswap, triggered externally by other smart contracts; Uniswap v4 integrates hooks directly into the liquidity pool's smart contracts. This integration makes the process more transparent, verifiable, and trustless than the previous manual approach, as the sketch below illustrates.
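As a mental model, here is a minimal TypeScript sketch (not Uniswap's actual Solidity interface; all names are illustrative) of a pool that calls developer-supplied callbacks at fixed points in its own execution:

```typescript
// A pool invokes optional, developer-defined callbacks at specified
// points in its execution -- the essence of a hook.
interface SwapParams { zeroForOne: boolean; amountIn: bigint; }

interface Hooks {
  beforeSwap?(p: SwapParams): void;                    // e.g. enforce a custom condition
  afterSwap?(p: SwapParams, amountOut: bigint): void;  // e.g. update an embedded oracle
}

class Pool {
  constructor(private hooks: Hooks = {}) {}

  swap(p: SwapParams): bigint {
    this.hooks.beforeSwap?.(p);            // custom logic runs first
    const amountOut = this.executeSwap(p); // core AMM math stays unchanged
    this.hooks.afterSwap?.(p, amountOut);  // custom logic runs last
    return amountOut;
  }

  private executeSwap(p: SwapParams): bigint {
    return p.amountIn; // placeholder for the constant-product math
  }
}

// A pool whose hook logs every trade -- the pool code itself is untouched.
const pool = new Pool({ afterSwap: (p, out) => console.log(`out=${out}`) });
pool.swap({ zeroForOne: true, amountIn: 100n });
```

The core AMM logic never changes; all customization lives in the callbacks, which is what makes pools extensible without redeploying the pool itself.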
Another benefit that hooks bring is scalability. Uniswap no longer needs to rely on new smart contracts (requiring liquidity migration) or forks to deploy innovations. Hooks can now directly implement new features, giving old liquidity pools a new look.
Uniswap v4's liquidity pools today are the future of other dApps
I expect more and more dApps will push computation outside of their smart contracts like Uniswap v4.
The way Uniswap v4 works today is that it allows the liquidity pool's execution to be split at any step, arbitrary conditions to be inserted, and calculations outside the Uniswap v4 contract to be triggered. Until now, the only comparable case was flash loans, where execution reverts if the loan is not repaid within the same transaction; but there, the calculation still happens inside the flash loan contract.
The design of Uniswap v4 brings many advantages that were not possible or implemented poorly in Uniswap v3. For example, embedded oracles can now be used, reducing reliance on external oracles that often introduce potential attack vectors. This embedded design enhances the security and reliability of price information, which is a key factor in the operation of DeFi protocols.
Additionally, automation that previously had to be triggered externally can now be embedded directly into the liquidity pool. This integration not only alleviates security concerns, but also solves reliability issues associated with external triggers. It also allows the liquidity pool to run more smoothly and efficiently, enhancing its overall performance and user experience.
Finally, with the hooks introduced in Uniswap v4, more diverse security features can be implemented directly in the liquidity pool. In the past, the security measures adopted by liquidity pools were mainly audits, bug bounties, and purchasing insurance. With Uniswap v4, developers can now design and implement various fail-safe mechanisms and low liquidity warnings directly in the pool's smart contract. This development not only enhances the security of the pool, but also provides more transparency and control for liquidity providers.
The advantage of smartphones over traditional phones is programmability. Smart contracts have long lived in the shadow of being mere "persistent scripts". Now, thanks to Uniswap v4, liquidity pool smart contracts have received a programmable upgrade and become "smarter". I see little reason why any dApp would pass up the chance to upgrade from a Nokia to an iPhone. Granted, a Nokia is more reliable than an iPhone, so I can understand some smart contracts preferring the status quo; but I am talking about the future direction of dApps.
dApps want to use their own “hooks”, which creates a scaling problem
Imagine applying this to all other dApps: we could insert conditions to be triggered, and then insert arbitrary computation, in the middle of raw transaction sequences.
This sounds like how MEV works, but MEV is not an open design space for dApp developers. It is more like an unknown hike in a dark forest, at best seeking external MEV protection, and then hoping for the best.
We assume that the flexibility of Uniswap v4 inspires a new generation of dApps (or upgrades from existing dApps) to adopt similar ideas, making their execution sequences more programmable. Since these dApps are typically deployed on only one chain (L1 or L2), we expect most state changes to run on that chain.
The additional computation inserted during a dApp state change may be too complex and cumbersome to run on this chain. We may quickly exceed the gas limit, or it may simply be difficult to implement. In addition, it will bring many challenges, especially in terms of security and composability.
Not all computation is created equal. This is evidenced by dApps’ reliance on external protocols such as oracles and automated networks. However, this reliance can present security risks.
To summarize the problem: consolidating all computation into a single chain of state-changing smart contract executions is far from optimal.
Solution Tip: Already solved in the real world
To solve this problem for the new generation of dApps (likely heavily inspired by Uniswap v4), we have to get to the heart of the matter: the single chain. A blockchain operates like a distributed computer whose single CPU handles every task. On PCs, modern CPUs have made great strides in solving exactly this problem.
Just as computers transitioned from single-core, monolithic CPUs to modular designs combining efficiency cores, performance cores, GPUs, and NPUs, dApp computation can scale in a similar way: by specializing processors and combining their efforts, outsourcing some computation away from the main processor achieves flexibility, optimality, security, scalability, and upgradeability.
Practical Solutions
There are really only two types of coprocessors:
External Coprocessor
Embedded Coprocessor
External Coprocessor
External coprocessors are like cloud GPUs, which are nice and powerful, but there is additional network latency between the CPU and GPU communicating. Also, you don’t ultimately control the GPU, so you have to trust that it is doing its job correctly.
Taking Uniswap v4 as an example, suppose adding some ETH and USDC to a liquidity pool requires the TWAP over the last 5 minutes. If the TWAP calculation is done in Axiom, Uniswap v4 is effectively using Ethereum as the main processor and Axiom as a coprocessor.
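For reference, a 5-minute TWAP is just a time-weighted average of observed prices. Below is a minimal TypeScript sketch of the calculation itself (illustrative only; in this scenario Axiom would perform and prove this computation over historical chain data):

```typescript
// Time-weighted average price over a window: each observed price is
// weighted by how long it remained the current price.
interface Observation { timestamp: number; price: number; } // unix seconds

function twap(obs: Observation[], windowSec = 300): number {
  const end = obs[obs.length - 1].timestamp;
  const start = end - windowSec;
  let weighted = 0;
  let covered = 0;
  for (let i = 0; i < obs.length - 1; i++) {
    const from = Math.max(obs[i].timestamp, start); // clip to the window
    const to = Math.min(obs[i + 1].timestamp, end);
    if (to > from) {
      weighted += obs[i].price * (to - from);
      covered += to - from;
    }
  }
  return weighted / covered;
}

// Within the 5-minute window, 1800 held for 120s and 1810 for 180s.
console.log(twap([
  { timestamp: 0, price: 1790 },   // outside the window, ignored
  { timestamp: 100, price: 1800 },
  { timestamp: 220, price: 1810 },
  { timestamp: 400, price: 1805 }, // most recent observation
])); // => 1806
```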
Axiom
Axiom is Ethereum’s ZK coprocessor that provides smart contracts with trustless access to all on-chain data and the ability to evaluate arbitrary expressions on that data.
Developers can query Axiom and use the on-chain zero-knowledge (ZK) verified results in their smart contracts in a trustless manner. To complete a query, Axiom performs three steps:
Read: Axiom uses zero-knowledge proofs to trustlessly read correct headers, states, transactions, and receipts from any historical Ethereum block. All Ethereum on-chain data is encoded in one of these forms, which means Axiom can access any data accessible to an archive node.
Compute: Once data is ingested, Axiom applies proven computational primitives to it. This includes operations ranging from basic analytics (sum, count, max, min) to cryptography (signature verification, key aggregation) and machine learning (decision trees, linear regression, neural network inference). The validity of each computation is verified in a zero-knowledge proof.
Verify: Axiom attaches a zero-knowledge validity proof to each query result, proving that (1) the input data was correctly fetched from the chain, and (2) the computation was correctly applied. This proof is verified on-chain in the Axiom smart contract, and the final result is then made available to all downstream smart contracts in a trustless manner.
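Put together, a consuming dApp follows a request-prove-verify flow. The TypeScript sketch below is hypothetical pseudocode: none of these names are Axiom's real SDK or contract API; they only illustrate the three-step pipeline described above.

```typescript
// Hypothetical query flow for a ZK coprocessor such as Axiom.
// None of these names are the real Axiom API; they only illustrate
// the Read -> Compute -> Verify pipeline described above.
interface QueryResult {
  value: bigint;     // e.g. a computed aggregate over historical data
  proof: Uint8Array; // ZK proof that data and computation are correct
}

// Read + Compute: ask the prover network to ingest the given blocks and
// run one of the proven primitives over them (fulfilled off-chain).
async function queryCoprocessor(
  blocks: number[],
  computation: "sum" | "max" | "twap",
): Promise<QueryResult> {
  // Placeholder fulfillment; a real prover network would generate this.
  return { value: 1806n, proof: new Uint8Array(32) };
}

// Verify: the proof must pass an on-chain verifier before any
// downstream contract is allowed to consume the value.
function useResultOnChain(r: QueryResult): void {
  // e.g. verifier.verify(r.proof, r.value) would revert on a bad proof
  console.log(`verified result: ${r.value}`);
}

queryCoprocessor([19_000_000, 19_000_025], "twap").then(useResultOnChain);
```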
Warp Contract (via RedStone)
Warp contracts are the most common SmartWeave implementation, an architecture designed to create a reliable, fast, production-ready smart contract platform/engine on Arweave. In essence, SmartWeave is an ordered array of Arweave transactions that benefits from the absence of a Block Inclusion fee market on Arweave. These unique properties allow for unlimited transaction data at no additional cost other than storage costs.
SmartWeave uses a unique method called "lazy evaluation" to transfer the responsibility for executing smart contract code from network nodes to the users of the smart contract. Essentially, this means that calculations for transaction verification are deferred until needed, reducing the workload on network nodes and enabling transactions to be processed more efficiently. With this approach, users can perform as many calculations as they want without incurring additional fees, providing functionality not possible with other smart contract systems.

Obviously, trying to evaluate a contract with thousands of interactions on the user's CPU is ultimately futile. To overcome this challenge, an abstraction layer such as Warp's DRE was developed: a distributed network of validators that handles contract calculations, resulting in significantly faster response times and an improved user experience.
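A minimal TypeScript sketch of lazy evaluation (illustrative, not Warp's actual implementation): the chain stores only an ordered log of interactions, and anyone can recompute the current state by folding that log through the contract's pure transition function.

```typescript
// Lazy evaluation: the chain only stores an ordered log of interactions.
// Whoever needs the state replays the log through the transition function.
interface Interaction {
  caller: string;
  input: { function: string; target?: string; qty?: number };
}
type State = { balances: Record<string, number> };

// The contract is just a pure state-transition function.
function transition(state: State, ix: Interaction): State {
  if (ix.input.function === "transfer" && ix.input.target && ix.input.qty) {
    const b = { ...state.balances };
    if ((b[ix.caller] ?? 0) >= ix.input.qty) {
      b[ix.caller] = (b[ix.caller] ?? 0) - ix.input.qty;
      b[ix.input.target] = (b[ix.input.target] ?? 0) + ix.input.qty;
      return { balances: b };
    }
  }
  return state; // invalid interactions are simply no-ops
}

// Evaluating the contract = folding the interaction log, oldest first.
function evaluate(initial: State, log: Interaction[]): State {
  return log.reduce(transition, initial);
}

const state = evaluate(
  { balances: { alice: 100 } },
  [{ caller: "alice", input: { function: "transfer", target: "bob", qty: 40 } }],
);
console.log(state.balances); // { alice: 60, bob: 40 }
```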
Additionally, SmartWeave’s open design enables developers to write logic in any programming language, providing a fresh alternative to often rigid Solidity codebases. Seamless SmartWeave integration enhances existing social graph protocols built on EVM chains by delegating certain high-cost or high-throughput operations to Warp, leveraging the strengths of both technologies.
Hyper Oracle
Hyper Oracle is a ZK oracle network designed specifically for blockchains. Currently, the ZK oracle network only runs for the Ethereum blockchain. It uses zkPoS to retrieve data from each block of the blockchain as a data source, while processing data using a programmable zkGraph running on zkWASM, all in a trustless and secure manner.
Developers can define custom off-chain computations using JavaScript, deploy those computations to the Hyper Oracle Network, and leverage Hyper Oracle Meta Apps to index and automate their smart contracts.
Hyper Oracle’s indexing and automated Meta Apps are fully customizable and flexible. Any computation can be defined, and all computations (even machine learning computations) will be protected by generated zero-knowledge proofs.
The Ethereum blockchain is the original on-chain data source for ZK oracles, but any network could be used in the future.
The Hyper Oracle ZK oracle node consists of two main components: zkPoS and zkWASM.
- zkPoS obtains the block headers and data roots of the Ethereum blockchain by using zero-knowledge to prove Ethereum's consensus. The zero-knowledge proof generation process can be outsourced to a decentralized network of provers. zkPoS acts as the outer loop of zkWASM.
- zkPoS provides block headers and data roots to zkWASM. zkWASM uses this data as basic input to run zkGraph.
- zkWASM runs custom data mappings or any other computation defined by zkGraph and generates zero-knowledge proofs for those operations. Operators of ZK oracle nodes can choose the number of zkGraphs they wish to run (from one to all deployed zkGraphs). The zero-knowledge proof generation process can be outsourced to a decentralized network of provers.
The output of the ZK oracle is off-chain data, which developers can consume through Hyper Oracle Meta Apps. The data also comes with a zero-knowledge proof attesting to its validity and to the computation that produced it.
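Conceptually, a zkGraph is a deterministic handler mapping verified block data to an output. The TypeScript sketch below uses invented names (not Hyper Oracle's actual zkGraph API): zkPoS would prove the inputs, and zkWASM would prove that this mapping ran correctly.

```typescript
// A zkGraph-style handler: a pure function from verified block data to
// an output. zkPoS proves the inputs; zkWASM proves the mapping itself.
interface BlockInput {
  blockNumber: number;
  header: Uint8Array; // proven by zkPoS
  events: { address: string; topics: string[]; data: bigint }[];
}

interface Output { key: string; value: bigint; }

// Example mapping: track the cumulative volume of one contract's swaps.
function handleBlock(input: BlockInput, prev: bigint): Output {
  const volume = input.events
    .filter(e => e.address === "0xPool" && e.topics[0] === "Swap")
    .reduce((acc, e) => acc + e.data, 0n);
  return { key: `volume@${input.blockNumber}`, value: prev + volume };
}

// Each block's output would ship with a ZK proof of this computation.
const out = handleBlock(
  {
    blockNumber: 19_000_000,
    header: new Uint8Array(),
    events: [
      { address: "0xPool", topics: ["Swap"], data: 1_000n },
      { address: "0xOther", topics: ["Swap"], data: 999n }, // filtered out
    ],
  },
  5_000n,
);
console.log(out); // { key: 'volume@19000000', value: 6000n }
```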
Other projects worth mentioning
There are also a number of projects that can be used as external coprocessors if you decide to go that route, but these overlap with other verticals of blockchain infrastructure and are not categorized as coprocessors separately.
RiscZero: If a dApp uses RiscZero to compute machine learning tasks for an on-chain agent and provides the results to a game contract on StarkNet, it will use StarkNet as the main processor and RiscZero as a coprocessor.
IronMill: If a dApp runs a zk-loop in IronMill but deploys the smart contract on Ethereum, it will use Ethereum as the main processor and IronMill as a coprocessor.
Potential Use Cases for External Coprocessors
Governance and Voting: Historical on-chain data can help decentralized autonomous organizations (DAOs) record the number of voting rights each member has, which is essential for voting. Without this data, members may not be able to participate in the voting process, which may hinder governance.
Underwriting: Historical on-chain data can help asset allocators evaluate fund managers' performance beyond raw profit. They can see the level of risk taken and the types of drawdowns experienced, which helps them make more informed decisions about compensation and potential rewards.
Decentralized exchanges: On-chain historical price data can help decentralized exchanges trade based on past trends and patterns, potentially bringing higher profits to users. In addition, historical trading data can help exchanges improve algorithms and user experience.
Insurance products: Insurance companies can use historical on-chain data to assess risk and set premiums for different types of policies. For example, when setting premiums for DeFi projects, insurance companies may look at past on-chain data.
Note that all of the above use cases are asynchronous: the client dApp calls the external coprocessor's smart contract when triggered in block N, and the coprocessor's result must be accepted or verified, in some form, in a later block (N+1 at the earliest). So the earliest the coprocessing result can be used is the block after the trigger. This model really is like a cloud GPU: it runs your machine learning model just fine, but the latency means you can't happily play fast-paced games on it.
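The pattern can be summarized in a few lines of TypeScript (names illustrative): a request is recorded in block N, and the dApp can only consume the result via a fulfillment that lands in block N+1 or later.

```typescript
// Asynchronous external coprocessing: request in block N, consume in N+1+.
interface Request { id: number; requestedAt: number; }  // block numbers
interface Fulfillment { id: number; result: bigint; fulfilledAt: number; }

const pending = new Map<number, Request>();

function requestComputation(id: number, currentBlock: number): void {
  pending.set(id, { id, requestedAt: currentBlock }); // emitted in block N
}

function fulfill(f: Fulfillment): void {
  const req = pending.get(f.id);
  if (!req) throw new Error("unknown request");
  // The result cannot land in the same block it was requested in.
  if (f.fulfilledAt <= req.requestedAt) throw new Error("too early");
  pending.delete(f.id);
  console.log(`request ${f.id} answered after ${f.fulfilledAt - req.requestedAt} block(s)`);
}

requestComputation(1, 19_000_000);                        // block N
fulfill({ id: 1, result: 42n, fulfilledAt: 19_000_001 }); // block N+1
```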
Embedded Coprocessor
An embedded coprocessor is like a GPU on a PC motherboard, sitting next to the CPU. The GPU communicates with the CPU with very little latency. And the GPU is completely under your control, so you can be very sure it hasn’t been tampered with. It’s just that it costs a lot to run machine learning as fast as a cloud GPU.
Still taking Uniswap v4 as an example: suppose adding some ETH and USDC to a liquidity pool deployed on Artela requires the TWAP over the last 5 minutes. If the pool runs in the EVM on Artela and the TWAP calculation is done in the WASM on Artela, then the pool is basically using Artela's EVM as the main processor and Artela's WASM as the coprocessor.
Artela
Artela is an L1 built using Tendermint BFT. It provides a framework that supports dynamic extension of arbitrary execution layers to implement custom functions on the chain. Each Artela full node runs two virtual machines simultaneously.
EVM, the main processor that stores and updates the state of smart contracts.
WASM, a coprocessor that stores and updates Aspect state.
Aspects represent arbitrary computations that developers want to run without touching the state of smart contracts. Think of it as a Rust script that provides custom functionality to dApps beyond the native composability of smart contracts.
If this is difficult to understand, try looking at it from the following two perspectives:
From the perspective of blockchain architecture
- Aspect is the new execution layer.
- In Artela, the blockchain runs two execution layers simultaneously - one for smart contracts and one for other computations.
- This new execution layer does not introduce new trust assumptions and therefore does not affect the security of the blockchain itself. Both virtual machines are protected by the same set of nodes running the same consensus.
From the application runtime perspective
- Aspects are programmable modules that work with smart contracts, allowing for the addition of custom functionality and independent execution.
- It has several advantages over single smart contracts:
-- Non-intrusive: No need to modify the smart contract code, you can intervene before or after the contract execution.
-- Synchronous execution: supports hook logic throughout the entire transaction lifecycle, allowing fine-grained customization.
-- Direct access to global state and base layer configuration, supporting system-level functionality.
-- Elastic Blockspace: Provides independent blockspace with protocol guarantees for dApps with higher transaction throughput requirements.
-- Compared with static precompilation, it supports dynamic and modular upgrades of dApp at runtime to balance stability and flexibility.
By introducing this embedded coprocessor, Artela has achieved an exciting breakthrough: arbitrary extension modules (Aspects) can now be executed within the same transaction as smart contracts. Developers can bind their smart contracts to Aspects and have all transactions calling the smart contract handled by those Aspects.
In addition, like smart contracts, Aspects store data on-chain, allowing smart contracts and Aspects to read each other's global state.
These two features greatly improve the composability and interoperability between smart contracts and Aspects.
Aspect function:
Compared to smart contracts, the functionality provided by Aspects focuses primarily on pre- and post-transaction execution. Aspects do not replace smart contracts but complement them, offering applications the following unique capabilities (see the sketch after this list):
- Automatically insert reliable transactions into upcoming blocks (e.g. for scheduled tasks).
- Revert state changes caused by transactions (only transactions of contracts that authorized the Aspect can be reverted).
- Read static environment variables.
- Pass temporary execution state to other Aspects downstream.
- Reads temporary execution status passed from the upstream Aspect.
- Dynamic and modular upgradability.
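As a rough illustration, here is a TypeScript-flavored sketch of an Aspect bound to a contract; the interface is invented for this article, and Artela's actual Aspect SDK may differ:

```typescript
// Sketch of an Aspect: join-point handlers that run around a bound
// contract's transactions, without modifying the contract itself.
// (Illustrative only; Artela's real Aspect SDK may differ.)
interface TxContext {
  from: string;
  to: string;                    // the bound smart contract
  blockNumber: number;
  revert(reason: string): never; // abort the whole transaction
}

interface Aspect {
  preTxExecute?(ctx: TxContext): void;  // runs before the contract
  postTxExecute?(ctx: TxContext): void; // runs after the contract
}

// An Aspect that rate-limits a contract to one call per sender per block.
const lastCall = new Map<string, number>();
const rateLimitAspect: Aspect = {
  preTxExecute(ctx) {
    if (lastCall.get(ctx.from) === ctx.blockNumber) {
      ctx.revert("rate limit: one call per block");
    }
    lastCall.set(ctx.from, ctx.blockNumber);
  },
};
// The chain would invoke rateLimitAspect.preTxExecute(ctx) before every
// transaction that calls the bound contract.
```

The bound contract's code is never modified; the Aspect intervenes around its execution, which is exactly the "non-intrusive" property listed earlier.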
The differences between an Aspect and a smart contract are:
- Smart contracts are accounts with code, while Aspect is a native extension of the blockchain.
- Aspects can run at different points in the transaction and block lifecycle, while smart contracts only execute at fixed points.
- Smart contracts have access to their own state and the bounded context of the block, while Aspects can interact with the global processing context and system-level APIs.
- The Aspect execution environment is designed for near native speed.
Aspect is just a piece of code logic and has nothing to do with accounts, so it cannot:
- Write, modify or delete contract state data.
- Create a new contract.
- Transfer, destroy or hold native tokens.
These Aspects make Artela a unique platform that extends the functionality of smart contracts and provides a more comprehensive and customizable development environment.
*Please note that strictly speaking, the above Aspect is also called a "built-in" Aspect, which is an embedded coprocessor run by the Artela Chain full node. dApps can also deploy their own heterogeneous Aspects, run by external coprocessors. These external coprocessors can be executed on an external network or by a subset of nodes in another consensus. It is more flexible because dApp developers can actually use it to perform any operation they want, as long as such operation is safe and reasonable. It is still under exploration and the specific details have not yet been announced.
Potential Use Cases for Embedded Coprocessors
The complex calculations involved in new DeFi projects (such as complex game theory mechanisms) may require embedded coprocessors with higher flexibility and iterative real-time computing capabilities.
More flexible access control mechanisms for all types of dApps. Currently, access control is usually limited to blacklists or whitelists based on smart contract permissions. Embedded coprocessors can unlock instant and granular levels of access control.
Certain complex functions in fully on-chain games (FOCG). FOCG has long been limited by the EVM. Things might be simpler if the EVM were reserved for simpler functions such as transferring NFTs and tokens, while other logic and state updates were calculated by coprocessors.
Safety mechanisms. dApps can introduce their own active safety monitoring and fail-safe mechanisms. For example, a liquidity pool could block withdrawals that exceed 5% of liquidity within any 10-minute window. If the coprocessor detects such a withdrawal, the smart contract can halt it and trigger alarm mechanisms, such as injecting emergency liquidity within a certain dynamic price range (see the sketch below).
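A minimal TypeScript sketch of such a guard, with all thresholds and names chosen purely for illustration:

```typescript
// Fail-safe sketch: block withdrawals once more than 5% of liquidity
// leaves within a 10-minute window, and trip an alarm when it happens.
const MAX_WITHDRAWAL_BPS = 500; // 5% in basis points (illustrative)
const WINDOW_SEC = 600;         // 10-minute window (illustrative)

interface PoolState {
  liquidity: number;
  windowWithdrawn: number;
  windowStart: number; // unix seconds
}

function checkWithdrawal(pool: PoolState, amount: number, now: number): boolean {
  if (now - pool.windowStart >= WINDOW_SEC) { // roll over to a new window
    pool.windowStart = now;
    pool.windowWithdrawn = 0;
  }
  const projected = pool.windowWithdrawn + amount;
  if (projected * 10_000 > pool.liquidity * MAX_WITHDRAWAL_BPS) {
    triggerAlarm(`blocked: ${projected} exceeds 5% of ${pool.liquidity}`);
    return false; // the coprocessor halts the withdrawal
  }
  pool.windowWithdrawn = projected;
  pool.liquidity -= amount;
  return true;
}

function triggerAlarm(msg: string): void {
  // e.g. pause the pool or inject emergency liquidity in a price range
  console.log(`ALARM: ${msg}`);
}

const pool = { liquidity: 1_000_000, windowWithdrawn: 0, windowStart: 0 };
console.log(checkWithdrawal(pool, 40_000, 100)); // true  (4% of the pool)
console.log(checkWithdrawal(pool, 20_000, 200)); // false (cumulative > 5%)
```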
Conclusion
It is inevitable that dApps will become large, bloated, and overly complex; the proliferation of coprocessors is just as inevitable. It's only a matter of time and the adoption curve.
Running an external coprocessor lets a dApp stay in its comfort zone on whichever chain it already lives. For new dApps looking for an execution environment to deploy on, however, an embedded coprocessor is like the GPU in a PC: if you call yourself a high-performance PC, you'd better have a decent GPU.
Unfortunately, none of the above projects are live on mainnet yet, so we cannot really benchmark them or show which one is best for which use case. One thing is undeniable, though: technology moves in an upward spiral. It may look like we are going in circles, but viewed from the side, history will show that technology really is advancing.
Long live the impossible triangle of scalability, and long live the coprocessor.