About the Author
Advisor @Moledao @Web3Geeks, prev Tech Lead @Bybit
Twitter: @0xkookoo, DM Open
Telegram: @web3kookoo
IC Research DAO: @neutronstardao
Feedback and discussion are welcome
TL;DR
Note: This article represents the author's personal views at this stage. Some thoughts may contain factual errors and biases; it is for discussion purposes only, and I look forward to corrections from other colleagues.
BTC introduced electronic cash, opening up the blockchain industry from 0 to 1
ETH introduced smart contracts, leading the blockchain industry from 1 to 100
ICP introduced Chain-key technology, driving the blockchain industry from 100 to 100,000,000
Introduction
On January 3, 2009, the first block of BTC was mined, marking the beginning of 14 years of blockchain development.
Looking back at those 14 years: the ingenuity and greatness of BTC, the emergence of Ethereum, the passionate crowdfunding of EOS, the fateful battle between PoS and PoW, the interconnection of Polkadot's myriad chains. Each stunning technology and each compelling story has left countless insiders in awe!
As of now, in 2023, what is the overall landscape of the blockchain? Here are my thoughts, as detailed in the public chain landscape analysis section of this article.
BTC stands firm as a giant in the industry, owing to its introduction of the legitimacy of electronic cash.
ETH stands as the leader in the industry, thanks to its introduction of the programmability of smart contracts and the composability of the Layer 2 ecosystem.
Cosmos, Polkadot, etc. attempt to unify the world through cross-chain interoperability.
Various Ethereum killers emerge endlessly, each excelling in small domains.
But how will the entire blockchain industry develop in the next 10 years? Here are my thoughts.
Sovereignty is the only issue that blockchain needs to solve, including asset sovereignty, data sovereignty, freedom of speech sovereignty, etc.; otherwise, there is no need for blockchain;
Immutability is a sufficient condition, but not a necessary condition. As long as you can ensure that my sovereignty is not harmed, I don’t care if you tamper with it. If everyone's assets in the world were tampered with and doubled proportionally, what difference would it make?
Complete decentralization is impossible. No matter how it is designed, there will always be 'gifted' individuals or vested interests occupying greater discourse power, and there will always be people who choose not to participate. [Decentralized multi-centralization] is the ultimate pattern;
Transparency is a must. This social experiment for all humanity is precisely about giving everyone a voice and the right to protect their sovereignty, right? There will always be people who are lazy, who prefer to trust more professional people, or who choose to skip voting for maximum efficiency, but that is also their choice: they have the right and choose not to exercise it. As long as everything is transparent and there is no manipulation, I am willing to accept the outcome; if I lose, it means I am not as capable as others. Survival of the fittest is also in line with a market economy;
Decentralized control over code execution is the core; otherwise it is all empty theater: voting can be public for a week, yet in the end the project party deploys a malicious version of the code anyway, and even if it is not malicious, it still makes a mockery of everyone. It is fair to say that half of today's world is built of code; if decentralized entities do not control code execution, how can people, governments included, dare to let the blockchain industry grow?
Linear cost of unlimited scalability. As blockchain becomes increasingly intertwined with real life, with more and more participants and greater demand, infrastructure that either cannot support unlimited scaling or can scale only at prohibitive cost is unacceptable.
Why ICP
Here, I will introduce a story. In 2009, Alibaba proposed the 'go IOE' strategy, which was a significant milestone that later contributed to Alibaba's 'Double Eleven'.
Go IOE
The core of the 'go IOE' strategy was to eliminate IBM minicomputers, Oracle databases, and EMC storage devices, embedding the essence of 'cloud computing' into Alibaba's IT DNA, where:
I refers to IBM p series minicomputers, and the operating system is AIX (IBM's proprietary Unix system);
O refers to Oracle database (RDBMS);
E refers to EMC mid-range SAN storage.
The reasons for going IOE mainly include the following three points, with the first being the essential reason, while the latter two are more indirect:
Unable to meet demand: traditional IOE systems struggled to adapt to the high-concurrency needs of internet companies and could not support large-scale distributed computing architectures;
Costs were too high: maintaining IOE was extremely expensive; for example, an IBM minicomputer cost around 500,000 RMB, and Oracle's annual maintenance alone could reach hundreds of thousands;
Dependency was too strong: IOE systems left Alibaba locked in, 'held hostage' by vendors like IBM and Oracle, making it difficult to configure flexibly according to its own needs.
So why was the 'go IOE' strategy proposed in 2009 rather than earlier?
Prior to this,
Alibaba's business scale and data volume had not reached a level that traditional IOE systems could no longer handle, so the need to go IOE was not urgent;
Domestic database products were not yet mature enough in technology and quality to effectively replace IOE;
The internet mindset and cloud computing concepts had not yet become popular in China, and distributed architecture was not yet a popular direction;
Management and technical personnel needed a period of hands-on accumulation to recognize the existing problems and the necessary measures.
In 2009,
Alibaba's business was expanding rapidly; the IOE systems struggled to support that scale, and the cost issues became more apparent;
Open-source database products like MySQL had reached a high level of maturity and could serve as alternatives;
The internet mindset and cloud computing were beginning to spread widely and be applied in China, making the 'go IOE' concept easier to promote;
Wang Jian, a former Microsoft technology heavyweight, had joined Alibaba in 2008 with a global technical perspective; deeply trusted by Jack Ma, he proposed the 'go IOE' strategy.
However, the 'go IOE' strategy was not simply about swapping old software and hardware for new; it was about replacing old methods with new ones, fundamentally changing the IT infrastructure through cloud computing. In other words, the change was driven by a shift in the industry, not a mere technological upgrade.
Three major stages of enterprise development
The development of an enterprise can be divided into three stages,
Shape genes, organize culture, Start-up, from 0 to 1
Rapid growth, small steps, Scale-up, from 1 to 100
Unlimited expansion, broadening boundaries, Scale-out, from 100 to 100,000,000
Next, let's analyze the entire blockchain industry as an enterprise.
Start-up / Blockchain 1.0 / BTC
The innovation of Bitcoin lies in its solution to a problem that has plagued computer scientists for decades, namely, how to create a digital payment system that can operate without trusting any central authority.
However, BTC does have some limitations in its design and development, and these limitations provide market opportunities for subsequent blockchain projects such as Ethereum (ETH). Here are some major limitations:
Transaction throughput and speed: The block generation time of Bitcoin is about 10 minutes, and the size limit of each block leads to an upper limit on its transaction processing capacity. This means that during busy network times, transaction confirmations may take a long time, and higher transaction fees may need to be paid.
Smart contract functionality is limited: Bitcoin's design primarily serves as a digital currency, and the types of transactions and scripting language functions it supports are relatively limited. This limits Bitcoin's application in complex financial transactions and decentralized applications (DApps).
Difficulties in upgrading and improving: Due to Bitcoin's decentralization and conservative design principles, significant upgrades and improvements usually require broad consensus from the community, which is often difficult to achieve in practice, leading to relatively slow progress for Bitcoin.
Energy consumption issue: Bitcoin's consensus mechanism is based on proof of work (PoW), which means that a large amount of computing resources is used for competition among miners, resulting in high energy consumption. This has faced criticism in terms of environmental protection and sustainability. On this point, everyone can also pay attention to EcoPoW, which alleviates this limitation somewhat.
Scale-up / Blockchain 2.0 / ETH
The current Layer 2 scaling of Ethereum can be seen as a kind of 'vertical scaling' that relies on Layer 1 for security and data availability guarantees. Although it looks like a two-layer structure, it will always be capped by the processing power of Layer 1. Building more layers, Layer 3 or Layer 4, merely adds complexity to the whole system and postpones the problem. Worse, by diminishing marginal returns, each additional layer adds overhead that significantly reduces the scaling effect. This vertical layering can be seen as upgrading the hardware of a single machine, except that the 'single machine' here is the entire ETH ecosystem.
As usage increases, users' demands for low costs and high performance will also increase. Layer 2, as an application on Layer 1, can only reduce costs to a certain extent, ultimately still constrained by the basic costs and throughput of Layer 1. This is similar to the demand curve theory in economics - as prices decrease, total demand increases. Vertical scaling struggles to fundamentally solve scalability issues.
Ethereum is a towering giant tree, and everyone relies on its roots. Once the roots cannot absorb nutrients quickly enough, people's demands will not be met;
Therefore, only horizontal scaling can reach toward infinity with ease.
Some believe that multi-chain cross-chain can also be seen as a form of horizontal scaling.
Take Polkadot as an example: it is a heterogeneous empire, where every kingdom looks different, but every time something new is done, an entire kingdom has to be built;
Cosmos is a homogeneous empire, where every kingdom shares the same skeleton, but likewise, every time something new is done, an entire kingdom has to be established;
However, from the Infra perspective, the models of the two above seem a bit strange. Does every application require building an entire kingdom? Let's look at an example to see how strange it is.
Three months ago, I bought a Mac and developed a Gmail app on it;
Now I want to develop a YouTube app, but I have to buy another Mac for development, which is strange.
Moreover, both methods face the problem of high complexity in cross-chain communication when adding new chains, so they are not my first choice.
Scale-out / Blockchain 3.0 / ICP
To scale-out, a complete set of underlying infrastructure is needed to support rapid horizontal expansion without reinventing the wheel.
A typical example of supporting scale-out is cloud computing: the underlying templates [VPC + subnet + network ACL + security group] are identical everywhere, all machines come in standard numbers and types, and upper-layer core components like RDS and MQ support unlimited expansion. If more resources are needed, a click of a button spins them up quickly.
Previously, a tech lead shared this with me: if you want to understand what basic infrastructure and components internet companies need, just look at all the services AWS provides; it is the most comprehensive and powerful combination.
Similarly, let's take a high-level look at ICP and see why it meets the requirements for Scale-out.
Here, I will first explain a few concepts:
Dfinity Foundation: A non-profit organization dedicated to promoting the development and application of decentralized computer technology. It is the developer and maintainer of the Internet Computer protocol, aiming to achieve comprehensive development of decentralized applications through innovative technology and an open ecosystem.
Internet Computer (IC): A high-speed blockchain network developed by Dfinity Foundation, specifically designed for decentralized applications. It uses a new consensus algorithm to achieve high throughput and low latency transaction processing, while supporting the development and deployment of smart contracts and decentralized applications.
Internet Computer Protocol (ICP): the native token of the Internet Computer, a digital currency used to pay network usage fees and to reward nodes.
What’s ICP
Much of what follows may be a bit hardcore, but I have described it in plain language and hope everyone can keep up. If there are details you would like to discuss with me, you can find my contact information at the top of the article.
Architecture Overview
Viewed as a layered structure, from bottom to top:
P2P layer: collects and sends messages from users, from other replicas in the subnet, and from other subnets, ensuring that messages reach all nodes in the subnet for security, reliability, and resilience.
Consensus layer: its main task is to order inputs so that all nodes within the same subnet process tasks in the same order. To achieve this, the consensus layer uses a new consensus protocol designed to guarantee security and liveness and to resist DoS/spam attacks. Once consensus is reached on the order of messages within a subnet, the blocks are passed on to the message routing layer.
Message routing layer: prepares an input queue for each Canister based on the tasks passed down from the consensus layer. After execution, it also receives the outputs generated by Canisters and forwards them as needed to Canisters on the same or other subnets. In addition, it records and verifies responses to user requests.
Execution layer: provides the runtime environment for Canisters, reads inputs in order according to the scheduling mechanism, calls the corresponding Canister to complete the task, and returns the updated state and generated outputs to the message routing layer. It uses randomness to ensure the fairness and auditability of computation, because in some cases a Canister's behavior needs to be unpredictable: cryptographic operations need random numbers for stronger security, and unpredictable execution results prevent attackers from discovering vulnerabilities or anticipating a Canister's behavior by analyzing its outputs.
4-layers of ICP
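To make the consensus-then-execution pipeline above concrete, here is a minimal sketch of the replicated-state-machine idea in plain Python (all names are hypothetical): as long as every replica applies the same deterministic transition function to the same agreed-upon order of inputs, every replica ends in exactly the same state.

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """A toy replica: deterministic state transitions over ordered inputs."""
    state: dict = field(default_factory=dict)

    def apply(self, msg: dict) -> None:
        # Deterministic transition: same inputs in the same order => same state.
        if msg["op"] == "set":
            self.state[msg["key"]] = msg["value"]
        elif msg["op"] == "inc":
            self.state[msg["key"]] = self.state.get(msg["key"], 0) + msg["by"]

# The consensus layer's job, reduced to its essence: agree on ONE input order.
ordered_inputs = [
    {"op": "set", "key": "x", "value": 1},
    {"op": "inc", "key": "x", "by": 41},
]

replicas = [Replica() for _ in range(4)]
for r in replicas:                  # every replica processes the same inputs...
    for msg in ordered_inputs:      # ...in the same agreed order
        r.apply(msg)

assert all(r.state == {"x": 42} for r in replicas)  # identical final states
```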
Key Components
From the component perspective:
Subnet: supports unlimited scaling; each subnet is a small blockchain in its own right. Subnets communicate through Chain-key technology: since consensus has already been reached within each subnet, only Chain-key verification is needed.
Replica: Each subnet can have multiple nodes, with each node being a replica. The consensus mechanism of IC ensures that every replica in the same subnet processes the same input in the same order, ensuring that the final state of each replica is the same. This mechanism is known as Replicated State Machine.
Canister: a Canister is a smart contract, a computing unit running on the ICP network that can store data and code and communicate with other Canisters or external users. ICP provides a runtime environment for executing Wasm programs inside Canisters, with communication to other Canisters and external users handled via message passing. Think of it simply as a Docker container for running code, into which you inject your Wasm code image.
Node: An independent server, Canisters still need a physical machine to run on, and these physical machines are the ones in real data centers.
Data Center: Nodes in the data center are virtualized into replicas through node software IC-OS, and some replicas are randomly selected from multiple data centers to form a subnet. This ensures that even if one data center is hacked or encounters a natural disaster, the entire ICP network still operates normally, similar to an upgraded version of Alibaba's 'two places and three centers' disaster recovery high availability plan. Data centers can be distributed all over the world, and in the future, a data center could even be set up on Mars.
Boundary Nodes: Provide entry and exit points between external networks and IC subnets, verifying responses.
Principal: An identifier for external users, derived from a public key, used for permission control.
Network Nervous System (NNS): An algorithm DAO that uses staked ICP for governance, used to manage IC.
Registry: A database maintained by NNS that contains the mapping relationships between entities (like Replica, Canister, Subnet), somewhat similar to the current DNS operation principles.
Cycles: the native unit representing a CPU quota, used to pay for a canister's runtime resource consumption. If I had to express it in Chinese, I would use the term '计算周期' (computation cycles), since cycles mainly serve as the unit for paying for computing resources.
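As a rough mental model of how these components fit together (the structures below are invented stand-ins, not IC's real data structures), the Registry plays a DNS-like role: a boundary node resolves which subnet hosts the target Canister and forwards the message to that subnet's replicas.

```python
# Hypothetical, simplified model of Registry-based routing (not real IC code).
registry = {
    # canister_id -> subnet_id, analogous to a DNS record
    "canister-aaa": "subnet-1",
    "canister-bbb": "subnet-2",
}
subnets = {
    "subnet-1": {"replicas": ["node-a", "node-b", "node-c"]},
    "subnet-2": {"replicas": ["node-d", "node-e", "node-f"]},
}

def route(canister_id: str, payload: bytes) -> str:
    """Boundary node: look up the hosting subnet, then forward the message."""
    subnet_id = registry[canister_id]          # registry lookup, like DNS
    replicas = subnets[subnet_id]["replicas"]  # the subnet replicates the input
    return f"deliver {len(payload)} bytes to {subnet_id} ({len(replicas)} replicas)"

print(route("canister-aaa", b"hello"))  # deliver 5 bytes to subnet-1 (3 replicas)
```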
Key Innovative Technologies
From the underlying technology perspective, ICP uses Chain-key technology, which includes:
Threshold BLS signatures: ICP implements a threshold signature scheme in which each Subnet has a publicly verifiable public key, while the corresponding private key is split into multiple shares, each held by one Replica in that Subnet. A message is considered valid only when enough Replicas in the same Subnet have signed it. This way, messages passed between Subnets and between Replicas are both protected and quickly verifiable, ensuring privacy and security. BLS is a well-known threshold signature algorithm; it is the only signature scheme that yields a very simple and efficient threshold signature protocol, and its signatures are unique: for a given public key and message, there is only one valid signature.
Non-interactive Distributed Key Generation (NIDKG): To securely deploy the threshold signature scheme, Dfinity has designed, analyzed, and implemented a new DKG protocol that runs over asynchronous networks and has high robustness (it can still succeed even if up to one-third of the nodes in the subnet crash or are damaged) while still providing acceptable performance. In addition to generating new keys, this protocol can also be used to re-share existing keys. This feature is crucial for achieving the self-evolving topology of IC, as subnets will undergo member changes over time.
Publicly Verifiable Secret Sharing scheme (PVSS Scheme): This is a publicly verifiable secret sharing scheme. In the Internet Computer protocol's white paper, the PVSS scheme is used to implement a decentralized key generation (DKG) protocol to ensure that nodes' private keys are not leaked during the generation process.
Forward-secure public-key encryption scheme: A forward-secure public-key encryption scheme ensures that even if the private key is leaked, previous messages cannot be decrypted, thereby enhancing system security.
Key resharing protocol: A key sharing scheme based on threshold signatures, used in the Internet Computer protocol to implement key management. The main advantage of this protocol is that it can share existing keys with new nodes without creating new keys, thus reducing the complexity of key management. Additionally, this protocol uses threshold signatures to protect the security of key sharing, thereby improving the system's security and fault tolerance.
PoUW: PoUW adds a 'U' for Useful to PoW, mainly improving performance and reducing wasted work by node machines. PoUW does not artificially create difficult hash computations; instead it devotes as much computing power as possible to serving users. Most resources (CPU, memory) go to the actual execution of code in Canisters.
Chain-evolution technology: A technology used to maintain the blockchain state machine, which includes a series of technical means to ensure the security and reliability of the blockchain. In the Internet Computer protocol, Chain-evolution technology mainly includes the following two core technologies:
Summary blocks: The first block of each epoch is a summary block that contains special data used to manage different threshold signature schemes. Among them, a low-threshold scheme is used to generate random numbers, while a high-threshold scheme is used to authenticate the replication status of the subnet.
Catch-up packages (CUPs): CUPs are a technology used to quickly synchronize node states. They allow newly joined nodes to quickly obtain the current state without needing to rerun the consensus protocol.
My logical deduction of the entire IC underlying technology is:
In traditional public key cryptography, each node has its own public-private key pair, which means that if a node's private key is leaked or attacked, the security of the entire system will be threatened. The threshold signature scheme divides a key into multiple parts, distributed to different nodes, and only when a sufficient number of nodes collaborate can a signature be generated. This way, even if some nodes are attacked or leaked, it will not significantly compromise the security of the entire system. In addition, the threshold signature scheme can enhance the decentralization of the system because it does not require a centralized authority to manage keys, but instead distributes them across multiple nodes, avoiding single points of failure and centralization risks. Therefore, IC uses a threshold signature scheme to enhance the system's security and decentralization, hoping to implement a highly secure, scalable, and quickly verifiable generic blockchain.
BLS is a famous threshold signature algorithm, and it is the only signature scheme that can produce a very simple and efficient threshold signature protocol. Moreover, BLS signatures have an additional advantage: they do not require storing signature states. As long as the message content remains unchanged, the signature is fixed, meaning that for a given public key and message, there is only one valid signature. This guarantees extremely high scalability, which is why ICP chose the BLS scheme.
Because threshold signatures are used, someone has to distribute the key shares to the participants. But such a dealer is a single point, which easily leads to single-point-of-failure problems. Therefore, Dfinity designed a distributed key generation technology known as NIDKG: during the initialization of a subnet, all participating Replicas collaboratively and non-interactively generate a public key A, and for the corresponding private key B, each participant mathematically computes and holds one derived secret share (a toy illustration of this kind of threshold sharing follows this deduction).
To implement NIDKG, every party in the distribution must be kept honest. Each participant therefore not only obtains their own secret share but can also publicly prove to everyone else that their secret share is correct. This public verifiability is a crucial point for achieving distributed key generation.
But what if the subnet's key at some historical moment is leaked? How can the immutability of historical data be guaranteed? Dfinity adopts a forward-secure signature scheme, which guarantees that even if a subnet's key at some historical moment is leaked, an attacker cannot modify historical block data; this prevents later corruption attacks from threatening the blockchain's historical data. Taken a step further, forward security also means that even if a key is compromised for a short period, past communication content cannot be decrypted.
With NIDKG in place, a new problem arises: if a given secret share is held by a node for a long time, then as nodes are gradually compromised by hackers, the whole network may be at risk. Key updates are therefore needed continuously, but they cannot require all participating Replicas to gather for interactive communication; they too must be non-interactive. And since public key A is registered in the NNS and other subnets use it for verification, it is best that the subnet public key does not change. But if the subnet public key does not change, how can the secret shares among nodes be updated? Dfinity therefore designed a Key resharing protocol: without creating a new public key, all holders of the current version of the secret shares non-interactively generate a new round of derived secret shares for the new holders.
This not only ensures that the new version of the secret shares is certified by all current legitimate secret share holders;
It also ensures that the old version of the secret shares is no longer valid;
It further guarantees that even if a future version of the secret shares is leaked, the old versions cannot be leaked through it, because the polynomials of the two are unrelated and cannot be reverse-engineered. This is the forward security mentioned earlier.
Additionally, it guarantees efficient re-randomization. When trusted nodes or access controls change, access policies and controllers can be modified at any time without restarting the system. This greatly simplifies key management mechanisms in many scenarios. For example, this is very useful in cases of subnet member changes because re-sharing will ensure that any new members have the appropriate secret share, while any replicas that are no longer members will no longer have a secret share. Furthermore, if at any point or even during each period, a small amount of secret share is leaked to attackers, these secret shares would not help the attackers, as the old version of the secret share is already invalid. Only the current subnet nodes can generate valid signatures for the current epoch, so even if old subnet nodes collude, the signatures generated would be useless now.
Because traditional blockchain protocols require storing all block information from the genesis block, this leads to scalability issues as the blockchain grows. This is why many public chains find it very difficult to develop a light client. Therefore, IC aims to solve this problem, and thus IC developed Chain-evolution Technology. At the end of each epoch, all processed inputs and consensus-required information can be safely cleared from each Replica's memory, significantly reducing the storage requirements for each Replica. This enables IC to scale to support a large number of users and applications. Additionally, Chain-evolution technology includes CUPs technology, which allows newly joined nodes to quickly obtain the current state without needing to rerun the consensus protocol, greatly reducing the entry threshold and synchronization time for new nodes joining the IC network.
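As a toy illustration of the threshold idea running through this deduction, here is a runnable sketch of t-of-n Shamir secret sharing in plain Python. One loud caveat: real Chain-key uses pairing-based threshold BLS, where replicas combine signature shares and the full private key is never reconstructed anywhere; this sketch reconstructs a secret only to show why any t shares suffice while fewer reveal nothing.

```python
# Conceptual t-of-n threshold sketch via Shamir secret sharing (NOT real BLS).
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is done in this field

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the degree-(t-1) polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789  # a different 3 also work
```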
In summary, IC's underlying technologies interlock with one another, grounded in cryptography (theory) while fully addressing the industry's pain points, such as fast node synchronization (practice). Truly a masterful synthesis!
Key Features
From the feature perspective:
Reverse gas model: most traditional blockchain systems require users to first hold the native token, such as ETH or BTC, and then consume it to pay transaction fees. This raises the entry barrier for new users and does not match people's usage habits: why should I have to hold TikTok stock just to use TikTok? ICP adopts a reverse gas model: users can use the ICP network directly, with the project party covering the fees. This lowers the entry threshold, better fits the habits formed by internet services, is conducive to larger-scale network effects, and supports more users joining.
Stable gas: for other public chains on the market, to secure the chain and out of transfer demand, people buy the native token, miners mine hard, or people hoard it, thereby contributing computing power (as with Bitcoin) or staking economic security (as with Ethereum) to the chain. In other words, our demand for BTC/ETH really derives from those chains' requirements for computing power/staking, which are at heart security requirements. So as long as a chain's gas is paid directly in its native token, gas will eventually be expensive: even if the native token is cheap now, it will become expensive once the chain's ecosystem takes off. ICP is different: the gas consumed on the ICP blockchain is called Cycles, obtained by burning ICP. Cycles are algorithmically stabilized, pegged to 1 SDR (the SDR can be seen as a stable unit computed from a basket of national currencies). Therefore, no matter how much the ICP price rises in the future, the money you spend to do anything on ICP stays the same as today (setting inflation aside); see the sketch after this list.
Wasm: Using WebAssembly (Wasm) as the standard for code execution, developers can write code using various popular programming languages (such as Rust, Java, C++, Motoko, etc.), thus supporting the inclusion of more developers.
Support for running AI models: Python can also be compiled to Wasm, and Python has one of the largest user bases in the world and is the first language of AI, for example for matrix and large-integer computation. Someone has already run the Llama2 model on IC; if the AI + Web3 narrative eventually plays out on ICP, I would not be surprised at all.
Web2 speed experience: Many applications on ICP have achieved astonishing results with millisecond-level queries and second-level updates. If you don't believe it, you can directly use OpenChat, a purely decentralized chat application on the chain.
Running the frontend on-chain: you may have heard only of the backend being written as simple smart contracts running on-chain, so that core logic such as data assets cannot be tampered with. But the frontend also needs to run entirely on-chain to be safe, because frontend attacks are a very typical and frequent problem. Imagine: everyone may think Uniswap's code is very secure; the smart contracts have been verified by many people over many years and the code is simple, so surely nothing can go wrong. But if one day Uniswap's frontend is hijacked, the contract you interact with could be a malicious one deployed by hackers, and you could lose everything in an instant. If instead you store and deploy all frontend code in an IC Canister, IC's consensus security at least ensures the frontend code cannot be tampered with, which makes this protection far more robust. Moreover, IC can run and render the frontend directly without affecting the application's normal operation. On IC, developers can build applications directly without traditional cloud services, databases, or payment interfaces; there is no need to buy a frontend server or worry about databases, load balancing, content distribution, firewalls, and so on. Users can directly access a frontend webpage deployed on ICP through a browser or mobile app, such as a personal blog I once deployed.
DAO-controlled code upgrades: many DeFi protocols today are fully controlled by the project party, which can unilaterally make major decisions such as suspending operations or selling funds without any community vote or discussion; many people have witnessed or heard of such cases. In the ICP ecosystem, by contrast, DAPP code runs in containers controlled by a DAO: even if a project party holds a large share of the vote, a public voting process is still enforced, satisfying the necessary condition of blockchain transparency described at the beginning of this article. This mechanism better reflects the community's wishes and achieves a higher level of governance than other public chain projects.
Automatic protocol upgrades: when a protocol upgrade is needed, a new threshold signature scheme can be added in the summary block, achieving automatic protocol upgrades. This approach ensures the network's security and reliability while avoiding the inconvenience and risk of hard forks. Specifically, Chain-key technology in ICP maintains the blockchain state machine through a special signature scheme: at the beginning of each epoch, the network uses a low-threshold signature scheme to generate random numbers and a high-threshold scheme to authenticate the subnet's replicated state.
Proposal Voting
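A back-of-the-envelope sketch of the stable-gas peg mentioned above: the protocol fixes 1 trillion cycles to 1 SDR/XDR, while the ICP/XDR exchange rate below is a made-up number for illustration, not a real quote.

```python
# Cycles are pegged to the SDR: 1 XDR buys 1 trillion cycles, regardless of
# the ICP market price. (The ICP/XDR rate used here is hypothetical.)
CYCLES_PER_XDR = 1_000_000_000_000  # fixed by the protocol

def cycles_for_icp(icp_amount: float, icp_xdr_rate: float) -> int:
    """How many cycles burning a given amount of ICP yields."""
    return int(icp_amount * icp_xdr_rate * CYCLES_PER_XDR)

# If 1 ICP = 4 XDR (hypothetical), burning 1 ICP yields 4T cycles:
print(cycles_for_icp(1.0, 4.0))  # 4000000000000

# Workload costs are quoted in cycles, so they are stable in SDR terms:
# if ICP's price doubles (rate 4 -> 8), you burn half as much ICP for the
# same cycles; the SDR-denominated cost of computation is unchanged.
assert cycles_for_icp(0.5, 8.0) == cycles_for_icp(1.0, 4.0)
```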
Fast forwarding: This is a technology in the Internet Computer protocol that allows for quick synchronization of node states, enabling newly joined nodes to quickly obtain the current state without having to rerun the consensus protocol. Specifically, the process of fast forwarding is as follows:
Newly joined nodes obtain the current epoch's Catch-up package (CUP), which contains the current epoch's Merkle tree root, summary block, and random number.
Newly joined nodes use the state sync subprotocol to obtain the complete state of the current epoch from other nodes, while verifying the correctness of the state using the Merkle tree root in the CUP.
Newly joined nodes use the random number from the CUP and protocol messages from other nodes to run the consensus protocol, quickly synchronizing to the current state.
The advantage of fast forwarding is that it allows newly joined nodes to quickly obtain the current state without having to start from scratch like some other public chains. This can accelerate the network's synchronization and expansion, while also reducing the communication volume between nodes, thus improving the efficiency and reliability of the network.
fast forwarding
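To make steps 1 and 2 concrete, here is a minimal sketch of the verification idea behind CUPs (a hand-rolled Merkle tree in plain Python; all structures are simplified stand-ins, not IC's real formats).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Root hash of a binary Merkle tree over the state chunks."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The CUP (itself threshold-signed by the subnet) pins the expected state root.
state_chunks = [b"canister-memory-0", b"canister-memory-1", b"queues"]
cup = {"epoch": 42, "state_root": merkle_root(state_chunks)}

# A new node may fetch chunks from ANY peer, then verify them against the CUP:
downloaded = [b"canister-memory-0", b"canister-memory-1", b"queues"]
assert merkle_root(downloaded) == cup["state_root"]  # state is authentic
# => no need to replay consensus from genesis; sync resumes from epoch 42.
```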
Decentralized internet identity: The identity system on IC truly makes me feel that the DID problem can be completely solved, both in terms of scalability and privacy. The identity system on IC currently has an implementation version called Internet Identity, as well as a more powerful NFID developed based on it.
Its principle is as follows:
During registration, it generates a pair of public and private keys for the user. The private key is stored in the TPM secure chip within the user's device and will never be leaked, while the public key is shared with services on the network.
When a user wants to log into a dapp, the dapp creates a temporary session key for the user. This session key is signed by the user through an authorized electronic signature, granting the dapp permission to verify the user's identity.
After the session key is signed, the dapp can use it to access network services on behalf of the user, without the user having to sign electronically each time. This is similar to delegated authorization login in Web2.
Session keys have only a short validity period. After expiration, users need to re-authorize the signature through biometric recognition to obtain a new session key.
The user's private key is always kept within the local TPM secure chip and does not leave the device. This ensures the security of the private key and the anonymity of the user.
By using temporary session keys, different dapps cannot track the user's identity. This achieves true anonymous and private access.
Users can conveniently synchronize and manage their Internet Identity across multiple devices, but the devices themselves also require appropriate biometric or hardware keys for authorization.
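A minimal sketch of this delegation flow, assuming Python with the `cryptography` package (the delegation format and field names are invented for illustration; this is not Internet Identity's actual wire format):

```python
# Sketch of the session-key delegation idea (hypothetical format, not II's).
import json, time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw(pub) -> bytes:  # raw 32-byte encoding of an Ed25519 public key
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# 1. Device key: generated at registration; in II it never leaves the TPM.
device_key = Ed25519PrivateKey.generate()

# 2. A short-lived session key is created for the dapp.
session_key = Ed25519PrivateKey.generate()
delegation = json.dumps({
    "session_pubkey": raw(session_key.public_key()).hex(),
    "expires_at": int(time.time()) + 30 * 60,   # e.g. 30-minute validity
}).encode()

# 3. The device key signs the delegation once (user approves via biometrics).
delegation_sig = device_key.sign(delegation)

# 4. The dapp then signs each request with the session key only; a verifier
#    checks the request signature, the delegation signature, and the expiry.
request = b"POST /send_message ..."
request_sig = session_key.sign(request)

device_key.public_key().verify(delegation_sig, delegation)  # delegation valid
session_key.public_key().verify(request_sig, request)       # request valid
assert json.loads(delegation)["expires_at"] > time.time()   # not yet expired
```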
Some advantages of Internet Identity are as follows:
No need to remember passwords. Users can log in directly using biometrics such as fingerprint recognition, without having to set and remember complex passwords.
Private keys do not leave the device, ensuring higher security. Private keys are stored within the TPM secure chip, preventing theft and solving the issue of username and password theft in Web2.
Anonymous login, impossible to be tracked. Unlike Web2, where email serves as a username that can be tracked across platforms, Internet Identity eliminates this tracking.
Managing multiple devices is more convenient. Users can log into the same account on any device that supports biometrics, rather than being limited to a single device.
No reliance on centralized service providers, achieving true decentralization. Unlike the model in Web2, where usernames correspond to email service providers.
The delegated authentication process enhances the user experience, since there is no need to sign again for every operation.
Supports the use of dedicated secure devices like Ledger or Yubikey for login, enhancing security.
Hides users' actual public keys, preventing transaction records from being queried through public keys and thereby ensuring user privacy.
Seamlessly compatible with Web3 blockchain, allowing for secure and efficient login and signing for blockchain DApps or transactions.
The architecture is more advanced, representing an organic integration of the advantages of Web2 and Web3, setting a standard for future network accounts and logins.
In addition to providing a new user experience, it also employs the following technical measures to ensure its security:
Using the TPM secure chip to store private keys, designed so that even developers cannot access or extract private keys, preventing theft.
Secondary authentication mechanisms such as biometric recognition (like fingerprints or facial recognition) require validation based on the device being used, ensuring that only users holding the device can use that identity.
Session keys are designed to expire shortly, limiting the time window for misuse and enforcing the destruction of related ciphertext at the end of the session, reducing risk.
Public key encryption technology ensures that data during transmission is encrypted, preventing external listeners from knowing users' private information.
No reliance on third-party identity providers; the private key is generated and controlled by users themselves, eliminating trust in third parties.
Combined with the immutability brought by the IC blockchain's consensus mechanism, ensuring the reliability of the entire system's operation.
Continuously updating and upgrading relevant cryptographic algorithms and security processes, such as incorporating multi-signature mechanisms for enhanced security.
Open source and decentralized design optimize transparency, facilitating community collaboration to enhance security.
Internet Identity
Core Team
From the team perspective, Dfinity has over 200 employees, all elite talent. The team has published more than 1,600 papers, cited over 100,000 times, and holds more than 250 patents.
Founder Dominic Williams is a cryptographic theorist and an entrepreneur with a track record of multiple startups.
Academically, his recent mathematical theories include Threshold Relay and PSC chains, Validation Towers and Trees, and USCID.
From a technical background, he has a solid foundation in research on big data and distributed computing, which lays the technical groundwork for building a complex ICP network.
From an entrepreneurial perspective, he previously operated an MMO game using his own distributed system, which hosted millions of users. In 2015, Dominic began launching Dfinity, and he is also the president and CTO of String Labs.
From a visionary perspective, he proposed the concept of a decentralized internet over a decade ago, and promoting this grand project is no easy task. Currently, his design ideas seem very forward-looking.
On the technical team front, Dfinity is very strong. The Dfinity Foundation has gathered many top experts in cryptography and distributed systems, such as Jan Camenisch, Timothy Roscoe, Andreas Rossberg, Maria D., and Victor Shoup; even Ben Lynn, one of the authors of the BLS cryptographic algorithm, works at Dfinity. This provides strong support for ICP's technological innovation. The success of a blockchain project is inseparable from its technology, and gathering top talent is what produces technological breakthroughs; this is a key advantage for ICP.
Dfinity Foundation Team
Fund-raising & Tokenomics
If this section were also discussed, the article would be too long. Therefore, I have decided to write a separate article later to analyze this in detail. This article focuses more on the development direction of the blockchain industry, discussing why ICP has a significant opportunity.
Applications
All types of applications can be developed on ICP, including social platforms, creator platforms, chat tools, games, and even metaverse games.
Many people say IC is inherently unsuited to DeFi because global state consistency is hard to achieve. I believe the framing itself is wrong: it is not global state consistency that is hard, but global state consistency under low latency. If you can accept one minute of delay, 10,000 machines around the world can achieve global consistency. With as many nodes as Ethereum and BTC now have, they are forced into high-latency global state consistency, which is why they cannot scale out horizontally without limit. IC solves unlimited horizontal scaling first, by sharding into subnets. As for 'global state consistency' under low latency, it can be approached with strongly consistent distributed consensus algorithms, well-designed network topology, high-performance distributed data synchronization, effective timestamp verification, and mature fault-tolerance mechanisms. Frankly, though, building a trading platform at the application layer on IC is harder than what the high-performance trading platforms on Wall Street are doing; it is not merely about reaching consensus across multiple data centers. Difficulty, however, does not mean impossibility; it means many technical problems must be solved first, eventually settling at a balance that guarantees security while keeping the user experience acceptable. For example, ICLightHouse below.
ICLightHouse, an order-book DEX running fully on-chain. What does fully on-chain mean? How many technical challenges must be solved to get there? On other public chains this couldn't even be imagined, but on IC it is at least doable, which gives us hope.
OpenChat, a decentralized chat application with a fantastic experience: I have not seen a second product like it anywhere in the blockchain industry. Many other teams have attempted this direction but ultimately failed over various technical issues; at root, users found the experience lacking, for example taking 10 seconds to send a message and another 10 seconds to receive one. On ICP, however, a small team of three has built such a successful product. Experience for yourself how smooth it is. Welcome to join the organization, where you can enjoy the collision of ideas and, to some extent, the thrill of free speech.
Mora, a platform for super creators, allows everyone to create their own planet and build their own individual brand, and the content you produce always belongs to you, even supporting paid reading. It can be called a decentralized knowledge sphere, and I find myself refreshing articles there every day.
Mora - 0xkookoo
OpenChat and Mora applications are products that I genuinely use almost every day, providing a sense of comfort that I cannot live without; the two words to describe them are freedom and fulfillment.
Some teams are already developing game applications on IC, and I believe that the narrative of full-chain games may ultimately be taken over by IC. As I mentioned earlier in the GameFi section of this article, playability and fun are aspects that project parties must consider, and playability is easier to achieve on IC. I look forward to the grand work of Dragginz.
Summary
ICP is like the Earth, and Chain-key technology is like the Earth's core. Its relationship to ICP is similar to that of the TCP/IP protocol to the entire internet industry. Each Subnet is like a continent: Asia, Africa, or Latin America; of course, a Subnet can also be the Pacific or the Atlantic Ocean. On the continents and oceans there are different buildings and areas (Replicas and Nodes), where plants (Canisters) can be grown and different animals live happily.
ICP supports horizontal scaling, and each subnet can communicate while being autonomous. Regardless of your application, whether it's social media, finance, or even the metaverse, you can achieve final consistency through this distributed network. Achieving a global ledger under synchronous conditions is easy, but the challenge of achieving 'global state consistency' under asynchronous conditions is significant. Currently, only ICP has the opportunity to achieve this.
It is important to note that what IC achieves is not 'global state consistency' but 'final (eventual) global state consistency'. 'Global state consistency' requires all participating nodes to agree on [the order of all operations], [the final result], [objective consistency, independent of node failures], [clock synchronization], and [immediate consistency, with all operations processed synchronously]; this can be guaranteed within a single IC subnet. But to guarantee 'global state consistency' across the whole network, all subnets as a whole would need to achieve the above, which is practically impossible under low latency; this is precisely the bottleneck that keeps public chains like ETH from scaling out horizontally. IC therefore chooses to reach consensus within each subnet, with other subnets quickly verifying via communication that the results are not fraudulent, thus achieving 'final global state consistency'. This effectively combines the decentralization of large public chains with the high throughput and low latency of consortium chains, while achieving unlimited horizontal scaling of subnets through mathematical and cryptographic proofs.
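As a toy illustration of this split between consensus inside a subnet and cheap verification between subnets, here is a sketch in which a single Ed25519 key stands in for a subnet's threshold BLS key (everything here is a simplified stand-in, not IC's real message format).

```python
# Toy model: a subnet certifies its outputs; peer subnets verify a signature
# instead of re-running consensus. Ed25519 stands in for threshold BLS.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

subnet_a_key = Ed25519PrivateKey.generate()             # held as shares in reality
nns_registry = {"subnet-a": subnet_a_key.public_key()}  # public keys live in the NNS

# Subnet A reaches internal (strong) consensus on a batch, then certifies it.
batch = b"xnet: transfer 5 tokens from canister-aaa to canister-bbb"
certificate = subnet_a_key.sign(batch)

# Subnet B does NOT replay A's consensus; one signature check suffices.
nns_registry["subnet-a"].verify(certificate, batch)  # raises if forged
# Cross-subnet consistency is therefore 'final' (eventual) rather than
# immediate, but verification stays cheap, which is what lets subnets
# scale out horizontally.
```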
In summary, it can be seen that according to my thoughts on the ultimate development direction of blockchain at the beginning of the article, [sovereignty] + [decentralized multi-centralization] + [transparency] + [control over code execution] + [linear cost of unlimited scalability],
Sovereignty is the only problem that blockchain needs to solve, including asset sovereignty, data sovereignty, freedom of speech sovereignty, etc.; otherwise, there is no need for blockchain;
IC has fully achieved this
Immutability is a sufficient condition, but not a necessary condition. As long as you can ensure that my sovereignty is not harmed, I don't care if you tamper with it. If everyone's assets in the world were tampered with and doubled proportionally, what difference would it make?
IC has also achieved this
Complete decentralization is impossible. No matter how it is designed, there will always be 'gifted' individuals or vested interests occupying greater discourse power, and there will always be people who choose not to participate. [Decentralized multi-centralization] is the ultimate pattern;
IC is currently the best-performing among all public chains, being able to maintain a certain degree of decentralization while fully utilizing the advantages of centralized entities, thus better achieving governance and operation of the network.
Transparency is a must. This social experiment for all humanity is precisely about giving everyone a voice and the right to protect their sovereignty, right? There will always be people who are lazy, who prefer to trust more professional people, or who choose to skip voting for maximum efficiency, but that is also their choice: they have the right and choose not to exercise it. As long as everything is transparent and there is no manipulation, I am willing to accept the outcome; if I lose, it means I am not as capable as others. Survival of the fittest is also in line with a market economy;
IC has fully achieved this
Control over code execution is the core; otherwise it is all empty theater: voting can be public for a week, yet in the end the project party deploys a malicious version of the code anyway, and even if it is not malicious, it still makes a mockery of everyone.
Currently, only IC has achieved this
Linear cost of unlimited scalability: as blockchain becomes increasingly intertwined with real life, with more and more participants and greater demand, infrastructure that either cannot support unlimited scaling or can scale only at prohibitive cost is unacceptable.
Currently, only IC has achieved this
Based on the above facts and my analysis, I believe that ICP = Blockchain 3.0.
This article aims to discuss why ICP is likely to be the innovative driver of blockchain 3.0 from the perspective of the future development direction of the blockchain industry. However, it cannot be denied that there are indeed some issues in the Tokenomics design of ICP, and the ecosystem has not yet exploded. Currently, ICP still needs to continue making efforts to reach my envisioned final blockchain 3.0. But don't worry, this matter is inherently difficult. Even the Dfinity Foundation has already prepared a 20-year roadmap. The mainnet has only been online for 2 years and has already achieved such great accomplishments. It has also utilized cryptographic methods to integrate with the BTC and ETH ecosystems. I believe it will reach new heights in three years.
Future
IC has already completed the Infra construction from bottom to top, and the applications from top to bottom are also beginning to take shape. My recent direct impression is that IC has more and more cards to play, preparing well for the next bull market.
IC is a paradigm shift, not a simple technological upgrade: a paradigm migration from single-machine computing to distributed computing, from single-machine systems to distributed systems. The concept of decentralized cloud computing lets many small companies enjoy a one-stop development experience from day one.
According to Professor Yu Jun's product value formula: product value = (new experience - old experience) - migration cost. In the future, as long as people find that the experience gain from joining the IC ecosystem outweighs the migration cost, more people, project parties and users alike, will join IC, and the scale effect of 'cloud computing' will be easier to realize. Once this [chicken-or-egg] problem is solved, IC's positive flywheel will be established.
Of course, everyone's definition of experience is subjective, so some will choose to join earlier and others later. Those who join first bear greater risk, but usually also reap greater average returns.