
An update call is a core operation in the Internet Computer protocol that lets users change the state of a canister (a smart contract hosted on the Internet Computer). This post walks through each stage of the update call lifecycle and highlights how the Tokamak milestone reduces its end-to-end latency.
Background
To fully understand the update call lifecycle, it is necessary to understand some of the fundamental components of the Internet Computer architecture.
1. Canister
A canister is a smart contract on the Internet Computer Protocol (ICP) that stores state and executes code. Users interact with a canister by submitting update calls that trigger operations on the smart contract.
2. Subnet
A subnet is a group of nodes that hosts and manages canisters. Each subnet acts as an independent blockchain network, enabling ICP to scale by distributing load across multiple subnets. Each subnet manages a unique set of canisters. A smart contract on one subnet can communicate with a smart contract on another subnet by sending messages.
3. Replica
In each subnet, the nodes (called replicas) store the code and data of every canister on that subnet, and each replica also executes the canisters' code. This replication of storage and computation provides fault tolerance, allowing canister smart contracts to run reliably even if some nodes crash or are compromised by malicious actors.
4. Boundary Nodes
Boundary nodes are responsible for routing requests to the appropriate subnet and balancing load across the replicas within that subnet.
Update call lifecycle
The following diagram outlines the lifecycle of an update call on the Internet Computer Protocol:

1. User submits update call
The update call starts with the user sending a request through an IC agent implementation such as agent-rs or agent-js. These libraries provide a user-friendly interface for interacting with ICP, handling request formatting, signing, and the communication protocol. The request is resolved via DNS and sent to a boundary node.
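To make the agent's role concrete, here is a schematic sketch in Python of the content map an agent assembles for an update call before CBOR-encoding, signing, and POSTing it to a boundary node. The field names follow the IC HTTP interface specification; the function name and the example values (canister id, method, sender) are illustrative, not taken from a real agent library.

```python
import time

def build_update_envelope(canister_id: str, method: str, arg: bytes, sender: str) -> dict:
    """Assemble the content map of an update call (schematic; a real agent
    CBOR-encodes this, signs it, and POSTs it to the IC's call endpoint)."""
    return {
        "request_type": "call",        # marks this as an update call
        "canister_id": canister_id,    # target canister
        "method_name": method,         # canister method to invoke
        "arg": arg,                    # Candid-encoded arguments
        "sender": sender,              # caller principal
        # requests expire a few minutes after submission (nanoseconds since epoch)
        "ingress_expiry": time.time_ns() + 4 * 60 * 10**9,
    }

envelope = build_update_envelope(
    "ryjl3-tyaaa-aaaaa-aaaba-cai", "transfer", b"...", "2vxsx-fae"
)
```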
2. Routing through boundary nodes
The boundary node routes the update call to replicas in the subnet hosting the target canister. Round-robin selection distributes the request to F+1 replicas to ensure performance and reliability. Here, F is ICP's fault tolerance threshold - the maximum number of faulty replicas that can be tolerated in each subnet. For more information about Internet Computer fault tolerance, see:
internetcomputer.org/how-it-works/fault-tolerance
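The F+1 routing rule can be sketched as follows. This is a minimal model, assuming the usual BFT sizing where a subnet of n nodes tolerates F = (n - 1) / 3 faults; the function and replica names are hypothetical, not boundary-node source code. Sending to F+1 replicas guarantees at least one recipient is honest.

```python
def select_replicas(replicas: list, start: int, f: int) -> list:
    """Pick f+1 replicas round-robin starting at `start`, wrapping around.
    With at most f faulty replicas per subnet, at least one recipient is honest."""
    n = len(replicas)
    return [replicas[(start + i) % n] for i in range(f + 1)]

# a 13-node subnet tolerates f = (13 - 1) // 3 = 4 faulty replicas
nodes = ["replica-%d" % i for i in range(13)]
f = (len(nodes) - 1) // 3
targets = select_replicas(nodes, start=11, f=f)  # 5 replicas, wrapping past the end
```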
3. Broadcast update call
Once a replica receives an update call, it broadcasts the request to other replicas in the subnet using the abortable broadcast primitive. This approach ensures strong delivery guarantees even in the face of network congestion, peer or link failures, and backpressure.
Abortable broadcast is essential for efficient inter-replica communication in a Byzantine Fault Tolerant (BFT) environment. It saves bandwidth, ensures that all data structures remain bounded even in the presence of malicious peers, and maintains reliable communication for consistent update processing within ICP. For more technical details, you can refer to the paper explaining the abortable broadcast solution here:
arxiv.org/abs/2410.22080
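The key property of abortable broadcast - bounded buffers even with slow or malicious peers - can be illustrated with a toy model. This is a deliberately simplified sketch (class and attribute names invented for illustration), not the real implementation described in the paper above: each peer gets a bounded send queue, and deliveries to a peer whose queue is full are aborted rather than buffered without limit.

```python
from collections import deque

class AbortableSender:
    """Toy model of abortable broadcast: each peer has a bounded send queue.
    When a slow or faulty peer's queue fills up, new messages to it are
    aborted (counted and dropped) instead of buffering unboundedly,
    keeping every data structure bounded."""
    def __init__(self, peers, capacity=3):
        self.queues = {p: deque() for p in peers}
        self.capacity = capacity
        self.aborted = 0  # deliveries abandoned due to backpressure

    def broadcast(self, msg):
        for peer, q in self.queues.items():
            if len(q) >= self.capacity:
                self.aborted += 1   # abort delivery to this peer
            else:
                q.append(msg)
```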
4. Block Proposal (Block Production)
One replica (designated the block producer) is responsible for creating a new block containing the update call, and the block producer then submits the block to the other replicas in the subnet for processing.
Steps 4 to 7 constitute the consensus round on the Internet Computer, where the replicas work together to agree on a proposed block. For a detailed description of the consensus mechanism, you can read more here:
internetcomputer.org/how-it-works/consensus
5. Notarization Delay
A short delay, called the notarization delay, is introduced to synchronize the network and give all replicas time to receive the block proposal. This delay is critical to maintaining a consistent state between replicas.
6. Notarization
During the notarization phase, replicas review the validity of the proposed block and agree to notarize it, which is a preliminary consensus step indicating that the block meets the ICP's criteria.
7. Finalization
After notarization, the block undergoes finalization: the replicas in the subnet agree on its validity, and the block is irrevocably accepted and appended to the chain, establishing consensus across the network.
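The agreement rule behind notarization and finalization can be sketched numerically. This is a simplified model, assuming the standard BFT sizing n = 3F + 1 in which agreement requires signatures from n - F replicas; the function names are illustrative, not taken from the consensus codebase.

```python
def threshold(n: int) -> int:
    """Signatures needed to notarize (and later finalize) a block on a
    subnet of n replicas, where f = (n - 1) // 3 is the fault tolerance."""
    f = (n - 1) // 3
    return n - f

def is_notarized(n: int, signers: set) -> bool:
    """True once enough distinct replicas have signed the block."""
    return len(signers) >= threshold(n)

# on a 13-node subnet, 13 - 4 = 9 signatures suffice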
8. Execution
Once finalized, the block enters the execution phase, during which canister state is updated according to the update calls it contains. Several factors affect the latency of this phase, including:
Canister code complexity: The complexity of the canister code directly affects execution speed. More complex logic or data-intensive operations may introduce additional delay.
Subnet load: Since each subnet hosts multiple canisters, execution resources are shared, and a heavily loaded subnet may see increased latency as canisters compete for compute resources.
Depending on subnet activity, even simple operations can be delayed, and during peak usage update calls may wait for resources.
9. Certification sharing
Once the block is executed, replicas exchange signatures across the subnet to certify the result, attesting that the update call was executed correctly and that the resulting state changes are consistent.
10. Replica responds with a certificate
After certification, a replica sends a response containing a certificate to the boundary node, attesting that the update call completed successfully.
11. Boundary node relays response
Finally, the boundary node delivers the certified response to the user, marking the end of the update call lifecycle.
Tokamak milestone
The update call lifecycle described above is significantly streamlined by the Tokamak milestone, which introduces several key improvements to the Internet Computer:
Abortable broadcast over QUIC: The abortable broadcast primitive, implemented on top of the QUIC protocol, now handles all replica-to-replica communication, providing reliable and efficient messaging across the network. This significantly reduces notarization latency, speeding up consensus without sacrificing reliability.
Enhanced boundary node routing: Improved routing logic in the boundary nodes optimizes the distribution of update calls to replicas, as described in the second stage of the lifecycle.
Synchronous update calls: The introduction of synchronous update calls allows a replica to respond directly to the user as soon as the result is certified, simplifying and speeding up the final stages of the lifecycle.
Together, these advances improve the efficiency, speed, and reliability of update calls to the Internet Computer, creating a more seamless user experience and a more robust protocol.
Key factors affecting update call latency
The end-to-end latency of the Internet Computer is affected by several prominent factors:
Subnet topology: The physical and network layout of the subnet affects the round-trip time (RTT) between replicas. Shorter RTT helps speed up communication, while greater geographic distance between replicas increases latency.
Subnet load: The number of canisters on a subnet and the volume of messages they process affect latency. Since the IC operates as shared infrastructure, canisters on heavily loaded subnets may experience higher latency due to competing demands for the same resources.
Pipeline Architecture: ICP's architecture maximizes throughput by pipelining the consensus and execution stages. This design allows multiple blocks to be processed concurrently, but it comes with a trade-off: while throughput increases, each stage in the pipeline may incur additional latency while waiting for earlier stages to complete.
ICP's design prioritizes high throughput and scalability, balancing these requirements with the performance trade-offs inherent in distributed, decentralized networks.
Benchmark of ICP before and after Tokamak
To measure the impact of the Tokamak milestone, we measured the end-to-end (E2E) latency of three different smart contracts hosted on ICP. As a baseline, we performed the benchmark before the launch of the Tokamak milestone and then repeated the benchmark after the milestone was completed for comparison.
The results are very encouraging and suggest that users can expect lower latency on ICP going forward, leading to a better user experience.
For each use case we benchmarked, we attached a table and graph showing the 0-99.99 percentile E2E latency.
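As a side note on methodology, percentile curves like the ones in the tables below can be computed from raw latency samples with the Python standard library. This is a generic sketch under hypothetical sample values, not the benchmark harness used for these measurements.

```python
from statistics import quantiles

def latency_percentiles(samples, points=(50, 90, 99)):
    """Return the requested E2E-latency percentiles (same unit as `samples`),
    using inclusive interpolation over the sorted measurements."""
    cuts = quantiles(sorted(samples), n=100, method="inclusive")  # 99 cut points
    return {p: round(cuts[p - 1], 3) for p in points}

# hypothetical E2E latencies in seconds
measurements = [1.2, 1.4, 1.5, 1.7, 2.0, 2.4, 3.1, 4.8, 6.0, 9.5]
```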
ICP Ledger
The ICP ledger is a smart contract hosted on the NNS subnet that serves as a ledger for ICP tokens. Users can interact with the ledger in a variety of ways, but the most popular dapp and frontend is the NNS dapp, which is also hosted on the NNS subnet.
We ran the benchmark over several days, repeatedly sending ICP tokens in a loop and recording the time from submitting the transaction to receiving a response from ICP (along with a certificate proving the tokens were sent). The average latency decreased from 4.57 seconds to 2.23 seconds, a 51% reduction.
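For the record, the reduction percentages quoted in this post follow directly from the before/after averages; a one-line check (function name illustrative):

```python
def reduction_pct(before: float, after: float) -> float:
    """Percentage reduction in average E2E latency, rounded to one decimal."""
    return round((before - after) / before * 100, 1)

# ICP ledger benchmark from the text: 4.57 s before Tokamak, 2.23 s after
ledger_reduction = reduction_pct(4.57, 2.23)
```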


Internet identity
The Internet Identity dapp is hosted on the Internet Identity subnet, a federated authentication service running on the Internet Computer. If you have ever interacted with a dapp on ICP, you have most likely spent time logging in using Internet Identity. This benchmark measures the time it takes to log in using the Internet Identity service.
Our results show that the average latency for login time was reduced from 7.12 seconds to 3.9 seconds, a 45.2% reduction! Figure 2 below shows the before and after results for different percentiles, which shows that for the 50th percentile, or median, the login time was reduced from 6.9 seconds to 3.2 seconds.
The purple area highlights the time savings for each percentile, and you can also view Table 2 to see the results for each percentile in more detail.


Application Subnet
We host a canister on snjp, a 13-node application subnet, which lets us test the improvements on 13-node application subnets after the Tokamak milestone. Our benchmarks show that the average E2E latency decreased from 2.43 seconds to 1.35 seconds, a reduction of about 44%.

