This guest post was written by Ross Dold of EOSphere. Learn more about EOSphere’s work in the EOS ecosystem at the end of this article.

Antelope Leap v5.0.0 was released about a month ago and is now being adopted by many Antelope-based networks as node operators begin upgrading their production environments.


Leap v5.0.0 is designed to be more performant, efficient, and reliable than previous versions, which is great news for node operators, as even small improvements can translate into huge gains for a fleet of 100 managed nodes.

With this in mind, the EOSphere team has documented a practical comparison of CPU, memory, and disk IO improvements between Leap v4.0.4 and v5.0.0 in the article below.

Leap v5.0.0 CPU, memory and disk IO performance

The statistics in this article were gathered from one of the EOSphere EOS mainnet public peer nodes. This node was chosen because it is already in production and highly utilized, with 180-195 organic inbound public peer connections. The hardware configuration is as follows:

  • Ubuntu 22.04

  • Virtualization in KVM 7.2.0

  • 4 CPU cores

  • 32GB RAM

  • 128GB SWAP

  • Drive 1: OS and state: 256GB Enterprise NVMe

  • Drive 2: Blocks: 4TB Enterprise NVMe (ZFS)

CPU

Below is a month-long CPU usage chart showing utilization on v4.0.4 before the upgrade to v5.0.0 on January 22, 2024 (20:00).

KVM CPU utilization of EOSphere public peer nodes

CPU utilization dropped immediately from an average of 85% to a normalized 60%.
This is great news for running multiple nodes in physical, virtual, or cloud environments. It may also mean that the traditionally configured max-clients peer limit of 200 can be raised to 250 or even 300 for public nodes.
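For illustration, this limit is a single net_plugin setting in config.ini; the value below is an example, not a recommendation, and should be tuned against your own CPU headroom:

> nano config.ini
# Maximum number of inbound p2p clients to accept (0 means no limit)
max-clients = 300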

Disk IO and memory

If you have read our previous Antelope chain article, you will know that EOSphere has been advocating for running Leap nodes using the tmpfs strategy.

The tmpfs strategy involves running the nodeos chainbase database state folder in a tmpfs mount, allowing us to oversubscribe RAM via SWAP and improve memory utilization and disk IO efficiency.

tmpfs is a Linux file system that keeps all of its files in virtual memory. The contents of a tmpfs mount are temporary: if it is unmounted or the server is restarted, everything in it is lost.

That impermanence is also the challenge of the tmpfs strategy: all state is lost on reboot, and nodeos then needs to be restarted from a snapshot.
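For reference, the strategy amounts to a mount like the following; the mount point and size here are examples only, and must match your nodeos data directory and configured chain state size:

# Mount a tmpfs over the nodeos state folder (example path and size)
> sudo mount -t tmpfs -o size=128G tmpfs /var/lib/nodeos/data/state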

Leap v5.0.0 brings a new database mapping mode, mapped_private, as an alternative to the default mapped mode. Instead of constantly writing to disk as mapped mode does, mapped_private makes better use of memory and reduces disk IO. It does this by mapping the chainbase database into memory using private mappings, which means that any chainbase data accessed during execution remains in memory and is not written back to the shared_memory.bin disk file during operation.

If this sounds familiar, it is: mapped_private is an excellent replacement for the tmpfs strategy. There is no need to mount a tmpfs partition, and since the in-memory chainbase data is written to disk on exit, there is no need to restart from a snapshot after a reboot.

mapped_private configuration

Configuring mapped_private simply requires adding the following to config.ini:

> nano config.ini
database-map-mode = mapped_private

In order to start, a node running mapped_private requires enough memory to back the private mapping configured with chain-state-db-size-mb = ; physical RAM can be supplemented with SWAP, which allows oversubscription.

At the time of writing, 32GB physical RAM and 128GB SWAP is sufficient to run an EOS mainnet node.
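Putting this together, a hypothetical mapped_private setup with oversubscribed memory could look like this; the mapping size and swap file layout are examples only:

> nano config.ini
database-map-mode = mapped_private
# Size of the private mapping in MB (128GB in this example)
chain-state-db-size-mb = 131072

# Create and enable a 128GB swap file to back the oversubscription
> sudo fallocate -l 128G /swapfile
> sudo chmod 600 /swapfile
> sudo mkswap /swapfile
> sudo swapon /swapfile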

mapped_private operations and results

When a mapped_private node starts for the first time, assuming it starts from a snapshot, the entire chainbase database is loaded into memory (RAM and SWAP), which may take some time.

CPU and memory utilization when starting mapped_private mode for the first time
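For reference, that first start from a snapshot might be invoked as follows; the directories and snapshot file name are illustrative:

# First start in mapped_private mode, restoring state from a snapshot
> nodeos --data-dir data --config-dir config --snapshot snapshots/snapshot-2024-01-22.bin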

On node exit, the in-memory chainbase database is written to disk, which may take some time depending on its size.

Subsequent starts are faster, require no snapshot, and only load the data needed for execution into memory, showing much lower utilization.

CPU and memory utilization of mapped_private mode second start

Subsequent Nodeos exits will also be faster, depending on how long the node has been running, because mapped_private tracks dirty pages and only writes them out on exit.
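Because a long-running node may have many dirty pages to flush, make sure your process manager gives nodeos enough time to exit cleanly. If nodeos runs under systemd, a hypothetical override might be:

> sudo systemctl edit nodeos
# Allow up to 10 minutes for the chainbase to be flushed on stop
[Service]
TimeoutStopSec=600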

Memory utilization is also slightly improved compared to mapped mode.

CPU and memory utilization in mapped mode

Besides RAM oversubscription and lower utilization, the real value of mapped_private, and the reason EOSphere originally started using this mode, is that disk IO is much lower.

Performance requirements mean operators need to place the state folder containing the chainbase database on a high-speed SSD. SSDs carry a manufacturer-assigned endurance rating stating the maximum amount of data that can be written to the drive before failure is expected. This is typically measured in Terabytes Written (TBW): around 150-2000TBW on consumer disks and in the PB range on enterprise drives. Essentially, too many disk writes wear out an SSD and eventually cause it to fail.
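To see how much has already been written to a drive, smartmontools can be queried; the device name below is an example:

# Report lifetime data written for an NVMe drive (requires smartmontools)
> sudo smartctl -a /dev/nvme0n1 | grep -i "Data Units Written"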

Below is drive 1 disk IO (writes) for an example peer using mapped mode, with the network seeing 10-15 transactions per second (TPS).

Drive 1 disk IO (writes) using mapped mode

And this is the drive 1 disk IO (writes) for the same example peer using mapped_private mode, with the network seeing the same 10-15 TPS.

Drive 1 disk IO (writes) using mapped_private mode

This shows that using mapped_private dramatically reduces the volume of writes.

From about 4 megabytes (MB) per second down to about 12 kilobytes (KB) per second; roughly 120TBW/year reduced to 0.378TBW/year.
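As a sanity check, those yearly figures follow directly from the per-second rates (this assumes bc is installed):

# mapped mode: ~4 MB/s of writes, expressed in TB per year
> echo "4 * 86400 * 365 / 10^6" | bc            # prints 126, in line with ~120TBW/year
# mapped_private mode: ~12 KB/s of writes, expressed in TB per year
> echo "scale=3; 12 * 86400 * 365 / 10^9" | bc  # prints .378, i.e. 0.378TBW/year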

This means SSDs last longer, virtual environments scale better, and cloud environments are not constrained by IO limits.

In summary, Antelope Leap v5.0.0 delivers lower CPU utilization, more efficient memory usage, and, when using mapped_private, much lower and easier-to-manage disk IO.

Please be sure to ask any questions in the EOSphere Telegram and EOS Global Telegram.

This guest post was written by Ross Dold of EOSphere. EOSphere is a Block Producer and infrastructure provider for the EOS mainnet and other Antelope-based blockchains. Learn more about their work at EOSphere.io and the links below.

About the EOS Network

The EOS Network is a 3rd generation blockchain platform powered by the EOS VM, a low-latency, highly performant, and extensible WebAssembly engine for deterministic execution of near feeless transactions, purpose-built for enabling optimal Web3 user and developer experiences. EOS is the flagship blockchain and financial center of the Antelope framework, serving as the driver of multi-chain collaboration and public goods funding for tools and infrastructure through the EOS Network Foundation (ENF).

About the EOS Network Foundation

The EOS Network Foundation (ENF) was forged through a vision for a prosperous, decentralized future for the EOS ecosystem. Through key stakeholder engagement, support for community initiatives, ecosystem funding, and advocacy of an open technology ecosystem, the ENF is transforming Web3. Founded in 2021, the ENF is the hub of the EOS Network, a leading open source platform with a suite of stable frameworks, tools, and libraries for blockchain deployments. Together, we are bringing innovations that the community builds, and we are committed to a stronger future for all.