The official launch of the Chainbase AVS mainnet deepens the connection between the different layers of Chainbase, brings validation and processing capabilities to the Chainbase data network, and marks a new opportunity for the full launch of its data system.
On December 4th, the chain-native Web3 AI data network Chainbase officially launched the Chainbase AVS mainnet and released the list of the first batch of 20 AVS node operators.
Chainbase AVS is the first application-focused AVS to reach mainnet within the EigenLayer AVS ecosystem. Chainbase adopts a four-layer network architecture in which the AVS, sitting in the execution layer, ensures the scalability and security of data processing through its integration with EigenLayer.
The execution layer combines the Chainbase Virtual Machine (CVM) with the Chainbase Manuscripts data framework to execute complex workflows seamlessly, providing strong support for large-scale AI data processing.
Since Chainbase launched the AVS testnet in July, the network has attracted over 2,100 registered node operators and over 600,000 ETH in stake, making it one of the most significant data stacks, serving more than 600 million API calls daily. Its earlier Chainbase Genesis Odyssey event connected 31,161,249 wallets, a clear sign of market recognition for Chainbase.
The release of the AVS mainnet not only marks a further step in building out the Chainbase full-chain Web3 data network ecosystem but also lays an important foundation for the launch of the Chainbase mainnet itself.
Meanwhile, as the first mainnet AVS in the EigenLayer ecosystem centered on data intelligence applications, Chainbase AVS both expands the application scope of the EigenLayer AVS system and lays the groundwork for deeper integration of data-related sectors with EigenLayer.

Chainbase, the chain-native, distributed, full-chain data network: built for the Web3 and AI economy
AI technology, which has seen explosive growth in recent years, is enabling a leap in human productivity and efficiency.
Computing power, algorithms, and data are the three driving forces of AI development. Computing power is no longer scarce, thanks to continually upgraded hardware and the emergence of large-scale computing clusters, and algorithms keep improving through innovation. Usable, uniformly formatted data for AI training, however, is becoming increasingly scarce. For example:
Due to a lack of sufficient open data and restrictions on data access permissions, developers and researchers find it increasingly difficult to obtain the datasets they need, which directly impacts the development and application of AI models.
Captured data may be incomplete or contaminated, and it is hard to guarantee its accuracy, completeness, and freedom from tampering; such data increasingly misleads AI models into producing inaccurate or unreliable results.
Even high-quality data often lacks a uniform format and requires preprocessing; without it, the data is hard to use directly for AI model training, which significantly increases the workload and cost of training.
In this context, Chainbase is building a Web3-native solution to the accessibility, integrity, and usability problems that AI development faces, ensuring that AI has a supply of high-quality, structured, and trustworthy data. Through an advanced technical architecture and an open ecosystem, it redefines how data is acquired, managed, and utilized, provides comprehensive and efficient tooling for developers and users, and rewards the many contributors who participate in this knowledge economy.
Transparent, open, and collaborative Web3 data ecosystem
As mentioned above, the data issues faced in the AI field can generally be summarized as accessibility, integrity, and usability.
An ideal Web3 data system must guarantee openness and permissionless access while also ensuring that data is standardized, high quality, and comprehensive. The Chainbase network takes decentralization as a fundamental principle, ensuring that data is not controlled or processed by any single entity, and establishes four core principles: openness, collaboration, incentive mechanisms, and AI readiness.
Chainbase has built a distributed, blockchain-based data ecosystem aimed at Web3. It allows any data requester to retrieve specific data from the network and exposes that data to developers and researchers in an open-source manner, providing the accessibility that AI data acquisition requires.
At the same time, Chainbase is building a system centered on collaboration and incentives to supply comprehensive, high-quality, uniformly standardized datasets, ensuring data integrity and availability while preventing any single entity from controlling or processing the data.
Chainbase allows users to become data providers within the network and to standardize and process data through Manuscripts, which simplify data formatting and standardization so that AI systems can seamlessly access the data they need. Data providers can also contribute to the network's data decoding by sharing and verifying data or by providing computing power. These contributions are rewarded in $C tokens, incentivizing more users to participate in data provision and sharing.
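To make the idea concrete, the sketch below shows what a Manuscript-style standardization step could look like in Python. The schema, field names, and the manuscript_transform helper are illustrative assumptions made for this article, not Chainbase's published interface:

```python
from dataclasses import dataclass, asdict
import json

# All field names below are illustrative assumptions, not Chainbase's
# published schema; the transform mirrors what a Manuscript step does:
# map raw, inconsistently encoded chain data into one uniform record.
@dataclass
class StandardizedTransfer:
    chain: str
    tx_hash: str
    sender: str
    recipient: str
    value_wei: int

def _to_int(v) -> int:
    # Raw RPC payloads often encode numbers as hex strings.
    return int(v, 16) if isinstance(v, str) else int(v)

def manuscript_transform(raw_tx: dict) -> StandardizedTransfer:
    """A Manuscript-like step: map one raw transaction to a uniform schema."""
    return StandardizedTransfer(
        chain=raw_tx.get("chain", "ethereum"),
        tx_hash=raw_tx["hash"],
        sender=raw_tx["from"],
        recipient=raw_tx["to"],
        value_wei=_to_int(raw_tx["value"]),
    )

raw = {"hash": "0xabc...", "from": "0x1111...", "to": "0x2222...",
       "value": "0xde0b6b3a7640000"}  # 1 ETH in wei, hex-encoded
print(json.dumps(asdict(manuscript_transform(raw))))  # one AI-ready JSON line
```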
Four-Layer Architecture Design
To keep the system running efficiently, Chainbase has designed a four-layer architecture consisting of a data accessibility layer, a consensus layer, an execution layer, and a co-processing layer. Each layer encapsulates distinct functions and roles while integrating seamlessly with the others. On top of this layered architecture, Chainbase can build a decentralized environment that combines collaborative knowledge sharing, powerful execution, consensus-driven data validation, and high-quality data accessibility.
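As a conceptual illustration only, the following Python sketch traces how data might flow through the four layers; every class and method name here is an assumption made for the example, not Chainbase's actual interface:

```python
# Conceptual sketch of the four-layer flow described above. Names are
# illustrative assumptions, not Chainbase's real components.
class DataAccessibilityLayer:
    def fetch(self, query: str) -> list[dict]:
        """Collect raw on-chain/off-chain records for a query."""
        return [{"tx": "0xabc", "value": 1}]  # placeholder data

class ConsensusLayer:
    def validate(self, records: list[dict]) -> bool:
        """Stand-in for validator agreement on the data's state."""
        return all("tx" in r for r in records)

class ExecutionLayer:
    def execute(self, records: list[dict]) -> list[dict]:
        """Run a processing workflow (a Manuscript) over validated data."""
        return [dict(r, normalized=True) for r in records]

class CoProcessingLayer:
    def publish(self, records: list[dict]) -> None:
        """Expose processed, AI-ready data to consumers."""
        print(f"published {len(records)} records")

def pipeline(query: str) -> None:
    raw = DataAccessibilityLayer().fetch(query)
    if ConsensusLayer().validate(raw):
        CoProcessingLayer().publish(ExecutionLayer().execute(raw))

pipeline("transfers since block 19000000")
```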
Data Accessibility Layer
The data accessibility layer is the source of Chainbase's data and the foundation of the network. It is responsible for collecting, verifying, and storing on-chain and off-chain data, serving the data processing needs of Manuscripts.
This layer covers not only on-chain data, including transaction history, staking information, and metadata, but also off-chain data held in decentralized storage systems, which addresses scalability and privacy concerns and makes the layer suitable for storing large raw datasets, program code, and complex AI models.
The layer sources information through a decentralized network to keep data sources dispersed and diverse, preventing control or manipulation by any single entity. It also uses cryptographic techniques such as zero-knowledge proofs (ZKPs) to verify data provenance while protecting sensitive information, and relies on consensus mechanisms to establish data credibility before permanent storage, laying an early foundation for data compliance and integrity.
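A full ZKP pipeline is beyond the scope of a short example, but the sketch below illustrates the simpler underlying idea of checking a data commitment before permanent storage. It is a minimal stand-in, assuming a plain SHA-256 content hash rather than Chainbase's actual verification scheme:

```python
import hashlib
import json

def commitment(record: dict) -> str:
    """Deterministic content hash of a record (a much simpler stand-in for
    the ZKP-based source verification described above)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_before_storage(record: dict, claimed: str) -> bool:
    """Accept a record for permanent storage only if its hash matches the
    commitment published by the data provider."""
    return commitment(record) == claimed

record = {"tx": "0xabc", "block": 19000000, "value": 42}
claimed = commitment(record)          # provider publishes this with the data
assert verify_before_storage(record, claimed)
record["value"] = 43                  # any tampering breaks the check
assert not verify_before_storage(record, claimed)
```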
Consensus Layer
As a distributed on-chain ecosystem, Chainbase needs to ensure that data is complete, secure, and trustworthy, and that all transactions and data states are validated and recognized by network participants.
Chainbase's consensus is built on CometBFT, giving it strong resistance to network failures and malicious attacks along with rapid finality, so data updates are reflected instantly. Chainbase also introduces economic security into the system through a DPoS consensus mechanism that ties stake to validators.
Under this mechanism, validators maintain the integrity of the blockchain, verify data operations, and ensure consistency by staking $C tokens, while delegators can stake $C with trusted validators to grow the network's total stake and strengthen the system's economic resilience.
Validators receive rewards for their key role in ensuring the accuracy and stability of the network. The consensus layer maintains a strong and trustworthy decentralized framework by aligning economic incentives with network security.
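The arithmetic behind delegation is straightforward. The following sketch, with entirely made-up validators and stake amounts, shows how delegated $C adds to a validator's voting power under a generic DPoS model, not Chainbase's exact parameters:

```python
from collections import defaultdict

# Illustrative DPoS accounting: voting power = a validator's own stake plus
# all $C delegated to it. Names and amounts are invented for the example.
own_stake = {"validator-a": 50_000, "validator-b": 30_000}
delegations = [("alice", "validator-a", 10_000),
               ("bob", "validator-a", 5_000),
               ("carol", "validator-b", 40_000)]

def voting_power(own: dict, delegs: list) -> dict:
    power = defaultdict(int, own)
    for _, validator, amount in delegs:
        power[validator] += amount
    return dict(power)

power = voting_power(own_stake, delegations)
total = sum(power.values())
for v, p in sorted(power.items(), key=lambda kv: -kv[1]):
    print(f"{v}: {p} $C ({p / total:.0%} of voting power)")
# validator-b overtakes validator-a: delegation directly shifts weight,
# which is how delegators strengthen the validators they trust.
```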
Execution Layer
The execution layer is the computational core of Chainbase. It executes Manuscripts and manages large-scale data processing tasks, ensuring that Manuscripts run efficiently, securely, and at scale so that developers can execute complex AI tasks with high performance and reliability.
At its core sits the Chainbase Virtual Machine (CVM), a customized virtual environment optimized for processing Manuscripts and executing data workflows. Its parallel architecture supports both data parallelism, processing different partitions of a dataset simultaneously, and task parallelism, running multiple tasks concurrently, thereby optimizing resource utilization.
The CVM also lets node operators contribute computing resources, smoothing network operation; the system rewards contributing nodes according to the workload they handle and the performance they deliver.
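The following minimal sketch illustrates the general data-parallel pattern the CVM is described as using, partitioning a dataset and processing the partitions concurrently. It uses only Python's standard library and is not Chainbase code:

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk: list[int]) -> int:
    """Stand-in for a Manuscript step applied to one dataset partition."""
    return sum(x * x for x in chunk)

def parallel_process(dataset: list[int], workers: int = 4) -> int:
    """Split the dataset, process partitions concurrently across worker
    processes, then combine the partial results."""
    size = max(1, len(dataset) // workers)
    chunks = [dataset[i:i + size] for i in range(0, len(dataset), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(parallel_process(list(range(1_000_000))))
```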
With the launch of the Chainbase AVS mainnet, the execution layer officially goes live on EigenLayer and can operate in concert with Chainbase AVS.

Co-Processing Layer
The top layer of the Chainbase network is the co-processing layer, where users with data processing expertise and AI knowledge contribute and collaborate. Its core concept, 'Manuscripts', refers to programmable scripts used to define and execute data processing tasks. Developers use Manuscripts to standardize data formats and processing flows, transforming raw data into a unified format usable by AI.
Manuscripts are more than a data format; they can also be treated as tradable resources. Contributors can assetize their work by compiling it into Manuscripts, letting that work circulate within the network and thereby building a creator economy. Within this economic system, the $C token serves as the medium for payment, settlement, staking, and governance, playing a key role in this layer.
The co-processing layer not only promotes collaborative knowledge sharing but also establishes a transparent economic system that compensates contributors fairly, encourages innovation, and enhances the overall utility of the network. It also lays the groundwork for Chainbase to build a comprehensive, complete, and usable data system.
Chainbase AVS Mainnet: A New Opportunity for the Comprehensive Launch of the Data System
The official launch of the Chainbase AVS mainnet deepens the connection between the different layers of Chainbase, bringing validation and processing capabilities to the Chainbase data network.
With a range of powerful features, it marks the start of Chainbase's effort to empower developers and deliver seamless data processing for decentralized applications and AI-driven solutions, and a new opportunity for the full launch of the data system.
With the AVS mainnet live, developers can use Manuscripts to convert raw blockchain data into standardized, AI-ready formats and, through integrations with advanced models such as Google Gemini and OpenAI's models, build AI-ready workflows. This simplifies the preparation and use of blockchain data and provides efficient support for advanced applications and AI inference.
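As a rough illustration of what "AI-ready" can mean in practice, the sketch below flattens raw transactions into uniform JSON lines and builds a prompt that could be sent to any chat-completion model. The schema is an assumption made for this example, not Chainbase's published format:

```python
import json

def to_ai_ready(raw_txs: list[dict]) -> str:
    """Flatten raw transactions into uniform JSONL that a language model
    (e.g. Gemini or an OpenAI model) can consume. The field names are
    illustrative assumptions, not Chainbase's published schema."""
    lines = []
    for tx in raw_txs:
        lines.append(json.dumps({
            "hash": tx["hash"],
            "from": tx["from"],
            "to": tx["to"],
            "value_eth": int(tx["value"], 16) / 1e18,  # hex wei -> ETH
        }, sort_keys=True))
    return "\n".join(lines)

raw = [{"hash": "0xabc", "from": "0x1111", "to": "0x2222",
        "value": "0xde0b6b3a7640000"}]  # 1 ETH, hex-encoded wei
dataset = to_ai_ready(raw)
prompt = f"Summarize notable transfers in this dataset:\n{dataset}"
print(prompt)  # this string could be passed to any chat-completion API
```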
Chainbase has also officially introduced the $C token as part of its incentive system: manuscript creators, data providers, network operators, and validators are rewarded for their contributions to the network. This builds toward an AGI economy and explores new income models, ensuring the generation of high-quality data and encouraging active participation from all stakeholders.
In addition, with the execution layer live on EigenLayer and working in concert with Chainbase AVS, node operators can leverage the decentralized architecture that EigenLayer supports to both strengthen network security and demonstrate their capacity for complex workloads such as real-time AI data processing.
With the CVM at the core of the data processing workflow, developers can execute complex Manuscripts seamlessly in a reliable, scalable environment and efficiently manage the data-intensive tasks that large-scale AI data processing demands.
From another perspective, Chainbase AVS plays a crucial role in expanding the application scenarios of the EigenLayer ecosystem.
As the first mainnet AVS in the EigenLayer ecosystem focused on data intelligence use cases, Chainbase AVS aims to optimize data standardization and AI readiness through decentralized data processing and AI applications. This not only expands the application scope of the EigenLayer AVS ecosystem but also lays the foundation for deeper integration of data-related industry sectors with EigenLayer.
At the same time, Chainbase AVS drives deep integration between the EigenLayer system and the AI sector, extending EigenLayer's applications from inside the crypto industry to beyond it.
It is reported that in the phase following the AVS mainnet launch, Chainbase plans to further expand its data sources, incorporating real-time IoT data streams and other datasets. This will improve data accessibility for developers, broaden the applicability and practicality of the data infrastructure across industries, and position Chainbase as an important cornerstone of the field's development.