According to BlockBeats, on September 17, during the R3al World DePIN Summit at Token2049, Rock Zhang, founder of the DePIN & AI project Network3, announced a new local large language model (LLM) feature aimed at improving the efficiency and performance of edge AI.

Rock Zhang noted that while edge AI is deeply integrated into daily life, its complexity and importance are often overlooked. Smartphones, for instance, complete data-processing tasks overnight, showing how edge devices can use idle resources for efficient computation. Network3's new local LLM feature is designed to harness this idle capacity, running inference and processing locally rather than relying on cloud computing. This reduces bandwidth consumption and strengthens data security and privacy protection.

Network3 plans to support edge AI by aggregating idle edge-device resources worldwide. The local LLM feature will let users access AI chat services on mobile devices without depending on expensive cloud computing infrastructure. Users will also be able to earn tokens through interactions with the model and customize the underlying algorithms to their own needs for a more personalized experience. Network3 said a beta version is scheduled for release in October and will be available to download from the official website.

Network3 is also building an AI Layer2 to help AI developers worldwide run large-scale inference, training, and model validation efficiently and economically. The project previously raised $5.5 million across pre-seed and seed rounds, and its next round, with participation from several leading institutions, is already underway.