A new network of supercomputers could lead to general AI, scientists hope, with the first node online within weeks

Scientists hope to accelerate the development of human-level AI by using a network of powerful supercomputers, with the first of these machines fully operational by 2025.

Researchers plan to accelerate the development of artificial general intelligence (AGI) with a global network of extremely powerful computers — starting with a new supercomputer that will go live in September.

Artificial intelligence (AI) encompasses technologies such as machine learning and generative AI systems like GPT-4. The latter generate their output by predicting what comes next, based on patterns learned from very large training datasets, and within a domain covered by that training data they can often outperform humans. However, they fall short on broader cognitive and reasoning tasks and cannot be applied to every discipline.
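To make “predicting from learned data” concrete, here is a minimal, hedged Python sketch of next-word prediction: it counts which word follows which in a toy corpus and predicts the most frequent continuation. The corpus and function names are illustrative assumptions; real systems like GPT-4 use neural networks over subword tokens, not frequency tables.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count how often each word follows each other word in the toy corpus."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model: dict, prev_word: str):
    """Return the continuation seen most often after prev_word during training."""
    followers = model.get(prev_word.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = "the network trains the model and the model predicts the next word"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # -> 'model', the most frequent follower in this toy corpus
```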

AGI, on the other hand, is a hypothetical future system that would surpass human intelligence across multiple disciplines, learning on its own and improving its decision-making as it gains access to more data.

The supercomputers, built by SingularityNET, will form a “multi-tier cognitive computing network” that will be used to host and train the architectures required for AGI, company representatives said in a statement.

These include elements of advanced AI systems such as deep neural networks, which loosely mimic the functioning of the human brain; large language models (LLMs), which are AI models trained on vast datasets of text; and multimodal systems that connect human inputs such as speech and motion to multimedia outputs, similar to what AI video generators produce.
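As a rough illustration of the “deep neural network” element mentioned above, the following Python sketch runs one forward pass through a tiny two-layer network with randomly initialized weights. It is a toy under stated assumptions (4 inputs, 8 hidden units, 3 outputs), not any architecture SingularityNET has described; real systems learn the weights from data via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Nonlinearity between layers; stacking many such layers is what makes a network 'deep'."""
    return np.maximum(0.0, x)

# Toy dimensions: 4 input features -> 8 hidden units -> 3 output scores.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x: np.ndarray) -> np.ndarray:
    """One forward pass; in practice W1/W2/b1/b2 would be learned, not random."""
    hidden = relu(x @ W1 + b1)
    return hidden @ W2 + b2

print(forward(rng.normal(size=4)))  # three raw output scores for a random input vector
```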

Building a new network of AI supercomputers

Commissioning of the first supercomputer will begin in September, and work will be completed by late 2024 or early 2025, depending on supplier delivery times, company officials said.

The modular supercomputer will feature advanced hardware components and infrastructure, including Nvidia L40S graphics processing units (GPUs), AMD Instinct accelerators and EPYC “Genoa” processors, Tenstorrent Wormhole server racks, Nvidia H200 GPUs, and Nvidia GB200 Blackwell systems. Together, they provide some of the most powerful AI hardware on the market.
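For a rough sense of why this class of hardware is required, here is a hedged back-of-the-envelope calculation in Python: merely storing the weights of a hypothetical 70-billion-parameter model at 16-bit precision takes on the order of 140 GB, before counting activations, gradients, or optimizer state. The model size is an illustrative assumption, not a figure from SingularityNET.

```python
# Back-of-the-envelope estimate of memory needed just to hold a model's weights.
params = 70e9          # hypothetical 70-billion-parameter model (illustrative assumption)
bytes_per_param = 2    # 16-bit (FP16/BF16) precision
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB for weights alone")  # ~140 GB, before activations,
# gradients, or optimizer state, hence multi-GPU, multi-node training.
```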

“This supercomputer will be a major step forward in the transition to AGI. While the new neural-symbolic AI approaches developed by the SingularityNET AI team somewhat reduce the data, processing, and energy requirements compared to standard deep neural networks, we still need significant supercomputing facilities,” SingularityNET CEO Ben Goertzel told LiveScience in a written statement.

“The mission of the computing machine we are creating is to ensure a phase transition from learning on big data and subsequent reproduction of the contexts of the neural network’s semantic memory, to non-imitative machine thinking based on multi-stage reasoning algorithms and dynamic modeling of the world based on cross-domain pattern matching and iterative knowledge distillation. Before our eyes, a paradigm shift is occurring towards continuous learning, transparent generalization, and reflexive self-modification of AI.”
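One of the techniques named in the quote, knowledge distillation, has a well-known standard form (Hinton et al.): a smaller “student” model is trained to match the softened output distribution of a larger “teacher.” The sketch below shows only that loss term with made-up logits; it is not SingularityNET's actual algorithm, only a minimal illustration of the idea.

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    z = logits / temperature
    z = z - z.max()                      # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student output distributions."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's current predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

teacher = np.array([4.0, 1.0, 0.5])   # made-up logits from a large "teacher" model
student = np.array([2.0, 1.5, 0.5])   # made-up logits from a smaller "student" model
print(distillation_loss(teacher, student))  # lower value = student matches teacher more closely
```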

The Road to AI “Superintelligence”

The goal of SingularityNET is to provide access to data for the growth of AI, AGI, and a future artificial superintelligence, a hypothetical system far more cognitively advanced than any human being. To do this, Ben Goertzel and his team also needed a single software framework to manage the federated (distributed) computing cluster.

Federated compute clusters let user data stay where it is stored while exposing only the summary data needed for large-scale computations, which protects datasets containing highly sensitive elements such as personally identifiable information (PII).
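A minimal sketch of that idea, assuming the simplest possible “summary” (per-site sums and counts): each site computes an aggregate over its own records, and only those aggregates are combined centrally, so raw rows that might contain PII never leave the site. The function names and data are hypothetical, not SingularityNET's implementation.

```python
import numpy as np

def local_summary(private_records: np.ndarray):
    """Each site computes only aggregates; the raw records never leave the site."""
    return private_records.sum(axis=0), len(private_records)

def federated_mean(summaries):
    """A central coordinator combines per-site sums and counts into a global mean."""
    total = sum(s for s, _ in summaries)
    count = sum(n for _, n in summaries)
    return total / count

# Hypothetical per-site data; in practice these arrays would live on separate machines.
site_a = np.array([[1.0, 2.0], [3.0, 4.0]])
site_b = np.array([[5.0, 6.0]])
print(federated_mean([local_summary(site_a), local_summary(site_b)]))  # [3. 4.]
```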

“OpenCog Hyperon is an open source software framework designed specifically for architecting AI systems,” added Ben Goertzel. “This new hardware infrastructure is purpose-built to implement OpenCog Hyperon and its AGI ecosystem environment.”

To give users access to the supercomputer, Goertzel and his team use a tokenization system common in AI. Users purchase tokens that grant them access to the supercomputer; with those tokens, they can use and add data to the existing datasets that other users rely on to test and deploy general AI concepts.

At their simplest, these AI tokens work like the tokens in old standalone arcade games: players had to buy tokens and insert them into the machine to get a certain number of plays. In this case, though, the data generated during use is accessible to all other users, not only in one arcade but wherever that instance of the game exists in other arcades around the world.
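To make the analogy concrete, here is a hedged Python sketch of a token-metered access scheme in that spirit: tokens buy compute runs, and whatever data a run contributes lands in a pool visible to every participant. The class, names, and rules are purely illustrative assumptions and do not describe SingularityNET's actual token mechanics. (These access tokens are also distinct from the text “tokens” an LLM is trained on, discussed next.)

```python
class ComputeAccess:
    """Illustrative token-metered access: tokens buy compute runs, contributed data is pooled."""

    def __init__(self):
        self.balances: dict[str, int] = {}
        self.shared_datasets: list[str] = []   # visible to all users, like shared arcade scores

    def buy_tokens(self, user: str, amount: int) -> None:
        self.balances[user] = self.balances.get(user, 0) + amount

    def run_job(self, user: str, contributed_data: str) -> bool:
        """Spend one token to run a job; the data the run produces joins the shared pool."""
        if self.balances.get(user, 0) < 1:
            return False
        self.balances[user] -= 1
        self.shared_datasets.append(contributed_data)
        return True

access = ComputeAccess()
access.buy_tokens("alice", 2)
access.run_job("alice", "alice_experiment_results")
print(access.shared_datasets)   # every participant can see: ['alice_experiment_results']
```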

“GPT-3 was trained on 300 billion tokens (typically words, parts of words, or punctuation marks) and GPT-4 on 13 trillion,” wrote Nabeel S. Qureshi, a researcher and software engineer at Mercatus. “Self-driving cars are trained on thousands of hours of video footage; GitHub Copilot, for programming, is trained on millions of lines of human code from the GitHub website.”
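The “tokens” in that quote are units of text, not access tokens. As a rough illustration, the sketch below splits a sentence into word and punctuation tokens and counts them; real GPT-style models use byte-pair-encoding subword tokenizers, so this is only an approximation of how text is broken into trainable units.

```python
import re

def rough_tokenize(text: str) -> list[str]:
    """Crude stand-in for a real subword tokenizer: split into words and punctuation."""
    return re.findall(r"\w+|[^\w\s]", text)

sample = "Supercomputers train models; models predict tokens."
tokens = rough_tokenize(sample)
print(len(tokens), tokens)
# 8 ['Supercomputers', 'train', 'models', ';', 'models', 'predict', 'tokens', '.']
```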

AI leaders, most notably DeepMind co-founder Shane Legg, have said such systems could match or surpass human intelligence by 2028. Goertzel has estimated that systems could reach that point as early as 2027, while Mark Zuckerberg is actively pursuing AGI, having announced in January an investment of around $10 billion in infrastructure for training advanced AI models.

SingularityNET, part of the Artificial Super Intelligence Alliance (ASI) — a collective of companies dedicated to open-source artificial intelligence research and development — plans to expand the network in the future and increase the available computing power. Other ASI members include Fetch.ai, which recently invested $100 million in a decentralized computing platform for developers.

https://www.livescience.com/technology/artificial-intelligence/new-supercomputing-network-lead-to-agi-1st-node-coming-within-weeks

#AGI

** The most powerful universal GPU on the market

The NVIDIA L40S GPU gives you industry-leading performance to tackle multiple workloads. Combining powerful AI compute capabilities with industry-leading graphics and multimedia acceleration, the L40S GPU is designed to support the next generation of data center workloads, from generative AI and large language model (LLM) inference and training to 3D graphics, rendering, and video. **

** Peak performance at any scale

AMD Instinct™ accelerators deliver industry-leading performance for data centers at any scale, from single-server solutions to the world's largest exascale supercomputers.

They are particularly well suited to handling the most demanding AI and HPC workloads, offering exceptional compute performance, high memory density, high bandwidth memory, and support for specialized data formats.**

#ASI #FET $FET