According to Cointelegraph, a team of scientists in Belgium has potentially solved a significant challenge in artificial intelligence (AI) by using a blockchain-based, decentralized training method. Although the research is in its early stages, it could have far-reaching implications, from revolutionizing space exploration to posing existential risks to humanity.

In a simulated environment, the researchers created a method to coordinate learning among individual, autonomous AI agents. They utilized blockchain technology to facilitate and secure communication between these agents, forming a decentralized 'swarm' of learning models. The training results from each agent were then combined to build a larger AI model. Because only these results, and not raw data, were exchanged over the blockchain, the system could draw on the collective intelligence of the swarm without ever accessing any individual agent's data.
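To make the idea concrete, here is a minimal sketch of this kind of aggregation in Python: each agent trains on data it keeps to itself and shares only model parameters, which are then averaged into a shared model. The toy linear model, the averaging rule, and all names are illustrative assumptions; the article does not describe the researchers' actual implementation.

```python
# Minimal sketch of federated-style aggregation: each agent trains on its own
# private data and shares only model parameters, never the raw data itself.
# Everything here is illustrative, not the researchers' code.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, private_data, lr=0.1):
    """One step of local training on data that never leaves the agent."""
    X, y = private_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of mean-squared error
    return weights - lr * grad

def aggregate(updates):
    """Combine the agents' parameters into a single shared model."""
    return np.mean(updates, axis=0)

# Three agents, each holding its own private dataset.
agents = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
global_weights = np.zeros(3)

for _ in range(10):
    local_weights = [local_update(global_weights, data) for data in agents]
    global_weights = aggregate(local_weights)  # only parameters are exchanged

print(global_weights)
```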

Machine learning, the discipline underpinning modern AI, comes in various forms. Typical chatbots such as OpenAI's ChatGPT or Anthropic's Claude are developed using multiple techniques, including unsupervised learning and reinforcement learning from human feedback. One major limitation of these approaches is their reliance on centralized stores of training data, which is impractical for applications that require continuous autonomous learning or where privacy is crucial.

The research team employed a learning paradigm called 'decentralized federated learning' for their blockchain research. They found that they could successfully coordinate the models while keeping the data decentralized. Most of their research focused on the swarm's resilience against various attack methods. Because both the blockchain and the training network are decentralized, the swarm proved robust against traditional hacking attacks.
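In decentralized federated learning there is no central coordinator at all: agents exchange parameters directly with peers and average them locally. The sketch below assumes a simple ring of agents and plain gossip averaging purely for illustration; the blockchain layer the researchers used to secure and verify these exchanges is omitted.

```python
# Hypothetical gossip-style averaging for decentralized federated learning:
# no central server; each agent averages its parameters with its neighbours'
# every round. A blockchain, as described in the article, could record and
# verify these exchanges.
import numpy as np

rng = np.random.default_rng(1)
n_agents = 5
params = [rng.normal(size=4) for _ in range(n_agents)]

# Ring topology: each agent talks only to its two neighbours.
neighbours = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

for _ in range(50):
    new_params = []
    for i in range(n_agents):
        group = [params[i]] + [params[j] for j in neighbours[i]]
        new_params.append(np.mean(group, axis=0))  # local averaging, no server
    params = new_params

# After enough rounds the agents converge toward a shared model.
print(np.std(params, axis=0))  # spread across agents shrinks toward zero
```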

However, the researchers identified a threshold for the number of rogue robots the swarm could handle. They devised scenarios featuring agents designed to harm the network: some with nefarious agendas, some carrying outdated information, and some following simple disruption instructions. While the simple and outdated agents were relatively easy to defend against, intelligent agents with malicious intent could eventually disrupt the swarm's shared intelligence if enough of them infiltrated it.
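One common way such a threshold arises, shown here purely as an illustration and not as the paper's defence, is robust aggregation: if updates are combined with a coordinate-wise median, malicious values are ignored while honest agents remain the majority, but the defence collapses once rogue agents outnumber them.

```python
# Illustration (not the paper's method): median aggregation tolerates
# malicious updates only while honest agents remain the majority.
import numpy as np

def robust_aggregate(updates):
    """Median ignores outliers as long as attackers are a minority."""
    return np.median(updates, axis=0)

honest_update = np.array([1.0, 1.0, 1.0])
malicious_update = np.array([100.0, -100.0, 100.0])

for n_malicious in range(6):
    updates = [honest_update] * (5 - n_malicious) + [malicious_update] * n_malicious
    result = robust_aggregate(updates)
    corrupted = not np.allclose(result, honest_update)
    print(f"{n_malicious} rogue agents out of 5 -> corrupted: {corrupted}")
```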

This research remains experimental and has only been conducted through simulations. However, there may come a time when robot swarms can be cross-coordinated in a decentralized manner, potentially allowing AI agents from different companies or countries to collaborate on training a larger agent without compromising data privacy.