Amid the rapid development of artificial intelligence (AI), research from Google DeepMind identifies the key ingredients needed to achieve artificial superintelligence (ASI).

In recent years, foundation models have made great progress and are widely used across many applications. However, building open-ended AI systems, capable of self-improvement and of continuously generating genuinely new knowledge, remains a major challenge. The paper by Edward Hughes and his co-authors argues that open-endedness is central to the development of ASI and examines how it might be achieved in today's AI systems.

The paper gives a formal definition of open-endedness based on two criteria: novelty and learnability. A system is considered open-ended, from the perspective of an observer, if it continuously generates artifacts that are novel to that observer yet still learnable, improving the observer's understanding and skills over time.
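One simple reading of these two criteria can be sketched in code. Everything below is illustrative rather than the paper's formal definition: the observer is reduced to a sequence of prediction losses on successive artifacts, and the threshold and window are arbitrary assumptions.

```python
# Hypothetical sketch: checking a system for open-endedness with respect
# to an observer, given the observer's prediction loss on each successive
# artifact the system produced. Names and thresholds are illustrative.

def is_open_ended(losses_over_time, novelty_floor=0.5, window=10):
    """Check both criteria over the loss sequence.

    novelty:      recent artifacts still surprise the observer
                  (losses stay above a floor).
    learnability: seeing more history helps the observer
                  (average loss trends downward).
    """
    early = losses_over_time[:window]
    recent = losses_over_time[-window:]
    novelty = min(recent) > novelty_floor
    learnability = sum(recent) / len(recent) < sum(early) / len(early)
    return novelty and learnability
```

A sequence of losses that declines but never collapses to zero would satisfy both criteria; a sequence that flatlines near zero fails novelty, since the system no longer surprises the observer.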

The paper gives several concrete examples from current AI systems to illustrate this concept. AlphaGo is a typical example of a system that is open-ended within a narrow domain: it surpassed the world's top Go players by discovering new, unpredictable strategies. However, its open-endedness is confined to the game of Go.

Another example is AdA, a learning agent trained in the 3D XLand2 environment with some 25 billion task variants. AdA accumulates complex and diverse skills, but its novelty tends to fade after a period of training. This suggests that sustaining open-endedness requires richer environments and more capable agents.

The paper also discusses evolutionary systems such as POET (Paired Open-Ended Trailblazer), in which agents and environments co-evolve. POET illustrates the "stepping stone" phenomenon, where agents come to solve very challenging environments through gradual co-evolution. However, such systems also stall when the environment space is not rich enough to sustain open-endedness.
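The co-evolutionary loop behind POET-style systems can be caricatured in a few lines. This is a toy sketch, not the published algorithm: environments and agents are reduced to single numbers, and the real method adds minimal-criterion checks and explicit agent-transfer tests between environment niches.

```python
import random

# Toy sketch of a POET-style co-evolution loop. An environment is a
# difficulty value and an agent is a skill value -- stand-ins for real
# parameterizations of terrain and policy.

def poet_sketch(steps=50, seed=0):
    rng = random.Random(seed)
    population = [(0.1, 0.0)]  # (env_difficulty, agent_skill) pairs
    for _ in range(steps):
        new_pop = []
        for env, agent in population:
            # optimize the agent within its paired environment
            agent = min(env, agent + 0.05)
            new_pop.append((env, agent))
            # occasionally mutate the environment into a harder variant,
            # seeding it with the current agent (a "stepping stone")
            if rng.random() < 0.2:
                new_pop.append((env + rng.uniform(0.0, 0.2), agent))
        # keep the population bounded, biased toward harder environments
        new_pop.sort(key=lambda pair: pair[0])
        population = new_pop[-8:]
    return population
```

The key idea the sketch preserves is that each harder environment starts from an agent already trained on an easier one, so difficulty ratchets upward in steps no single training run could jump directly.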

In addition, the paper argues that current foundation models do not meet the criteria of open-endedness when trained only on fixed datasets. These models may appear open-ended across broad domains, but when the scope is narrowed, they reveal limits in their ability to generate solutions that are both novel and correct.

The authors propose four main research directions for combining open-endedness with foundation models: reinforcement learning (RL), self-improvement, task generation, and evolutionary algorithms. Reinforcement learning has achieved notable success in narrow domains, and systems like Voyager have shown the potential for self-improvement by building a library of skills from a continuously generated curriculum of tasks. Evolutionary algorithms likewise offer a promising path toward open-ended systems, with the ability to apply meaningful mutations expressed in text.
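A Voyager-style self-improvement loop can be sketched abstractly. The function names and the toy proposer and executor below are assumptions for illustration; in the real system those roles are played by a language model that proposes tasks and writes code.

```python
# Illustrative sketch of a self-improvement loop with a growing skill
# library: propose a task, attempt it with the skills gathered so far,
# and store the solution on success for later reuse.

def self_improvement_loop(propose_task, attempt, iterations=10):
    skills = {}  # task name -> reusable solution (the "skill library")
    for _ in range(iterations):
        task = propose_task(skills)      # curriculum conditioned on progress
        success, solution = attempt(task, skills)
        if success:
            skills[task] = solution      # grow the library
    return skills

# Toy stand-ins for the proposer and executor:
def propose(skills):
    return f"task_{len(skills)}"         # next task in a simple curriculum

def attempt(task, skills):
    return True, f"solution_for_{task}"  # toy executor that always succeeds

library = self_improvement_loop(propose, attempt, iterations=3)
```

The loop's open-ended character comes from the coupling: each stored skill changes what the proposer asks for next, so the task stream is never fixed in advance.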

An important part of the paper discusses safety and responsibility in developing open-ended systems. Open-endedness brings safety risks, including goal misgeneralization and specification gaming. Ensuring that open-ended systems remain interpretable and under human control is essential; this requires that they can explain themselves and interact with humans in a clear and understandable way.

The authors conclude that current foundation models have made significant progress, but that moving toward ASI requires developing open-ended systems. Such systems could bring enormous benefits to society, including accelerating scientific and technological breakthroughs, enhancing human creativity, and expanding general knowledge across many fields.

Google DeepMind's paper opens a new direction in AI research, emphasizing the importance of open-endedness in achieving artificial superintelligence. Developing such systems responsibly will help ensure that they deliver maximum benefit to society while minimizing potential risks.