TL;DR Breakdown
OpenAI is raising awareness about the risks of AI superintelligence and forming a dedicated team to address these concerns.
The company emphasizes aligning superintelligence with human values and intentions and establishing new governance institutions.
OpenAI acknowledges that aligning AGI poses significant risks and may require a collective effort from humanity.
OpenAI‘s CEO, Sam Altman, has embarked on a global campaign to raise awareness about the potential dangers of AI superintelligence, where machines surpass human intelligence and could become uncontrollable.
In response to these concerns, OpenAI has recently announced the formation of a dedicated team tasked with developing methods to address the risks associated with superintelligence, which may emerge within this decade.
The company emphasizes that effectively managing superintelligence requires establishing new governance institutions and solving the critical challenge of aligning the superintelligence with human values and intentions.
OpenAI acknowledges that aligning AGI (Artificial General Intelligence) poses significant risks to humanity and may necessitate a collective effort from all of humanity, as stated in a blog post released last year.
Dubbed “Superalignment,” the newly formed team comprises top-tier researchers and engineers in machine learning. Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, the head of alignment, are guiding this endeavor.
To tackle the core technical challenges of superintelligence alignment, OpenAI has committed to dedicating 20% of its computational resources acquired thus far to the alignment problem. The company anticipates that within four years, it will resolve these challenges.
The primary objective of the superalignment team is to develop a human-level automated alignment researcher. This entails creating AI systems that can effectively align superintelligent AI systems, outperforming humans in speed and precision.
To achieve this milestone, the team will focus on developing a scalable training method that utilizes AI systems to evaluate other AI systems. They will validate their resulting model by automating the search for potentially problematic behavior. The alignment pipeline will also undergo rigorous stress testing by deliberately training misaligned models to gauge their detectability.
OpenAI’s efforts to address superintelligence risks mark a significant step forward in pursuing responsible and aligned AI development. By assembling a team of top researchers and committing substantial computational resources, the company demonstrates its commitment to proactively mitigating the potential risks associated with the advent of superintelligence. As they embark on this ambitious journey, OpenAI sets a precedent for collaboration and unity in safeguarding humanity’s future in the age of AI.