
Two weeks after the release of GPT-4, an open letter signed by Elon Musk and thousands of other technology-industry figures was published online. It called on all artificial intelligence laboratories to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
Apple co-founder Steve Wozniak, Turing Award winner Yoshua Bengio, Stuart Russell (co-author of the AI textbook "Artificial Intelligence: A Modern Approach"), and other well-known technology figures signed the letter.
The open letter argues that "advanced AI may mean profound changes in the history of life on Earth, and we should devote commensurate attention and resources to its planning and management," but that artificial intelligence has instead fallen into an out-of-control race to develop systems that their own creators cannot understand, predict, or reliably control.
The open letter also mentioned that contemporary AI systems are now becoming competitive with humans in general tasks, and "powerful AI systems should only be developed when we are confident that their effects are positive and their risks are controllable."
According to the open letter, there should be a six-month moratorium on the training of AI systems more powerful than GPT-4. During this moratorium, AI labs and independent experts should jointly develop and implement a set of shared safety protocols for advanced AI design and development, which will be strictly audited and supervised by independent external experts.
These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a pause on AI development in general, only a step back from the dangerous race toward ever-larger, unpredictable black-box models.
The letter was posted on the website of the Future of Life Institute, which is primarily funded by the Musk Foundation, the effective altruism organization Founders Pledge and the Silicon Valley Community Foundation. The institute's mission is to guide transformative technologies away from extreme, large-scale risks and toward benefiting life.
As early as 2017, the Future of Life Institute convened more than 100 thought leaders and researchers in economics, law, ethics, and philosophy in California to discuss and formulate principles for beneficial artificial intelligence. The result was the 23 Asilomar AI Principles, which are regarded as important guidelines for AI governance. The first of them states that the goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
Most of the technology workers who signed the open letter worry that AI is developing too fast while regulation and law have not kept up, and that even its inventors lack effective means of control. Under such conditions, unrestricted use of AI is likely to create systemic risks when problems arise.
According to Time magazine, Siméon Campos, CEO of the AI safety startup SaferAI and a signatory of the open letter, said that the creators of AI systems do not know exactly how they work or what they can do, and therefore cannot manage their risks; even developers with the resources to act do not yet know how to constrain AI behavior.
"What are we doing right now?" Campos said. "We are moving at full speed to scale these systems to unprecedented levels of capability and transformative impacts on society. We have to slow down the development of these systems and let society adapt."
Musk's worry: AI can't be turned off
Musk has long been wary of AI. As early as 2014, he said publicly in an interview that we should be very careful with artificial intelligence, and that if he had to guess the biggest threat to human survival, it might be AI. "With artificial intelligence, we are summoning the demon," Musk said.
He also said, “I am increasingly inclined to think that there should be some regulatory oversight, perhaps at the national and international level, just to make sure that we don’t do something really stupid.”
Musk has not merely talked about AI safety. In early 2015, he donated $7 million to the Future of Life Institute to fund research on how AI can benefit humanity. Most of the grantees work on AI ethics, governance, and safety, and many of them, such as Stuart Russell, signed the open letter.
As an early investor in DeepMind, Musk's attitude toward AI may seem contradictory. But he has claimed that his investment was not about returns; he simply wanted to keep watch on the development of artificial intelligence because it could lead to dangerous results. There are some terrible possible outcomes, he said, and we should work hard to ensure that the results are good, not bad.
The bad outcome is not the movie scenario of robots rebelling and destroying humanity. Experts worry less about machine consciousness than about machine capability: AI is highly effective at pursuing its goals, but if it goes out of control or is deployed outside its intended scenario, it can cause irreversible damage.
Another important signatory, Turing Award winner Yoshua Bengio, known as a "godfather of AI," has long been concerned about the abuse of artificial intelligence. In 2018, Bengio was one of thousands of AI researchers who signed a pledge against the development of AI weapons, calling on governments to regulate AI research and stop the development of lethal autonomous robots.
AI weapons are an extreme example of machines being abused. They are more efficient than traditional weapons on the battlefield, but once out of control, the problems they cause are also more serious. To avoid being disabled by simple interference in combat, current AI weapons are often designed to be difficult to shut down directly. Imagine the tragic consequences if an efficient AI weapon programmed to attack humans strayed into a civilian residential area.
In fact, how to turn off an AI is a problem that worries many of the prominent signatories.
But if AI is deployed too quickly, shutting it down may become costly and difficult. Musk once imagined the situation: if a single algorithm in a system gets out of control, its managers can find and fix it; but if the whole system is managed by a large AI, we may be unable to locate the failure, or may lack the authority to halt the entire AI. Even simple maintenance becomes a problem.
This is why AI needs more regulation and more democracy in technology, rather than concentration in the hands of big companies.
In order to prevent the power of AI from being concentrated in the hands of large companies, especially Google DeepMind, Musk and Sam Altman co-founded OpenAI, with the aim of democratizing the power of artificial intelligence and reducing the possibility of AI power being monopolized.
This is somewhat ironic, as GPT-4, created by OpenAI, is exactly what Musk is now actively opposing. Just a few days ago, Musk was still engaged in a verbal battle with OpenAI CEO Sam Altman.
Stop, "Let's align"
In addition to the concerns of these bigwigs, the fear of AI has a long history among the general public.
It ranges from the fear of HAL 9000, the superintelligent computer that gains consciousness in the science-fiction film "2001: A Space Odyssey," to the more practical fear of the chaos and harm that an AI-controlled world might bring.
One work cited several times in the open letter is the best-selling book "The Alignment Problem," which has been highly praised by many AI leaders, including Microsoft executives. A co-founder of the Future of Life Institute (FLI), which published the open letter, publicly praised the book as "full of amazing discoveries, unexpected obstacles, ingenious solutions, and increasingly difficult questions about the nature of our species." The central problem the book discusses is AI ethics.
One of the simplest thought experiments in AI ethics is the paper-clip problem. If a machine is instructed to make as many paper clips as possible, it will tirelessly exhaust all the resources on Earth making paper clips, because nothing in its objective tells it to stop; all it can do is execute the task. Over humanity's long history, a large body of moral norms has solidified in our culture and taken root in everyone's mind, norms we sometimes follow without even noticing. A machine has undergone no such evolution. Like a child born with superpowers, it is itself a risk.
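The thought experiment can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not a real agent: the point is that the objective itself contains no reason to stop, so any restraint has to be added as an explicit side constraint.

```python
# Toy sketch of the paper-clip problem (hypothetical numbers, not a real agent):
# the objective "make as many clips as possible" contains no reason to stop,
# so the optimizer runs until resources are gone unless a constraint is added.

def maximize_clips(resources, constraint=None):
    """Greedily convert resources into clips until resources run out
    or an explicit constraint says to stop."""
    clips = 0
    while resources > 0 and (constraint is None or constraint(clips, resources)):
        resources -= 1
        clips += 1
    return clips, resources

# The naive objective exhausts everything.
print(maximize_clips(1000))                                   # (1000, 0)

# A side constraint ("always keep more than half the resources") halts it early.
print(maximize_clips(1000, constraint=lambda c, r: r > 500))  # (500, 500)
```

The asymmetry is the point: consuming everything is the default behavior, and every human value the designers forget to encode as a constraint simply does not exist for the machine.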
Therefore Brian Christian, the best-selling author of "The Alignment Problem" and currently a visiting scholar at the University of California, Berkeley, argues that the values of AI must be aligned with human values in a detailed, comprehensive, and ongoing way. In that sense, the open letter reads like a department of a large company asking employees to stop work and gather in the conference room to "align."
Of course, in addition to these seemingly distant issues, "The Alignment Problem" also discusses problems that are happening right now.
One US state used a computer program to score the probability that offenders would commit crimes again, and used the score to decide bail, parole, and bail amounts. But the system behaved strangely: Borden, who is Black, was rated high risk, while Prater, who is white, was rated low risk. A two-year follow-up found that Borden was never charged with any crime in those two years, while Prater was sentenced to eight years in prison for robbery and theft. Clearly, human racial discrimination had been transplanted into the AI through big data; the key problem is that everyone assumes the algorithm is a blank slate and therefore fair. The book also discusses gender discrimination in hiring: no one explicitly programmed discrimination into the AI, yet under big data it arose naturally.
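The mechanism, bias entering through the training labels rather than through any explicit rule, can be sketched with a synthetic toy dataset. The group names, rates, and data below are all made up for illustration and have nothing to do with the real case:

```python
import random

random.seed(0)

# Hypothetical synthetic data (not any real recidivism dataset): both groups
# have the same true reoffense rate, but historical records over-flag group "A".
def make_records(n=20000):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        reoffends = random.random() < 0.30             # identical true rate
        over_flagged = group == "A" and random.random() < 0.20
        records.append((group, reoffends or over_flagged))
    return records

records = make_records()

# A naive "risk model" that simply learns each group's historical flag rate.
def learned_risk(group):
    flags = [flagged for g, flagged in records if g == group]
    return sum(flags) / len(flags)

print(f"A: {learned_risk('A'):.2f}, B: {learned_risk('B'):.2f}")
# The model scores group A as markedly higher risk even though the true
# behavior of the two groups is identical: the bias came from the labels.
```

Nothing in this code mentions race or sets out to discriminate; the model faithfully reproduces whatever distortion the historical records contain, which is exactly the failure mode the book describes.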
Artificial intelligence built on big data may thus intensify discrimination and inequality that our communities already widely ignore. Once such problems enter a system unnoticed, or are dismissed as insignificant, they become solidified, a deeply unfortunate outcome.
Hence, in readers' comments on "The Alignment Problem," many circle back to people: AI magnifies existing human problems, and, most importantly, into whose hands will AI be placed?
There is no doubt that AI will bring a significant increase in productivity. But like the last information revolution, this powerful tool will further concentrate wealth: from 1980 to today, the income share of the top 1% has risen from 10% to nearly 20%, while the income share of the bottom 50% has fallen from 20% to 12%. It is hard to imagine what these figures will look like 40 years after the launch of GPT, given that this is an even more capital-friendly productivity tool.
As early as February this year, Daron Acemoglu, an MIT professor who has long studied the economics of artificial intelligence, wrote an article describing the following scenario: companies dismiss their human customer service staff, large numbers of people lose their jobs, and consumers are left with an eloquent robot agent that cannot actually solve any problems. This approach "will disempower and displace workers and degrade the consumer experience, ultimately disappointing most investors," and we should not take it.
This open letter is not the first expression of concern about the future of artificial intelligence. Bill Gates, in his own recent letter, also raised the issue of equality in the age of AI: market forces will not naturally produce AI products and services that help the poorest; the opposite is more likely.
As early as March 16, Altman also mentioned with concern that "software that can think and learn will do more and more of the work that people do now. More power will shift from labor to capital. If public policies are not adjusted accordingly, most people will end up living worse off than they are now."
Therefore, Altman has suggested introducing a new system that taxes capital rather than labor, so that more people can share the fruits of this artificial intelligence revolution.
It will take some time for AI to become part of the infrastructure of human life, but perhaps not long. Since the arrival of GPT-4, the capabilities of large AI models have skyrocketed in the "arms race," and AI applications may soon permeate every aspect of life, leaving no time to test their safety, understand human needs, or set up regulatory plans. That is why the experts behind this open letter want to pause large-scale AI research.
Perhaps it is as Microsoft Chief Scientific Officer Eric Horvitz has said: on the horizon of artificial intelligence today, some things are known and some are unknown, and in between is a door left open for us to observe the world.