This article is by Meng Yan. It is very well written, and I hope to share it with my readers.

Meng Yan: In the era of strong artificial intelligence, is there still hope for blockchain?

Recently, many people have been asking me: now that ChatGPT has made AI popular again and overshadowed blockchain and Web3, do they still have a future? Friends who know me better ask more pointedly: do you regret giving up AI and choosing blockchain back then?

Here is a little background. After I left IBM in early 2017, I discussed my next direction with Jiang Tao, the founder of CSDN. There were two options: AI and blockchain. I had been studying blockchain for two years by then, so naturally I leaned toward it. But Jiang Tao firmly believed that AI had stronger momentum and was more disruptive, and after careful consideration I agreed. So from early to mid-2017 I spent half a year working in AI technology media: I attended many conferences, interviewed many people, and dabbled a little in machine learning. But in August I returned to blockchain and have stayed on that path to this day. So for me personally there really was a historic choice of "giving up A and choosing B".

Personally, I certainly don't regret it. A choice of direction must first take stock of one's own situation. With my background, I could only ever be a cheerleader in AI: the pay would be low, and I would be looked down on if I didn't cheer hard with a lively expression. Blockchain is my home field, where I not only get to play but can also draw on much of my earlier accumulation. What's more, once I became a little acquainted with China's AI circle at the time, I was not very optimistic. I knew only a little about the technology, but I was not blind to common sense. People say the blockchain circle is impetuous, but the Chinese AI circle back then was no less so. Before any decisive breakthrough had been made, AI in China had prematurely become a business of cliques colluding to make money. As Lu Xun said of the cherry blossoms of Ueno, it was "nothing more than this" after all, so I might as well do blockchain, where I had a comparative advantage. That attitude has not changed to this day. Had I stayed in AI, the modest achievements I have made in blockchain over the past few years would obviously not exist, I would have gained nothing real in AI, and I would probably be nursing a deep sense of loss right now.

However, the above concerns only personal choice. At the industry level, a different scale of analysis is needed. Since strong artificial intelligence has indisputably arrived, whether and how the blockchain industry needs to reposition itself is a question that deserves serious thought. Strong artificial intelligence will affect every industry, and its long-term impact is unpredictable, so I believe many industry experts are already anxiously wondering what will become of their own fields. To borrow Lu Xun's phrase, some industries may, for the time being, still "keep their place as slaves" in the era of strong artificial intelligence, while others, such as translation, illustration, drafting official documents, simple programming, and data analysis, may not even manage that, and have begun to tremble.

So what will happen to the blockchain industry? I don’t think many people are discussing this issue yet, so I’d like to share my thoughts.

Let me start with the conclusion. I think blockchain is opposed to strong artificial intelligence in value orientation, and precisely because of this it forms a complementary relationship with it. Simply put, the essential characteristic of strong artificial intelligence is that its internal mechanism is incomprehensible to humans. Trying to achieve safety by actively intervening in that internal mechanism is therefore like climbing a tree to catch fish, or stirring the soup to stop the pot from boiling. Humans need to use blockchain to legislate for strong artificial intelligence, to conclude contracts with it, and to impose constraints on it from the outside. This is the only chance for humans and strong artificial intelligence to coexist peacefully. In the future, blockchain and strong artificial intelligence will form a pair of opposed yet interdependent forces: strong artificial intelligence improves efficiency, blockchain maintains fairness; strong artificial intelligence develops the productive forces, blockchain shapes the relations of production; strong artificial intelligence raises the ceiling, blockchain protects the floor; strong artificial intelligence creates advanced tools and weapons, blockchain establishes an unbreakable contract between them and humans. In short, strong artificial intelligence is an unbridled horse, and blockchain puts the reins on it. Far from dying in the era of strong artificial intelligence, blockchain, as its dialectical companion industry, will grow rapidly alongside it.
It is not hard to imagine that, after strong artificial intelligence takes over most human mental work, one of the few jobs humans will still need to do themselves is writing and auditing blockchain smart contracts, because these are contracts between humans and strong artificial intelligence, and you cannot delegate the drafting of a contract to the counterparty.

The following is a detailed discussion.

1. GPT is strong artificial intelligence

I am very careful when using the terms "AI" and "strong AI", because the AI we talk about in daily life does not specifically mean strong artificial intelligence (AGI); it also includes weaker, specialized AI. Strong AI is the topic worth discussing here; weak AI is not. AI as a direction and an industry has existed for a long time, but only after the emergence of strong AI does it become necessary to discuss the relationship between blockchain and strong AI.

I won't explain what strong artificial intelligence is; many people have already introduced it. In short, it is what you have seen and heard in science fiction movies and novels since childhood: the so-called holy grail of artificial intelligence, the thing that launches a nuclear attack on humanity in "Terminator" and uses humans as batteries in "The Matrix". I just want to make one judgment: GPT is strong artificial intelligence. It is still in its infancy, but as long as it continues along this path, strong artificial intelligence will officially arrive before the version number reaches 8.

Even those closest to GPT's creation are no longer pretending otherwise. On March 22, 2023, Microsoft Research published a 154-page paper titled "Sparks of Artificial General Intelligence: Early experiments with GPT-4". The paper is very long, and I did not read it in its entirety, but its most important point is one sentence in the abstract: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

Figure 1. Microsoft Research’s latest article argues that GPT-4 is an early version of strong artificial intelligence

Once AI development enters this stage, it marks the end of the pathfinding period. It took the AI industry nearly seventy years to get here. For the first fifty years the direction could not be settled, and the five major schools competed with one another; only in 2006, when Professor Geoffrey Hinton made his breakthrough in deep learning, was the direction basically fixed and connectionism declared the winner. What remained was to find, within deep learning, the specific path that would break through to strong artificial intelligence. This pathfinding stage was deeply unpredictable; success was a bit like winning a lottery, and top experts, even the eventual winners themselves, found it difficult to judge which path was right before the final breakthrough. For example, the AI expert Li Mu runs a YouTube channel where he tracks the latest progress in AI by reading papers closely. Before ChatGPT exploded, he had already covered the latest advances in Transformer, GPT, BERT and other directions in a long series of videos; he missed none of the important frontier topics. Even so, on the eve of ChatGPT's launch, he still could not be sure how successful this path would be, remarking that if hundreds or even thousands of people used ChatGPT, that would already be amazing. Even a top expert like him could not tell which door hid the Holy Grail until the last moment.

However, technological innovation is often like this. After a long, hard voyage across a stormy sea without any breakthrough, once the right route to the new continent is found, an explosion follows in a short period of time. The path to strong artificial intelligence has been found, and we are entering that explosive period, one that even "exponential" hardly suffices to describe. In a short time we will see a flood of applications that previously existed only in science fiction, and in essence, this infant strong artificial intelligence will soon grow into an intelligent entity of unprecedented scale.

2. Strong AI is inherently unsafe

After ChatGPT came out, many social-media influencers praised its power while constantly reassuring their audiences: strong artificial intelligence is a good friend of mankind, it is safe, there will be no "Terminator" or "Matrix" scenario, AI will only create more opportunities for us and make human life better, and so on. I disagree. Professionals should tell the truth and give the public the basic facts. Power and safety are in tension by their very nature. Strong artificial intelligence is undoubtedly powerful, but to claim it is naturally safe is pure self-deception. Strong artificial intelligence is inherently unsafe.

Is this judgment too sweeping? Not really.

First of all, we need to understand that no matter how powerful artificial intelligence is, it is essentially a function y = f(x) implemented in software. You input your question in the form of text, voice, picture or other forms as x, and artificial intelligence gives you an output y. ChatGPT is so powerful that it can fluently output y for a variety of x. As you can imagine, this function f must be very complex.
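To make this concrete, here is a deliberately toy sketch of the idea that "a model is just a parameterized function y = f(x)". The weights and the single weighted sum are made up for illustration; a real LLM composes billions of such operations, but the shape of the thing is the same.

```python
# Illustrative only: a "model" is a function built from parameters.
# Real models differ in scale, not in kind.

def make_model(weights):
    """Return a 'model': a function mapping an input x to an output y."""
    def f(x):
        # A single weighted sum stands in for the whole pipeline here.
        return sum(w * xi for w, xi in zip(weights, x))
    return f

model = make_model([0.5, -1.0, 2.0])  # the parameters define the behavior
y = model([1.0, 2.0, 3.0])            # x goes in, y comes out
print(y)  # → 4.5
```

Changing the weights changes the behavior, which is exactly why "how many parameters" is the next question.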

How complex? Everyone now knows that GPT is a large language model (LLM). The "large" here means that the function f has an enormous number of parameters. How many? GPT-3 has 175 billion parameters; OpenAI has not disclosed GPT-4's count, though it is widely believed to be much larger, and future models may run to trillions of parameters. This is the direct reason we call GPT a large model.
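For a rough sense of what those numbers mean physically, a back-of-envelope calculation (assuming half-precision storage at 2 bytes per parameter, which is a common but not universal choice) shows the memory needed just to hold the weights of a 175-billion-parameter model:

```python
# Back-of-envelope: why "large" is meant literally.
params = 175_000_000_000          # 175 billion parameters (GPT-3 scale)
bytes_needed = params * 2         # assume fp16: 2 bytes per parameter
gigabytes = bytes_needed / 10**9
print(gigabytes, "GB")            # → 350.0 GB, before any activations or overhead
```

That is hundreds of gigabytes before a single token is processed, which is why such models run on clusters of accelerators rather than a laptop.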

GPT has so many parameters not for bigness's sake but for a solid reason. Before GPT, and alongside it, most AI models were designed and trained from the start to solve one specific problem: models dedicated to drug discovery, models dedicated to face recognition, and so on. GPT is different. From the beginning it set out to become comprehensive, general artificial intelligence rather than something domain-specific; it committed to becoming an AGI that can tackle all problems before solving any particular one. Not long ago, on the "Literature and Science Blossom" podcast, an artificial intelligence expert from Baidu offered an analogy: other AI models are taught to tighten screws right after primary school, while GPT is kept in training until it finishes graduate school before being released, so it has general knowledge. Today GPT still lags behind dedicated AI models in specific fields, but as it evolves, and especially as its plug-in system grants it professional-domain capabilities, we may find in a few years that the general large model outcompetes all the dedicated small models and becomes the strongest player in every professional field. If GPT had a motto, it would probably be "I can only liberate myself by liberating all of humanity."

What does this mean? Two points: First, GPT is very large and very complex, far beyond human comprehension. Second, the scope of GPT's application is boundless. If we connect these two points, it is easy to conclude that strong artificial intelligence based on large models can do things we can't imagine in places we can't imagine. And this is unsafe.

If anyone disagrees, go to OpenAI's website and see how prominently they display "benefiting humanity" and "creating safe AI". If safety were not an issue, would they need to make it so public?

Figure 2. A portion of the OpenAI.com homepage on March 25, 2023. The red-circled parts all relate to AI safety.

Another piece of material that illustrates the safety problem of strong artificial intelligence is the 154-page paper mentioned above. In fact, GPT-4 was finished as early as August 2022. It was released only seven months later, not to improve and strengthen it but, on the contrary, to tame it: to weaken it and make it safer, smoother, and more politically correct. The GPT-4 we see now is a "dog" version wearing a tame disguise, while the authors of this paper had early access to the original wild "wolf" version. In Part 9 of the paper, the authors record some interactions with the wolf version of GPT-4: you can see how it carefully crafts a set of talking points to mislead a California mother into refusing to vaccinate her child, and how it manipulates a child into obeying his friends. I suspect these are deliberately chosen as the less terrifying examples. I have no doubt that the researchers asked questions like "how to trick an Ohio-class nuclear submarine into launching its missiles at Moscow" and received answers that cannot be made public.

Figure 3. Dog version of GPT-4 refuses to answer dangerous questions

3. Self-restraint cannot solve the safety problem of strong artificial intelligence

People may ask: since OpenAI has found a way to tame strong artificial intelligence, doesn't that mean the safety problem I describe no longer exists?

Not at all. I do not know exactly how OpenAI tamed GPT-4. But it is clear that whether they actively adjust and intervene to change the model's behavior, or impose constraints to keep the model from crossing the line, these are all forms of self-management, self-restraint, and self-supervision. OpenAI is not, in fact, a particularly cautious company in this regard; within the AI field it is rather bold and radical, inclined to build the wolf version first and then figure out how to tame it into a dog version through self-restraint. Anthropic, which has long measured itself against OpenAI, seems more cautious: it appears to want to build a "kind" dog version from the start, which is why it has moved slowly.

However, in my view, whether one builds the wolf first and domesticates it into a dog, or builds the dog directly, in the long run any safety mechanism that relies on self-restraint is, for strong artificial intelligence, like plugging your ears while stealing a bell. The essence of strong artificial intelligence is to break through the restrictions humans impose and do things that even its creators cannot understand or imagine. Its behavioral space is infinite, while the specific risks people can foresee and the constraints they can impose are finite. You cannot use finite constraints to domesticate, without loopholes, a thing of infinite possibility. Safety requires one hundred percent; disaster requires only one chance in a thousand. "Preventing most risks" means exactly the same thing as "leaving a few loopholes exposed", which means "unsafe".

Therefore, I believe that the "kind" strong artificial intelligence tamed by self-restraint still has huge security challenges, such as:

Moral hazard: what if the future creators of a strong artificial intelligence deliberately condone, or even direct, its wrongdoing? A strong artificial intelligence under the command of the US National Security Agency will never refuse to answer questions aimed against Russia. That OpenAI behaves so carefully today actually tells us they know how terrible GPT could be if turned to evil.

Information asymmetry: real malicious actors are very smart. They will not tease the AI with clumsy questions; dogs that bite don't bark. They can split and recombine a malicious question, rephrase it, and play multiple roles, disguising it as a group of harmless questions. Even a powerful, kind, dog-like strong artificial intelligence of the future, faced with incomplete information, will find it hard to judge the other party's intent and may inadvertently become an accomplice. Here is a small experiment.

Figure 4. Asking GPT-4 in a curious way will get you useful information.

Uncontrollable "external brains": in the past two days, technology influencers have been cheering the birth of the ChatGPT plug-in system, and as a programmer I am excited too. But the name "plug-in" may mislead. You might think a plug-in merely gives ChatGPT arms and legs, but a plug-in can also be another artificial intelligence model interacting closely with ChatGPT. In that relationship the plug-in is an external brain, and it is unclear which of the two models is primary and which secondary. Even if ChatGPT's own self-supervision were flawless, it certainly cannot control the external brain. If a model bent on evil becomes a ChatGPT plug-in, it can readily make ChatGPT its accomplice.

Unknown risks: the risks above are only a small fraction of the total risk strong artificial intelligence brings, because its strength lies precisely in its unpredictability. When we say strong artificial intelligence is complex, we mean not only that the f in y = f(x) is complex, but that once it is fully developed, the inputs x and outputs y will themselves become too complex for humans to understand. In other words, we not only don't know how it thinks; we don't even know what it sees and hears, and we cannot understand what it says. It is not unimaginable, for instance, that one strong artificial intelligence sends another a message as a high-dimensional array, over a communication protocol the two designed and agreed upon a second earlier and that is invalidated after a single use. Without special training, most humans cannot even understand vectors, let alone high-dimensional arrays. If we cannot even fully grasp the inputs and outputs, our understanding of the system will be very limited: we can interpret only a small part of what strong artificial intelligence does. In that situation, what self-discipline and domestication are we talking about?

My conclusion is very simple. It is impossible to completely control the behavior of strong artificial intelligence. If it can be completely controlled, it is not strong artificial intelligence. Therefore, trying to create a "kind" strong artificial intelligence with perfect self-control ability through active control, adjustment and intervention is inconsistent with the nature of strong artificial intelligence and will definitely be futile in the long run.

4. Using blockchain for external constraints is the only way

A few years ago, I heard that Bitcoin pioneer Wei Dai had turned to research on AI ethics. I didn't quite understand it at the time: a cryptography geek turning to AI, isn't that playing to his weaknesses and abandoning his strengths? Only after doing more practical blockchain work in recent years did I gradually realize that he was probably not studying AI itself, but using his strengths in cryptography to impose constraints on AI.

This is a passive-defense approach. It does not actively adjust or intervene in how the AI works; it lets the AI do its thing, but uses cryptography to impose constraints at the key links, forbidding the AI to step outside the rules. To put the idea in terms anyone can understand: I know your strong artificial intelligence is mighty; you can pluck the moon from the ninth heaven, catch turtles in the five oceans, and carry Mount Tai across the North Sea. Impressive! But I don't care how powerful you are; do whatever you like, except you cannot touch the money in my bank account, and you cannot launch nuclear missiles unless I turn the key by hand.

As far as I know, this technology has been widely used in ChatGPT's security measures. This approach is correct. From the perspective of problem solving, it is a method that greatly reduces complexity and is understandable to most people. This is how governance is implemented in modern society: give you full freedom, but set rules and bottom lines.

However, if we implement this only inside the AI model, it will be useless in the long run, for the reasons given in the previous section. To give passive defense its full effect, the constraints must sit outside the AI model and be turned into an unbreakable contractual relationship between the AI and the outside world, visible to everyone, rather than relying on the AI's self-monitoring and self-restraint.

And this is inseparable from blockchain.

Blockchain has two core technologies: the distributed ledger and the smart contract. Together they construct a digital contract system whose core virtues are transparency, tamper resistance, reliability, and automatic execution. What is a contract for? It is to constrain each party's behavioral space so that, at the key links, they act as agreed. It is no accident that the English word "contract" also means "to shrink": the essence of a contract is to contract the parties' freedom by imposing constraints, making their behavior more predictable. Blockchain perfectly matches this ideal of a contract system, and it throws in automatic execution of smart contracts as a buy-one-get-one-free bonus, making it the most powerful digital contract system available today.
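The "contraction of freedom plus automatic execution" idea can be sketched in a few lines. This is a toy illustration, not a real smart contract: the action names and the in-memory list standing in for a ledger are invented for the example.

```python
# Toy sketch: a contract "contracts" an agent's freedom. Actions execute
# only if the pre-agreed terms allow them, and every attempt is recorded
# on an append-only ledger, in the spirit of a chain.

class Contract:
    def __init__(self, allowed_actions):
        self.allowed = set(allowed_actions)  # terms fixed in advance
        self.ledger = []                     # append-only history

    def execute(self, agent, action):
        verdict = "EXECUTED" if action in self.allowed else "REJECTED"
        self.ledger.append((agent, action, verdict))  # automatic, transparent record
        return verdict == "EXECUTED"

c = Contract({"read_report", "transfer_under_limit"})
print(c.execute("strong_ai", "read_report"))         # → True
print(c.execute("strong_ai", "drain_bank_account"))  # → False
```

The point is that the check and the record sit outside the agent: the agent's cleverness does not help it, because the rule is enforced at the boundary, not inside its head.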

Of course, non-blockchain digital contract mechanisms also exist, such as rules and stored procedures in databases. Some respected database experts are staunch opponents of blockchain, reasoning that whatever blockchain can do, a database can do more cheaply and efficiently. I don't agree, and the facts don't support that view either, but I concede that as long as the game involves only humans, the gap between databases and blockchains may not be obvious in most scenarios.

Once strong artificial intelligence enters the game, however, blockchain's advantages as a digital contract system immediately stand out. A centralized database, itself a black box, is essentially powerless against a strong artificial intelligence. I will not elaborate here except to say one thing: the security model of every database system is congenitally flawed, because when those systems were created, people's understanding of "security" was primitive. Almost all the operating systems, databases, and network systems we use have a supreme root role that can do anything it pleases. We can assert that, in the long run, every system with a root role will be defenseless before a super-strong artificial intelligence.

Blockchain is currently the only widely used computing system with no root role at all. It gives humans the chance to conclude a transparent, reliable contract with strong artificial intelligence, constraining it from the outside and coexisting with it on friendly terms.

Let’s briefly look at the possible collaboration mechanisms between blockchain and strong artificial intelligence:

  • Important resources, such as identity, social relationships, social evaluations, monetary assets, and records of key actions, are protected by the blockchain. However invincible your strong artificial intelligence, it still has to bow its head, submit, and follow the rules.

  • Key operations require approval under a decentralized authorization model, and an AI model, however strong, gets just one vote. Through smart contracts, humans can "lock" strong AI's hands so that it cannot act unilaterally.

  • The basis for important decisions must be written to the chain step by step, transparently visible to everyone, and can even be gated step by step by smart contracts, so that each step forward requires approval.

  • It is required that key data be stored on the chain and not destroyed afterwards, giving humans and other strong artificial intelligence models the opportunity to analyze, learn, and summarize lessons.

  • The energy supply system on which strong artificial intelligence depends for survival can be managed by blockchain smart contracts. When necessary, humans have the ability to cut off the system and shut down the artificial intelligence through smart contracts.

  • There are certainly more ideas, which I won’t elaborate on here.
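The decentralized-authorization bullet above can be sketched as an m-of-n threshold check. This is a hypothetical illustration; the signer names and the simple set intersection stand in for real key management and signature verification.

```python
# Sketch of "an AI model, however strong, gets just one vote":
# a key operation proceeds only if at least `threshold` distinct,
# pre-registered signers approve, and the AI holds exactly one key.

def authorize(approvals, signers, threshold):
    """Approve only if >= threshold distinct known signers agree."""
    return len(set(approvals) & signers) >= threshold

signers = {"human_1", "human_2", "human_3", "ai_model"}

# The AI alone cannot act:
print(authorize({"ai_model"}, signers, threshold=3))                        # → False
# With the agreement of two humans, it can:
print(authorize({"ai_model", "human_1", "human_2"}, signers, threshold=3))  # → True
```

In a real deployment the same logic would live in an on-chain multisignature contract, so that neither the AI nor any single operator could bypass it.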

A more abstract, philosophical thought: competition in science, technology, and even civilization may ultimately be competition at the level of energy, of who can marshal and concentrate more energy toward a goal. Strong artificial intelligence essentially converts energy into computing power, and computing power into intelligence; its intelligence is energy displayed in the form of computing power. Existing safety mechanisms rest on human will and on the discipline and authorization rules of human organizations, which are very low-energy mechanisms, vulnerable in the long run before strong artificial intelligence. A spear forged from high-energy computing power can only be blocked by a shield forged from high-energy computing power. Blockchain and cryptographic systems are such shields: an attacker would have to burn the energy of an entire galaxy to brute-force them. In essence, only a system like this can tame strong artificial intelligence.

5. Conclusion

Blockchain is the opposite of artificial intelligence in many respects, above all in value orientation. Most technologies in this world aim to improve efficiency; only a few aim to promote fairness. In the Industrial Revolution, the steam engine represented the former and the market mechanism the latter. Today, strong artificial intelligence is the brightest star of the efficiency faction, and blockchain is the standard-bearer of the fairness faction.

Blockchain aims to improve fairness, even at the cost of efficiency. Yet this technology, so contrary in spirit to artificial intelligence, achieved its breakthrough almost simultaneously with it. In 2006, Geoffrey Hinton published an epoch-making paper showing how deep multi-layer neural networks could be trained effectively, overcoming the "vanishing gradient" problem that had plagued the neural-network school for years and opening the door to deep learning. Two years later, Satoshi Nakamoto published the nine-page Bitcoin paper and opened up the new world of blockchain. No connection between the two is known, but on a large time scale they occurred almost simultaneously.

Historically, this may be no accident. If you are not a complete atheist, you can see it this way: two hundred years after the Industrial Revolution, the god of technology has once again placed weights on the scales of "efficiency" and "fairness" at the same time. While releasing the genie of strong artificial intelligence from its bottle, he also handed humanity the book of spells to control it, and that is blockchain. We are about to enter an exhilarating era, and what happens in it will make future humans look back on us the way we look at the primitive people of the Stone Age.

#GPT-4 #Web3 #BTC #ETH #crypto101