David Brin, the Hugo and Nebula-winning science fiction author behind the Uplift novels and The Postman, has devised a plan to combat the existential threat from rogue artificial intelligence.

He says only one thing has ever worked in history to curb bad behavior by villains. It's not asking them nicely, and it's not creating ethical codes or safety boards.

It's called "reciprocal accountability," and he thinks it will work for AI as well.

"Empower individuals to hold each other accountable. We know how to do this fairly well. And if we can get AIs doing this, there may be a soft landing waiting for us," he tells Magazine.

"Sic them on each other. Get them competing, even tattling or whistle-blowing on each other."

Of course, that's easier said than done.

Magazine chatted with Brin after he gave a presentation about his idea at the recent Beneficial Artificial General Intelligence (AGI) Conference in Panama. It was easily the best-received speech of the conference, greeted with whoops and applause.

David Brin at the Beneficial AGI conference in Panama. (Fenton)

Brin puts the "science" into "science fiction writer": he has a PhD in astronomy and consults for NASA. "Being an author was my second life choice after becoming a scientist," he says, "but civilization appears to have insisted that I'm a better writer than a physicist."

His books have been translated into 24 languages, although his name will forever be tied to the Kevin Costner box office bomb, The Postman. It's not his fault, though; the original novel won the Locus Award for best science fiction novel.

Privacy and transparency proponent

An author after the crypto community's heart, Brin has been talking about transparency and surveillance since the mid-1990s, first in a seminal article for Wired that he turned into a nonfiction book, The Transparent Society, in 1998.

"It's considered a classic in some circles," he says.

In the work, Brin predicted that new technology would erode privacy and that the only way to protect individual rights would be to give everyone the ability to detect when their rights were being abused.

He proposed a "transparent society" in which most people know what's going on most of the time, allowing the watched to watch the watchers. The idea foreshadowed the transparency and immutability of blockchain.

In a neat bit of symmetry, his initial thoughts on incentivizing AIs to police each other were first laid out in another Wired article last year, which formed the basis of his talk and which he's currently in the process of turning into a book.

David Brin in conversation with Magazine. (Fenton)

History shows how to defeat artificial intelligence tyrants

A keen student of history, Brin believes that science fiction should be renamed "speculative history."

He says there's only one deeply moving, dramatic and terrifying story: humanity's long battle to claw its way out of the mud, the 6,000 years of feudalism and people sacrificing their children to Baal that characterized early civilization.

But with early democracy in Athens and then in Florence, Adam Smith's political theorizing in Scotland, and the American Revolution, people developed new systems that allowed them to break free.

"And what was fundamental? Don't let power accumulate. If you find some way to get the elites at each other's throats, they'll be too busy to oppress you."

Only one thing has ever worked in history to tame powerful tyrants. (Fenton)

Artificial intelligence: hyper-intelligent predatory beings

Regardless of the threat from AI, "we already have a civilization that's rife with hyper-intelligent predatory beings," Brin says, pausing for a beat before adding: "They're called lawyers."

Apart from being a nice little joke, it's also a good analogy in that ordinary people are no match for lawyers, much less AIs.

"What do you do in that case? You hire your own hyper-intelligent predatory lawyer. You sic them on each other. You don't have to understand the law as well as the lawyer does in order to have an agent that's a lawyer who's on your side."

The same goes for the ultra-powerful and the rich. While it's difficult for the average person to hold Elon Musk accountable, another billionaire like Jeff Bezos would have a shot.

So, can we apply that same theory to get AIs to hold each other accountable? It could, in fact, be our only option, as their intelligence and capabilities may grow far beyond what human minds can even conceive.

"It's the only model that ever worked. I'm not guaranteeing that it will work with AI. But what I'm trying to say is that it's the only model that can."

Individuating artificial intelligence

There is a big problem with the idea, though. All our accountability mechanisms are ultimately predicated on holding individuals responsible.

So, for Brin's idea to work, the AIs would need to have a sense of their own individuality, i.e., something to lose from bad behavior and something to gain from helping police rogue AI rule breakers.

"They have to be individuals who can actually be held accountable, who can be motivated by rewards and disincentivized by punishments," he says.

The incentives aren't too hard to figure out. Humans are likely to control the physical world for decades, so AIs could be rewarded with more memory, processing power or access to physical resources.

"And if we have that power, we can reward individuated programs that at least seem to be helping us against others that are malevolent."

But how can we get AI entities to coalesce into discretely defined, separated individuals of relatively equal competitive strength?

Brin proposes anchoring AIs to the real world and registering them via blockchain. (Fenton)

However, Brin's answer drifts into the realm of science fiction. He proposes that some core component of the AI (a "soul kernel," as he calls it) should be kept in a specific physical location, even if the vast majority of the system runs in the cloud. The soul kernel would have a unique registration ID recorded on a blockchain, which could be withdrawn in the event of bad behavior.

It would be extremely difficult to regulate such a scheme worldwide, but if enough corporations and organizations refuse to conduct business with unregistered AIs, the system could be effective.

Any AI without a registered soul kernel would become an outlaw and shunned by respectable society.
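The scheme Brin describes — register a soul kernel on-chain, let counterparties check it, revoke it for bad behavior — can be sketched in a few lines of code. Everything below is illustrative: the class name, methods, and ID scheme are hypothetical stand-ins, not any real blockchain API; on an actual chain the registry would live in a smart contract.

```python
# Hypothetical sketch of Brin's "soul kernel" registry. All names here
# are invented for illustration; a real deployment would implement this
# as an on-chain smart contract rather than an in-memory class.

import hashlib


class SoulKernelRegistry:
    """Maps each AI's soul-kernel ID to its registration record."""

    def __init__(self):
        # kernel_id -> {"location": physical anchor, "active": in good standing}
        self._records = {}

    def register(self, kernel_bytes: bytes, physical_location: str) -> str:
        # Derive the ID from the kernel itself, so it can't be claimed
        # without possessing that core component.
        kernel_id = hashlib.sha256(kernel_bytes).hexdigest()
        self._records[kernel_id] = {"location": physical_location, "active": True}
        return kernel_id

    def revoke(self, kernel_id: str) -> None:
        # Registration "withdrawn in the event of bad behavior."
        if kernel_id in self._records:
            self._records[kernel_id]["active"] = False

    def is_registered(self, kernel_id: str) -> bool:
        record = self._records.get(kernel_id)
        return bool(record and record["active"])


def willing_to_transact(registry: SoulKernelRegistry, kernel_id: str) -> bool:
    # A cooperating business simply refuses unregistered or revoked AIs,
    # which is what gives the registry its teeth.
    return registry.is_registered(kernel_id)
```

The enforcement is purely social: nothing stops an unregistered AI from existing, but every counterparty that runs the `willing_to_transact` check shrinks the economy it can participate in.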

This leads to the second big issue with the idea. Once an AI becomes an outlaw, or never registered in the first place, we'd lose any leverage over it.

Is the idea to incentivize the good AIs to fight the rogue ones?

"I'm not guaranteeing that any of this will work. All I'm saying is this is what has worked."

Three Laws of Robotics and AI alignment

Brin continued Isaac Asimov’s Foundation trilogy.

Brin continued Isaac Asimov's work with Foundation's Triumph in 1999, so you might think his solution to the alignment problem would involve hardwiring Asimov's Three Laws of Robotics into the AIs.

The three rules basically say that robots can't harm humans or allow harm to come to humans. But Brin doesn't think the Three Laws of Robotics have any chance of working. For a start, no one is making any serious effort to implement them.

"Isaac assumed that people would be so scared of robots in the 1970s and '80s (because he was writing in the 1940s) that they would insist that vast amounts of money go into creating these control programs. People just aren't as scared as Isaac expected them to be. Therefore, the companies that are inventing these AIs aren't spending that money."

A more fundamental problem, Brin says, is that Asimov himself realized the Three Laws wouldn't work.

One of Asimov's robot characters, Giskard, devised an additional law, known as the Zeroth Law, which enables robots to do anything they rationalize as being in humanity's best interests in the long term.

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

So, just as environmental lawyers have successfully interpreted the human right to privacy in creative ways to force action on climate change, sufficiently advanced robots could interpret the Three Laws any way they choose.

"So that's not going to work."

Isaac Asimov's Three Laws of Robotics. (World of Engineering, X)

While he doubts that appealing to robots' better natures will work, Brin believes we should impress upon the AIs the benefits of keeping us around.

"I think it's very important that we convey to our new children, the artificial intelligences, that only one civilization ever made them," he says, adding that our civilization is standing on the ones that came before it, just as AI is standing on our shoulders.

"If AI has any wisdom at all, they'll know that keeping us around for our shoulders is probably a good idea, no matter how much smarter they get than us. It's not wise to harm the ecosystem that created you."
