(Image: AI Death Flowchart)

Philosopher Says We Should Begin Planning Now, So That a Super-Intelligent A.I. Doesn’t Kill Us All Off

Could the creation of an artificial intelligence (AI) more powerful than our own be a dangerous moment in the history of humanity? Philosopher Nick Bostrom, in the TED talk below, says yes indeed. Intelligence is what has lifted humans to their current dominion over many aspects of nature, Bostrom notes, so the creation of an intelligence beyond ours would have “profound implications”:

Chimpanzees are strong…pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of [chimpanzees] depends a lot more on what we humans do, than on what the chimpanzees do themselves.

Once there is super-intelligence, the fate of humanity may depend on what that super-intelligence does.

Bostrom points out that any problems may not necessarily be due to malevolence on the part of the AI. He cites the story of King Midas, who wished for his touch to turn everything into gold, as an allegory for what might happen to us if we give a super-intelligent AI a poorly thought-out ‘wish’ to complete.

As such, Bostrom urges those involved in the creation of artificial intelligence to start considering the necessary safety measures now, so that we can plan for the eventuality of an intelligence superior to our own:

I believe that the answer here is to figure out how to create superintelligent A.I. such that even if — when — it escapes, it is still safe because it is fundamentally on our side because it shares our values. I see no way around this difficult problem.

…The technical problems that need to be solved to make this work look quite difficult — not as difficult as making a superintelligent A.I., but fairly difficult. Here is the worry: Making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety.

So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed. Now it might be that we cannot solve the entire control problem in advance because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

This to me looks like a thing that is well worth doing and I can imagine that if things turn out okay, that people a million years from now look back at this century and it might well be that they say that the one thing we did that really mattered was to get this thing right.
