The trouble with making more efficient weaponry is that eventually your enemies will get access to the same technology, and then you have to create something even better (i.e. worse)…until your enemies get their hands on that too, and so on: rinse, repeat. But how long can that cycle repeat before humanity makes itself extinct, or at least before civilization descends into chaos?
The advent of nuclear weapons put us on the threshold of the end of that cycle, though they remain largely out of the hands of small groups due to the advanced technology and materials needed to construct them. But we may be closer to ‘the end’ than we think, with simpler, more affordable technologies now offering absolutely frightening possibilities for small groups of people to wield vast amounts of killing power.
Take, for instance, this short film, which documents the frightening potential of ‘slaughterbots’ – swarms of tiny, cheap drones carrying small explosive charges – to carry out assassinations, or even mass killings, using what is now pretty much ubiquitous facial recognition technology:
As artificial intelligence researcher Stuart Russell says at the end of the video:
This short film is more than just speculation. It shows the results of integrating and miniaturising technologies that we already have. [A.I.’s] potential to benefit humanity is enormous, even in defense. But allowing machines to choose to kill humans would be devastating to our security and freedom. Thousands of my fellow researchers agree – we have an opportunity to prevent the future you just saw. But the window to act is closing fast.
Some of those concerned researchers have taken it upon themselves to raise awareness of the dangers of autonomous weapons, forming an official Campaign to Stop Killer Robots and launching a website with links to research and reports on the topic.
This warning about killer robots comes on the back of a recent research paper highlighting the dangers of advances in biotechnology. Which raises the question: with the continual advance of technology, is there any way of stopping a human-made apocalypse, or is it inevitable? For all the discussion we might have about ethical usage, in the end the world is ruled by money and power – and both desire ongoing technological advance unrestricted by rules of any kind.