Above you’ll find a fascinating one-hour overview of the current state of the art in robotics technology, with some of the world leaders in this endeavour giving five-minute summaries of their work, then sitting down to discuss the issues involved.
Featuring:
- Russ Tedrake – Director, Center for Robotics, MIT Computer Science and Artificial Intelligence Lab
- Sangbae Kim – MIT Biomimetic Robotics Lab
- Mick Mountz – Founder, Kiva Systems
- Gill Pratt – Program Manager, DARPA Robotics Challenge, DARPA Defense Sciences
- Marc Raibert – Founder, Boston Dynamics
- Radhika Nagpal – Self-organizing Systems Research and Robotics Group, Harvard University
That by itself is meaty enough. But why stop there, when we can take a closer look at the specifics of these projects? Step through them all, examining to what degree they’re driving us towards utopia or oblivion; or both at the same time. Pausing occasionally to take a look at related issues during our journey across the robotic landscape of the present and near-future.
We start then with the latest video of MIT’s Cheetah, in full, showing off its LIDAR vision upgrades that enable it to quickly identify and jump over obstacles:
And here we have the Chinese clone of Boston Dynamics’ Big Dog:
Robotics technology has reached the point now where we are rapidly progressing beyond our simple mechanistic visions to far more complex horizons, and using nature as a guidebook to travel there. That is the essence of biomimicry, and Sangbae Kim’s talk in particular demonstrates that pathway.
As has been the case with so much of technological progress, the principal sponsors and early adopters are military. They have the big funding grants, and the long-term vision.
Here’s your literal metaphor for the relationship between technology and war made ‘flesh’: Cujo – a robotic “pack mule” that automatically follows wherever this US Marine leads. It can walk for 20 miles without a break and carry up to 400 lbs:
It’s often years or decades before the citizenry are able to explore less hostile uses of a technology – before the tech trickles down.
Except where it trickles up. The Aibo – a now-discontinued consumer electronics take on the robotic pet, developed by Sony – was the complete opposite of the Big Dog; may it rest in robot heaven.
We habitually anthropomorphise such technology, whatever form it may take.
Don’t just take my word for it; here’s a recent anecdote from science fiction writer William Gibson, the man who coined the word ‘cyberspace’ and sketched out a vision of the reality we largely occupy today (for better and worse) back in the 1980s:
My first-ever sighting of a dead robot in the wild, 2015: took dead cathode tv to small-appliance recyc tent, saw a Roomba in bottom of bin
— William Gibson (@GreatDismal) August 16, 2015
Feeling empathy for recently demised consumer electronics, or for ruggedised military robot dogs when they’re kicked by their creators – which, the Boston Dynamics representative above insists, is only done to show off their resilience – isn’t a glitch…
If watching that GIF pains you, you’re not alone. One might say part of what we are experiencing collectively now is a test designed to evoke an emotional response – or not. A selective filter for those that will inhabit the human-machine civilisation to come, or continue to rail against it.
Mocking those DARPA Challengers as they fell over is no less an act of empathy. We see our failings in them; we can relate. We empathise.
In a recent talk for the Long Now, Ramez Naam (author of the excellent posthuman novels, the Nexus series) goes deep into all the ways humanity is being enhanced by its continual coevolution and merger with various forms of robotic and cybernetic technology. A merger that is taking place at all levels, from the psychic to the physical:
Most fascinating to me is the area of research known as the Brain Computer Interface (BCI): “a direct communication pathway between the brain and an external device.” As all these pieces of technology are slowly integrated, the possibilities and potentialities become vast indeed. (Something I explore in far more depth in this essay.)
For better or worse, the road to Elysium is being paved:
An only slightly more mundane piece of human-machine synthesis from MIT is a less surgical effort to combine human and machine strengths to maximum effect. It uses haptic feedback to integrate a robotic exoskeleton with the human body’s nervous system, letting the fuzzy logic of the human brain plug the gaps in the robot’s programming:
Writing about it on the MIT website, the creators for once pitch a less militaristic use case for the technology:
Ultimately, Ramos and his colleagues envision deploying HERMES to a disaster site, where the robot would explore the area, guided by a human operator from a remote location.
“We’d eventually have someone wearing a full-body suit and goggles, so he can feel and see everything the robot does, and vice versa,” Ramos says.
“We plan to have the robot walk as a quadruped, then stand up on two feet to do difficult manipulation tasks such as open a door or clear an obstacle.”
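If you want to see the shape of that feedback loop rather than just read about it, here’s a toy sketch in Python. To be clear: every function, gain and number below is invented for illustration – this has nothing to do with MIT’s actual HERMES control code – but it captures the basic idea described above: the robot’s loss of balance is pushed onto the operator, and the operator’s reflexive lean becomes the robot’s correction.

```python
# Purely illustrative sketch of human-in-the-loop balance feedback.
# All names and numbers are invented; this is not HERMES code.

import random

FEEDBACK_GAIN = 1.0      # how strongly the operator feels the robot's tilt
HUMAN_REFLEX_GAIN = 0.8  # how strongly the operator leans against that push


def robot_disturbance():
    """Simulated disturbance: the robot is shoved or steps on rubble."""
    return random.uniform(-0.1, 0.1)  # metres of centre-of-pressure drift


def human_reflex(felt_push):
    """Simulated operator: reflexively lean against whatever they feel."""
    return -HUMAN_REFLEX_GAIN * felt_push


def run(steps=20):
    robot_offset = 0.0  # robot's balance offset from neutral (metres)
    for t in range(steps):
        robot_offset += robot_disturbance()       # robot starts to tip
        felt_push = FEEDBACK_GAIN * robot_offset  # haptic vest pushes the operator
        correction = human_reflex(felt_push)      # operator leans back against it
        robot_offset += correction                # robot mirrors the operator's lean
        print(f"step {t:2d}: balance offset {robot_offset:+.3f} m")


if __name__ == "__main__":
    run()
```

Run it and the offset stays bounded: the human reflex does the stabilising work the robot’s own programming can’t.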
The DARPA spokesperson was at pains to demonstrate in his talk above, playing the Verge recap video in full, that using robots in the aftermath of disasters, especially nuclear ones, is very much their goal: to save human lives and protect infrastructure. (Never mind that such disasters are themselves a consequence of the civilian use of military technology gone horribly wrong.)
We are starkly reminded of this need by stories such as that of the “liquidators”: the volunteer Soviet soldiers who basically sacrificed themselves to try to limit the damage done by the meltdown of the Chernobyl reactor. It’s difficult to put a price on the development of any technology that may be required to mitigate the literal fallout of such incidents in the future:
My suspicion is that, as the effects of climate change continue to escalate, the more functional a robot army we have to call upon for any and all such emergency operations, the better. It would be better still if we could skip straight to the end of Elysium, where the robot medics save everyone before the world falls totally into ruin. (If you have any suggestions for how we can do that, please leave them in the comments.)
For all but the elite we now simply call “the 1%” (or, more probably, the 0.001%) the near-future just looks like a menu of apocalypses…
While that fighting robot video may cause one’s mind to drift towards Pacific Rim, a better science-fictional example is the much-overlooked Mexican cyberpunk film from 2008, Sleep Dealer:
It’s an incredibly prescient film that anticipates not just a near-future telepresent workforce – effectively remote migrants, kept safely behind the wall – but also the damaging psychological effects of being a drone pilot, actual stories of which are only now beginning to be told. The empire takes a toll on everyone.
But if there’s a more corporate vision of a robot-filled hellscape, it’s being rushed to you at no extra expense thanks to Amazon:
Honestly, if you want an everyday vision of the robot work camps under an Artificial Intelligence Autocracy, imagine being trapped forever in some nondescript factory as a never-ending train of shelves and plastic tubs passes before you, demanding you fulfil them.
Such a corporate culture is no joke though, as this much-talked-about NYTimes piece illustrates.
“If you’re a good Amazonian, you become an Amabot,” said one employee, using a term that means you have become at one with the system.
As many people have argued, including Ramez in his Long Now talk above, AIs have been here a long time and they are called Corporations.
And they’re about to colonise the sky. This is a diagram of Amazon’s proposal [PDF] for its delivery drones to begin sharing the airspace with birds and helicopters:
A networked machine vision of the world awaits.
If you want a better idea of what’s to come, this is how Google’s cars are being taught to see the world:
Once Amazon’s robots take flight – and Google have a drone fleet in development too – that networked machine vision will become increasingly omniscient. This is where issues of governance and control come into play. Google, Amazon and the rest of the Stacks, as they’re known – Facebook are working on high-flying, internet-delivering drones, and Apple are heavily rumoured to be constructing their own robot car to compete with Google’s – will have unparalleled knowledge of cities, and increasingly beyond; far exceeding what any State had before.
In this NeoLiberal Age, they’ll likely be left largely to self-regulate, so the question becomes how we’ll merge with them in the end. (I already know of a few people voting no to the Amazon future by cancelling their accounts following that NYTimes piece.)
Because, as the one piece of emerging robotic technology from the video at the top that we haven’t yet looked at – Radhika Nagpal’s work on self-organising robotics at Harvard University – points out, the ability to construct artificial organisms will soon be limited only by our imagination.
We could – as I’ve sketched out here before – build a society that tackles the challenges of climate change and head out into the stars.
A next nature paradise, where technology effortlessly bridges civilisation with a resurgent, carbon-sinking ecology.
Or a technocratic hellscape, where the prime directive is to protect the elite as the Earth falls to ruin, and let everything else rot where entropy and neglect see fit.
To end then, an in-depth look further into the future, at the swarms of Kilo (not Killer!) Bots that are coming. This 30-minute video gets pretty technical in parts, but you can still focus on the general points it makes about the current limits and future plans that define the field of self-organising robotics, and how such swarms are being used to explore Collective Artificial Intelligence. Ideal material for everyone from the person planning a SkyNet-proof bunker to the cult contemplating designs to assemble asteroids into a new home world:
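And if you’d rather poke at the idea than watch the whole thing, here’s a toy sketch in Python of the kind of local rule such swarms are built from: each simulated bot only ever hears neighbours within a short radius, yet the swarm as a whole works out how far every bot is from a seed. To be clear, this is an invented illustration, not Kilobot firmware, and every name and parameter in it is made up.

```python
# Illustrative toy: a hop-count "gradient" emerging from purely local messages.
# Invented for this sketch; not the actual Kilobot algorithm or code.

import math
import random

NEIGHBOUR_RADIUS = 1.5  # how far a bot's short-range "voice" carries (arbitrary units)


def make_swarm(n=50, size=10.0):
    """Scatter n bots at random positions; bot 0 will act as the seed."""
    return [(random.uniform(0, size), random.uniform(0, size)) for _ in range(n)]


def neighbours(positions, i):
    """Indices of the bots close enough for bot i to hear."""
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if j != i and math.hypot(xi - xj, yi - yj) <= NEIGHBOUR_RADIUS]


def gradient(positions, rounds=100):
    """Each round, every bot sets its value to 1 + the smallest value it hears.
    No bot ever sees the whole map; the global gradient emerges anyway."""
    INF = float("inf")
    values = [INF] * len(positions)
    values[0] = 0  # the seed broadcasts 0
    for _ in range(rounds):
        new_values = values[:]
        for i in range(1, len(positions)):
            heard = [values[j] for j in neighbours(positions, i)]
            if heard:
                new_values[i] = min(values[i], min(heard) + 1)
        values = new_values
    return values


if __name__ == "__main__":
    positions = make_swarm()
    for i, v in enumerate(gradient(positions)):
        label = "unreachable" if v == float("inf") else f"{int(v)} hops from seed"
        print(f"bot {i:2d}: {label}")
```

The same trick – simple local rules, repeated by hundreds of cheap bots – is what lets a swarm assemble itself into shapes with no central controller at all.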
Don’t forget: while the future continues to buffer we can still decide whether to hit play or stop. But there is no rewind, or undo.