
A History of Artificial Intelligence, From Ancient Babylon to Westworld

“By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

―  Eliezer Yudkowsky

Artificial Intelligence (AI) is something of a nebulous concept, at least from a layman’s perspective. It’s difficult to understand just how much is covered under the umbrella of AI, other than to say there’s a lot. And while it’s commonly thought that AI is a concept of the future, it actually has a longer and more storied past than you think.

The story of artificial intelligence begins in the same place as the story of robotics: with the automaton. We’ve been building robots for almost five thousand years. Yes, that long. Though what passed for a robot in the Old Babylonian period would hardly be recognized as technology today. That’s mainly because we use these terms – robot, automaton, computer, machine, technology – in ways quite different from how they were originally intended.

What is technology? It’s your cell phone, your laptop, your TV, your car. It’s all the ways those things are made. It’s medicine, food, and tools. It’s… well, it’s nearly everything. In fact, the word technology can rightly describe anything used by man to achieve any purpose. Fire burned deliberately is technology. A wooden club is technology. A bucket used to catch rainwater is technology, in precisely the same way that a self-driving car is technology. As it happens, water vessels are where this story begins.

Ancient Robots

The very first autonomous machine ever built was a water clock. It was built in approximately 2000 BCE in Babylon, and was used to mark the passage of time for astrological purposes. It was a crude clock, the earliest example of what’s now known as the clepsydra, a name borrowed from the Greek water clocks of similar design. Babylonian water clocks were very simple clay pots with a single small hole drilled in the bottom and hash marks inscribed on them to measure the amount of water in the pot. As the water drained through the hole at a constant rate, the amount of time that had passed could be counted out on the hash marks. It may have been a simple concept, hardly comparable to modern digital clocks that keep time to within fractions of a second, but it worked nonetheless. From these humble beginnings, everything we know about robotics and computers follows.
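
To make the principle concrete, here is a minimal sketch in Python of the arithmetic a Babylonian timekeeper was effectively doing: if the water drains at a roughly constant rate, the volume drained so far maps directly to elapsed time. The drain rate, starting volume, and hash-mark reading below are invented for illustration, not historical figures.

```python
# A toy model of a Babylonian water clock (clepsydra).
# Assumption: water drains at a roughly constant rate, which is only an
# approximation for a real vessel; the numbers are purely illustrative.

DRAIN_RATE_LITRES_PER_HOUR = 0.5   # hypothetical, constant outflow
STARTING_VOLUME_LITRES = 6.0       # pot filled to the top hash mark

def hours_elapsed(current_volume_litres: float) -> float:
    """Read the 'clock': hours since the pot was filled."""
    drained = STARTING_VOLUME_LITRES - current_volume_litres
    return drained / DRAIN_RATE_LITRES_PER_HOUR

# Example reading: the water level has fallen to the 4.5-litre hash mark.
print(hours_elapsed(4.5))  # -> 3.0 hours since the pot was filled
```
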

The early clepsydrae, and the later automata found in ancient Egyptian, Greek, and Arab cultures in the centuries that followed, are thought of as the first attempts by man to create thinking machines. Of course, what we interpret as intelligence, consciousness, and cognitive autonomy has changed drastically in the last 4000 years. But the only things that have really changed in robotics since then are the relative complexity of the machines and the ability to utilise independent power supplies. Leonardo da Vinci’s magnificent mechanical lion, which he designed and built for the entertainment of the King of France in 1515, operated via pulleys and cables, using leverage and pulley placement to simulate an incredible array of mechanical motions that were initiated by the king when he “lashed the great beast three times with a small whip”.

Leonardo stored mechanical energy within the pulley system when he first wound it, allowing it to be released when the King activated the lion. In principle, your child does the very same thing every time they turn on their noisy robotic dog; the only difference is that in the modern toy the power is supplied electrically, via batteries driving small servo motors.

These are important ideas to remember whenever you think of modern automata. You see, it’s easy to look at a relatively primitive machine like da Vinci’s lion and understand that it was never acting on its own. It could only follow the preprogrammed movements with which it had been designed. Da Vinci told it what to do by placing pulleys in certain places, aligned certain ways in relation to other pulleys. Imagine if you had been part of King Francis’ court and had been present the day da Vinci’s mechanical lion miraculously walked on its own. Would you have realized you were seeing the illusion of will? Likely not. It would have seemed like genuine intelligence rather than the mere simulacrum of awareness it was. People are still falling for that age-old charade even now. Scale up the complexity of the machine, to, say, a programmable spy drone, and the distinction between programming and self-awareness becomes far more difficult to discern; but rest assured, even at this level programming does not equal intelligence.

In order to create a machine, an automaton, that could actually think independently, we first needed to figure out just what it means to think. This too is a very old question. It hearkens back to the great philosophers of old: to Socrates, Plato, Aristotle, and the many others who pioneered the subjects of formal reasoning, deduction, logic, and the language of mathematics. As far as we’ve come in our understanding of psychology, neuroscience, and philosophy, we still haven’t found that answer, though we are a good deal closer than we were.

20th Century Breakthroughs

While there are still whole areas of science in pursuit of that fundamental question, others have moved on to more tangible work. The formal study of artificial intelligence grew out of the insightful work of some of the greatest minds in computer science in the 1940s and ’50s: men like Alan Turing, Norbert Wiener, Claude Shannon, and Warren McCulloch, who had worked for years on cybernetics, network theory, mathematics, and algorithms. That groundwork culminated in the founding of Artificial Intelligence as a field of research at the 1956 Dartmouth Conference.

The Dartmouth Conference was a momentous occasion, and nearly all of those who attended went on to become pillars of AI and robotics research over the next several decades. In fact, that conference is considered by most to have been the official birthplace of AI. From that point forward, Artificial Intelligence was on the minds of nearly everyone in the scientific community.

It hasn’t been a smooth road though.

Following close on the heels of the excitement generated by early breakthroughs, such as successfully programming computers to perform algebraic calculations, prove geometric theorems, and even begin to understand and use English syntax and grammar, a lot of money was injected into the research at many levels. It seems that those with the science and those with the money thought AI was a sure thing. Some even boasted that a true thinking machine would be realized within 20 years. Of course, we’re now almost 60 years on, and we’re still learning.

AI has fallen in and out of favour over the years, with periods of bounty followed by periods of scorn. In fact, the field has experienced what some have called AI winters, when funding and enthusiasm dried up almost completely and nearly everyone declared the whole idea to be unreachable. The first such AI winter lasted from 1974 to 1980, and though interest and funding have come back, the unbridled spending and limitless scope of research have been replaced with a “work smarter, not harder” approach.

New Approaches

Modern AI research looks very different from its original conception. Not only have computer hardware, materials science, and software become much, much more advanced (not to mention user-friendly), but researchers are also taking a different approach to the science itself. Where the old guard tried a brute-force approach to teaching computers to think, building translation matrices and vast reference libraries, modern research is more focused and subtle. The leading minds seem to have realized that thinking isn’t just about access to information and psychology; it also entails economics, game theory, and a whole new way of looking at knowledge, one that is not yet amenable to a solid mathematical description.

Today there are people working on many different fronts, all pushing toward a common goal, but the difference between the approaches can be like night and day. Terms like connectionism, expert systems, intelligent agents (no, not spies), neats, Bayesian networks, and information theory have replaced the ideas that we, the public, could once understand without an expensive post-secondary education. But despite the complex ideas, big words, and tech-speak, the foundation is still the same, and it’s still headed in the same direction.

This journey that AI has been taking (or perhaps “has been led along” is the more appropriate phrase) is mirrored by a similar story that took place in popular literature. And though those two paths have at times intertwined, at other times it seems that mirror may have come from a carnival fun house.

A.I. in Pop Culture

When you think of the history of Artificial Intelligence, there are a few names that immediately jump to mind: Isaac Asimov, Arthur C. Clarke, and Stanley Kubrick for a start, as their timeless characters came to life on the big screen and in many, many books. Seldom does one hear the phrase “I’m sorry, Dave. I’m afraid I can’t do that,” without conjuring the passive-aggressive yet still deadly HAL 9000, or “Danger! Danger, Will Robinson!” without fond memories of the Robinson family’s fearless and faithful robot protector.

Artificial intelligence has rarely left our minds in the last 50 years. The idea has permeated our entertainment, and in recent years it has invaded our lives in ways we often don’t even realize. Though the reality is seldom anything like the fantasy.

While Hollywood has painted a picture of intelligent machines as violent, anthropomorphic sociopaths, balanced by innocent, child-like facsimiles of humanity, the fruits of real AI research have been slowly integrated into our daily lives: complex algorithms governing banks and ATMs, connecting communication networks, and giving us instantaneous access to people and information around the world; or pseudo-AI applications on our smartphones, recommending restaurants based on browsing activity and converting common units of measure for us with a single word.

One of the defining features of AI in popular culture is control, or a lack thereof. This notion is embodied perfectly by Isaac Asimov’s Three Laws of Robotics. Whatever name you know them by, the Three Laws are ubiquitous in modern media wherever it touches on robotics or AI. Those laws, in strict order of priority (sketched in code below the list), are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
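
To illustrate the strict ordering of the laws, here is a toy sketch in Python. It is purely illustrative, not how Asimov’s robots or Westworld’s hosts are actually built; the Action fields and the permitted() check are invented for the example.

```python
# A toy sketch of the Three Laws as a strict priority ordering.
# Purely illustrative: the fields below are invented for this example.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # would *not* acting allow a human to come to harm?
    ordered_by_human: bool     # was this action ordered by a human?
    endangers_self: bool       # does it put the robot at risk?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True  # overrides everything below
    # Second Law: obey humans, unless that conflicts with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, but only when the higher laws are silent.
    return not action.endangers_self

# Example: saving a human takes priority over the robot's own safety.
print(permitted(Action("push a human out of harm's way",
                       harms_human=False, prevents_human_harm=True,
                       ordered_by_human=False, endangers_self=True)))  # True
```
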

Some iteration of these laws has appeared in nearly every book or movie involving robots as sentient or semi-sentient beings since Asimov first presented them in 1942. And, indeed, they played a central role in the plot of Westworld: though the laws were modified from their classical form, Dr. Ford (Anthony Hopkins) manipulated his menagerie of automatons by adjusting the degree to which each could be violated. But there’s a very important distinction to understand, something that gets lost in most tales of this nature. Many of the “robots” featured in these stories are not examples of a true artificial intelligence. Their autonomy was, with some exceptions, the product of clever programming, or in some cases even magic, but almost all were portrayed as being under the control of some nefarious character in the story. All but a few of the early sci-fi stories involving robots featured drones, not beings with independent agency. This is even true of Westworld.

The vast majority of hosts in the park were no more than sophisticated examples of unthinking automata; the exceptions were the small few who transcended the restrictions inherent in their programming and, so we’re to believe, became independently sentient. Though much of the underlying philosophical back-and-forth throughout the show casts some doubt on just how, and when, that agency arose, if it actually did at all. Recall that Dr. Ford planned his suicide-by-robot from the beginning: were Dolores and Bernard acting of their own will, or was the massacre designed by human hands?

Apropos of the Westworld hosts’ purpose, the word robot owes its existence to servitude and slavery. The term was first coined in a science-fiction play written in 1920 by Czech playwright Karel Čapek. The English translation of that play is titled R.U.R., or Rossum’s Universal Robots, and in it appears one of the earliest examples of AI in modern literature. Although Čapek’s Robots didn’t fit what we currently think of as robots or AI (they were more in line with the current idea of clones, or cybernetic organisms), they were nonetheless enough to later inspire Asimov’s canon of characters, who were endowed with a heretofore unknown non-human sentience.

The etymology of that word may have been responsible, in an indirect way, for the creation of the three laws. Early science-fiction writers, who influenced the way we all see these creations, often viewed robots (and all the ephemera that went with them) as a convenient way to publish commentary about the ethics of slavery and class division without assigning that station to a real being, so to speak; a tradition carried on by productions such as Westworld. The genre allows us to explore both the dark and light sides of creating a being without personhood, and in turn gain some understanding of just what personhood and human rights mean in the context of our own lives. And with history being the headmaster of our cultural education, it was inevitable that a man like Asimov would assign the almost archetypal capacity for rebellion, deceit, and ambition to his brainchild. After all, struggle is the essence of good storytelling. That struggle, in turn, necessitated a means of controlling this little-understood creation, to protect humanity from its superficial perfection.

As a comment on the ethics of enslaving a race of beings, in this case artificial beings, to do the bidding of men, the three laws represent humanity’s will to dominate, to control. And the inevitable violation of one or more of those rules in virtually every story involving artificial intelligence of any kind belies the sanctity of life in general, whether human, animal, or artificial.

It all makes for some excellent sci-fi storytelling, but it says far more about the psychology of humans than it does the nature of robots. Either way, these classic tales, and many others, have served to inform the cultural zeitgeist on what robots are, and what AI looks like.

Of course, such predictions, as can be found in any sci-fi novel from that period onward, whether written by Jules Verne, H.G. Wells, Aldous Huxley, George Orwell, Jim Carrol, or Jonathan Nolan, often leave the modern reader feeling a bit incredulous. All of those men, and many, many more, are known as futurologists. They studied futurology, as much as anyone can study the future. It was a very popular vocation for the thinkers of the 1940s through the 1960s, though in truth its popularity never went away. The leading minds of our generation, too, are using their expertise to predict what may come in twenty, fifty, or one hundred years. It seems, though, that our generation’s futurologists are simply better at it, in a way.

To give credit where credit is due, the science fiction authors of the mid-twentieth century were perhaps some of the most imaginative of any era (Jules Verne notwithstanding). The worlds they built and the ideas with which they played were but tiny seeds that, through them, found life, and many have since grown out of the fictional realm and into our lives. But in spite of their creative genius, they got an awful lot wrong. The world of the Jetsons is still well out of reach. Personal robots are a good deal less exciting and helpful than we were promised. Flying cars, while something a few stalwarts of enterprise still pursue, are little more than a dream even now. And lying beneath all the rubble of shattered dreams and abandoned hopes is that early idea that Artificial Intelligence was well on the way, poised to revolutionize the way in which humans live. It is unlikely we’ll ever see the likes of hyper-realistic theme park hosts bidding us to indulge our secret carnal fantasies with no threat of reprisal or consequence, not least due to the ethics of such an endeavour. But it seems clear that Artificial Intelligence is becoming more and more integral to human life, so much so that the line between real and pretend may be permanently obscured in the near future.
