The term AI conjures up a different image depending on who you talk to. For most, it’s the stereotypical sentient robot capable of coherent speech and logical thought. However, artificial intelligence can refer to any automated process controlled by a computer: your toaster, your microwave, your recommended playlists, and many other things we take for granted and don’t recognize as forms of artificial intelligence.
Most people skip over these more “basic” forms and leap immediately to the end-game goal of a sentient robot butler with its own character and personality, handing us a fresh beer. We all do it. Isaac Asimov literally and figuratively wrote the book on AI, and if you haven’t read I, Robot, you definitely should (the movie is entertaining, but trust me, the book is way better). We go straight to the more conventional forms of intelligence that mimic our own actions, simply because we somewhat arrogantly consider ourselves to be the most advanced form of intelligence.
But just how close are we to that form of AI? Is AI really dangerous? Is Elon Musk right to be scared? Okay, so I’m not about to say ol’ Musky is a paranoid fool; some of his concerns are pretty genuine, but (and there are going to be a few “but” moments) they may be a little overzealous and premature. The “Deep Learning Revolution” got a lot of people scared, and somewhat rightly so. However, let’s look at it for what it is rather than what it could be, instead of getting ahead of ourselves.
We’ve trained a computer program to play one game better than any human player; this does not immediately mean that the machine has had some Ultron-style moment of clarity and now wants to wipe us off the face of the Earth. It means we have a highly specialized program capable of doing its job better than anything else on the planet, which is exactly what it was designed for. If we take said program and tell it to play chess, you might as well be asking a baby to explain quantum mechanics; it has no idea and no experience of dealing with this new situation and game. Okay, Go features orders of magnitude more possible positions than chess does (and chess has a lot), so you could argue it’s smarter than a chess-playing robot, but only if the chess-playing robot were to challenge the Go-bot to a game of Go.
Until we’re able to develop a robot capable of learning and adapting to new situations on its own, without external input or support, we don’t have to worry. We’d have to train the machine to handle each situation, and it’s through those simulations that it learns the game or task better than anything else ever to walk or blip across the Earth. Thanks to this current limitation, we’re pretty safe from Skynet, Ultron, or any other sentient machine deciding to eradicate us in order to protect us. That being said, the moment we recognize that it’s a genuine possibility and someone may have crossed a line, that’s when it IS too late. If we get to the stage where we think “Shit, we could be in trouble here…”, we already are and it’s only a matter of time.
Getting back to the whole robot butler idea, there are a number of limiting factors aside from the digital form of cognition: the physical limitations of embodying AI. One of the most advanced machines on the planet is the human body (or any mammalian body, but we’re simple, arrogant humans), and we can’t easily recreate it in an artificial form to be controlled by a digital consciousness. To stand a chance of having Robo-Jeeves exist, we need to overcome the following: real-time operation, safe actions, adaptability, and creativity. Constructing a subservient machine to make us a cheese toasty in the morning probably doesn’t need all that much creativity, but it would probably help.
So three of the points above kind of cover themselves, almost forming a paradox of sorts. Real-time operation means that the robot needs to be able to operate within a live and dynamic environment where chaos reigns and finding a set pattern or program to follow would be a challenge, therefore requiring adaptability to unknown situations. Of course, being able to adapt to these unknown situations in a live and active environment is all well and good only if Robo-Jeeves can do it safely. So our robot needs to be able to process data on the fly while safely adapting to it, in order to function in a way deemed acceptable by us squishy humans. These are conditions our AI robot friend needs to have locked down before it’s capable of taking any action, thus making them physical embodiment challenges.
That being said, the teams from Boston Dynamics and the DARPA Robotics Challenge are making great progress in improving robots’ range of motion and maneuverability. This research will be instrumental in creating a robot butler capable of not only making the cheese toasty safely but also getting it from the kitchen to your hand (after all, the whole point is that you don’t have to move).
Progress is coming in leaps and bounds; however, the robots’ movements are still quite slow and their physical bodies quite large and cumbersome. Given time, there is no doubt that this will change and they’ll become sleeker and more streamlined as our understanding of how to develop and construct them expands. Boston Dynamics are making the most waves in this field of research and have created some truly staggering stuff, making massive improvements in just a few short years.
They’ve successfully created a robot capable of navigating obstacles, carefully moving boxes from place to place, and working in tandem with another bot. Not to mention their incredible reveal of the back-flipping parkour robot. Thing is, these are probably different programs running for each situation; there’s likely no on-the-fly adaptation to each environment. While running, the BD bot is probably not detecting as it goes the obstacles it has to jump over or parkour around. This lack of adaptability is hamstringing development, and until such a feat is achieved, that is unlikely to change.
So, the question is: do we make the robots think like us in order to operate safely, or do we tweak their programming into something different? After all, they are essentially an entirely different organism to us. Currently, we can apply two models of cognition to both humans and robots in order to understand the thought processes: intuition and reasoning. Intuition is the rapid, decisive response made off the back of associative recall; we recognize a similar situation and react accordingly, allowing a swifter response. Reasoning, as the name would suggest, requires more deliberate logic and thought, which naturally takes longer but is more likely to end in success. There is a third concept that is the key to operating in real-time, real-life environments, and that is introspection: knowing when we don’t know something.
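Those three modes can be sketched in a few lines of toy code. This is purely illustrative: the situations, the confidence scores, and the `decide` function are all invented here, not how any real robot works.

```python
# Toy sketch of the three modes of cognition: intuition (fast recall),
# reasoning (slow deliberation), and introspection (admitting ignorance).

KNOWN_SITUATIONS = {            # "intuition": fast associative recall
    "ball rolling toward me": "catch it",
    "kettle whistling": "turn off the hob",
}

def reason(situation):
    """'Reasoning': a slow, deliberate fallback (here just a stub)."""
    # A real system would do planning or search; we just label the work.
    return f"carefully analyse '{situation}' before acting"

def decide(situation, confidence):
    if situation in KNOWN_SITUATIONS and confidence > 0.8:
        return KNOWN_SITUATIONS[situation]   # intuition: instant recall
    if confidence > 0.3:
        return reason(situation)             # reasoning: slower but safer
    return "ask for help"                    # introspection: "I don't know"

print(decide("kettle whistling", 0.95))        # fast, recalled answer
print(decide("strange noise upstairs", 0.5))   # deliberate fallback
print(decide("total chaos", 0.1))              # knows it doesn't know
```

The interesting part is the last branch: a robot that can land there gracefully, rather than guessing, is the one we'd trust with the toasty.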
Humans are both brilliant and terrible at this; we all know someone (or several someones) who will never admit they don’t know something. We refuse to accept that we don’t know things, and those who do accept it are often the ones seeking the answers and discovering why we don’t know what we thought we did. Just how we go about instilling this introspection into a machine capable of something approaching conscious thought remains a mystery, probably rightfully so. As mentioned above, that’s a genie we can’t put back in the bottle once it’s released. We’ve had robots building robots (now that’s just stupid) for years, but the moment they become aware that they’re doing it could be a disaster for us all.
Robots aren’t all that great at making decisions. What they excel at is measuring probabilities and possibilities, then using that information to pick the course most likely to lead to a successful outcome. Which, when you think about it, is practically all we humans do, just slower and with the pretense of higher cognition. We have been training computers to do this for years by teaching them to play various games (e.g. Go and chess) and making them better than anything else on Earth. So naturally, these machines are great decision makers, within their set environments. Teaching them to apply their skills to wider subjects requires far more technical skill and prowess, something we’re working hard towards achieving.
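“Measuring probabilities, then picking the most likely successful outcome” boils down to an argmax. Here’s a minimal sketch; the actions and their success probabilities are made up for illustration.

```python
# Pick whichever action has the highest estimated chance of success.
def best_action(action_probs):
    """Return the action with the highest success probability."""
    return max(action_probs, key=action_probs.get)

# Hypothetical plan for the morning cheese toasty.
toasty_plan = {
    "grill the sandwich": 0.92,
    "microwave the sandwich": 0.60,
    "blowtorch the sandwich": 0.15,
}

print(best_action(toasty_plan))  # → grill the sandwich
```

The hard part, of course, isn’t the argmax; it’s estimating those probabilities for an open-ended world rather than a board game.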
Driverless car algorithms are proving to be instrumental in this training; their mapping and hazard-perception abilities are revolutionary and are pioneering the field. Their analytical algorithms scan an environment in real time for possible dangers and obstructions, allowing the vehicle to react accordingly. Admittedly, they still have a long way to go, but the outlook is promising. This concept of metacognition, deciding when the digital consciousness has enough information to make a valid and reliable decision, is incredibly tricky. Programming something to achieve this is the challenge, and although there are numerous methods of training a device to analyze information and learn from it, these are still highly specialized situations that can’t be applied to the wider contexts of real-life operations.
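One crude way to picture that metacognition: keep gathering sensor readings until they agree strongly enough to act on, and otherwise admit you need more data. The threshold and the readings below are invented for illustration, not drawn from any real driverless-car stack.

```python
# Sketch of "do I have enough information to decide?"
def confident_enough(readings, threshold=0.9):
    """Use agreement across sensor readings as a crude confidence score."""
    votes = sum(1 for r in readings if r == "obstacle")
    confidence = max(votes, len(readings) - votes) / len(readings)
    return confidence >= threshold

def decide_to_act(readings):
    if confident_enough(readings):
        return "act"
    return "gather more data"   # knowing when we don't know

print(decide_to_act(["obstacle"] * 9 + ["clear"]))       # act
print(decide_to_act(["obstacle"] * 6 + ["clear"] * 4))   # gather more data
```

Real systems weigh cost as well: a car can’t “gather more data” forever while a pedestrian is crossing, which is exactly why this is so tricky.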
This constant vigilance requires a huge investment of energy and resources, something we humans take for granted. Our brains, despite being very hungry, are exceptionally efficient considering the sheer volume of information being processed at any one time. This kind of efficiency is difficult to replicate in a machine, making it quite challenging to engineer a robot capable of handling the same level of sensory input that a human intellect deals with regularly. A cunning combination of exceedingly clever programming, top-of-the-line processors and sensors, and environmental-scanning and hazard-perception algorithms might just be able to achieve it. Marrying all of these things into one robot will be expensive and difficult, but it will undoubtedly yield impressive results, possibly even leading to us being waited on hand and foot by an army of subservient, super-polite British robot folk. Or it leads to the inevitable destruction of mankind. Either or.
There are any number of limits and hurdles that currently stand between us and intelligent robot butlers that don’t just hand you the sandwich but make some sassy remark about your waistline too. Do we really want that? Of course we do, but as Dr. Ian Malcolm said so many years ago, we’ve spent so long wondering if we could, we never stopped to think if we should. Humans are on this Earth for pretty much one reason and one reason only: to create more life. Creating an immortal facsimile of life would be the ultimate goal, and once achieved, where else do we go from there? This is the kind of change that will forever alter the face of humanity as we know it, assuming we manage to get there safely and in a timely manner.
It’s difficult to imagine when it may happen and just what the changes will be exactly. Will we end up in some kind of Wall-E situation, where the world, destroyed by our gluttony and crippling lethargy, forces us into space? Or possibly stray into the world of Isaac Asimov’s timeless literary classic I, Robot, where the robots integrate pretty seamlessly into our lives and run our governments, our production, and pretty much all of the important infrastructure the world needs? Hell, we might even end up, one day far from now, in the universe Iain M. Banks envisioned within his Culture novel series, where sentient AIs, called Minds, call all of the shots and allow us humans to pretty much just dick about and have a laugh doing whatever tickles our pickles.
Just where or how we get there remains quite the mystery, but one thing is for sure: we’re working steadily towards it. The robots become more and more advanced with every passing day, demonstrated incredibly by teams like Boston Dynamics and their backflipping parkour bot that is astounding everyone. Then there are independent roboticists the world over creating all kinds of crazy contraptions just because they can; for example, Simone Giertz has been taking the internet by storm with her brilliant but broken devices to “enhance” our lives.