A lot is being said about artificial intelligence lately, but not in the context of the newest Terminator movie. No one is talking about that. Prominent technologists and scientists such as Elon Musk, Stephen Hawking, Steve Wozniak, and others recently presented an open letter at the International Joint Conferences on Artificial Intelligence in Buenos Aires. The missive, endorsed by some of the world’s leading thinkers, calls for a ban on AI weaponry, which they warn could be possible within ten years.
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” reads the open letter.
Now, weapons of any kind are bad news, especially ones that can think and kill without confirmation from an outside human source, but how close are we really to having truly artificially intelligent tanks, planes, or even washing machines? For many, these thoughts conjure images of Terminators or Agent Smith, but we at The NYRD do not think the process of AI will be that black and white. Like humans, intelligent robots will most likely be more than meets the eye.
Bill Gates recently answered questions on Reddit about a range of subjects, but touched heavily on artificial intelligence and automation. Our current level of smart technology is relatively dumb compared to what will be coming in the future. Military drones still require human pilots, and even current weak artificial intelligence still needs to follow a complex set of coded instructions.
The phone in your pocket, and more advanced systems like IBM’s Watson, perform rudimentary self-driven thinking, but their intelligence is based in very narrow and limited fields. Watson, and to a lesser extent Siri, can pull data from thousands of sources and make educated guesses on how it all fits together according to very specific preprogrammed code. In layman’s terms, they are glitch-heads compared to what is on the horizon. That is not to say that your GPS is not intelligent to an extent. It is very good at its defined job, better than a human, in fact. Watson may even be more capable than Starscream, but will never be as ambitious, at least not yet. Robo-evolution is coming. Basically, Siri is to Bumblebee as Homo erectus was to modern humans, but robots will evolve not in the space of millennia but decades, and they may do it along some familiar patterns.
According to Gates, “In the next ten years, problems like vision and speech understanding and translation will be very good. Mechanical robot tasks like picking fruit or moving a hospital patient will be solved. Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.” These smarter machines will still not be self-aware, but they will be everywhere, cutting down human labor and need dramatically, which in itself will present other problems. However, futurist Ray Kurzweil believes that we will see strong artificial intelligence in the next two to three decades. He also believes that by 2045 we will have a robot capable of passing the Turing Test, which was devised by Alan Turing (played by Benedict Cumberbatch), one of the fathers of modern computer science.
The most promising current advancement toward building the All-Spark is called deep learning. It is a family of sophisticated algorithms that allows machines to learn, loosely similar to the way humans learn. The approach has its roots in work conducted in the 1950s by Frank Rosenblatt, who built a type of mechanical brain called the Perceptron, a Transformer ancestor’s name if we have ever heard one. The goal of deep learning is to give robots the tools and abilities to learn about the world themselves, making artificial intelligence less about programming and more about natural development. The concept is similar to how a human child grows to explore and understand the world around them, but hopefully with fewer diaper changes. Personally, we also would like to skip the teething stage if we could. This is just one of many theories being tested, but it shows a lot of promise even if it has its critics.
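For the curious, the learning rule behind Rosenblatt’s original Perceptron can be sketched in a few lines of Python. This is an illustrative toy (the function names are ours, and modern deep learning stacks many layers of units like this one), but it shows the core idea: the machine is not handed a rule, it adjusts itself from examples.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single perceptron from (inputs, label) pairs."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            # Predict: fire (1) if the weighted sum clears the threshold.
            total = bias + sum(w * x for w, x in zip(weights, inputs))
            prediction = 1 if total > 0 else 0
            # Learn: nudge the weights toward the correct answer.
            error = label - prediction
            bias += lr * error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights, bias

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Teach it logical OR from examples rather than explicit rules.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the perceptron classifies all four OR cases correctly, having never been told the rule, only shown the answers. That shift, from programming behavior to growing it, is the whole point of the paragraph above.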
It also means that the way we have always looked at artificial intelligence is somewhat deceptive. The science fiction author David Brin argues that the problem in all our dystopian future scenarios involving artificial intelligence is that we usually only get to see the end point. We rarely talk about the journey to get there, and the journey could be important. Maybe our language for talking about AI is all wrong. After all, what we are really discussing is not the toaster coming to life, no matter what Michael Bay tells you, but the evolution of an entirely new sentient species. In essence, we will answer the question, “Are we alone in the universe?” And the answer will be, “Not anymore.”
The legendary science fiction writer Isaac Asimov attempted to solve the AI problem by creating the Three Laws of Robotics. In brief: a robot may not injure a human being or, through inaction, allow a human to come to harm; a robot must obey orders given by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence, as long as doing so does not conflict with the first two laws. These laws have been modified a bit over the years, but they are still talked about even today by roboticists, science fiction writers, and amateurs alike.
There are a few problems with even these seemingly flawless laws. First, we are still unaware of how robotic intelligence will evolve, and preprogramming anything into a thinking and possibly feeling machine could be problematic and even morally questionable. At what point can a truly intelligent and questioning robot disregard its programming? After all, we humans disregard our instincts all the time. Could it be the same for an intelligent robot? Even worse, would the simple attempt cause them to view us as tyrannical? Would it not make us the Megatron in this scenario?
If AI robots are truly alive and sentient, do we have any right to impose our will on them? Apply the Asimov Laws to a human and think about the consequences. The problem is not so much with Law 1, but Law 2, and the fact that Law 3, which covers self-preservation, is overridden by Law 2. Ultimately, the inclusion of these laws would mean that no matter how intelligent, feeling, and/or human-like a robot is, it will always be subject to our will, even when it comes to its own well-being. Does that make them our tools or our slaves? If we gave them life, does that mean we have the right to control that life or even take it away again? Can they die, and if so, will it be in a traumatic, childhood-scarring way, set to the soundtrack of the most ’80s bands you may ever hear?
The truth is that if and when we create artificial intelligence, it will be something completely new and completely different. It is unlikely that it will fall absolutely under our control. This new being’s thought process and views of the world will be its own. Will it have morality? Will it question its own existence? What sort of vehicle mode will it choose? It could be influenced by human thought, but we would be foolish to think that this new species will be completely human in their views and actions. The needs of a machine are not the same as the needs of a flesh and blood creature.
In this way, the Transformers give us a good glimpse into what we may be facing. They are not Earth-created machines. Instead they are aliens. Neither Decepticons nor Autobots bend their will to humans or our needs, and whatever we create could be just as alien. Yet another important key is that the Transformers are as different from one another as humans are. Optimus Prime is noble and self-sacrificing, and has a high regard for life and freedom. Megatron is concerned with power and greed. He sees all other beings as lesser creatures. They are the same, but the ways they act and talk are different, not programmed but learned. Each is also very human in their own way.
In a sense, they may be made of metal parts, but they are not robots in the way a Roomba is. They are individuals with different hopes and goals. However, even our language is problematic. The word robot comes from the Czech word robota, which means forced labor, the kind once demanded of serfs. Is that what we will expect of our creations, to be nothing more than laborers to suit our needs? Maybe it is time we stop and readjust our own thoughts on the subject.
Whenever we talk about AI, we always think about it in very human terms. These new AI machines will evolve on our planet and in ways similar to us, but their priorities may differ wildly from our own. Hawking and others have suggested that the real danger is not so much in the violence they could do toward us but the indifference they could show us. Just as the Transformers have come to Earth for energon, these new, fast-evolving artificial beings could potentially use up the resources of our planet faster than we do, and with as much regard for our needs as we currently show for the needs of animals and insects. Thinking machines could see us as their gods, or as their equals, or as their pets. It is possible that AI robots might rise up, but they could just as easily decide to leave us and our world for the stars, until they evolve into a living world with the voice of Orson Welles. Maybe the real danger is not so much that they will destroy us, but that they will fail to notice us at all.
The truth of this issue is that we do not know. There are so many questions and right now we do not have any good answers. We can guess and debate, but ultimately any ideas we have on the reality of AI are completely human. The simple fact that we continuously depict robots as something that will revolt against us says more about ourselves than it does about any potential intelligent machine. Maybe our real fear is that these new thinking machines will not kill us, but judge us for who and what we really are, and maybe that is a lot scarier. Optimus and his ilk accept humanity’s flaws, but the Decepticons point to them as a reason why we are inferior. Perhaps we fear these new beings because we know ourselves and our history, and we fear they could be right.
However, it is worth remaining cautiously optimistic about the future, because our relationship with AI machines could come down to us and the way we treat this new species. They could be friend or foe, Autobot or Decepticon. It may depend entirely on us. More to the point, we may not get just one type of intelligent robot but a diverse and rich mixture, much like humans. For every Starscream a Wheeljack. For every Soundwave a Bumblebee. For every Ramjet a Ratchet. (We had a lot of the toys as kids.) The point is that we will be the ones to set the expectations. That is why Hawking and Musk helped create that letter in the first place. If we go looking to make weapons, then we should not be surprised if eventually our creations transform into the very things we feared.