Elon Musk, Stephen Hawking and Bill Gates have all issued warnings about the increasing capabilities of artificial intelligence. Yet how real is the threat? Is it something we need to worry about in the short to medium term or something that’s so far away that we can’t even begin to prepare for it?
Many popular sci-fi movies such as 2001: A Space Odyssey, The Terminator, The Matrix, Transcendence and Ex Machina have played on our deeply rooted fears about the rapid evolution of artificial intelligence and how it may threaten the survival of humans. Yet the reality is that it’s very difficult to decipher how artificial intelligence is developing, and where we are at with current breakthroughs and applications of artificial intelligence.
This is why the perspective offered by Dr. Michio Kaku is so interesting. Uniquely, he is both a theoretical physicist and a futurist, and he has turned his attention to the future of the human mind. He not only understands the latest developments in artificial intelligence, but also has a deep understanding of human intelligence and what it would take for AI to develop to the point of threatening humans.
In a phone interview, Dr. Kaku was asked for his thoughts on the current state of artificial intelligence:
“The potential of A.I. is real in the sense that if artificially intelligent robots leave the laboratory it could signal the end of the human race. However, let’s be practical.”
For the time being, Dr. Kaku believes we have little to fear from AI:
“Our most advanced robots have the intelligence of a cockroach; a retarded lobotomized cockroach. You put one of our robots in Fukushima, for instance, instead of cleaning up the mess they just fall over, they can barely walk correctly.
“The Pentagon even sponsored a challenge, the DARPA challenge, to create a robot that could clean up the Fukushima nuclear disaster. All the robots failed except one. They failed to do simple things like use a broom or turn a valve. That’s how primitive we are.”
Dr. Kaku foresees advances over the next few decades, but also believes we have enough time to prepare:
“However, in the coming decades I expect robots to become as smart as a mouse, then a rabbit, then a dog, and finally a monkey. At that point who knows? Perhaps by the end of the century, it could be dangerous. Not now. In which case, I think we should put a chip in their brains to shut them off. The conclusion is we have time, we have time in which to deal with robots as they gradually, slowly become more intelligent.”
He has a valuable analogy to share with anyone thinking the danger is already here:
“I think it’s good to alert people that this is happening, but the time frame is not years, the time frame is decades, or as one scientist said, ‘the probability that a sentient robot will be built soon is similar to the probability a 747 jetliner is assembled spontaneously by a hurricane,’ so we’re still children when it comes to harnessing artificial intelligence, not that it can’t happen.”
Despite the fears surrounding the evolution of artificial intelligence, Dr. Kaku believes our main concerns as a civilization should be focused on making the rapid technological growth seen in the past few decades sustainable.
Moore’s Law is the observation, first made by Intel co-founder Gordon Moore in 1965, that the number of transistors that can fit on a chip, and with it overall processing power, roughly doubles every two years. Yet Dr. Kaku warns that we are reaching the limits of how much further we can go with the silicon-based technologies that launched the computer revolution.
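That doubling compounds quickly, which is why the law has mattered so much. A minimal sketch of the arithmetic (the starting figures below are illustrative, not Intel's actual numbers):

```python
# Moore's Law as simple compound doubling.
# The starting count of 1,000,000 transistors is a made-up illustrative figure.

def projected_transistors(start_count: int, years: int, doubling_period: int = 2) -> int:
    """Project a transistor count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years // doubling_period)

# Doubling every two years for a decade means five doublings: a 32x increase.
print(projected_transistors(1_000_000, 10))  # 32000000
```

The same five-doublings-per-decade growth is what Dr. Kaku argues cannot continue once chip features shrink to atomic scale.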
“Moore’s Law is one of the pillars of modern civilization. It is the reason why we have such prosperity, and why we have such dazzling household electronic appliances, but it can’t last forever. Sooner or later Silicon Valley could become a rust belt, just like the rust belt in Pennsylvania.
“That’s because components inside a silicon chip are going down to the size of atoms. In your Pentium chip, in your laptop, there is a layer that is about 20 atoms across, tucked into some of the layers that are in a Pentium chip; we’re talking about atomic scale.”
At this point, the major challenge is how we’ll get to the next level. As Dr. Kaku says:
“However, in the coming decades it’ll go down to maybe 5 atoms across, and at that point two things happen: you have enormous heat being generated, and second is leakage, electrons leak out because of the uncertainty principle.
“In other words, the quantum theory kills you. The quantum theory giveth and the quantum theory taketh away. The quantum theory makes possible the transistor to begin with, so it also spells the doom of the age of silicon.”
Although it’s almost impossible to predict what’s going to emerge as the next big technological breakthrough, Dr. Kaku is certain that silicon-based processors will go the way of vacuum tubes:
“One day the age of silicon will end just like the age of vacuum tubes. When I was a kid I remember TV sets all had vacuum tubes. That era is long gone. Historians today look at this age of silicon and wonder what’s next. The short answer is we don’t know. There are no viable candidates, none of them ready for prime time, to replace silicon power.
“So what this means is that in the future, we could see a slowing down of Moore’s Law, that at Christmas time, computers may not be twice as powerful as the previous Christmas. So we physicists are desperately looking for replacements like quantum computers, and I personally think molecular computers will eventually replace silicon, but they’re not ready yet. I think that in principle, we could be in trouble.”