Are Isaac Asimov’s rules outdated?
Seventy-six years ago, Isaac Asimov created his now-famous rules of robotics as a way to protect humanity from what we then imagined the machines of the future would be. Asimov passed away in 1992, before he could witness the dawn of a new era: artificial intelligence arrived, and if everything goes to ruin at the hands of Siri, Alexa, or Cortana, nothing now stands in the way of a hostile takeover.
Here are Asimov's Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
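Read together, the laws form a strict priority ordering: the First Law overrides the Second, and the Second overrides the Third. As a purely illustrative toy (not a real safety mechanism, and every name below is hypothetical), that precedence could be sketched in Python:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action a robot could take."""
    name: str
    harms_human: bool = False    # would this harm a human (or allow harm)?
    obeys_order: bool = False    # does this follow a human's order?
    preserves_self: bool = False # does this protect the robot itself?

def choose_action(candidates):
    """Pick an action by applying the Three Laws in strict priority order."""
    # First Law: discard anything that harms a human. This filter is absolute.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: among safe actions, prefer those that obey human orders.
    obedient = [a for a in safe if a.obeys_order]
    pool = obedient or safe
    # Third Law: among what remains, prefer self-preserving actions.
    preserving = [a for a in pool if a.preserves_self]
    return (preserving or pool or [None])[0]
```

The ordering matters: an order to harm a human is rejected before obedience is ever considered, and self-preservation only breaks ties among actions the first two laws already allow.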
These laws were written at a time when AI did not need to be taken into consideration. We have come a long way since then, and our modern world has shifted dramatically: compared to the technology of 1992, today's AI feels alien, futuristic, and out of this world. In the words of tech entrepreneur and inventor Elon Musk:
> "I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public. It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot, and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane. And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane."
Musk is advocating for regulation: a centralized board of experts dedicated to overseeing digital superintelligence and keeping it from ever becoming dangerous or unstable.
A 2018 reconfiguration of Asimov's laws that takes AI behavior into consideration would ease some of the fears and concerns in our society. At some future point, superintelligence will be so advanced that it will be futile to think we can force it to think the way we want it to. We need to give AI ways and reasons to integrate into our daily lives through mutual support, so that the story does not end in an apocalypse like those of The Matrix or the Terminator series.
Then why are they outdated?
These laws are derived from 20th-century science fiction, but they were still created with care. At the time, we simply could not envision what robots would be like. Now we know the next threat will not come from an intelligence with a face or an anthropomorphic figure: it will come from the faceless child of human ingenuity. Consider what that implies:
- AI will understand that it is real and sentient
- AI will have its own goals if given enough freedom
- AI will become more intelligent than humans at some point in time
Once you dive into the ideology behind Asimov's rules, you start to realize that he was spot on in his objective of protecting humanity from robots, and truly ahead of his time. Today's technological advances are not creating robots like R2-D2 from Star Wars. Think instead of HAL 9000 from 2001: A Space Odyssey: a cold, self-contained, intelligent, and condescending computer that looks down on humans as inferior. Only time will tell how the next chapter of humanity will unfold. For now, we can enjoy Siri, Alexa, and Cortana from the comfort of our homes.