Artificial Intelligence: Will it be a Boon for Mankind or a Disaster?

What exactly is artificial intelligence, or "AI" as it is sometimes called?  Is it a loyal and helpful robot like C-3PO, R2-D2, or BB-8 of Star Wars, or Data, the amiable android of Star Trek fame?  Perhaps it is a callous, indifferent intelligence like Ultron in Avengers: Age of Ultron, HAL in 2001: A Space Odyssey, V.I.K.I., the evil intelligence in I, Robot, or Auto, the autopilot from Disney Pixar's WALL-E?  Right now, it isn't any of those things, but it is a technology with the potential to bring great benefits to humanity or great harm, depending on how it is developed and used.

What Exactly is Artificial Intelligence?

Artificial intelligence is defined as an area of computer science that works to create and develop machines with the ability to think and react like humans.  AI scientists create programs that endow computers with abilities like speech recognition and the capacity to learn, plan, and solve problems.  Some scientists who work in robotics concentrate on giving machines bodies that allow them to move, pick up and carry objects, and process visual and aural stimulation in order to understand, retrieve, and communicate information.  "Knowledge engineering" is a key component of AI research and development.  If a machine is going to take in information, process it, solve problems, and respond like a human, it must have access to many different types and levels of information to accomplish a task.  Needless to say, giving a machine the power to think, with common-sense reasoning, planning, and problem-solving skills, is a complex and tedious programming task.  Machine learning requires many thousands of examples of the needed data and intricate mathematical algorithms.
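To make the idea of "learning from examples" concrete, here is a deliberately tiny sketch, in Python, of one of the simplest learning techniques there is: a nearest-neighbor classifier that labels a new observation by finding the most similar training example.  The data, feature names, and labels below are all hypothetical, and real systems use vastly more data and far more intricate algorithms than this.

```python
# A minimal, illustrative sketch of machine learning: label a new data
# point by finding the most similar training example (1-nearest-neighbor).
# All data here is made up purely for illustration.
import math

# Hypothetical training examples: (feature vector, label) pairs.
# Imagine the features are (hours of sunshine, inches of rain).
training_examples = [
    ((9.0, 0.1), "sunny"),
    ((8.5, 0.0), "sunny"),
    ((2.0, 1.5), "rainy"),
    ((1.0, 2.2), "rainy"),
]

def classify(point):
    """Return the label of the training example closest to `point`."""
    nearest = min(training_examples,
                  key=lambda example: math.dist(example[0], point))
    return nearest[1]

print(classify((8.0, 0.2)))  # a bright, dry day -> "sunny"
print(classify((1.5, 1.9)))  # a wet, gloomy day -> "rainy"
```

With only four examples this "intelligence" is trivial, which is exactly the point: the power of modern machine learning comes from scaling this kind of pattern-matching up to thousands or millions of examples.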

Types of Artificial Intelligence

Science fiction has long envisioned a future with sentient computers and androids.  Star Trek, Westworld, Terminator and Battlestar Galactica are just a few of the many movies and television shows featuring androids who look, act and think like human beings.  While computer scientists are working on creating an AI that embodies all the functions of the human brain, the technology is still largely theoretical.

There are three types of possible artificial intelligence.  Much of the work that has been done to date focuses on weak or narrow AI.  In fact, virtually all of the operational artificial intelligence in the world today is narrow AI.

Narrow or Weak AI: Narrow AI (ANI) is programming a computer to perform a single task.  It may be monitoring the weather, playing chess, or keeping track of your Google searches to target advertising to your interests.  Some of those tasks can be incredibly complex.  Consider what Siri or Alexa can do.  Both are examples of narrow AI.  They are capable of processing and interpreting human language and plugging the content into a search engine to answer your question.  Neither is true AI, because neither is self-aware.  Alexa and Siri do not have the ability to see themselves as autonomous beings.  They lack imagination, the ability to plan, and the ability to solve problems.  They are tools, not individuals.
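The single-task nature of narrow AI can be illustrated with a toy sketch.  This is emphatically not how Siri or Alexa actually work; it is a hypothetical, hard-coded keyword matcher meant only to show what it means for a system to handle requests within a narrow domain and nothing outside it.

```python
# A toy illustration of narrow AI's single-task nature: a keyword-based
# "assistant" that maps a request to one of a few hard-coded tasks.
# All task names below are hypothetical; real assistants use far more
# sophisticated language processing.
def handle_request(utterance):
    text = utterance.lower()
    if "weather" in text:
        return "task: look up the forecast"
    if "chess" in text:
        return "task: start a chess game"
    if "search" in text:
        return "task: run a web search"
    # Outside its narrow domain, the system has no understanding at all.
    return "task: unknown"

print(handle_request("What's the weather like today?"))
print(handle_request("Do you ever dream?"))  # no self-awareness, just rules
```

Notice that the second question simply falls through to "unknown": the system has rules, not understanding, which is what separates even a very capable tool from a self-aware individual.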

Even self-driving cars are examples of narrow AI.  They use multiple ANI systems that interface to operate the car and navigate roads.  ANI has been of real benefit to mankind.  It sorts and processes data much faster than the human brain, and it has been used to create all kinds of systems to assist humans with mundane tasks.

Artificial General Intelligence or AGI:  AGI, also called strong AI, is expected to be the next level of artificial intelligence developed.  Strong AI will be able to replicate all the functions of human thinking.  It will be able to reason, solve problems, learn and integrate prior knowledge.  It will be capable of innovation and imagination.  Strong AI will have all the thinking ability of the human brain, but it still will not be equal to human beings because of one key feature that is lacking: self-awareness.  When AGI arrives, these computers will have powerful brains, but they will not be sentient beings.

Artificial Super Intelligence or ASI:  While some scientists believe ASI is a crucial development in the future of mankind, others warn of its dangers.  ASI will be smarter than any human being on the planet.  It will have the capacity to think, reason, imagine, and learn faster than the human brain.  In addition, Artificial Super Intelligence will be self-aware.  It will have consciousness and see itself as an individual with its own needs and wants.  Stephen Hawking said: “Success in creating Artificial Intelligence would be the biggest event in human history.  Unfortunately, it might also be the last, unless we learn how to avoid the risks.”  When Hawking made this predictive statement, he was talking about ASI.

Benefits and Risks of Developing Artificial Super Intelligence

The big question with ASI is whether we have the intelligence and foresight to create this technology with adequate safeguards to make it a benefit to man rather than a threat.  ASI, if it is ever developed, will be immensely smarter than any human.  As a sentient entity, it will have a desire for self-preservation.  Yes, it has the potential to end wars and poverty, to make business and government more efficient and productive.  It can advance science and eradicate disease.

However, great minds like Bill Gates, Elon Musk, Steve Wozniak and Stephen Hawking have all expressed grave concerns about the pursuit of ASI.  While it may take years to develop sentient AI, these scientists warn that it cannot be developed without serious consideration of safeguards.  A few decades ago, computer scientists believed it would be centuries before we had the tools to develop ASI, but subsequent great leaps in technology have made them rethink the possibilities.  We are now looking at the potential for ASI to be developed in our lifetime.  Because scientists have a strong desire to move ahead and create the ultimate computer, the dangers of ASI must be addressed before it becomes a reality.

Laws, Primary Directives and Artificial Intelligence

Concerned scientists believe the key to developing and living with ASI is to align its goals with ours.  Accomplishing that alignment may be more difficult than we can conceive.  In Star Trek and some other science fiction programs, androids have a primary rule in their coding that prevents them from doing anything that would harm humans.  Could such a primary directive be encoded in a way that cannot be changed?  Who knows?

What we do know is that, right now, there are laws in place that protect us, and, believe it or not, we have already seen some court cases involving artificial intelligence.  In 2007, the case of Jones v. W + M Automation, Inc. was decided; it dealt with a man, Thomas E. Jones, who was injured on the job by an automated robotic arm, a form of narrow AI.  The court found that the manufacturer was not liable to Mr. Jones because it had complied with regulations and the injury was not foreseeable.  While this case didn't work out for Mr. Jones, it shows that there are many laws and regulations already in place to control artificial intelligence.

In fact, the laws surrounding artificial intelligence are growing almost as fast as the technology itself.  Before the year 2000, no state had any law on the books for self-driving vehicles.  Now, according to the National Conference of State Legislatures' website, 40 out of 50 states have either enacted legislation or issued an executive order on the topic, and the National Highway Traffic Safety Administration (NHTSA) has released federal guidelines for Automated Driving Systems (self-driving vehicles).

As we continue to advance all forms of artificial intelligence, blurring the lines between science fiction and reality, we will also need to keep pace with the laws surrounding it.  Advanced programming directives and local, national and even global laws will need to be continually created, updated and enacted to ensure public safety.  These laws may be the only way to determine which movie plot we end up in.