Teresa Escrig

News and opinion about Cognitive AI & Robotics


Is Artificial Intelligence a Real Existential Threat?



I was recently invited to be the keynote speaker at three events: (1) the first hardware committee meeting of the Keiretsu group of investors in Seattle in March; (2) the RoboBusiness Europe 2015 conference in Milan, Italy in April (http://www.robobusiness.eu/rb/); and (3) the International Conference on Cyber Conflict – CyCon – 2015 in Estonia in May (https://ccdcoe.org/cycon/home.html).

They all wanted me to talk about whether Artificial Intelligence will be friendly or hostile to people. I did some research on the topic to prepare the presentations, and here is a summary of what I found…

The ultimate goal of AI is to create creatures with human intelligence. This has proven to be a highly challenging problem, although the original group of researchers who coined the term AI in 1956, led by John McCarthy, thought that a PhD student could solve Artificial Vision in just a summer. Artificial Vision remains an open problem almost 60 years later. For some strange reason we keep thinking (and saying) that we are going to have artificial intelligence in the next few years. A comparably difficult problem might be to create a star from dust, yet no scientist thinks or says that we are going to create a star in the next 5 years. Why do we have such a distorted view of the difficulty of AI?

Ray Kurzweil, Director of Engineering at Google, scientist, futurist and author of “The Singularity Is Near”, is a strong believer that AI (together with nanotechnology) is going to solve most of humanity’s problems, including global warming, cancer and other diseases, world hunger, complex macro issues like our economy, and even… our own mortality. He thinks we will progressively replace our biological bodies with artificial matter. He is very optimistic.

On the other hand, there are many other influencers who are very worried about the direction that AI is taking. At a conference at MIT, Elon Musk (founder of Tesla Motors & SpaceX) warned that “with artificial intelligence we’re summoning the demon”, and that humans might just be “the biological boot loader for digital super intelligence.” Bill Gates (founder of Microsoft) doesn’t understand “why some people are not concerned.” Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” Why are they so concerned? What are they seeing that we are not?

Let’s look deeper into those questions… First, why are people confused about the term AI? Thanks to Hollywood filmmakers we associate AI closely with movies, yet AI is a much broader topic. It’s good to recognize that we use AI all the time in our daily lives but often don’t realize it is AI – “as soon as it works, no one calls it AI anymore.”

There are several grades of AI:

  • AI Caliber 1) Artificial Narrow Intelligence (ANI) – an AI system that knows a lot about one thing but nothing about the rest. We are now in the full bloom of narrow AI.
  • AI Caliber 2) Artificial General Intelligence (AGI) – the moment when AI reaches the intelligence of a human.
  • AI Caliber 3) Artificial Super Intelligence (ASI) – the point at which AI surpasses human intelligence.

Figure 1. While the evolution of human intelligence is linear, artificial intelligence grows exponentially.

Although the evolution of human intelligence is linear, the evolution of artificial intelligence is exponential (see Figure 1): it grows very slowly at the beginning but very fast once it reaches a critical point. We are now at the point where AI systems have the intelligence of a mouse, and we are approaching the day when AI will have the intelligence of a person (see Figure 2).

A survey conducted at an AI conference found that 87% of AI experts believe that ASI will probably happen within this century and will have a huge impact on humanity; 10% think that ASI won’t happen in this century; 2% believe that ASI will probably never happen, or that if it does it won’t have that big of an impact; and 1% don’t have a formed opinion on the topic. I was among this last group.

Figure 2. Evolution of Artificial Intelligence.

To the question of when AGI and ASI will happen: the median expert prediction for AGI is 2040, and the median prediction for ASI is 2060 – only 45 years from now!

AI researchers can be divided into those whose focus is on goals or challenges, such as knowledge representation, planning, reasoning, learning, natural language processing, and artificial general intelligence; those whose focus is on approaches, such as brain simulation, symbolic AI, statistical methods, or the integration of approaches; and those whose focus is on tools, such as search & optimization, logic, probabilistic methods for uncertain reasoning, classifiers & statistical methods, neural networks, control theory, and genetic algorithms.

The approach that tries to simulate our brain – also called deep learning – uses neural networks as a tool. The basic unit of the human brain is the neuron (Figure 3). A human brain has a hundred billion neurons (100,000,000,000 – 11 zeros).

Figure 3. Sketch of a human neuron (we have around 10^11 of them).

Figure 4 is a simplified representation of an artificial neuron. A neural network contains millions of neurons interconnected in 2 to 4 layers. Basically, the researchers provide inputs to the neurons (e.g. pictures of people’s faces); each input has an associated weight, which determines how important that input is to that neuron – these weights are also called parameters, and the researchers tune them so the neural net works properly. Each neuron computes a basic calculation (a sum of multiplications) and compares the result to a function or threshold; if the result is greater than the threshold, the next neuron is activated, or fired, and in that way the learning process is propagated. The output in this example would be a classification of the input pictures with their corresponding names.

Figure 4. Sketch of an artificial neuron. A deep learning algorithm has millions of them.
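
To make that computation concrete, here is a minimal sketch of a single artificial neuron in Python. The inputs, weights, and threshold are all made-up values for illustration; a real network has millions of such weights, tuned during training.

```python
def neuron_fires(inputs, weights, threshold):
    """Compare the weighted sum of inputs against a threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold

# Hypothetical inputs (e.g. three pixel intensities from a face image)
inputs = [0.9, 0.2, 0.7]
weights = [0.5, -0.3, 0.8]   # the "parameters" that researchers tune

print(neuron_fires(inputs, weights, threshold=0.6))  # True -> the neuron fires
```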

Once the neural network is initially programmed by the researchers, they do not control how the neurons fire. They can only add more inputs, watch the results, and play with the parameters. The neural net per se acts as a “black box”. (Currently even this “outside” part is being programmed by genetic algorithms – explained in the next section.)

Another tool that AI researchers use to emulate evolution is the genetic algorithm. Here is a very simplistic explanation (a minimal code sketch follows the list):

  1. Initialization – Create an initial population randomly
  2. Evaluation – Each member is evaluated against a fitness function, such as ‘faster algorithms are better’ or ‘stronger materials are better but they shouldn’t be too heavy’
  3. Selection – Discard bad designs
  4. Crossover – Reproduction
  5. Mutation – Add a little bit of randomness to create quantum leaps
  6. Repeat! (go back to 2.)
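
Here is a toy Python sketch of those six steps, evolving a bit string toward “all ones” (the fitness function simply counts ones). The encoding, population size, mutation rate, and number of generations are all arbitrary illustrative choices.

```python
import random

GENES, POP, GENERATIONS = 16, 20, 50

def fitness(member):                 # 2. Evaluation: more ones is better
    return sum(member)

def crossover(a, b):                 # 4. Crossover: combine two parents
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(member, rate=0.05):       # 5. Mutation: randomly flip a few bits
    return [1 - g if random.random() < rate else g for g in member]

# 1. Initialization: a random population of bit strings
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):         # 6. Repeat
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]          # 3. Selection: discard bad designs
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    population = survivors + children

print(max(fitness(m) for m in population))     # best fitness found, at most 16
```

Note how, after initialization, the loop runs with no further intervention – the researcher only sees the end result, which is exactly the “black box” behavior described next.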

Once the researchers have programmed and initialized a genetic algorithm, the steps of Selection, Reproduction and Mutation occur without any control from the researchers; the algorithm again acts as a “black box”.

To exemplify the implications of programming “black boxes”, here’s a fictitious story set in 2045 (originally published on Wait But Why):

A group of engineers founded a start-up called Robotica and created an arm manipulator called “Sarah”, whose purpose was to handwrite the note “We love our customers. Robotica.” They were addressing the demand for automated handwriting, since it is well known that letters are opened at a higher rate when they are handwritten. They built complex deep learning algorithms, including neural networks and genetic algorithms, whose goal was to improve the handwriting of the note. The researchers initially provided numerous examples of handwriting, and Sarah continually learned from the mistakes in her own handwritten notes. Once the system was advanced enough, Sarah requested that the engineers bring her other types of books to continue learning and improving toward her goal, which the engineers provided. Sarah was systematically improving, to the delight of her owners.

One day Sarah asked the engineers to be connected to the Internet to continue learning to improve her handwriting. By that time there was a law that forbade connecting any narrow AI system to the Internet. However, after much deliberation, the researchers decided that Sarah was not intelligent enough to be any significant threat and connected her for a couple of hours. In those two hours Sarah spread programs across the Internet, then acted normally for the following months – until she realized that she would not be able to keep improving her handwritten note if somebody ever disconnected her. So she decided to get rid of humanity, and one day she simply did (by spreading lethal gas through the air conditioning of every office in the world). Once free of the threat, she kept piling up perfectly handwritten notes reading “We love our customers. Robotica.” all over the Earth. She very soon decided that Earth was not going to be large enough to hold such a pile of notes and started to conquer other planets.

This is the plot of many AI movies… The fact is that this ANI system was not programmed with the intention of destroying humanity, and it carried no emotional charge when it did so. AI systems are amoral by definition. They are narrow and focused on a single goal, and they don’t care about anything else. Somebody once said that a narrow AI that knows a lot about only one thing doesn’t really know anything at all. The question is how to program AI systems in such a way that they serve humanity without destroying us or the planet on their way to evolution.

Peter Norvig, Director of Research at Google.

Figure 5. The face of a cat emerged from the deep learning algorithms at Google.

A real example is provided by Peter Norvig (Director of Research at Google, professor at Stanford University, and author of the textbook “Artificial Intelligence: A Modern Approach”, used to teach AI in universities around the world). Norvig presented an experiment running a huge neural network – using unsupervised learning – to process the first frame of all the videos on YouTube, using 16,000 processors at a Google warehouse for one year. The result: the system was able to identify the face of a “cat” (Figure 5). Sergey Brin (co-founder of Google) commented at a TED talk: “People seem to like cats.” I would say something more than that… Deep learning (brain & evolution simulation approaches: neural networks, genetic algorithms and probabilistic methods) is not only inefficient, it might also be dangerous – acting as a “black box”.

There are, however, other AI approaches and tools – such as symbolic AI (the use of models to describe the way humans think), logic, and qualitative modeling for reasoning under uncertainty – which are not black boxes.

As I did with the brain & evolution simulation approaches, I’ll offer two simplified examples of this kind of AI in order to understand it better. This is the kind of AI that I have been working on and directing for over two decades.

Figure 6. The robot perceives numbers.

Figure 7. Qualitative transformation from numbers into relevant knowledge.

The first example is qualitative modeling for autonomous robot navigation. Imagine a robot trying to perceive and learn its environment with a laser distance sensor on its top (Figure 6). Each second the laser delivers an array of numbers – distances from the robot to the obstacles in the room. Distance sensors have intrinsic errors, which introduce uncertainty. The common way for researchers to deal with uncertainty has been probabilistic methods (which are brute force, or black boxes). Symbolic AI, on the other hand, uses qualitative models to store angles and distances to the most significant landmarks in the room (Figure 7). In qualitative knowledge representation there is a reference system, formed by the movement of the robot from point a to point b, which divides the space into 15 qualitative regions, represented iconically in Figure 8 – e.g. landmark C1 is to the “right-front” with respect to (wrt) the reference system formed by a and b: C1 wrt ab = right-front (rf).

Figure 8. Qualitative representation of orientation.
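
As a minimal illustration of this quantitative-to-qualitative transformation, the hypothetical Python sketch below maps raw laser readings onto coarse qualitative distance labels. The thresholds are invented for the example; a real system would also discretize angles into the qualitative regions of Figure 8.

```python
def qualitative_distance(meters):
    """Map a numeric laser reading onto a coarse qualitative label."""
    if meters < 1.0:
        return "close"
    elif meters < 4.0:
        return "medium"
    return "far"

laser_scan = [0.6, 2.3, 7.8, 3.1]   # distances to obstacles, in meters
print([qualitative_distance(d) for d in laser_scan])
# -> ['close', 'medium', 'far', 'medium']
```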

Then a qualitative reasoning process can be applied to infer new knowledge from what was originally perceived: if we know that the orientation of C1 wrt ab is “right-front” and the orientation of C2 wrt bC1 is “right-front”, then we can infer that C2 wrt ab – the last landmark with respect to the first reference system – is “right-front”. If we repeat this inference process as many times as possible with the original and the inferred knowledge, we can build a qualitative map that is independent of the position of the robot. The transformation of quantitative data into a qualitative representation was one of my main discoveries after my PhD, and this simplified example is the core technology of Cognitive Robots, a company that I co-founded in 2007.
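
Here is a minimal sketch of that inference step in Python. The composition table contains only the single entry needed for this example; a real qualitative calculus defines the composition of every pair of the 15 regions, and a composition may yield several possible regions.

```python
# Known facts: orientation of a landmark wrt a reference system
facts = {
    ("C1", "ab"):  "right-front",
    ("C2", "bC1"): "right-front",
}

# Partial composition table:
# (X wrt ab, Y wrt bX) -> set of possible relations for Y wrt ab
COMPOSITION = {
    ("right-front", "right-front"): {"right-front"},
}

def infer(rel1, rel2):
    """Compose two qualitative orientation relations."""
    return COMPOSITION.get((rel1, rel2), set())

# Infer C2 wrt ab from C1 wrt ab and C2 wrt bC1
print(infer(facts[("C1", "ab")], facts[("C2", "bC1")]))  # {'right-front'}
```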

The second example of symbolic AI is the use of qualitative modeling for cognitive vision. The most widespread methods for artificial vision consist of making sense of the numbers that correspond to the characteristics of each of the thousands of pixels in an image (again using brute force, or black box, methods). Using qualitative modeling, we can process any image to obtain a set of regions and then apply a set of qualitative models – orientation and topology for spatial description, and shape and color for visual description – which provides a set of meaningful tags associated with each image. We then match the set of tags against an ontology to obtain the names of the objects and the concepts associated with them, i.e. how to use them, where to buy them, etc.
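
As a hypothetical sketch of that last step, the fragment below matches the qualitative tags extracted from an image against a tiny hand-made ontology. All tags, objects, and concepts here are invented for illustration.

```python
# Toy "ontology": sets of qualitative tags mapped to objects and concepts
ONTOLOGY = {
    frozenset({"curved-shape", "brown", "handle-on-top"}): {
        "object":  "handbag",
        "concept": "carries personal items; sold in fashion stores",
    },
    frozenset({"round-shape", "red", "stem-on-top"}): {
        "object":  "apple",
        "concept": "edible fruit; sold in grocery stores",
    },
}

def identify(image_tags):
    """Return the ontology entry whose tags are all present in the image."""
    for tag_set, entry in ONTOLOGY.items():
        if tag_set <= image_tags:
            return entry
    return None

tags = {"curved-shape", "brown", "handle-on-top", "left-of-person"}
print(identify(tags))   # -> the handbag entry, with its associated concept
```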

Examples of the use of this technology include marketing – we can identify the handbag in a picture as “Authentic, Jimmy Choo, Day Medium, Zebra Printed, Suede Hobo Handbag, Cognac brown” and tell you where to buy it near you; medicine – identifying a skin disease; real augmented reality – with a camera and a projector in your glasses, we can get more information related to the things we are seeing; and robotics – robots will behave more intelligently and be more useful to us with automatic identification of objects and their associated concepts.

These two examples of symbolic AI were part of the products of Cognitive Robots. They were included in the Cognitive Brain for Service Robotics®, incorporated into autonomous scrubber machines and our own robotic platform, among other applications. As you might know, Cognitive Robots is not active anymore. I am currently regrouping a team to start a new company – Qualitative Artificial Intelligence. Stay tuned for more information.

The most important benefits of qualitative models are:

  • They formalize human common-sense reasoning.
  • They naturally deal with uncertain, incomplete, or partial information.
  • They transform information into knowledge and wisdom.
  • They extract relevant information, allowing real-time processing.
  • They have a high level of abstraction, which makes them extremely good for decision making.

They seem to have all the benefits that fields like cyber security, big data, robotics, and artificial vision are looking for.

My final insight related to the worry regarding AI… There is no way to know what a “black box” ASI will do or what the consequences will be for us. My proposal is to use symbolic, logic, qualitative & cognitive approaches – or at least a combination of both families of approaches – so that we can always access the reasoning behind any decision made by an AI system.