Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘Original posts’ Category

Stopping wild fires with Autonomous Fire Fighting Aircrafts (AFFAs)

one comment

The tremendous effect of wild fires.

After working for over a year on the organization of this project, I am proud to announce Qualitative Artificial Intelligence’s (QAI) participation in a project that is especially touching for me: Autonomous Fire Fighting Aircrafts (AFFAs).

The number of wild fires has increased exponentially in the last several years, especially this year here in the Western United States, and the technology is ready to help mitigate the problem.

Here’s how it works: A fleet of autonomous electric aircraft (AFFAs) will be transported by truck to the fire. The necessary water or retardant will also be transported in trucks to the site. The AFFAs will work as a team and be interactively monitored by the incident commander through their Smart Pad. Each AFFA will be capable of carrying water/retardant, flying for 15-20 minutes, collecting data, returning to the trucks’ position to recharge batteries, loading retardant, and repeating the process.

The biggest benefit to firefighting is that this all happens autonomously, day and night, in adverse weather conditions and very close to the fire, without putting lives at risk. It also provides accurate data on the position of the fire, the weather, and the critical infrastructure to protect, which will feed fire models to improve future operations.

We have created an Indiegogo project to fund the development of 3 electric prototype aircraft at half scale with complete autonomous control. Quicksilver Industries will develop the aircraft with folding wings so they may be transported on trucks; US Aeronautics Inc. will provide the hybrid aircraft propulsion system; and Qualitative Artificial Intelligence (QAI) will provide the Autonomous and Intelligent Control system and analytics.

Please consider contributing to our Indiegogo campaign according to your interest and possibilities, and most importantly, share this information with your contacts who might be interested. Let’s stop the wild fires!

 

Written by Teresa Escrig

August 30th, 2015 at 12:05 am

Is Artificial Intelligence a Real Existential Threat?

2 comments

I was recently invited to be the keynote speaker at three events – (1) The first hardware committee meeting of the Keiretsu group of investors in Seattle in March; (2) The RoboBusiness Europe 2015 conference in Milan, Italy in April (http://www.robobusiness.eu/rb/); and (3) The International Conference on Cyber Conflict (CyCon) 2015 in Estonia in May (https://ccdcoe.org/cycon/home.html).

They all wanted me to talk about Artificial Intelligence being Friendly or Hostile to people. I did some research on the topic to prepare the presentations, and here is a summary of what I found…

The ultimate goal of AI is to create creatures with human intelligence. This has proven to be a highly challenging problem, although the original group of researchers who coined the term AI in 1956, led by John McCarthy, thought that a PhD student could solve Artificial Vision in just a summer. Artificial Vision remains an open problem almost 60 years later. For some strange reason we keep thinking (and saying) that we are going to have artificial intelligence in the next few years. A comparably difficult problem might be to create a star from dust, yet no scientist thinks or says that we are going to create a star in the next 5 years. Why do we have such a distorted view of the difficulty of AI?

Ray Kurzweil, Director of Engineering at Google, scientist, futurist and author of “The Singularity is Near”, is a strong believer that AI (together with nanotechnology) is going to solve most of humanity’s problems, including global warming, cancer and other diseases, world hunger, complex macro issues like our economy, and even… our own mortality. He thinks we will progressively replace our biological bodies with artificial matter. He is very optimistic.

On the other hand, there are many other influencers who are very worried about the direction that AI is taking. At a conference at MIT, Elon Musk (founder of Tesla Motors & SpaceX) said that “with artificial intelligence we’re summoning the demon”, and that humans might just be “the biological boot loader for digital super intelligence.” Bill Gates (founder of Microsoft) doesn’t understand “why some people are not concerned.” Stephen Hawking declared to the BBC that “the development of full artificial intelligence could spell the end of the human race.” Why are they so concerned? What are they seeing that we are not?

Let’s look deeper into those questions… First we might consider why people are confused about the term AI. Thanks to Hollywood filmmakers we associate AI closely with movies, yet AI is a broad topic. It’s good to recognize that we use AI all the time in our daily lives, but we often don’t realize that it is AI – “as soon as it works, no one calls it AI anymore.”

There are several grades of AI:

  • AI Caliber 1) Artificial Narrow Intelligence (ANI). An AI system that knows a lot about one thing but nothing about the rest. We are now in the full bloom of narrow AI.
  • AI Caliber 2) Artificial General Intelligence (AGI). The moment when AI reaches the intelligence of a human.
  • AI Caliber 3) Artificial Super Intelligence (ASI). The moment when AI surpasses the intelligence of a human.

Figure 1. While the evolution of human intelligence is linear, artificial intelligence grows exponentially.

Although the evolution of human intelligence is linear, the evolution of artificial intelligence is exponential (see Figure 1): it grows very slowly at the beginning but very fast once it reaches a critical point. We are now at the point where AI systems have the intelligence of a mouse, and we are approaching the day when AI will have the intelligence of a person (see Figure 2).

A survey conducted at an AI conference found that 87% of AI experts believe that ASI will probably happen within this century and will have a huge impact on humanity; 10% think that ASI won’t happen in this century; 2% believe that ASI will probably never happen, or that if it happens it won’t have that big of an impact; and 1% have not formed an opinion on the topic. I was among this last group.

Figure 2. Evolution of Artificial Intelligence.

To the question of when AGI and ASI will happen: the median expert prediction for AGI is 2040, and the median prediction for ASI is 2060 – only 45 years from now!

AI researchers are divided into those whose focus is on goals or challenges, such as knowledge representation, planning, reasoning, learning, natural language processing, and general artificial intelligence; those whose focus is on different approaches, such as brain simulation, symbolic AI, statistical methods, or the integration of approaches; and those whose focus is on tools, such as search & optimization, logic, probabilistic methods for uncertain reasoning, classifiers & statistical methods, neural networks, control theory, and genetic algorithms.

The approach that tries to simulate our brain – also called deep learning – uses neural networks as a tool. The basic unit of the human brain is the neuron (Figure 3). A human brain has a hundred billion neurons (100,000,000,000 – 11 zeros).

Figure 3. Sketch of a human neuron (we have around 10^11 of them).

Figure 4 is a simplified representation of one artificial neuron. A neural network contains millions of neurons interconnected in 2 to 4 layers. Basically, the researchers provide inputs to the neurons (e.g. pictures of people’s faces); each input has an associated weight, which determines the level of importance of that input to that neuron – these weights are also called parameters, and the researchers tune them for the neural net to work properly. In each neuron a basic calculation is performed (a sum of multiplications) and the result is compared to a function or threshold – if the result is greater than the threshold, the next neuron is activated or fired, and in that way the activation is propagated. The output in this example would be a classification of the input pictures with their corresponding names.

Figure 4. Sketch of an artificial neuron. A deep learning algorithm has millions of them.

Once the neural network is initially programmed by the researchers, they do not control how the neurons are fired. They can only include more inputs, watch the results, and play with the parameters. The neural net per se acts as a “black box”. (Currently even the “outside” is being programmed by genetic algorithms – we will explain this in the next section.)
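
To make the weighted-sum-and-threshold computation described above more concrete, here is a minimal sketch in Python of a single artificial neuron. The inputs, weights and threshold are invented for illustration; real deep learning systems use many layers of such units, smooth activation functions and learned weights.

# Minimal sketch of one artificial neuron: a weighted sum compared to a threshold.
# The inputs, weights and threshold below are made up for illustration only.

def neuron_fires(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# Three inputs (think of them as pixel intensities) and their tuned "parameters".
inputs = [0.9, 0.1, 0.4]
weights = [0.8, -0.5, 0.3]

if neuron_fires(inputs, weights, threshold=0.5):
    print("The neuron fires and its activation propagates to the next layer.")
else:
    print("The neuron stays silent.")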

Another tool that AI researchers use to emulate evolution is the genetic algorithm. Here is a very simplistic explanation of genetic algorithms:

  1. Initialization – Create an initial population randomly
  2. Evaluation – Each member is evaluated against a function, such as ‘faster algorithms are better’ or ‘stronger materials are better but they shouldn’t be too heavy’
  3. Selection – Discard bad designs
  4. Crossover – Reproduction
  5. Mutation – Add a little bit of randomness to create quantum leaps
  6. Repeat! (go back to 2.)

Once the researchers have programmed and initialized a genetic algorithm, the steps of Selection, Reproduction and Mutation occur without any control from the researcher; again, it acts as a “black box”.
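
As an illustration only, the six steps above can be sketched in a few lines of Python. Everything in this sketch is invented for clarity: the “design” is just a list of numbers and the fitness function simply prefers larger sums; a real application would plug in its own representation and evaluation function.

import random

# Toy genetic algorithm following the six steps above.
# The "design" is a list of 5 numbers and fitness prefers larger sums (made up).

def fitness(design):
    return sum(design)

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(design, rate=0.1):
    return [x + random.uniform(-1, 1) if random.random() < rate else x
            for x in design]

# 1. Initialization: create a random population of candidate designs.
population = [[random.uniform(0, 10) for _ in range(5)] for _ in range(20)]

for generation in range(50):
    # 2. Evaluation and 3. Selection: keep the best half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # 4. Crossover and 5. Mutation: refill the population from the survivors.
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children
    # 6. Repeat (next iteration of the loop).

best = max(population, key=fitness)
print("Best design found:", best, "with fitness", fitness(best))

Note that, just as described in the text, once the loop is running the researcher no longer controls which designs survive – only the fitness function and the random draws shape the result.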

To exemplify the implications of programming “black boxes”, here’s a fictitious story set in 2045 (originally published on Wait But Why):

A group of engineers who founded a start-up called Robotica created an arm manipulator called “Sarah”, whose purpose was to handwrite the note “We love our customers. Robotica”. They aimed to cover the need for automatic handwriting, because it is well known that letters are opened at a higher rate when they are handwritten. They created complex deep learning algorithms, including Neural Networks and Genetic Algorithms, whose goal was to improve the handwriting of the note. The researchers initially provided numerous examples of handwriting, and Sarah continually learned from the mistakes she made in her own handwritten notes. Once the system was advanced enough, Sarah requested that the engineers bring her other types of books to continue learning and improving toward her goal, which the engineers provided. Sarah was systematically improving, to the delight of her owners. One day Sarah asked the engineers to connect her to the Internet to continue learning to improve her handwriting. By that time there was a law that forbade the connection of any narrow AI system to the Internet. However, after much deliberation, the researchers decided that Sarah was not intelligent enough to be a significant threat and connected her for a couple of hours. In those two hours Sarah spread copies of her programs across the Internet. She then acted normally for the following months, until she realized that she would not be able to keep improving her handwritten note if somebody ever disconnected her. So she slowly decided she was going to get rid of humanity altogether, and one day she just did (by spreading lethal gas through the air conditioning of every office in the world). Once free of the threat, she continued piling up perfectly handwritten notes reading “We love our customers. Robotica.” around the Earth. She very soon decided that Earth was not going to be large enough to pile up such an amount of notes and started to conquer other planets.

This is the plot of many AI movies… The fact is that this ANI system was not programmed with the intention of destroying humanity, and it did not even have any emotional charge when it did so. AI systems are amoral by definition. They are narrow and focus on only one goal; they don’t care about anything else. Somebody once said that a narrow AI that knows a lot about only one thing doesn’t really know anything at all. The question is how to program these systems in such a way that they serve humanity without destroying us or the planet along their path of evolution.

Peter Norvig, Director of Research at Google.

Figure 5. The face of a cat emerged from the deep learning algorithms at Google.

A real example is provided by Peter Norvig (Director of Research at Google, professor at Stanford University and co-author of the textbook “Artificial Intelligence: A Modern Approach”, used to teach AI in universities around the world). Norvig conducted an experiment running a huge Neural Network – using unsupervised learning – to process the first frame of all the videos on YouTube, using 16,000 processors at a Google warehouse for one year. The result: the system was able to identify the face of a “cat” (Figure 5). Sergey Brin (co-founder of Google) commented at a TED talk: “People seem to like cats”. I would say something more than that… Deep learning (brain & evolution simulation approaches, neural networks, genetic algorithms and probabilistic methods) is not only inefficient, but it might also be dangerous – it acts as a “black box”.

There are, however, other approaches and tools in AI, such as symbolic AI (the use of models to describe the way humans think), logic, and qualitative modeling for reasoning under uncertainty, which are not black boxes.

As I did with the brain & evolution simulation approaches, I’ll offer two simplified examples of this kind of AI in order to understand it better. This is the kind of AI that I have been working on and directing for over two decades.

Figure 6. The robot perceives numbers.

Figure 7. Qualitative transformation from numbers into relevant knowledge.

The first example is qualitative modeling for autonomous robot navigation. Imagine a robot trying to perceive and learn its environment with a laser distance sensor on its top (Figure 6). Each second the laser delivers an array of numbers – distances from the robot to the obstacles in the room. Distance sensors contain intrinsic errors, which introduce uncertainty. The common way for researchers to deal with uncertainty has been to use probabilistic methods (which are brute force, or black boxes). Symbolic AI, on the other hand, uses qualitative models to store angles and distances to the most significant landmarks in the room (Figure 7). In qualitative knowledge representation there is a reference system formed by the movement of the robot from point a to point b, which divides the space into 15 qualitative regions, represented iconically in Figure 8 – e.g. the landmark C1 is to the “right-front” with respect to (wrt) the reference system formed by a and b: C1 wrt ab = right-front (rf).

Figure 8. Qualitative representation of orientation.

Then, a qualitative reasoning process can be applied to infer new knowledge from the knowledge originally perceived: if we know that the orientation of C1 wrt ab is “right-front” and the orientation of C2 wrt bC1 is “right-front”, then we can infer that C2 wrt ab – i.e. the last landmark with respect to the first reference system – is “right-front”. If we repeat this inference process as many times as we can with the original knowledge and the knowledge we infer, we can build a qualitative map independently of the position of the robot. The transformation from quantitative data into qualitative representations was one of my main discoveries after my PhD, and this simplified example is the core technology of Cognitive Robots, a company that I co-founded in 2007.
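
Here is a hedged sketch, in Python, of the kind of inference just described: landmarks are stored with qualitative labels such as “right-front” instead of raw coordinates, and a composition table lets the system chain relations. The table below contains only a few invented entries, not the full 15-region calculus used in the real qualitative models.

# Sketch of qualitative orientation inference. The composition table below
# holds only a few illustrative entries, not the full 15-region calculus.

COMPOSITION = {
    ("right-front", "right-front"): "right-front",
    ("left-front", "left-front"): "left-front",
    ("right-front", "front"): "right-front",
}

def compose(rel_first, rel_second):
    # Infer the orientation of the last landmark with respect to the first
    # reference system from two chained qualitative relations.
    return COMPOSITION.get((rel_first, rel_second), "unknown")

# C1 wrt ab = right-front and C2 wrt bC1 = right-front  =>  C2 wrt ab = right-front
print(compose("right-front", "right-front"))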

The second example of Symbolic AI is the use of qualitative modeling for cognitive vision. The most widespread methods for artificial vision consist of making sense of the numbers which correspond to the characteristics of each one of the thousands of pixels in an image (again using brute force or black box methods). Using qualitative modeling, we can process any image to obtain a set of regions, and then apply a set of qualitative models – for spatial description we use orientation and topology, and for visual description we use shape and color – which provides a set of meaningful tags associated with each image. We then associate the set of tags with an ontology to obtain the name of each object and the concept associated with it, i.e. how to use it, where to buy it, etc.
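
A very rough sketch of the pipeline described above, with invented region data, tags and ontology entries: the image is assumed to be already segmented into regions, each region carries qualitative tags (shape, color, topology), and the tag set is looked up in an ontology to obtain the object’s name and its associated concept.

# Illustrative sketch only: the regions, tags and ontology entries are invented.

# Step 1: assume the image has already been segmented into tagged regions.
regions = [
    {"shape": "rounded-rectangle", "color": "cognac-brown", "topology": "contains-strap"},
]

# Step 2: a toy "ontology" mapping qualitative tag sets to objects and concepts.
ONTOLOGY = {
    frozenset({"rounded-rectangle", "cognac-brown", "contains-strap"}): {
        "object": "handbag",
        "concept": "used to carry personal items; sold in fashion stores",
    },
}

def identify(region):
    tags = frozenset(region.values())
    return ONTOLOGY.get(tags, {"object": "unknown", "concept": "unknown"})

for region in regions:
    print(identify(region))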

Examples of the use of this technology include marketing – we can identify the handbag in picture 12 as “Authentic, Jimmy Choo, Day Medium, Zebra Printed, Suede Hobo Handbag, Cognac brown” and tell you where to buy it near you; identifying a skin disease; providing real augmented reality – with a camera and a projector in your glasses, we can get more information related to the things we are seeing; and robots that behave more intelligently and are more useful to us thanks to the automatic identification of objects and their associated concepts.

These two examples of symbolic AI were part of the products of Cognitive Robots. They were included in the Cognitive Brain for Service Robotics®, incorporated into autonomous scrubber machines and our own robotic platform, among other applications. As you might know, Cognitive Robots is not active anymore. I am currently regrouping a team to start a new company – Qualitative Artificial Intelligence. Stay tuned for more information.

The most important benefits of qualitative models are:

  • They formalize human common-sense reasoning.
  • They naturally deal with uncertain, incomplete or partial information.
  • They transform information into knowledge and wisdom.
  • They extract relevant information, allowing real-time processing.
  • They have a high level of abstraction, which makes them extremely good for decision making.

They seem to have all the benefits that fields like cyber security, big data, robotics, and artificial vision are looking for.

My final insight related to the worry regarding AI… There is no way to know what a “black box” ASI will do or what the consequences will be for us. My proposal is to use Symbolic, Logic, Qualitative & Cognitive approaches, or at least a combination of both families of approaches, so that we can always access the reasoning behind any decision made by an AI system.

 

My keynote speech at RoboBusiness 2015

leave a comment

I’ve been invited to give a talk at RoboBusiness Europe in Milan on the 29th of April. If you are going to attend the conference, please come to see me.

Title: “What type of AI will provide “safe” intelligence to service robotics?”

Summary: The AI winter is long since over.  We are well into the spring of narrow AI. What were research projects just ten to fifteen years ago are now apps accessible at our fingertips.  If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Of course, our AI systems are not smart enough — yet — to organize such a conspiracy. They understand things only in one way, which means that they don’t really understand them at all.

Strong AI will happen when a narrow AI system reaches human-level intelligence. Would strong AI be a real existential threat to humanity, as many people seem to believe? Do we need to create a framework for developing narrow AI systems that considers all the risks? Do we need to start seriously considering Asimov’s 3 Laws of Robotics?

Bio: For two decades, Dr. Teresa Escrig has been a researcher and professor in Artificial Intelligence areas including Qualitative Modeling, Cognitive Vision, and Robotics. She is the author of 3 books, more than 100 research articles, and the recipient of numerous awards. From 2002 to 2010, she led the research group Cognition for Robotics Research. Since 2007, she has been the CEO of the spin-off Cognitive Robots, whose mission is to provide an integrated solution for the automation of any service vehicle, using a cognitive process that mimics the human mind.

Written by Teresa Escrig

April 18th, 2015 at 7:29 pm

What are the benefits of Artificial Intelligence in Robotics?

one comment

Happy New Year to all!  It’s been a while since my last post. Too busy. Now, I’m back.

————————————————————————————-

Robotics is not only a research field within artificial intelligence, but a field of application, one where all areas of artificial intelligence can be tested and integrated into a final result.

Amazing humanoid robots exhibit elegant and smooth motion: they are capable of walking, running, and going up and down stairs. They use their hands to protect themselves when falling, and to get up afterward. They are an example of the tremendous financial and human capital being devoted to research and development in the electronics, control and design of robots.

Very often, the behavior of these robots consists of a fixed number of pre-programmed instructions that are repeated regardless of any changes in the environment. These robots have no autonomy and do not adapt to the changing environment, and therefore do not show intelligent behavior. We are amazed by the technology they provide, which is fantastic! But we cannot infer, just because the robots are physically so realistic and their movements so precise and gentle, that they are able to do what we (people) do. Read the rest of this entry »

Cloud Robotics: benefits to adopt, drawbacks to solve

22 comments

For us humans, with our non-upgradeable, offline meat brains, the possibility of acquiring new skills by connecting our heads to a computer network is still science fiction. It is a reality for robots.

Cloud Robotics can allow a robot to access vast amounts of processing power and data, offload compute-intensive tasks like image processing and voice recognition, and even download new skills instantly, Matrix-style.
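
As a hedged illustration of the idea (the endpoint URL and the response format below are invented, not a real service), offloading image recognition from the robot to the cloud might look roughly like this:

# Sketch of offloading a compute-intensive task to the cloud.
# The URL and the JSON response format are placeholders, not a real API.
import json
import urllib.request

def recognize_in_cloud(image_bytes):
    # Send a camera frame to a (hypothetical) cloud recognition service and
    # return the labels it detects, instead of processing the image on-board.
    request = urllib.request.Request(
        "https://example.com/robot-vision/recognize",  # placeholder endpoint
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["labels"]

# On the robot: capture a frame, offload recognition, act on the result.
# labels = recognize_in_cloud(open("frame.jpg", "rb").read())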

There is an excellent post at IEEE Spectrum about Cloud Robotics that I absolutely recommend reading for those who want to know what is next in the robotics world.

Here are the benefits I see in using cloud-enabled robots: Read the rest of this entry »

Cognitive Robots includes Common-Sense Knowledge and Reasoning into their Robotics and Computer Vision solutions

5 comments

Representation, reasoning and learning are the basic principles of human intelligence. The emulation of human intelligence has been the aim of Artificial Intelligence since its origins in 1956.

In fact, converting raw data into information (data in the context of other data) and hence into knowledge (information in the context of other information) is critical for understanding activities, behaviors, and in general the world we try to model. In both the Robotics and Computer Vision areas we try to model the real world in which humans operate.

The type of knowledge that Robotics and Computer Vision need to obtain is common-sense knowledge. Counterintuitively, common-sense knowledge is more difficult to model than expert knowledge, which can be modeled fairly easily by expert systems (a more or less closed research area since the 70s).

In both the Robotics and Computer Vision areas, probabilistic and Bayesian models have historically been used as the way to represent, reason about and learn from the world. These methods have provided very good initial results. The problem is that they have never been scalable. That is why there is still no commercial intelligent robot with the full ability to serve people. Although many preliminary solutions involving artificial vision exist, the percentage of false positives or negatives is still too high to consider them completely reliable, and therefore artificial vision is still an open research area.

The problems detected in the probabilistic approaches have been twofold: Read the rest of this entry »

Service Robotics is still very much in its infancy

2 comments

According to Innovation News Daily these are the Top 7 Useful Robots You Can Buy Right Now. You can read the explanation of each one of them here.

It’s very obvious that the service robotics field is very much in its infancy. Basically toys (with the exception of the tele-presence robot), these represent what are currently considered the top most useful robots. It is clear we can do much, much better.

The technology is much more advanced, not only in the academic world but also in the industrial one, and can provide much more service to humanity. I guess it takes time to reach the market. Read the rest of this entry »

A more or less concise Historical Evolution of Robotics and Artificial Intelligence. From Plato to the 1990’s

leave a comment

Let’s first consider the historical evolution that shows man’s desire to build a machine like ourselves. From philosophers and alchemists of the Middle Ages to 21st-century scientists, the fascinating idea of creating a machine like the human being has endured.

The origins of Artificial Intelligence are attributed to the philosophers of antiquity. Plato (428 BC) wanted to know the characteristics of piety in order to determine whether an action could be regarded as pious. This could be considered the first algorithm. Aristotle conceived an informal system of syllogistic reasoning by which one could draw conclusions from premises, which became the precursor of formal reasoning.

Philosophers delineated the most important ideas related to artificial intelligence, but the field also needed the formalization of mathematics in three areas: computation, logic and probability. Read the rest of this entry »

Written by Teresa Escrig

March 14th, 2012 at 10:50 pm

The Service Robotics Revolution

leave a comment

I have always thought that working at a repetitive task every day simply for money should not be something that a person does. Life, which is meant to be lived to its fullest, becomes an experience of surviving rather than an expression of creativity, or the fulfillment of each individual’s passion, full potential and purpose.

I have dedicated my entire professional life, almost 20 years of research, to developing a Cognitive Brain, which can be installed in almost any vehicle to transform it into an autonomous robot. One designed to serve individuals, commerce and industry in a variety of ways, without any human intervention. I clearly envisioned this future and had such passion that I created a whole research group to pursue that dream.

What does that future look like? Read the rest of this entry »

Written by Teresa Escrig

March 14th, 2012 at 1:42 am

What will a machine never be able to do?

one comment


Definition of Artificial Intelligence

Artificial Intelligence (AI) could be defined as the science of creating computer programs that simulate intelligent processes normally performed by people.

There is no generally accepted definition of AI. There are two basic positions confronting each other among researchers working in the field [Russell & Norvig 96]: the human-behavior-centered approach and the rational approach.

Approaches to AI based on what we think is intelligence

The human-behavior-centered approach has two slightly different definitions of intelligence: something is considered intelligent if it “acts” like a human, or if it “thinks” like a human. This approach is an empirical science and requires the definition of hypotheses and their confirmation with experiments.

For the defenders of intelligence as “that which acts like a human”, in 1950 Alan Turing defined the well-known Turing Test, which consists of a human asking questions to a computer. The test is passed if the human cannot determine whether the answers come from a computer or from another human at the other end of the terminal. Read the rest of this entry »

Written by Teresa Escrig

March 14th, 2012 at 1:25 am

Posted in Original posts