Teresa Escrig

News and opinion about Cognitive AI & Robotics

What are the benefits of Artificial Intelligence in Robotics?


Happy New Year to all!  It’s been a while since my last post. Too busy. Now, I’m back.

————————————————————————————-

Robotics is not only a research field within artificial intelligence, but also a field of application: one where all the areas of artificial intelligence can be tested and integrated into a final result.

Amazing humanoid robots exhibit elegant and smooth motion: they walk, run, and go up and down stairs. They use their hands to protect themselves when falling, and to get up afterward. They are an example of the tremendous financial and human capital being devoted to research and development in the electronics, control, and design of robots.

Very often, the behavior of these robots consists of a fixed number of pre-programmed instructions that are repeated regardless of any changes in the environment. These robots have no autonomy and do not adapt to a changing environment, and therefore do not show intelligent behavior. We are amazed by the technology they embody, which is fantastic! But we cannot infer, just because the robots are physically so realistic and their movements so precise and gentle, that they are able to do what we humans do.

Let’s imagine that we see a robot in a film with a manipulator arm ironing a shirt. Today’s robotics technology is not advanced enough to iron a shirt autonomously, as people do. Even if we see a robot’s arm grab the iron by the handle and slide it over the fabric (which has been placed there by a human), the speed of each pass over the clothes and the number of times it goes over the same spot would surely be pre-programmed. If we were to lower the ironing board, the iron would probably float above the shirt at a height equal to the amount we lowered the board, repeating the same movements as if it were really ironing, without ever realizing that the iron is not touching the fabric. It would not be able to observe the effect of the iron on the shirt, nor deduce whether the fabric still has wrinkles; perhaps the iron is unplugged, or the ironing program is not adjusted to the fabric type. Needless to say, the arm cannot swap the finished shirt for the next one to be ironed. Today’s robots cannot iron autonomously, even if Hollywood makes it seem otherwise.

As in the robot-ironing example, there are many other things that robots currently cannot do. We have seen so many movies showing advanced robotic skills that the limits of science and technology in intelligent robot behavior are unclear to most of the population, even to computer scientists not working directly in cognitive robotics.

Artificial Intelligence brings intelligent behavior to the robot, enabling it to provide services to humans in unpredictable and changing environments, such as homes, hospitals, workplaces, and everywhere around us.

The basic contributions of AI in robotics are:

PERCEPTION – not only taking data from the environment, but transforming it into knowledge (and even wisdom), so that the robot can interpret its surroundings and modify its behavior according to the result of this perception.

REASONING – drawing conclusions from the data and knowledge obtained through perception.

LEARNING – faced with new experiences, the robot needs to perceive and reason to reach conclusions; but when experiences are repeated, a learning process is required to store the knowledge and speed up the intelligent response.

DECISION MAKING – the ability to prioritize actions, necessary for the robot to be safe and effective in different autonomous applications.

HUMAN-ROBOT INTERACTION – necessary at many levels: for example, natural language processing (understanding the meaning of sentences exchanged with humans, depending on the context, in order to respond properly) and building emotional rapport.

We are going to analyze the first three areas a bit more in depth.

Any robot, as an autonomous physical entity, has to perceive its environment and interpret this perception in order to move in a safe manner. Currently, there are several types of sensors that can be used on robots:

  • Sonar sensors emit an inaudible sound beam perpendicular to the face of the sensor. The time between the emission of the sound and the reception of its echo off obstacles in the environment is a measure of the distance between the sensor and the perceived obstacle (a minimal time-of-flight sketch follows this list). This type of sensor is very inaccurate, because its measurements depend heavily on the reflective surface the beam hits, and these errors are technically very difficult to correct. The range of these sensors is approximately 50 cm to 3 m.
  • Infrared sensors emit an infrared light beam perpendicular to the face of the sensor and receive its rebound from the environment in a light receiver. The distance between the sensor and the obstacle is again calculated from the time between transmission and reception. These sensors are very sensitive to changes in ambient light, so they are not totally reliable. Their range is about 80 cm.
  • Laser sensors’ range depends on the type of laser used. As with sonar and infrared sensors, the elapsed time between emission and reception of the beam is used to calculate the distance to an obstacle. Some laser sensors sweep a single 180-degree plane; others scan the environment in 3D. A full 3D scan can be compared to the view from a video camera: it reconstructs the entire environment as points, or distances from the sensor to the surroundings. Lasers are much more robust to environmental conditions and their measurements are quite reliable. However, they are still very expensive.
  • The Kinect sensor came to the market at the end of 2010, revolutionizing the spectrum of robotic sensors. It contains an RGB camera, a depth sensor, and a multi-array microphone running proprietary software, which together provide full-body 3D motion capture, facial recognition, and voice recognition capabilities. It is relatively easy to incorporate into a robot’s platform and software. The main drawbacks are that, by itself, it cannot recognize objects or faces (it only provides clusters of color regions by distance) and it does not work outdoors (yet).
  • The video camera is the most promising sensor: it is quite inexpensive and provides the most detailed information about the environment. While there have been huge advances in this area and we are very close to considering it the main robotic sensor, we are not quite there yet.
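
The sonar, infrared, and laser sensors above all share the same time-of-flight principle: the beam travels to the obstacle and back, so the distance is half the round-trip time multiplied by the propagation speed of the beam. Here is a minimal sketch in Python (the function and the sample echo times are illustrative, not taken from any sensor’s actual API):

```python
# Time-of-flight ranging, common to sonar, infrared, and laser sensors:
# the beam travels out and back, so distance = speed * round_trip_time / 2.

SPEED_OF_SOUND = 343.0          # m/s in air at about 20 °C (sonar)
SPEED_OF_LIGHT = 299_792_458.0  # m/s (infrared and laser)

def tof_distance(round_trip_s: float, speed: float) -> float:
    """Distance to the obstacle, given the round-trip echo time."""
    return speed * round_trip_s / 2.0

# A sonar echo received 5.8 ms after emission: the obstacle is ~1 m away.
print(tof_distance(0.0058, SPEED_OF_SOUND))    # ~0.99 m

# Optical sensors must measure far shorter times, ~6.7 ns for the same metre,
# which is one reason accurate laser rangefinders are expensive.
print(tof_distance(6.67e-9, SPEED_OF_LIGHT))   # ~1.0 m
```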

Computer Vision is a very active field of AI, which has made great progress on specific problems, such as face recognition or the detection of defects in ceramic glaze. “Object recognition” in general, however, remains unsolved. As an example, we understand the concept “chair” and can identify any chair we see, even a new model that we have never seen before; we recognize a chair even when it is partially hidden behind other objects in a scene. This is not yet solved in computer vision. Another very active research area is the quick search for a specific object in a scene, without processing the entire image.

One of the latest advances in the area is cognitive vision, which uses qualitative recognition of object shapes, their relationships and ontologies to connect those qualitative shapes with names of objects and the concepts that they represent. It has many benefits, one of which is automatic tagging and fast processing. This technology has been developed at the University Jaume I and Cognitive Robots under my supervision. This will be explained in more detail in another post.

The current situation in commercial service robots is that we need to integrate most (if not all) of the types of sensors mentioned above. With a unidirectional laser (very common in commercial service robots), the robot only perceives what is happening in the plane of the laser; the rest of the environment is not perceived. Sonar and infrared sensors are cheaper and can be placed all around the robot. The Kinect sensor provides 3D obstacle detection. How to use each type of sensor, interpret its data, treat inconsistent information, and integrate the information from the various types of sensors remains an open research topic.
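
To illustrate what such integration can look like, here is a deliberately simplified fusion rule in Python: for a given direction around the robot, keep the closest reading among the sensors that cover it and are reliable in the current conditions. The data structure and the reliability flag are assumptions made for this sketch; it is not the method of any particular robot:

```python
from dataclasses import dataclass

@dataclass
class RangeReading:
    sensor: str         # "sonar", "infrared", "laser", "kinect"
    bearing_deg: float  # direction of the reading relative to the robot
    distance_m: float   # measured distance to the nearest obstacle
    reliable: bool      # e.g., an infrared reading flagged as washed out

def fuse_min_distance(readings, bearing_deg, window_deg=10.0):
    """Fuse overlapping range readings near one bearing by keeping the
    closest reliable measurement: a conservative rule for safe motion."""
    candidates = [
        r.distance_m
        for r in readings
        if r.reliable and abs(r.bearing_deg - bearing_deg) <= window_deg
    ]
    return min(candidates) if candidates else None

readings = [
    RangeReading("laser", 0.0, 2.10, True),
    RangeReading("sonar", 2.0, 1.80, True),       # a shelf above the laser plane
    RangeReading("infrared", -1.0, 0.60, False),  # washed out by sunlight
]
print(fuse_min_distance(readings, 0.0))  # 1.8: the closest reliable obstacle
```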

The implementation of a reasoning process is also basic to service robotics. The reasoning process allows the robot to infer reliable conclusions from premises. For example, if the robot perceives landmarks in the room at certain relative orientations, it can use those orientations to determine its own relative position as it moves through the environment. The biggest problem encountered in any reasoning method for robots is handling the uncertainty and vagueness of the perceived data. There are many types of reasoning, and all remain open fields of research within Artificial Intelligence: logical reasoning systems, probabilistic reasoning systems, case-based reasoning, fuzzy logic, and qualitative reasoning. The latest reasoning techniques developed are mental processes of analogy. We use ‘Qualitative Reasoning’ as the reasoning technique incorporated in our product, “Cognitive Brain for Service Robotics ®”, at Cognitive Robots.
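
The details of the “Cognitive Brain” are not described here, so the following is only a generic illustration of the qualitative style of reasoning, using an invented toy calculus: instead of numeric coordinates, the robot stores coarse relations such as “left”, “front”, and “right”, and infers new relations by composing known ones through a composition table. Ambiguous compositions yield a set of possible relations, which is how qualitative calculi represent the vagueness of perceived data:

```python
# A toy qualitative orientation calculus: the relation of B with respect to A
# is "left", "front", or "right".  Composing A->B with B->C yields the SET of
# relations that may hold between A and C, so ambiguity is kept explicit
# instead of being forced into a single number.
COMPOSE = {
    ("left", "left"): {"left"},
    ("left", "front"): {"left", "front"},
    ("left", "right"): {"left", "front", "right"},   # fully ambiguous
    ("front", "left"): {"left", "front"},
    ("front", "front"): {"front"},
    ("front", "right"): {"front", "right"},
    ("right", "left"): {"left", "front", "right"},   # fully ambiguous
    ("right", "front"): {"front", "right"},
    ("right", "right"): {"right"},
}

def infer(rel_ab, rel_bc):
    """Possible relations of C w.r.t. A, given A->B and B->C."""
    return COMPOSE[(rel_ab, rel_bc)]

# Landmark B is to the robot's left, and landmark C is in front of B:
print(infer("left", "front"))  # {'left', 'front'}: C cannot be to the right
```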

Robots also have to be able to learn from their own experience. Learning is essential for them to function in unknown environments. They must be able to store data (from the environment or from behavioral processes) that has ever been helpful in achieving a goal. Learning may be a memory (more or less elaborate) of experiences, together with how those experiences are used when needed. There are many learning techniques: inductive learning, for example through semantic networks, which can learn a function from examples of its inputs and outputs; neural networks; belief networks, which allow probabilistic functions to be learned; and reinforcement learning, which allows a robot to react appropriately in unfamiliar environments based only on its perceptions and occasional rewards. Learning qualitative models that describe behaviors to solve different tasks is, in my opinion, a better way for robots to learn as humans learn.
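
Of the techniques listed above, reinforcement learning is the simplest to show in a few lines. The sketch below is generic tabular Q-learning on an invented toy corridor world; the states, actions, and reward are assumptions made for the example, not part of any robot described in this post:

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, with the goal at state 4.
# The robot learns from occasional rewards alone, with no model of the world.
N_STATES = 5
ACTIONS = ("left", "right")
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: a reward of 1.0 only when the goal state is reached."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:    # explore occasionally
            a = random.choice(ACTIONS)
        else:                            # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Move Q(s, a) toward the reward plus the discounted best future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy heads for the goal from every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```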

Perception, reasoning and learning are the three pillars of intelligence (human and robotic).

If these pillars are implemented in the most cognitive way we know, and integrated in a highly modular way (so that one solution can be substituted for a better one without affecting the whole system), we have a sound foundation on which to add the decision-making process and adapt the same robotic architecture to solve different tasks. This has been the way of thinking and operating at Cognitive Robots.
