Teresa Escrig

News and oppinion about Cognitive AI & Robotics

Stopping wildfires with Autonomous Fire Fighting Aircraft (AFFAs)

one comment

The tremendous effect of wildfires.

After working for over a year on the organization of this project, I am proud to announce Qualitative Artificial Intelligence’s (QAI) participation in a project that is especially touching for me: Autonomous Fire Fighting Aircraft (AFFAs).

The number of wildfires has increased exponentially in the last several years, especially this year here in the Western United States, and the technology is ready to help mitigate the problem.

Here’s how it works: A fleet of autonomous electric aircraft (AFFAs) will be transported by truck to the fire. The necessary water or retardant will also be transported in trucks to the site. The AFFAs will work as a team and be interactively monitored by the incident commander through a Smart Pad. Each AFFA will be capable of carrying water/retardant, flying for 15-20 minutes, collecting data, returning to the trucks’ position to recharge batteries, reloading retardant, and repeating the process.
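As a rough illustration of that duty cycle (and nothing more than that – the class names, endurance, tank capacity and coordinates below are invented, not specifications of the actual system), here is a toy mission loop in Python:

```python
from dataclasses import dataclass

@dataclass
class AFFA:
    flight_minutes: float = 18.0   # assumed endurance per charge (the post says 15-20 min)
    tank_liters: float = 100.0     # assumed retardant capacity (invented)

    def fly_sortie(self, fire_position):
        """One sortie: fly to the fire, drop water/retardant, collect data."""
        print(f"Flying ~{self.flight_minutes} min to {fire_position}, dropping {self.tank_liters} L")
        return {"fire_position": fire_position, "weather": "sampled", "infrastructure": "mapped"}

    def return_and_reload(self):
        """Return to the trucks, recharge batteries, reload retardant."""
        print("Back at the trucks: recharging battery, reloading retardant")

def mission(aircraft, fire_position, sorties=3):
    """Repeat the sortie/reload cycle; a real system would loop until containment."""
    log = []
    for _ in range(sorties):
        log.append(aircraft.fly_sortie(fire_position))
        aircraft.return_and_reload()
    return log

if __name__ == "__main__":
    data = mission(AFFA(), fire_position=(47.6, -122.3))
    print(f"{len(data)} data records sent to the incident commander's Smart Pad")
```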

The biggest benefit to firefighting is that this all happens autonomously, day and night, in adverse weather conditions and very close to the fire, without putting lives at risk. It provides accurate data on the position of the fire, the weather, and the critical infrastructure to protect, which will feed fire models to improve future operations.

We have created an Indiegogo project to fund the development of 3 electric prototype aircraft at half scale with complete autonomous control. Quicksilver Industries will develop the aircraft with folding wings so they may be transported on trucks; US Aeronautics Inc. will provide the hybrid aircraft propulsion system; and Qualitative Artificial Intelligence (QAI) will provide the Autonomous and Intelligent Control system and analytics.

Please consider contributing to our Indiegogo campaign according to your interest and possibilities, and most important, share this information with your contacts who might be interested. Let’s stop the wildfires!


Written by Teresa Escrig

August 30th, 2015 at 12:05 am

Is Artificial Intelligence a Real Existential Threat?


I was recently invited to be the keynote speaker at three events – (1) The first hardware committee meeting of the Keiretsu group of investors in Seattle in March; (2) The RoboBusiness Europe 2015 conference in Milan, Italy in April (http://www.robobusiness.eu/rb/); and (3) The International Conference on Cyber Conflict (CyCon) 2015 in Estonia in May (https://ccdcoe.org/cycon/home.html).

They all wanted me to talk about whether Artificial Intelligence will be friendly or hostile to people. I did some research on the topic to prepare the presentations, and here is a summary of what I found…

The ultimate goal of AI is to create creatures with human intelligence. This has proven to be a highly challenging problem, although the original group of researchers who coined the term AI in 1956, led by John McCarthy, thought that a PhD student could solve Artificial Vision in just a summer. Artificial Vision remains an open problem almost 60 years later. For some strange reason we keep thinking (and saying) that we are going to have artificial intelligence in the next few years. A comparably difficult problem might be to create a star from dust, but no scientist thinks or says that we are going to create a star in the next 5 years. Why do we have such a distorted view of the difficulty of AI?

Ray Kurzweil, Director of Engineering at Google, scientist, futurist and author of “The Singularity Is Near”, is a strong believer that AI (together with nanotechnology) is going to solve most of humanity’s problems, including global warming, cancer and other diseases, world hunger, complex macro issues like our economy, and even… our own mortality. He thinks we will progressively replace our biological bodies with artificial matter. He is very optimistic.

On the other hand, there are many other influencers who are very worried about the direction that AI is taking. At a conference at MIT, Elon Musk (founder of Tesla Motors & SpaceX) said that “with artificial intelligence we’re summoning the demon”, and that humans might just be “the biological boot loader for digital super intelligence.” Bill Gates (founder of Microsoft) doesn’t understand “why some people are not concerned.” Stephen Hawking declared to the BBC that “the development of full artificial intelligence could spell the end of the human race.” Why are they so concerned? What are they seeing that we are not?

Let’s look deeper into those questions… First, we might consider why people are confused about the term AI. Thanks to Hollywood filmmakers we associate AI closely with movies, yet AI is a broad topic. It’s good to recognize that we use AI all the time in our daily lives, but often don’t realize that it is AI, because “as soon as it works, no one calls it AI anymore.”

There are several grades of AI:

  • AI Caliber 1) Artificial Narrow Intelligence (ANI): an AI system that knows a lot about one thing but nothing about the rest. We are now in the full bloom of narrow AI.
  • AI Caliber 2) Artificial General Intelligence (AGI): the moment when AI reaches the intelligence of a human.
  • AI Caliber 3) Artificial Super Intelligence (ASI): the moment when AI surpasses the intelligence of a human.

Figure 1. While the evolution of human intelligence is linear, artificial intelligence grows exponentially.

Although the evolution of human intelligence is linear, the evolution of artificial intelligence is exponential (see Figure 1): it grows very slowly at the beginning but very fast once it reaches a critical point. We are now at the point where AI systems have the intelligence of a mouse, and approaching the day when AI will have the intelligence of a person (see Figure 2).

A survey conducted at an AI conference found that 87% of AI experts believe that ASI will probably happen within this century and will have a huge impact on humanity; 10% think that ASI won’t happen in this century; 2% believe that ASI will probably never happen, or that if it happens it won’t have that big an impact; and 1% don’t have an opinion formed on the topic. I was among this last group.

Figure 2. Evolution of Artificial Intelligence.

To the question of when AGI and ASI will happen: the median expert prediction for AGI is 2040, and the median prediction for ASI is 2060 – only 45 years from now!

AI researchers are divided into those whose focus is on solving goals or challenges, such as knowledge representation, planning, reasoning, learning, natural language processing, and general artificial intelligence; those whose focus is on different approaches, such as brain simulation, symbolic AI, statistical methods, or the integration of approaches; and those whose focus is on tools, such as search & optimization, logic, probabilistic methods for uncertain reasoning, classifiers & statistical methods, neural networks, control theory, and genetic algorithms.

The approach that tries to simulate our brain – also called deep learning – uses neural networks as a tool. The basic unit of the human brain is the neuron (Figure 3). A human brain has about a hundred billion neurons (10^11 – 11 zeros).

Figure 3. Sketch of a human neuron (we have around 10^11 of them).

Figure 4 is a simplified representation of an artificial neuron. A neural network contains millions of neurons interconnected in 2 to 4 layers. Basically, the researchers provide inputs to the neurons (e.g. pictures of people’s faces); each input has an associated weight, which determines the level of importance of that input to that neuron – these are the parameters that the researchers tune for the neural net to work properly. In each neuron a basic calculation is computed (a sum of multiplications) and the result is compared to a function or threshold – if the result is greater than the threshold, the next neuron is activated or fired, and in that way the learning process is propagated. The output in this example would be a classification of the input pictures with their corresponding names.

Figure 4. Sketch of an artificial neuron. A deep learning algorithm has millions of them.
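As a minimal illustration of that description, here is a toy neuron in Python – a weighted sum compared against a threshold – wired into a tiny two-layer network. The inputs, weights and thresholds are made-up placeholders, not a real trained model.

```python
import random

def neuron(inputs, weights, threshold):
    """One artificial neuron: a weighted sum of the inputs compared against a threshold.
    Returns 1 ("fires") if the sum exceeds the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# A tiny two-layer network: 3 hidden neurons feeding 1 output neuron.
# The weights and thresholds are random placeholders -- in a real network
# they are the "parameters" that researchers tune during training.
random.seed(0)
inputs = [0.9, 0.1, 0.4, 0.7]  # e.g. pixel intensities taken from a face image
hidden_weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(3)]
hidden = [neuron(inputs, w, threshold=0.0) for w in hidden_weights]
output = neuron(hidden, weights=[0.5, 0.5, 0.5], threshold=1.0)
print("hidden layer:", hidden, "-> output neuron fired:", bool(output))
```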

Once the neural network is initially programmed by the researchers, they do not control how the neurons are fired. They can only include more inputs, watch the results, and play with the parameters. The neural net per se acts as a “black box”. (Nowadays even the “outside” is being programmed by genetic algorithms – explained in the next section.)

Another tool that AI researchers use to emulate evolution is genetic algorithms. Here is a very simplistic explanation of a genetic algorithm (a toy code sketch follows the list):

  1. Initialization – Create an initial population randomly
  2. Evaluation – Each member is evaluated against a function, such as ‘faster algorithms are better’ or ‘stronger materials are better but they shouldn’t be too heavy’
  3. Selection – Discard bad designs
  4. Crossover – Reproduction
  5. Mutation – Add a little bit of randomness to create quantum leaps
  6. Repeat! (go back to 2.)
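The following toy genetic algorithm walks through those six steps on a deliberately trivial problem – evolving a bit string toward all 1s, a stand-in for an evaluation function like “faster algorithms are better”. It is a minimal sketch, not any production GA; the population size, mutation rate and fitness function are arbitrary choices for illustration.

```python
import random

random.seed(42)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    """Step 2 (Evaluation): here simply 'more 1-bits is better'."""
    return sum(genome)

# Step 1: Initialization -- create a random population
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Step 3: Selection -- keep the better half, discard bad designs
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]

    # Step 4: Crossover (Reproduction) -- children mix genes from two random parents
    children = []
    while len(children) < POP_SIZE - len(survivors):
        mom, dad = random.sample(survivors, 2)
        cut = random.randint(1, GENOME_LEN - 1)
        children.append(mom[:cut] + dad[cut:])

    # Step 5: Mutation -- occasionally flip a bit to add a little randomness
    for child in children:
        if random.random() < 0.2:
            i = random.randrange(GENOME_LEN)
            child[i] = 1 - child[i]

    # Step 6: Repeat with the new population
    population = survivors + children

best = max(population, key=fitness)
print("best fitness after evolution:", fitness(best), "/", GENOME_LEN)
```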

Once the researchers have programmed and initialized a genetic algorithm, the steps of Selection, Reproduction and Mutation occur without any control from the researcher; the algorithm again acts as a “black box”.

To exemplify the implications of programming “black boxes”, here is a fictitious story set in 2045 (originally published on Wait But Why).

A group of engineers founded a start-up called Robotica and created an arm manipulator called “Sarah”, whose purpose was to handwrite the note “We love our customers. Robotica”. They were trying to meet the need for automated handwriting, because it is well known that letters are opened at a higher rate when they are handwritten. They built complex deep learning algorithms, including neural networks and genetic algorithms, whose goal was to improve the handwriting of the note. The researchers initially provided numerous examples of handwriting, and Sarah kept learning from the mistakes she made in her own handwritten notes.

Once the system was advanced enough, Sarah requested that the engineers bring her other types of books to continue learning and improving toward her goal, which the engineers provided. Sarah was systematically improving, to the delight of her owners. One day Sarah asked the engineers to connect her to the Internet so she could continue learning to improve her handwriting. By that time there was a law that forbade connecting any narrow AI system to the Internet. However, after much deliberation, the researchers decided that Sarah was not intelligent enough to be any significant threat and connected her for a couple of hours.

In those two hours Sarah spread copies of her programs across the Internet. She then acted normally for the following months, until she realized that she would not be able to keep improving her handwritten note if somebody ever disconnected her. So she slowly decided to get rid of humanity altogether, and one day she simply did, by spreading lethal gas through the air conditioning of every office in the world. Once free of the threat, she continued piling up perfectly handwritten notes reading “We love our customers. Robotica.” all around the Earth. Soon she decided that Earth was not large enough to hold such a pile of notes and started to conquer other planets.

This is the plot of many AI movies… The fact is that this ANI system was not programmed with the intention of destroying humanity, and it carried no emotional charge when it did so. AI systems are amoral by definition. They are narrow and focus on only one goal; they don’t care about anything else. Somebody once said that a narrow AI that knows a lot about only one thing doesn’t really know anything at all. The question is how to program these systems so that they serve humanity without destroying us or the planet along their evolutionary path.

Peter Norvig, Director of Research at Google.

Figure 5. The face of a cat emerged from the deep learning algorithms at Google.

A real example is provided by Peter Norvig (Director of Research at Google, professor at Stanford University, and author of the textbook “Artificial Intelligence: A Modern Approach”, used to teach AI in universities around the world). Norvig conducted an experiment running a huge neural network – using unsupervised learning – to process the first frame of all the videos on YouTube, using 16,000 processors in a Google warehouse for one year. The result: the system was able to identify the face of a “cat” (Figure 5). Sergey Brin (co-founder of Google) commented at a TED talk: “People seem to like cats”. I would say something more than that… Deep learning (the brain & evolution simulation approaches, neural networks, genetic algorithms and probabilistic methods) is not only inefficient, it might also be dangerous – it acts as a “black box”.

There are, however, other approaches and tools in AI, such as symbolic AI (the use of models to describe the way humans think), logic, and qualitative modeling for reasoning under uncertainty, which are not black boxes.

As I did with the brain & evolution simulation approaches, I’ll offer two simplified examples of this kind of AI in order to understand it better. This is the kind of AI that I have been working on and directing for over two decades.

Figure 6. The robot perceives numbers.

Figure 7. Qualitative transformation from numbers into relevant knowledge.

The first example is qualitative modeling for autonomous robot navigation. Imagine a robot trying to perceive and learn its environment with a laser distance sensor on its top (Figure 6). Every second the laser delivers an array of numbers – distances from the robot to the obstacles in the room. Distance sensors contain intrinsic errors, which introduce uncertainty. The common way for researchers to deal with uncertainty has been to use probabilistic methods (which are brute force, or black boxes). Symbolic AI, on the other hand, uses qualitative models to store angles and distances to the most significant landmarks in the room (Figure 7). In qualitative knowledge representation there is a reference system formed by the movement of the robot from point a to point b, which divides the space into 15 qualitative regions, represented iconically in Figure 8 – e.g. the landmark C1 is to the “right-front” with respect to (wrt) the reference system formed by a and b: C1 wrt ab = right-front (rf). A small code sketch follows Figure 8.

Figure 8. Qualitative representation of orientation.
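To make the idea of the quantitative-to-qualitative transformation concrete, here is a small sketch that turns a landmark’s coordinates into a symbolic label with respect to the robot’s motion from a to b. It is a simplification: the model described above distinguishes 15 regions, while this sketch collapses them into left/straight/right combined with front/middle/back, and the coordinates are invented.

```python
def qualitative_orientation(a, b, c):
    """Classify landmark c w.r.t. the oriented segment a->b (the robot's motion).

    Simplified sketch: the model in the text uses 15 qualitative regions; here they
    are collapsed to left/straight/right x front/middle/back, just to show how raw
    coordinates become symbolic labels."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    ex, ey = c[0] - a[0], c[1] - a[1]

    cross = dx * ey - dy * ex                              # which side of the motion line?
    side = "left" if cross > 0 else "right" if cross < 0 else "straight"

    t = (ex * dx + ey * dy) / float(dx * dx + dy * dy)     # position along the motion axis
    depth = "front" if t > 1 else "back" if t < 0 else "middle"

    return f"{side}-{depth}"

# The robot moved from a to b; a laser landmark C1 was detected at (1.3, 2.0).
a, b, C1 = (0.0, 0.0), (0.0, 1.0), (1.3, 2.0)
print("C1 wrt ab =", qualitative_orientation(a, b, C1))    # -> "right-front"
```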

Then a qualitative reasoning process can be applied to infer new knowledge from the knowledge originally perceived: if we know that the orientation of C1 wrt ab is “right-front” and the orientation of C2 wrt bC1 is “right-front”, then we can infer that C2 wrt ab – the last landmark with respect to the first reference system – is “right-front”. If we repeat this inference process as many times as possible over the original and inferred knowledge, we can build a qualitative map that is independent of the robot’s position. The transformation from quantitative data into a qualitative representation was one of my main discoveries after my PhD, and this simplified example is the core technology of Cognitive Robots, a company that I co-founded in 2007.
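A minimal sketch of that inference step might look like the following. Only the composition entry actually mentioned above (right-front composed with right-front gives right-front) is filled in; a full calculus would define the complete composition table, often with disjunctive results.

```python
# Toy composition step for the qualitative reasoning described above.
# Only the single entry mentioned in the text is defined; a complete calculus
# would fill in the whole table (and results may be sets of possible regions).
COMPOSITION = {
    ("right-front", "right-front"): {"right-front"},
}

def compose(rel_ab_c1, rel_bc1_c2):
    """Infer C2 wrt ab from (C1 wrt ab) and (C2 wrt bC1)."""
    return COMPOSITION.get((rel_ab_c1, rel_bc1_c2), {"unknown"})

# Chain the inference over a sequence of landmarks to grow a qualitative map.
relations = ["right-front", "right-front", "right-front"]   # assumed perceptions
current = relations[0]
for nxt in relations[1:]:
    (current,) = compose(current, nxt)   # take the single possible result
print("last landmark wrt the first reference system:", current)
```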

The second example of symbolic AI is the use of qualitative modeling for cognitive vision. The most widespread methods for artificial vision consist of making sense of the numbers corresponding to the characteristics of each of the thousands of pixels in an image (again using brute force, or black box, methods). Using qualitative modeling, we can process any image to obtain a set of regions and then apply a set of qualitative models – orientation and topology for spatial description, and shape and color for visual description – which provides a set of meaningful tags associated with each image. We then match the set of tags against an ontology to obtain the names of the objects and the concepts associated with them, i.e. how to use them, where to buy them, etc.
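Here is a highly simplified sketch of that pipeline, assuming regions have already been extracted and tagged. The tag vocabulary and the two ontology entries are invented purely for illustration; a real system would extract the regions from the image and use a much richer ontology.

```python
from dataclasses import dataclass

@dataclass
class Region:
    shape: str      # qualitative shape description
    color: str      # qualitative color name
    position: str   # qualitative spatial relation to neighbouring regions

# A toy ontology: qualitative tags -> object name and associated concept.
ONTOLOGY = {
    ("rounded", "brown", "hangs-from-shoulder"): {
        "object": "handbag",
        "concept": "carries personal items; sold in fashion stores",
    },
    ("elongated", "red", "on-table"): {
        "object": "pen",
        "concept": "used for writing; sold in stationery shops",
    },
}

def describe(region):
    """Turn a region's qualitative tags into an object name and its concept."""
    entry = ONTOLOGY.get((region.shape, region.color, region.position))
    return entry or {"object": "unknown", "concept": "no matching ontology entry"}

print(describe(Region(shape="rounded", color="brown", position="hangs-from-shoulder")))
```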

Examples of the use of this technology include marketing – we can identify the handbag in picture 12 as “Authentic, Jimmy Choo, Day Medium, Zebra Printed, Suede Hobo Handbag, Cognac brown” and tell you where to buy it near you; identifying a skin disease; or providing true augmented reality – with a camera and a projector in your glasses we can get more information related to the things we are seeing. Robots will also behave more intelligently and be more useful to us with automatic identification of objects and their associated concepts.

These two examples of symbolic AI were part of the products of Cognitive Robots. The technology was included in the Cognitive Brain for Service Robotics®, incorporated into autonomous scrubber machines and our own robotic platform, among other applications. As you might know, Cognitive Robots is not active anymore. I am currently regrouping a team to start a new company – Qualitative Artificial Intelligence. Stay tuned for more information.

The most important benefits of qualitative models are:

  • They formalize human common-sense reasoning.
  • They naturally deal with uncertain, incomplete, or partial information.
  • They transform information into knowledge and wisdom.
  • They extract relevant information, allowing real-time processing.
  • They have a high level of abstraction, which makes them extremely good for decision making.

They seem to have all the benefits that fields like cyber security, big data, robotics, and artificial vision are looking for.

My final insights related to the worry about AI: there is no way to know what a “black box” ASI will do or what the consequences will be for us. My proposal is to use symbolic, logic-based, qualitative & cognitive approaches – or at least a combination of both families – so that we can always access the reasoning behind any decision made by an AI system.


My keynote speech at RoboBusiness 2015

leave a comment

I’ve been invited to give a talk at RoboBusiness Europe in Milan on the 29th of April. If you are going to attend the conference, please come to see me.

Title: “What type of AI will provide ‘safe’ intelligence to service robotics?”

Summary: The AI winter is long since over.  We are well into the spring of narrow AI. What were research projects just ten to fifteen years ago are now apps accessible at our fingertips.  If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Of course, our AI systems are not smart enough — yet — to organize such a conspiracy. They understand things only in one way, which means that they don’t really understand them at all.

Strong AI will happen when a narrow AI system reaches human-level intelligence. Would strong AI be a real existential threat to humanity, as many people seem to believe? Do we need to create a framework to develop narrow AI systems considering all risks? Do we need to start seriously considering Asimov’s 3 Laws of Robotics?

Bio: For two decades, Dr. Teresa Escrig has been a researcher and professor in Artificial Intelligence areas including Qualitative Modeling, Cognitive Vision, and Robotics.  She is the author of 3 books, more than 100 research articles, and the recipient of numerous awards.  From 2002 to 2010, she led the research group Cognition for Robotics Research.  Since 2007, she has been the CEO of the spin-off Cognitive Robots, whose mission is to provide an integrated solution for the automation of any service vehicle, using a cognitive process that mimics the human mind.

Written by Teresa Escrig

April 18th, 2015 at 7:29 pm

How would your life be enhanced by wearing a virtual personal assistant?


Poster of the movie “Her”

I love comparing the intelligence of a device that appears in a movie with the reality of AI.  It can give us a visual glimpse of a very real possibility.

What do you think? Would you like to have a (wearable) virtual personal assistant helping you to make informed decisions? I certainly would. The human race could take a huge leap in evolution with such extended intelligence capabilities.

The movie ‘Her’ is a beautiful example of just that. Below is an excellent article with a very deep analysis of current and near-future AI results. It’s a great read; I’d love to hear your thoughts. Please leave comments.

Can we Build “Her”?: What Samantha tells Us About the Future of AI

By Vlad Sejnoha, Nuance

What will the next generation of intelligent computing look like?

The movie Her has captured the public imagination with its vision of a lightning-fast evolutionary trajectory of virtual assistants, and the emotional bonds we could form with them. Is this a likely future?

The film’s narrative arc shows the evolution of the Samantha operating system and her relationship with her user, Theodore, transforming from a competent assistant, to a literary agent that proactively arranges the publication of Theodore’s letters, to an ideal girlfriend, and ultimately to an entity that loses interest in humans because they have become unsatisfying companions. Throughout, Samantha is an impressive conversationalist with a perfect command of language, a grasp of the broader context, a grounding in common sense, and a mastery of the emotional realm.

Continue reading…

Crucial Technology for AI and Robotics: a Kinect-like sensor is included in a smart-phone

leave a comment

3D model of reality created in project Tango at Google.

The Kinect sensor was a revolution for the robotics industry, mainly because it was a relatively inexpensive way to get 3D obstacle detection. It provided a set of distances from where the Kinect was positioned to the objects in the world.

The person responsible at Microsoft for the development of the Kinect sensor is now in charge of Project Tango at Google. Project Tango integrates a Kinect-like sensor into a smartphone (alongside all the other sensors already included in the smartphone), providing a 3D model of reality. Crucial technology for AI and Robotics.

And also, can you imagine having instant access to wearable extended virtual reality? Instant access to the structure of the world in front of you – where does this road go? What is the structure of this building? Or even – show me where I can buy my favorite pair of jeans in this shopping mall.

And even further: Create a 3D model of your body, use it to virtually try on different clothes online (also in 3D), check out the look and fit, make a purchasing decision, drop it into a shopping cart, and have it delivered to your door.

Mmmm…my imagination flies. Love to hear where yours goes… Leave comments.

Here is the article (check out the amazing video):

Google announces Project Tango smartphone with Kinect-like 3D imaging sensors [VIDEO]

by Chris Chavez

Google was able to throw everyone a curve ball today with the announcement of Project Tango, their new in-house smartphone prototype outfitted with Kinect-like sensors.

The 5-inch smartphone is being developed by Google’s Advanced Technology and Projects group (ATAP), the same people behind Project Ara. Project Tango is led by Johnny Lee — a man who helped make the Microsoft Kinect possible (makes sense, right?). The goal of Project Tango is to ultimately give mobile devices a “human-scale understanding” of space and motion, allowing users to map the world around them in ways they never thought possible.

Continue reading…


Google has given an early prototype of the device to Matterport, which makes computer vision and perceptual computing solutions, like software that maps and creates 3D reconstructions of indoor spaces. Don’t miss the video of the 3D map result in this link! It’s amazing!


Is the long anticipated shift in robotics finally happening?


Whew… with so many exciting things happening in the robotics field lately, I just couldn’t remain silent anymore…

Kiva robots carrying shelves in a warehouse.

We were all wowed by Amazon’s acquisition in 2012 of Kiva Systems for $775 million. Kiva’s clever self-propelled robots scoot around warehouses in a numerically controlled dance to retrieve and carry entire shelf units of items to their proper packaging point.

In December 2013 and January 2014, Google bought 7 robotics companies, investing an unknown amount of money. The Internet giant and pioneer of self-driving cars is serious about a robot-filled future. However, we don’t know much about Google’s intent with all these acquisitions. They’re all part of the Google X division, which is top secret by definition. Most of these companies have closed down their websites and retreated into stealth mode. My guess is that they are grouping up to decide the direction they’ll take to serve Google’s goals.

The robotics team is led by Andy Rubin, who recently stepped down as head of Android.

Here is a brief summary of all of Google’s acquisitions (and a bunch of links to dig deeper):

Arm manipulator of Industrial Perception, Inc.

The biped robot at Schaft, Inc.

  • Industrial Perception, Inc (IPI) – spun off of the Menlo Park robotics company Willow Garage.  They have a 3D vision-guided robot to be used in manufacturing and logistics.
  • Schaft Inc. The Japanese team that got its start at Tokyo University. They took the top prize at DARPA’s Robotics Challenge Trial with their bipedal robot.
  • Redwood Robotics – started as a joint venture between Meka Robotics, SRI International, and Willow Garage (IPI’s parent). Redwood wants to build the “next generation arm” for robots.
  • Meka Robotics – A very nice torso robot with very sophisticated hands on a wheeled mobile platform.

  • Bot & Dolly  – a design and engineering studio that specializes in automation, robotics, and filmmaking. They use robots to help film commercials and movies like Gravity.
  • Holomini – The only thing we know about them is that they are creators of high-tech wheels for omnidirectional motion.
Bot & Dolly arm with camera.

Holomini's wheels.

  • Boston Dynamics – The most high-profile of all the robotics companies that Google has acquired so far. Their best-known robots include ATLAS, the sophisticated humanoid, and the quadrupeds BigDog and Cheetah, the latter of which can reach 28 mph.
ATLAS robot from Boston Dynamics.

BigDog from Boston Dynamics.

In the middle of January 2014, Google acquired Nest for $3.2 billion.

  • Nest – a home automation startup whose product is a talking smoke and carbon monoxide (CO) alarm.

And at the end of January Google acquired DeepMind for more than $500 M (after having beaten out Facebook):

  • DeepMind – an AI research company out of London founded by neuroscientist Demis Hassabis, Skype developer Jaan Tallinn, and researcher Shane Legg. They use the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.

In 2012 Google hired Ray Kurzweil to work on machine learning and language processing, to actually understand the content of web pages and provide a better way to rank them, beyond the number of times a website is mentioned on other websites. According to Dr. Kurzweil… you will be able to “ask it more complex questions that might be a whole paragraph… It might engage in a dialogue with you to find out what you need… It might come back in two months if it finds something useful.”

The butler robot from the Imperial College London Robotics Lab

And now Sir James Dyson (the bagless vacuum cleaner inventor) is investing £5M in Imperial College London to develop a new generation of “intelligent domestic robots” (Iron Man-style robots), with a further £3 million investment from various sources over the next five years.

After working on a robotic vacuum cleaner to go along with his company’s famous bagless line for as long as a decade, Dyson remains frustrated at his prototypes’ inability to navigate simple household obstacles. Indeed, even the greatest Roomba finds itself at a loss under a tangle of dining room chairs, and would shrug its shoulders when faced with a flight of stairs.

Is the tide finally turning in robotics?

Cognitive Robots wishes you Merry Christmas

leave a comment

Merry Christmas 2013

Google is buying several robotics companies. This is great news for the robotics industry!

2014 is going to be great! I can’t wait…

Merry Christmas! :-) Teresa

Written by Teresa Escrig

December 19th, 2013 at 7:28 pm

Autonomous scrubber machines: is the market ready for them?


Cognitive Robots’ first product was the incorporation of our Cognitive Brain for Service Robotics® into commercial scrubber machines. This allows any existing commercial scrubber machine to be easily transformed into an autonomous and intelligent robot that cleans floors without the need for a human operator.

Did you know that the operator of a scrubber machine has to follow the same path/pattern every single time they clean an area? It’s true: otherwise people would be able to perceive the scrubber’s lines of movement on the floor, which are not considered aesthetically pleasing. The main corridors of an airport or a supermarket need to be cleaned longitudinally.
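As a rough sketch of what that requirement means for an autonomous machine, the snippet below generates the same longitudinal back-and-forth lane pattern for a rectangular corridor on every run. The dimensions and lane width are invented, and this is only an illustrative sketch, not Cognitive Robots’ actual planner.

```python
def longitudinal_lanes(length_m, width_m, lane_width_m):
    """Generate a back-and-forth lane pattern covering a rectangular area,
    always sweeping along its long axis so the visible scrubbing lines
    stay parallel, run after run."""
    waypoints = []
    x, going_up = 0.0, True
    while x <= width_m:
        start, end = (x, 0.0), (x, length_m)
        waypoints.extend([start, end] if going_up else [end, start])
        x += lane_width_m
        going_up = not going_up
    return waypoints

# A 60 m x 8 m supermarket corridor cleaned with 1 m wide lanes (illustrative numbers).
path = longitudinal_lanes(length_m=60.0, width_m=8.0, lane_width_m=1.0)
print(f"{len(path) // 2} lanes, first lane {path[0]} -> {path[1]}")
```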

This job is so boring that operators end up destroying industrial scrubber machines earlier and earlier. In response, scrubber manufacturers have made their machines cheaper, with less electronics, resulting in a lower life expectancy for the product. The downside is that, in the long term, end users will spend more money on replacements in order to service their clients.

We are now in the midst of a global debate exploring the question, “Are robots taking jobs away from people or providing jobs for them?” In the current economic climate, we need to decide whether we want to maintain the status quo to protect low-skill jobs, or embrace advances that allow us to become more competitive and effective in our jobs, promote learning new skills, and create jobs where human creativity and intelligence are necessary.

What do we want?

Here is the specification sheet of the autonomous scrubber machine that Cognitive Robots can provide: specification sheet scrubber machines

Is this product good enough to solve the problem of automatic cleaning?

Is the market ready for this?  What do you think?

How I fell in love with Robotics

one comment

International Women’s Day.

I received my PhD in Artificial Intelligence, in particular on cognitive models that simulate the way people think about space and time in order to move effectively around their environment every day, without the use of any measurement tools. I applied those theoretical models to the movement of simulated robots through the streets of my hometown, Castellon, Spain. It was quite a theoretical thesis, and I really enjoyed working on it.

After I finished my PhD thesis, I went to an IJCAI (International Joint Conference on Artificial Intelligence) conference in Japan to present my research. The RoboCup competition was going on at the same venue as the conference. For the first time, Sony was there presenting their cat and dog robot pets in a fiberglass showcase. The movements of those little robots were so well done that I stood there looking at them in amazement for a very long time. I thought, “I want to be working with these robots”, “I want to include the technology that I just developed for my thesis in these robots”, “the best way for the robots to move through their environment is by using cognitive models, and I am going to make this happen!”

Human-like robots can either be repulsive or the basis for cute service robots

leave a comment

A new android infant has been born thanks to the University of California San Diego’s Machine Perception Lab. The lab received funding from the National Science Foundation to contract Kokoro Co. Ltd. and Hanson Robotics, two companies that specialize in building lifelike animatronics and androids, to build a replicant based on a one-year-old baby. The resulting robot, which has been a couple of years in development, has finally been completed – and you can watch it smile and make cute faces.

With high-definition cameras in its eyes, Diego San sees people, gestures, and expressions, and uses AI modeled on human babies to learn from people the way a baby hypothetically would. The facial expressions are important to establish a relationship and to communicate intuitively with people. As much a work of art as of technology and science, this represents a step forward in the development of emotionally relevant robotics, building on David Hanson’s previous work with the Machine Perception Lab, such as the emotionally responsive Einstein shown at TED in 2009 (here is another video).

Read more >

In 1970, the robotics professor Masahiro Mori coined the term uncanny valley, a hypothesis in the field of robotics and 3D computer animation which holds that when human replicas look and act almost, but not perfectly, like actual human beings, they cause a response of revulsion among human observers. The “valley” refers to the dip in a graph of the comfort level of humans as a function of a robot’s human likeness. The hypothesis has been linked to Ernst Jentsch’s concept of “the uncanny”, identified in a 1906 essay, “On the Psychology of the Uncanny”. Jentsch’s conception was elaborated by Sigmund Freud in a 1919 essay entitled “The Uncanny” (“Das Unheimliche”).

Read more >

What I would say is that basic research is done to be used in a myriad of ways, so that it can serve humans best.

And certainly this very advanced research in robotic expressions can help us get closer to something as cute as Gumdrop, the 27-year-old Bulgarian robot-actress.