Title: "What type of AI will provide 'safe' intelligence to service robotics?"

Summary: The AI winter is long since over. We are well into the spring of narrow AI. What were research projects just ten to fifteen years ago are now apps accessible at our fingertips. If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Of course, our AI systems are not smart enough — yet — to organize such a conspiracy. They understand things in only one way, which means that they don't really understand them at all.

Strong AI will arrive when an AI system reaches human-level intelligence. Would strong AI be a real existential threat to humanity, as many people seem to believe? Do we need a framework for developing narrow AI systems that takes all of these risks into account? Do we need to start taking Asimov's Three Laws of Robotics seriously?

Bio: For two decades, Dr. Teresa Escrig has been a researcher and professor in Artificial Intelligence, working in areas including Qualitative Modeling, Cognitive Vision, and Robotics. She is the author of 3 books and more than 100 research articles, and the recipient of numerous awards. From 2002 to 2010, she led the research group Cognition for Robotics Research. Since 2007, she has been the CEO of the spin-off Cognitive Robots, whose mission is to provide an integrated solution for the automation of any service vehicle, using a cognitive process that mimics the human mind.