
Title: “What type of AI will provide ‘safe’ intelligence to service robotics?”
Summary: The AI winter is long since over; we are well into the spring of narrow AI. What were research projects just ten to fifteen years ago are now apps at our fingertips. If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Of course, our AI systems are not yet smart enough to organize such a conspiracy. They understand things in only one way, which means they don’t really understand them at all.
Strong AI will arrive when a narrow AI system reaches human-level intelligence. Would strong AI pose a real existential threat to humanity, as many people seem to believe? Do we need a framework for developing narrow AI systems that accounts for all the risks? Do we need to start taking Asimov’s Three Laws of Robotics seriously?