Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘News on AI & Robotics’ Category

How would your life be enhanced by wearing a virtual personal assistant?


Poster of the movie “Her”

I love comparing the intelligence of a device that appears in a movie with the reality of AI.  It can give us a visual glimpse of a very real possibility.

What do you think? Would you like to have a (wearable) virtual personal assistant helping you to make informed decisions? I certainly would. The human race could take a huge leap in evolution with such extended intelligence capabilities.

The movie ‘Her’ is a beautiful example of just that. Below is an excellent article with a very deep analysis of current and near-future AI. It’s a great read, and I’d love to hear your thoughts. Please leave comments.

Can we Build “Her”?: What Samantha tells Us About the Future of AI

By Vlad Sejnoha, Nuance

What will the next generation of intelligent computing look like?

The movie Her has captured the public imagination with its vision of a lightning-fast evolutionary trajectory of virtual assistants, and the emotional bonds we could form with them. Is this a likely future?

The film’s narrative arc shows the evolution of the Samantha operating system and her relationship with her user, Theodore, transforming from a competent assistant, to a literary agent that proactively arranges the publication of Theodore’s letters, to an ideal girlfriend, and ultimately to an entity that loses interest in humans because they have become unsatisfying companions. Throughout, Samantha is an impressive conversationalist with a perfect command of language, a grasp of the broader context, a grounding in common sense, and a mastery of the emotional realm.

Continue reading…

Crucial Technology for AI and Robotics: a Kinect-like sensor included in a smartphone


3D model of reality created in Project Tango at Google.

The Kinect sensor was a revolution for the robotics industry, mainly because it was a relatively inexpensive way to do 3D obstacle detection. It provided a set of distances from the Kinect's position to the objects in the world.

The person responsible at Microsoft for the development of the Kinect sensor is now in charge of Project Tango at Google. Project Tango integrates a Kinect-like sensor in a smartphone (along with all the other sensors already included in the smartphone), providing a 3D model of reality. Crucial technology for AI and Robotics.
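
To make the idea concrete: a Kinect-like sensor returns, for each pixel, the distance to the nearest surface. Combined with the camera's intrinsic parameters, those distances can be back-projected into a 3D point cloud, which is the raw material for a 3D model of the scene. Below is a minimal sketch of that back-projection in Python; the intrinsic values are made-up placeholders, not Tango's actual calibration.

```python
import numpy as np

# Assumed (hypothetical) intrinsics for a 640x480 depth camera.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def depth_to_point_cloud(depth_m):
    """Back-project an (H, W) depth image in meters into an (N, 3) point cloud.

    A pixel (u, v) with depth z maps to camera coordinates
    X = (u - CX) * z / FX,  Y = (v - CY) * z / FY,  Z = z.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# Example: a synthetic flat wall 2 meters in front of the camera.
depth = np.full((480, 640), 2.0)
print(depth_to_point_cloud(depth).shape)   # (307200, 3)
```

Project Tango presumably does something far more sophisticated, fusing depth with motion tracking over time, but this is the basic geometric step that turns a depth image into 3D structure.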

And can you imagine having instant access to wearable extended virtual reality? Instant access to the structure of the world in front of you: Where does this road go? What is the structure of this building? Or even: Show me where I can buy my favorite pair of jeans in this shopping mall.

And even further: create a 3D model of your body, use it to virtually try on different clothes online (also as 3D models), check out the look and fit, make a purchasing decision, drop it into a shopping cart, and have it delivered to your door.

Mmmm… my imagination flies. I’d love to hear where yours goes… Leave comments.

Here is the article (check out the amazing video):

Google announces Project Tango smartphone with Kinect-like 3D imaging sensors [VIDEO]

by Chris Chavez

Google was able to throw everyone a curve ball today with the announcement of Project Tango, their new in-house smartphone prototype outfitted with Kinect-like sensors.

The 5-inch smartphone is being developed by Google’s Advanced Technology and Projects group (ATAP), the same people behind Project Ara. Project Tango is led by Johnny Lee, a man who helped make the Microsoft Kinect possible (makes sense, right?). The goal of Project Tango is to ultimately give mobile devices a “human-scale understanding” of space and motion, allowing users to map the world around them in ways they never thought possible.

Continue reading…

Google has given an early prototype of the device to Matterport, which makes computer vision and perceptual computing solutions, like software that maps and creates 3D reconstructions of indoor spaces. Don’t miss the video of the 3D map result in this link! It’s amazing!

Is the long-anticipated shift in robotics finally happening?


Whew… with so many exciting things happening in the robotics field lately, I just couldn’t remain silent anymore…

Kiva robots carrying shelves in a warehouse.

We were all wowed by Amazon’s 2012 acquisition of Kiva Systems for $775 million. Kiva’s clever self-propelled robots scoot around warehouses in a precisely choreographed dance to retrieve and carry entire shelf units of items to their proper packaging point.

In December 2013 and January 2014, Google bought seven robotics companies, investing an unknown amount of money. The Internet giant and pioneer of self-driving cars is serious about a robot-filled future. However, we don’t know much about Google’s intent with all these acquisitions. They’re all part of the Google X division, which is top secret by definition. Most of these companies have closed down their websites and retreated into stealth mode. My guess is that they are grouping up to decide the direction they’ll take to serve Google’s goals.

The robotics team is led by Andy Rubin, who recently stepped down as head of Android.

Here is a brief summary of all of Google’s acquisitions (and a bunch of links to dig deeper):

Arm manipulator of Industrial Perception, Inc.

The biped robot at Schaft, Inc.

  • Industrial Perception, Inc. (IPI) – spun off from the Menlo Park robotics company Willow Garage. They have a 3D vision-guided robot to be used in manufacturing and logistics.
  • Schaft Inc. – the Japanese team that got its start at Tokyo University. They took the top prize at DARPA’s Robotics Challenge Trials with their bipedal robot.
  • Redwood Robotics – started as a joint venture between Meka Robotics, SRI International, and Willow Garage (IPI’s parent). Redwood wants to build the “next generation arm” for robots.
  • Meka Robotics – a very nice torso robot with very sophisticated hands on a mobile platform with wheels.
  • Bot & Dolly – a design and engineering studio that specializes in automation, robotics, and filmmaking. They use robots to help film commercials and movies like Gravity.
  • Holomini – the only thing we know about them is that they are the creators of high-tech wheels for omnidirectional motion.
Bot & Dolly arm with camera.

Holomini’s wheels.

  • Boston Dynamics – the most high-profile of all the robotics companies that Google has acquired so far. They have two main robots: ATLAS, a sophisticated humanoid, and BigDog, a rough-terrain quadruped; their Cheetah robot can reach 28 mph.
ATLAS robot from Boston Dynamics.

BigDog from Boston Dynamics.

In the middle of January 2014, Google acquired Nest for $3.2 billion.

  • Nest – a home automation startup whose product is a smoke and carbon monoxide alarm that talks.

And at the end of January, Google acquired DeepMind for more than $500 million (after having beaten out Facebook):

  • DeepMind – an AI research company out of London founded by neuroscientist Demis Hassabis, Skype developer Jaan Tallinn, and researcher Shane Legg. They use the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.

In 2012 Google hired Ray Kurzweil to work on machine learning and language processing, to actually understand the content of web pages and provide a better way to rank them beyond the number of times a website is mentioned on other websites. According to Dr. Kurzweil… you will be able to “ask it more complex questions that might be a whole paragraph… It might engage in a dialogue with you to find out what you need… It might come back in two months if it finds something useful.”

The butler robot from the Imperial College London Robotics Lab

And now Sir James Dyson (the bagless vacuum cleaner inventor) is investing £5 million in Imperial College London to develop a new generation of “intelligent domestic robots” (an Iron Man-style robot), with a further £3 million investment from various sources over the next five years.

After working for as long as a decade on a robotic vacuum cleaner to go along with his company’s famous bagless line, Dyson remains frustrated at his prototypes’ inability to navigate simple household obstacles. Indeed, even the greatest Roomba finds itself at a loss under a tangle of dining room chairs, and would shrug its shoulders when faced with a flight of stairs.

Is the tide finally turning in robotics?

Human-looking robots can be either repulsive or the basis for cute service robots


A new android infant has been born thanks to the University of California San Diego’s Machine Perception Lab. The lab received funding from the National Science Foundation to contract Kokoro Co. Ltd. and Hanson Robotics, two companies that specialize in building lifelike animatronics and androids, to build a replicant based on a one-year-old baby. The resulting robot, which has been a couple of years in development, has finally been completed, and you can watch it smile and make cute faces.

With high-definition cameras in the eyes, Diego San sees people, gestures, and expressions, and uses AI modeled on human babies to learn from people the way that a baby hypothetically would. The facial expressions are important for establishing a relationship and communicating intuitively with people. As much a work of art as of technology and science, this represents a step forward in the development of emotionally relevant robotics, building on previous work by David Hanson with the Machine Perception Lab, such as the emotionally responsive Einstein shown at TED in 2009 (here is another video).

Read more >

In 1970, the robotics professor Masahiro Mori coined the term uncanny valley, a hypothesis in the field of robotics and 3D computer animation which holds that when human replicas look and act almost, but not perfectly, like actual human beings, they cause a response of revulsion among human observers. The “valley” refers to the dip in a graph of the comfort level of humans as a function of a robot‘s human likeness. The hypothesis has been linked to Ernst Jentsch‘s concept of “the uncanny”, identified in a 1906 essay, “On the Psychology of the Uncanny”. Jentsch’s conception was elaborated by Sigmund Freud in a 1919 essay entitled “The Uncanny” (“Das Unheimliche“).

Read more >

What I would say is that basic research is done to be used in a myriad of ways, so that it can serve humans best.

And certainly this very advanced research in robotic expressions can help us get closer to something as cute as Gumdrop, the 27-year-old Bulgarian robot-actress.

Many robotic prototypes are built, few make it to market


I really enjoyed the following video from iRobot that shows their museum of all of the robotic prototypes and applications they’ve been working on over the years.

It’s amazing stuff, and it’s very important to realize the amount of work that needs to be done to prove a concept. Even when the concept is proven, the robot may not meet some of the needs of the user, and may not become a best seller anyway.

Next time you buy a sophisticated toy or a small (not so intelligent) vacuum cleaner, remember all of the time, money, research and work behind it!

Thank you iRobot for showing us this treasure!

Written by Teresa Escrig

February 11th, 2013 at 11:33 pm

Real or fiction? How far is the robotics industry from producing something like this?


Gumdrop is a 27-year-old Bulgarian robot-actress who has appeared in films with Fred Astaire and Charlie Chaplin, and now she’s auditioning for a new film with someone called TikTok.

Gumdrop is one of the cutest and most endearing robots ever created on film. This short film from Sky Captain and the World of Tomorrow director Kerry Conran is totally worthy of a feature-length version diving into the life of Gumdrop.

What are the features of this robot-actress that do not yet exist in our current robots?

  • Gumdrop is flexible in her body and mouth movements
  • Gumdrop communicates intelligently
  • Gumdrop has a history as an individual robot (“when I was a little robot”, she recalls)

Find the full article about the movie here.

The current robot that reminds me of Gumdrop is Tico, from Adele Robotics. Look at the following video:

Do you find more differences?

Written by Teresa Escrig

December 3rd, 2012 at 11:31 pm

Amazing examples of the variety of uses of service robotics


By Ann R. Thryft 11/12/2012

Service robots are often thought of as robots that assist the elderly or help with the rehabilitation of medical patients. But the range of services that robots can perform is extremely broad.

From a robotic fish, created by SHOAL, that uses artificial intelligence to detect and identify pollution in seawater,

To a telepresence PatrolBot, developed by Florida International University, which will let disabled police officers and military veterans serve as remote patrol officers, filling a gap in both the lack of patrol staff and the lack of available jobs for disabled vets and officers. Read the rest of this entry »

Fiona, a community robotics project to create an artificial mind


Adele Robotics has launched Fiona, a project for the robotics community to create an artificial mind.

This is another example of cloud robotics and of reproducing the app economy for the robotics industry, which is the future of robotics.

Congratulations Adele!

Robot Operating System (ROS), the standard that the robotics field desperately needed


October 19, 2012 by David Pietrocola at Robohub (Robohub is an online platform that brings together leading communicators in robotics research, start-ups, business, and education from around the world).

Open-source software is making it easier to reuse algorithms, allowing engineers and researchers to focus on their problems of interest instead of reinventing the wheel for each project. Not an expert in path planning, or don’t have the time (or patience) to implement SLAM? There’s a package for that. Manipulator control? Package for that too. Additionally, falling component prices and commercial off-the-shelf (COTS) devices are making robotics hardware more available. This tutorial will teach you how to put together a simple remote teleoperation robot using these principles.

Read more >

Cognitive Brain for Service Robotics (R) from Cognitive Robots is created with ROS.
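
To give a flavor of what the tutorial describes, here is a minimal sketch of a ROS teleoperation node in Python. It maps keystrokes to velocity commands and publishes them as geometry_msgs/Twist messages on the conventional /cmd_vel topic; the key bindings and speeds are made-up placeholders, and a real robot may expect a different topic or message type.

```python
#!/usr/bin/env python
# Minimal ROS teleoperation sketch (rospy).
# Reads single keystrokes and publishes geometry_msgs/Twist velocity
# commands on /cmd_vel, the topic many mobile bases subscribe to.
# The key bindings and speeds below are illustrative placeholders.
import sys
import termios
import tty

import rospy
from geometry_msgs.msg import Twist

KEY_BINDINGS = {
    'w': (0.2, 0.0),    # forward  (linear m/s, angular rad/s)
    's': (-0.2, 0.0),   # backward
    'a': (0.0, 0.5),    # rotate left
    'd': (0.0, -0.5),   # rotate right
    ' ': (0.0, 0.0),    # stop
}

def read_key():
    """Read one raw keystroke from stdin without waiting for Enter."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

def main():
    rospy.init_node('simple_teleop')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.loginfo("Drive with w/a/s/d, space to stop, q to quit.")
    while not rospy.is_shutdown():
        key = read_key()
        if key == 'q':
            break
        if key in KEY_BINDINGS:
            linear, angular = KEY_BINDINGS[key]
            cmd = Twist()
            cmd.linear.x = linear
            cmd.angular.z = angular
            pub.publish(cmd)

if __name__ == '__main__':
    main()
```

With roscore running and a robot base (or a simulator) subscribed to /cmd_vel, this is, in essence, the simple remote teleoperation setup the article refers to, minus the hardware-specific details.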

Open-source humanoid platform from NimbRo to compete in RoboCup’s TeenSize league


Once upon a time, when I finished my PhD dissertation, I went to the IJCAI conference in Kyoto, Japan, and the RoboCup competition was taking place in the same venue. I absolutely fell in love with the Aibo dog and cat robots from Sony that were exhibited at the competition (before they were widely used in the same competition).

At that event I decided that I wanted to apply the results of my PhD to bring Intelligence to robots. And that is what I did. I started a research group at Jaume I University. My students played with the Aibos for years. While working on one of the challenges of the RoboCup competition with my students, I put all the dots together, and after 10 years of research since my PhD was finished, the seed of Cognitive Robots was born. That technology became a pending patent for our company and, as far as we know, is still ahead of the rest of the technology that brings Intelligence to robots.

I have great memories of the RoboCup competition. I agree that it is a great playground in which to integrate and test technologies in the areas of AI and Robotics. And it is for sure much more than a toy test.

By , October 8, 2012

The University of Bonn’s Team NimbRo is commercializing a humanoid platform, NimbRo-OP, for €20,000 (US$26,000) to compete in RoboCup‘s TeenSize league. It sounds rather expensive, but it will save teams the trouble of prototyping their own, and the untold hours of research and development that this would normally require.

Read more >