Teresa Escrig

Knowledge and Possibilities to Empower People

How would your life be enhanced by wearing a virtual personal assistant?


Poster of the movie “Her”

I love comparing the intelligence of a device that appears in a movie with the reality of AI.  It can give us a visual glimpse of a very real possibility.

What do you think? Would you like to have a (wearable) virtual personal assistant helping you to make informed decisions? I certainly would. The human race could take a huge leap in evolution with such extended intelligence capabilities.

The movie ‘Her’ is a beautiful example of just that. Below is an excellent article with a very deep analysis of current and near-future AI results. It’s a great read, and I’d love to hear your thoughts. Please leave comments.

Can we Build “Her”?: What Samantha tells Us About the Future of AI

By Vlad Sejnoha, Nuance

What will the next generation of intelligent computing look like?

The movie Her has captured the public imagination with its vision of a lightning-fast evolutionary trajectory of virtual assistants, and the emotional bonds we could form with them. Is this a likely future?

The film’s narrative arc shows the evolution of the Samantha operating system and her relationship with her user, Theodore, transforming from a competent assistant, to a literary agent that proactively arranges the publication of Theodore’s letters, to an ideal girlfriend, and ultimately to an entity that loses interest in humans because they have become unsatisfying companions. Throughout, Samantha is an impressive conversationalist with a perfect command of language, a grasp of the broader context, a grounding in common sense, and a mastery of the emotional realm.

Continue reading…

Crucial Technology for AI and Robotics: a Kinect-like sensor is included in a smart-phone


3D model of reality created in Project Tango at Google.

The Kinect sensor was a revolution for the robotics industry, mainly because it was a relatively inexpensive way to get 3D obstacle detection. It provided a set of distances from the Kinect’s position to the objects in the world.
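To make that concrete, here is a minimal sketch of how a single depth reading from a Kinect-like sensor can be turned into a 3D point, assuming a simple pinhole camera model. The intrinsic values (fx, fy, cx, cy) below are hypothetical placeholders for a 640x480 sensor, not the real Kinect calibration.

```python
def depth_to_point(u, v, depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project one depth pixel (u, v) into a 3D point in the camera frame.

    fx, fy are focal lengths in pixels and (cx, cy) is the principal point;
    the values here are illustrative placeholders, not a real calibration.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the image center maps straight ahead along the optical axis:
print(depth_to_point(319.5, 239.5, 2.0))  # (0.0, 0.0, 2.0)
```

Doing this for every pixel of a depth image yields the cloud of distances to obstacles that made the Kinect so useful to roboticists.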

The person responsible at Microsoft for the development of the Kinect sensor is now in charge of Project Tango at Google. Project Tango integrates a Kinect-like sensor into a smart-phone (alongside all the other sensors already included in the smart-phone), providing a 3D model of reality. Crucial technology for AI and Robotics.

And can you imagine having instant access to wearable extended virtual reality? Instant access to the structure of the world in front of you: Where does this road go? What is the structure of this building? Or even: Show me where I can buy my favorite pair of jeans in this shopping mall.

And even further: Create a 3D model of your body, use it to virtually try on different clothes online (also in 3D), check out the look and fit, make a purchasing decision, drop it into a shopping cart, and have it delivered to your door.

Mmmm… my imagination flies. I’d love to hear where yours goes… Leave comments.

Here is the article (check out the amazing video):

Google announces Project Tango smartphone with Kinect-like 3D imaging sensors [VIDEO]

by Chris Chavez

Google was able to throw everyone a curve ball today with the announcement of Project Tango, their new in-house smartphone prototype outfitted with Kinect-like sensors.

The 5-inch smartphone is being developed by Google’s Advanced Technology and Projects group (ATAP), the same people behind Project Ara. Project Tango is led by Johnny Lee — a man who helped make the Microsoft Kinect possible (makes sense, right?). The goal of Project Tango is to ultimately give mobile devices a “human-scale understanding” of space and motion, allowing users to map the world around them in ways they never thought possible.

Continue reading…


Google has given an early prototype of the device to Matterport, which makes computer vision and perceptual computing solutions, like software that maps and creates 3D reconstructions of indoor spaces. Don’t miss the video of the 3D map result in this link! It’s amazing!


Is the long anticipated shift in robotics finally happening?


Whew… with so many exciting things happening in the robotics field lately, I just couldn’t remain silent anymore…

Kiva robots carrying shelves in a warehouse.

We were all wowed by Amazon’s 2012 acquisition of Kiva Systems for $775 million. Kiva’s clever self-propelled robots scoot around warehouses in a numerically controlled dance to retrieve and carry entire shelf units of items to their proper packaging point.

In December 2013 and January 2014, Google bought 7 robotics companies, investing an unknown amount of money. The Internet giant and pioneer of self-driving cars is serious about a robot-filled future. However, we don’t know much about Google’s intent with all these acquisitions. They’re all part of the Google X division, which is top secret by definition. Most of these companies have closed down their websites and retreated into stealth mode. My guess is that they are grouping up to decide the direction they’ll take to serve Google’s goals.

The robotics team is led by Andy Rubin, who recently stepped down as head of Android.

Here is a brief summary of all of Google’s acquisitions (and a bunch of links to dig deeper):

Arm manipulator of Industrial Perception, Inc.

The biped robot at Schaft, Inc.

  • Industrial Perception, Inc. (IPI) – spun off from the Menlo Park robotics company Willow Garage. They have a 3D vision-guided robot to be used in manufacturing and logistics.
  • Schaft Inc. – the Japanese team that got its start at Tokyo University. They took the top prize at the DARPA Robotics Challenge Trials with their bipedal robot.
  • Redwood Robotics – started as a joint venture between Meka Robotics, SRI International, and Willow Garage (IPI’s parent). Redwood wants to build the “next generation arm” for robots.
  • Meka Robotics – a torso robot with very sophisticated hands on a wheeled mobile platform.

  • Bot & Dolly  – a design and engineering studio that specializes in automation, robotics, and filmmaking. They use robots to help film commercials and movies like Gravity.
  • Holomini – The only thing we know about them is that they are the creators of high-tech wheels for omnidirectional motion.
Bot & Dolly arm with camera.

Holomini’s wheels.

  • Boston Dynamics – the most high-profile of all the robotics companies that Google has acquired so far. Their best-known robots include ATLAS, a sophisticated humanoid; Cheetah, which can reach 28 mph; and the quadruped BigDog.
ATLAS robot from Boston Dynamics.

BigDog from Boston Dynamics.

In the middle of January 2014, Google acquired Nest for $3.2 billion.

  • Nest – a home automation startup whose products include a learning thermostat and a smoke and carbon monoxide alarm that talks.

And at the end of January, Google acquired DeepMind for more than $500 million (after having beaten out Facebook):

  • DeepMind – an AI research company out of London founded by neuroscientist Demis Hassabis, Skype developer Jaan Tallinn, and researcher Shane Legg. They use the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.

In 2012, Google hired Ray Kurzweil to work on machine learning and language processing, aiming to actually understand the content of Web pages and to provide a better way to rank them than counting the number of times a web site is mentioned on other web sites. According to Dr. Kurzweil, you will be able to “ask it more complex questions that might be a whole paragraph… It might engage in a dialogue with you to find out what you need… It might come back in two months if it finds something useful.”

The butler robot from the Imperial College London Robotics Lab.

And now Sir James Dyson (the bagless vacuum cleaner inventor) is investing £5 million in Imperial College London to develop a new generation of “intelligent domestic robots” (Iron Man-style robots), with a further £3 million investment from various sources over the next five years.

After working on a robotic vacuum cleaner to go along with his company’s famous bagless line for as long as a decade, Dyson remains frustrated at his prototypes’ inability to navigate simple household obstacles. Indeed, even the greatest Roomba finds itself at a loss under a tangle of dining room chairs, and would shrug its shoulders when faced with a flight of stairs.

Is the tide finally turning in robotics?

Cognitive Robots wishes you Merry Christmas


Merry Christmas 2013

Google is buying several robotics companies. This is great news for the robotics industry!

2014 is going to be great! I can’t wait…

Merry Christmas! :-) Teresa

Written by Teresa Escrig

December 19th, 2013 at 7:28 pm

Autonomous scrubber machines: is the market ready for them?


11.19.12

Cognitive Robots’ first product was the incorporation of our Cognitive Brain for Service Robotics® into commercial scrubber machines. This allows any existing commercial scrubber machine to be easily transformed into an autonomous and intelligent robot that cleans floors without the need for a human operator.

Did you know that the operator of a scrubber machine has to follow the same path/pattern every single time they clean an area? It’s true: otherwise people would be able to perceive the scrubber’s lines of movement on the floor, which are not considered aesthetically pleasing. The main corridors of an airport or a supermarket need to be cleaned longitudinally.
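For illustration, that longitudinal pattern is what roboticists call a boustrophedon (back-and-forth) coverage path. Here is a toy sketch of one over a bare rectangular area; this is my own simplified illustration (no obstacles, no turning radius), not Cognitive Robots’ actual planner.

```python
def boustrophedon_lanes(width_m, length_m, tool_width_m):
    """Generate start/end waypoints for back-and-forth coverage lanes.

    Every pass runs parallel to the long axis, alternating direction,
    so the scrubber leaves uniform longitudinal lines on the floor.
    Returns a list of ((x, y_start), (x, y_end)) lane segments.
    """
    lanes = []
    x, direction = tool_width_m / 2.0, 1
    while x <= width_m:
        start_y, end_y = (0.0, length_m) if direction > 0 else (length_m, 0.0)
        lanes.append(((x, start_y), (x, end_y)))
        x += tool_width_m
        direction = -direction  # turn around for the next pass
    return lanes

# A 2 m wide, 10 m long corridor with a 1 m wide scrubber needs two passes:
print(boustrophedon_lanes(2.0, 10.0, 1.0))
```

Following the same lane list on every cleaning run is exactly what keeps the visible stripes on the floor consistent from day to day.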

This job is so boring that operators destroy industrial scrubber machines earlier and earlier. In response, scrubber manufacturers have made their machines cheaper, with less electronics, resulting in a lower life expectancy for the product. The downside is that, in the long term, due to replacement costs, end users will spend more money to serve their clients.

We are now in the midst of a global debate that is exploring the question, “Are robots taking jobs away, or providing jobs for people?” In the current economic climate, we need to decide whether we want to maintain the status quo to protect low-profile jobs, or embrace advances that allow us to become more competitive and effective in our jobs, promote learning new skills, and provide jobs where human creativity and intelligence are necessary.

What do we want?

Here is the specification sheet of the autonomous scrubber machine that Cognitive Robots can provide: specification sheet scrubber machines

Is this product good enough to solve the problem of automatic cleaning?

Is the market ready for this?  What do you think?

How I fell in love with Robotics


International Women’s Day.

I received my PhD in Artificial Intelligence, in particular on cognitive models that simulate the way people think about space and time in order to move around their environment every day without the use of any measurement tools. I applied those theoretical models to the movement of simulated robots through the streets of my hometown, Castellon, Spain. It was quite a theoretical thesis, and I really enjoyed working on it.
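To give a flavor of what such cognitive models look like: qualitative spatial reasoning composes symbolic relations instead of measuring coordinates. The toy composition table below is my own illustration, not the actual calculus from the thesis; real cardinal-direction calculi handle many more relations and disjunctive answers.

```python
# Toy composition table: if B is north of A, and C is east of B, we can
# infer C's direction relative to A with no metric measurements at all.
COMPOSE = {
    ("north", "north"): "north",
    ("north", "east"): "northeast",
    ("east", "north"): "northeast",
    ("east", "east"): "east",
}

def infer(rel_a_to_b, rel_b_to_c):
    """Compose two qualitative direction relations; 'unknown' if the
    (deliberately tiny) table has no entry for the pair."""
    return COMPOSE.get((rel_a_to_b, rel_b_to_c), "unknown")

print(infer("north", "east"))  # northeast
```

This is the kind of inference people make effortlessly when giving directions, and it is what lets a robot reason about its environment the way we do.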

After I finished my PhD thesis, I went to an IJCAI (International Joint Conference on Artificial Intelligence) conference in Japan to present my research. The RoboCup competition was going on at the same venue as the conference. For the first time, Sony was there presenting their cat and dog robot pets in a fiberglass showcase. The movements of those little robots were so well done that I stood there looking at them in amazement for a very long time. I thought, “I want to be working with these robots”, “I want to include the technology that I just developed for my thesis in these robots”, “the best way for the robots to move through their environment is by using cognitive models, and I am going to make this happen”! Read the rest of this entry »

Human-like robots can either be repulsive or the basis for cute service robots


A new android infant has been born thanks to the University of California San Diego’s Machine Perception Lab. The lab received funding from the National Science Foundation to contract Kokoro Co. Ltd. and Hanson Robotics, two companies that specialize in building lifelike animatronics and androids, to build a replicant based on a one-year-old baby. The resulting robot, which has been a couple of years in development, has finally been completed – and you can watch it smile and make cute faces.

With high-definition cameras in the eyes, Diego San sees people, gestures, and expressions, and uses AI modeled on human babies to learn from people the way that a baby hypothetically would. The facial expressions are important to establish a relationship and to communicate intuitively with people. As much a work of art as technology and science, this represents a step forward in the development of emotionally relevant robotics, building on David Hanson’s previous work with the Machine Perception Lab, such as the emotionally responsive Einstein shown at TED in 2009 (here’s another video).

Read more >

In 1970, the robotics professor Masahiro Mori coined the term uncanny valley, a hypothesis in the field of robotics and 3D computer animation which holds that when human replicas look and act almost, but not perfectly, like actual human beings, they cause a response of revulsion among human observers. The “valley” refers to the dip in a graph of the comfort level of humans as a function of a robot’s human likeness. The hypothesis has been linked to Ernst Jentsch’s concept of “the uncanny”, identified in a 1906 essay, “On the Psychology of the Uncanny”. Jentsch’s conception was elaborated by Sigmund Freud in a 1919 essay entitled “The Uncanny” (“Das Unheimliche”).

Read more >

What I would say is that basic research is done to be used in a myriad of ways, so that it can serve humans best.

And certainly this very advanced research in robotic expressions can help us get closer to something as cute as Gumdrop, the 27-year-old Bulgarian robot-actress.


Many robotic prototypes are built, but few reach the market


I really enjoyed the following video from iRobot that shows their museum of all of the robotic prototypes and applications they’ve been working on over the years.

It’s amazing stuff, and it’s important to realize the amount of work that needs to be done to prove a concept. Even when proven, the robot may not meet some of the needs of the user, and so may not become a best seller anyway.

Next time you buy a sophisticated toy or a small (not so intelligent) vacuum cleaner, remember all of the time, money, research and work behind it!

Thank you iRobot for showing us this treasure!

Written by Teresa Escrig

February 11th, 2013 at 11:33 pm

What are the benefits of Artificial Intelligence in Robotics?


Happy New Year to all!  It’s been a while since my last post. Too busy. Now, I’m back.

————————————————————————————-

Robotics is not only a research field within artificial intelligence, but a field of application, one where all areas of artificial intelligence can be tested and integrated into a final result.

Amazing humanoid robots exhibit elegant and smooth motion capable of walking, running, and going up and down stairs.  They use their hands to protect themselves when falling, and to get up afterward.  They’re an example of the tremendous financial and human capital that is being devoted to research and development in the field of electronics, control and the design of robots.

Very often, the behavior of these robots consists of a fixed number of pre-programmed instructions that are repeated regardless of any changes in the environment. These robots have no autonomy, and no adaptation to the changing environment, and therefore do not show intelligent behavior. We are amazed by the technology they provide, which is fantastic! But we cannot infer, just because the robots are physically so realistic and their movements so precise and gentle, that they are able to do what we (people) do. Read the rest of this entry »

Real or fiction? How far is the robotics industry from producing something like this?


Gumdrop is a 27-year-old Bulgarian robot-actress who has appeared in films with Fred Astaire and Charlie Chaplin, and now she’s auditioning for a new film with someone called TikTok.

Gumdrop is one of the cutest and most endearing robots ever created on film. This short film from Sky Captain and the World of Tomorrow director Kerry Conran is totally worthy of a feature-length version, diving into the life of Gumdrop.

What are the features of this bot-actress that do not exist in our current robots yet?

  • Gumdrop is flexible in her body and mouth movements
  • Gumdrop communicates intelligently
  • Gumdrop has a history as an individual robot (“when I was a little robot”, she recalls)

Find the full article about the movie here.

The current robot that reminds me of Gumdrop is Tico, from Adele Robotics. Look at the following video:

Do you find more differences?

Written by Teresa Escrig

December 3rd, 2012 at 11:31 pm