Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘service robotics revolution’ tag

Skippy is an internet-controlled robot that skips stones across a pond


We are going to be amazed by the number and variety of applications that people will come up with in service robotics…

Look at this video of Skippy, an internet-controlled robot that skips stones across a pond.

July 11, 2012


Why did Amazon acquire Kiva?


by Mark P. Mills, 3/23/2012

Amazon’s enormous, automated and well-organized warehouses are the stuff of legend, as are their path-breaking joint ventures with vendors, repair operations and UPS shipping. Still, physical order fulfillment reportedly costs nearly 9 percent of their $40 billion in global revenues.

Amazon was amongst the first to build data centers at Cloud scale – a scale that Google engineers labeled “warehouse scale computing.”   But to disrupt traditional retail Amazon had to do more than create a customer-friendly Web interface for their warehouse-scale computers.  They had to solve the old-fashioned physical warehouse problem in order to distribute the objects they sold.

Enter Kiva’s robots, and their inevitable progeny: the logical connection between the cyber and physical worlds. Think of Kiva bots as the hands and feet of the Cloud. They are not autonomous Star-Trek-like agents, but are wirelessly connected to and controlled by the Cloud in real time.

When you tap “place your order” on your iPad’s touch-screen you are literally reaching through the Cloud to become one with Kiva to grab a box in the warehouse. Such robots are practical today because of a confluence of enabling technologies: cheap and powerful processing and communications, advanced electro-motive power, and clever software. All this is the domain of computing, and squarely in Amazon’s wheelhouse.

Amazon needs to own Kiva for the same reason they own computing.


The rapidly evolving world of robotic technology


June 25 (Bloomberg) — The Institute for the Future’s Marina Gorbis discusses the rapidly evolving world of robotic technology and how humans will interact with, and learn from, robots over the next five to ten years. She speaks with Adam Johnson on Bloomberg Television’s “Bloomberg Rewind.” (Source: Bloomberg)

Marina Gorbis is the Executive Director of Institute for the Future.

Marina’s biography – During her tenure at IFTF, and previously with SRI International, Marina has worked with hundreds of organizations in business, education, government, and philanthropy, bringing a future perspective to improve innovation capacity, develop strategies, and design new products and services. A native of Odessa, Ukraine, Marina is particularly suited to see things from a global perspective. She has worked all over the world and feels equally at home in Silicon Valley, Europe, India, or Kazakhstan. Before becoming IFTF’s Executive Director in 2006, Marina created the Global Innovation Forum, a project comparing innovation strategies in different regions, and she founded the Global Ethnographic Network (GEN), a multi-year ethnographic research program aimed at understanding the daily lives of people in Brazil, Russia, India, China, and Silicon Valley. She also led IFTF’s Technology Horizons Program, focusing on the interaction between technology and social organizations. She has been a guest blogger on BoingBoing.net and writes for IFTF and major media outlets. She is a frequent speaker on future organizational, technology, and social issues. Marina holds a Master’s Degree from the Graduate School of Public Policy at UC Berkeley.

Cognitive Robots enhances Kompai’s capabilities by incorporating its “Cognitive Brain for Service Robotics”


Since February 2011, Cognitive Robots and Robosoft have been collaborating in the framework of a European project, ECHORD C-Kompai. The objective of the project is to enhance the companion robot Kompai with the cognitive capabilities provided by Cognitive Robots’ “Cognitive Brain for Service Robotics®” – the CR-B100.

The intent behind the improvement of the Kompai platform is to better serve the users – the elderly.

We have identified 3 aspects of the Kompai’s functionality to be improved in this project:


Cognitive Robots’ corporate video


Cognitive Robots has successfully developed the world’s first truly autonomous Cognitive Brain for Service Robotics®, the CR-B100. Our mission is to provide an integrated solution for the automation of service vehicles, using state of the art cognitive processes that mimic the human brain.

Our Cognitive Brain incorporates four aspects of human intelligence: perception (object recognition), reasoning, learning, and decision-making. This advanced level of artificial intelligence enables the robot to adapt when uncertain or unknown situations occur.

We’re actively seeking technical partnerships and investment capital.

Here you can see our corporate video:

Current accomplishments and activities of Cognitive Robots include:

  • CR-B100 has been adapted to commercial floor scrubbers (beta state).
  • CR-B100 has been fully incorporated into a Pioneer (Adept) research platform to prove out the full capabilities of the brain.
  • CR-B100 is currently being incorporated into Robosoft’s companion robot Kompai to enhance the Kompai’s capabilities with intelligence. This allows it to perceive the landmarks in the environment, automatically create its own map, avoid obstacles in 3D, clean the home intelligently, and make decisions to engage the elderly.
  • Cognitive Robots is about to launch its own Service Robotics platform using the CR-B100.
  • Another product of Cognitive Robots, the CR-B50 – Manual Assisted Driver – has been successfully incorporated into commercial forklifts to increase safety.
  • CR-B50 is now being incorporated into commercial buses.

Robotic glove developed by NASA and GM


While Robonaut 2 has been busy testing its technology in microgravity aboard the International Space Station, NASA and General Motors have been working together on the ground to find new ways those technologies can be used.

The two groups began working together in 2007 on Robonaut 2, or R2, which in 2011 became the first humanoid robot in space. NASA and GM now are developing a robotic glove that auto workers and astronauts can wear to perform their respective jobs, while reducing the risk of repetitive stress injuries. Officially, it’s called the Human Grasp Assist device, but generally it’s called the K-Glove or Robo-Glove.

In this image, Robonaut and a spacesuit-gloved hand are extended toward each other to demonstrate the collaboration between robots and humans in space.

Image Credit: NASA

How can this robotic glove be used for other applications?

Cocorobo – A Talking, Dog-Watching Robot Vacuum Cleaner from Japan


Another Roomba-like robot coming from Japan – with more features (it accepts up to 30 verbal commands, uses sonar and infrared sensors, and runs for 1 hour continuously – it’s unclear whether it is more intelligent) and a higher price (almost four times that of a Roomba).

Is it a threat to Roomba?

By Sarah Berlow, May 8, 2012.

Cocorobo’s many gadgets make iRobot’s popular Roomba look like it should be sold alongside Easy Bake ovens. Voice recognition technology enables Cocorobo’s vacuum to respond to greetings or commands — in multiple languages or dialects.  (So far, though, its vocabulary is limited to about 30 phrases, such as “I understand.”)

Cocorobo dances around in reply to commands, resembling the friendly compliance of the Jetsons’ robot housekeeper. A camera also enables Cocorobo to watch a pet left at home, sending photos via the cloud to the owner’s iPhone or other smartphone. It can vacuum for up to an hour before requiring a recharge, which it does by docking at a charging port; a USB port on the vacuum lets owners download updates, such as an expanded vocabulary.

With so much technology heaped onto it, Cocorobo’s vacuuming capability seems almost an afterthought, though Sharp claims it also has an extra-powerful vacuuming system.


Spherical flying machine developed by the Japanese Ministry of Defense


This is the world’s first spherical flying machine, developed by the research department of Japan’s Ministry of Defense:

  • It flies vertically and horizontally, like a hummingbird.
  • It’s unmanned.
  • It can land in any attitude because it’s round.
  • It can also move along the ground.
  • It can fly for 8 minutes continuously, at speeds from 0 to 60 km/h.
  • It was built from commercially available parts at a total cost of about $1,400.
  • Applications: rescue and reconnaissance.


We need Service Robots to feed disabled students


Dear Teresa, My name is Paul Doyle and I am Head of Access R&D at Hereward College in Coventry. Hereward is a residential college that supports disabled students. We have for some years developed a keen interest in the use of robotics as an assistive technology.

I have been in contact with many providers of robots over the years from the PR2 at Willow Garage to the Care-o-bot by Fraunhofer with little tangible progress. What we have failed to achieve to date is to embed and evaluate an actual device in a real care/living/education environment such as Hereward to see if it actually works and if it is financially viable!

I would like to challenge any robot, for example, to help with the scenario I recently posted on a LinkedIn forum:

Today when I was having lunch in our refectory I observed a number of students (with a variety of physical disabilities) waiting in an orderly queue for a human carer to help feed them their lunchtime meal. Due to a shortage of carers, some of the students waited a very long time before a staff member could ask what the student wanted from the menu, pick up the chosen meal from the counter, and then feed the student in an appropriate manner (food at the right temperature, consistency, and rate).
This situation led me to ponder the questions: could a robot have helped carry out these tasks to some degree, and, bearing in mind that care staff are paid not much over minimum wage, when (if ever) will a robot alternative be financially viable?

I would hope manufacturers could see this exposure to a group of users as a development resource, as we have a residential care and education setting where such technologies can be tested in a managed and safe environment.

Many of the young people at Hereward will eventually be the recipients of assistive robot technologies if and when they come online, so hearing what they need/want would I imagine provide a useful insight to product developers.


The SICK laser sensor is currently mandatory for autonomous robots – if we want the ability to perceive the world, and therefore show a bit of intelligence


The SICK safety laser sensor is currently mandatory for autonomous robots – if we want them to perceive the world, and therefore show a bit of intelligence. It costs almost 3,000 euros. While not without its drawbacks, this sensor represents the state of the art and is the most expensive component in a current autonomous robot. As long as we produce robots as prototypes, not on a large scale, we cannot offer inexpensive robots.

James Falasco – I am curious about the comment that the SICK sensor is mandatory. How so?

Teresa – Jim, the SICK laser sensor is still mandatory for robots or vehicles that need to show intelligence because:

  • It’s the most reliable distance sensor for medium-to-long distances, much more so than sonar or infrared (which are basically useful only for very short distances).
  • It’s necessary for perceiving the boundaries of the environment, so the robot can autonomously build a map of it. The map is what lets the robot know where things are.
  • A linear laser such as the SICK also has drawbacks. The main one is that it only perceives a single scan line.
  • The best way to go would be to extract and interpret all the needed information from a camera, which would be much less expensive and would provide richer information.
  • Although we have developed a cognitive vision system that gives meaning to the objects in an image, and with two cameras you can get distances to objects, we still need further development and integration before we can use only cameras.
  • We have also integrated the Kinect sensor into the Cognitive Brain with great success. It gives us depth in a conical area in front of the robot, although with short reach (we can’t see the limits of the rooms) and high sensitivity to light changes (not yet good in exterior settings).
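
To make the mapping point above concrete, here is a minimal sketch (all parameter values hypothetical, not from any particular robot) of the standard conversion from a planar laser scan – a list of ranges at known bearing angles – into world-frame boundary points that a robot can accumulate into a map:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment, pose):
    """Convert a planar laser scan to world-frame (x, y) boundary points.

    ranges: distances in meters, one per beam
    angle_min: bearing of the first beam in the robot frame, radians
    angle_increment: angular step between consecutive beams, radians
    pose: (x, y, heading) of the robot in the world frame
    """
    px, py, heading = pose
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # this beam hit nothing within sensor range
        bearing = angle_min + i * angle_increment
        # rotate the beam into the world frame, then translate by robot position
        points.append((px + r * math.cos(heading + bearing),
                       py + r * math.sin(heading + bearing)))
    return points

# A robot at the origin facing +x, with a 3-beam scan at -90°, 0°, +90°:
pts = scan_to_points([2.0, 1.0, 2.0], -math.pi / 2, math.pi / 2, (0.0, 0.0, 0.0))
# pts is approximately [(0, -2), (1, 0), (0, 2)]: walls left, ahead, and right
```

Accumulating such points over many poses is what map-building boils down to; the hard part, which this sketch ignores, is estimating the pose itself (localization/SLAM).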

Summary: We use laser, Kinect, and camera sensors. We can’t avoid the laser yet, and it is by far the most expensive component of the whole robot.

I am sure that with more development we can make the camera completely substitute for the laser. I would love to do it.
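
The reason two cameras can in principle replace a range sensor is the classic stereo relation Z = f·B/d: depth is focal length times baseline divided by disparity. A minimal sketch, with hypothetical rig parameters (not from any product mentioned here):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a calibrated, rectified stereo pair: Z = f * B / d.

    focal_px: focal length expressed in pixels
    baseline_m: distance between the two camera centers, in meters
    disparity_px: horizontal pixel shift of the same point between the images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity or mismatched)")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 10 cm baseline.
# A feature shifted by 35 px between the images lies roughly 2 m away:
depth = stereo_depth(700, 0.10, 35)
```

The formula is the easy part; the “further development” mentioned above is the hard part – calibrating and rectifying the pair and reliably matching the same point in both images, which is exactly where lighting and texture problems bite.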

Comments of other experts on the subject are very welcome. Thanks.

