Teresa Escrig

News and opinion about Cognitive AI & Robotics


Hanson Robokind unveils latest version of its Zeno humanoid robot


August 2, 2012

Built by Hanson Robotics, Zeno runs open-platform software that allows custom tinkering by the purchaser, but the robot also comes programmed with a number of functions and speaks 26 languages. In the video, it asserts that it can carry on “conversations” and show “compassion.” It can also “deliver education curricula,” provide autism treatment therapy and answer questions. It demonstrated the last of these by fielding spoken questions on astronomy, sports and films.

Zeno will be joined by a “female” counterpart called Alice in August of 2012. Neither, however, will sell for the US$300 that Hanson had hoped for five years ago. Though no price has been set, current Hanson RoboKind robots are valued on its website at up to US$16,750. However, the company is still keen on breaking into the mass market and plans to roll out smaller, cheaper “cousins” of Zeno sometime in 2013.

Read more >

A huge amount of work has gone into this platform. Congratulations to the team. This platform brings robotics closer to the public.

 

RP-VITA, the new iRobot Telepresence robot doctor


July 26, 2012

iRobot and InTouch Health are working under a partnership and joint development and licensing agreement to develop the RP-VITA, which will allow doctors and other health specialists not only to visit patients remotely, but also to navigate robotically through wards, access patient records and even carry out examinations.

The RP-VITA combines iRobot’s Ava mobile robotics platform with the InTouch Telemedicine System, producing what the partners refer to as an “expandable telemedicine technology platform.”

It’s controlled by a simple iPad interface and has an enhanced autonomous navigation capability. That means it can be sent where needed with a single click. Using its Obstacle Detection Obstacle Avoidance (ODOA) system, the robot can proceed to its location on its own, navigating the hospital quickly, safely and accurately.
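The “send it somewhere with one click” behavior can be illustrated with a minimal sketch: breadth-first search over an occupancy grid of the ward, routing around blocked cells. The real ODOA system is far more sophisticated; the grid, map, and algorithm here are illustrative assumptions only.

```python
# Minimal sketch: shortest obstacle-avoiding route on a toy hospital map.
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on a grid; '#' cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct the route back to start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#' and nxt not in prev):
                prev[nxt] = (r, c)
                queue.append(nxt)
    return None  # no route to the clicked destination

ward = ["....#....",
        "....#....",
        "....#....",
        "........."]
path = plan_path(ward, (0, 0), (0, 8))  # detours below the wall
```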

The robot allows doctors and staff real-time access to important clinical data from the patient’s online files, but it also can transmit live information by means of its built-in electronic stethoscope or by linking to diagnostic devices such as otoscopes and ultrasound machines.

The RP-VITA is being unveiled to the public at the InTouch Health 7th Annual Clinical Innovations Forum (July 26-28, 2012) in Santa Barbara, CA.

Read more >

 

Cognitive Robots incorporates Common-Sense Knowledge and Reasoning into its Robotics and Computer Vision solutions


Representation, reasoning and learning are the basic principles of human intelligence. The emulation of human intelligence has been the aim of Artificial Intelligence since its origins in 1956.

In fact, converting raw data into information (data in the context of other data) and hence into knowledge (information in the context of other information) is critical for understanding activities, behaviors, and, in general, the world we try to model. In both Robotics and Computer Vision we try to model the real world in which humans operate.

The type of knowledge that Robotics and Computer Vision need to obtain is common-sense knowledge. Counterintuitively, common-sense knowledge is more difficult to model than expert knowledge, which can be modeled quite easily by expert systems (a more or less closed research area since the ’70s).

In both Robotics and Computer Vision, probabilistic and Bayesian models have historically been used to represent, reason about, and learn from the world. These methods have provided very good initial results, but they have never been scalable. That is why there is still no commercial intelligent robot with the full ability to serve people. Although many preliminary solutions include artificial vision, the percentage of false positives and negatives is still too high for it to be considered completely reliable, and artificial vision therefore remains an open research area.

The problems detected in the probabilistic approaches have been twofold…

Read the rest of this entry »

This Little Robot Could Totally Transform The Way Humanity Shops


by Jill Krasny, July 20, 2012

AndyVision is the Future of Retail project at Carnegie Mellon University. It involves in-store digital signage that lets customers browse the store’s 3D planograms, as well as an autonomous store-operations robot that assists with inventory management, including out-of-stock detection.

AndyVision manages inventory, but its influence might reach farther than that, reports Motherboard’s Adam Clark Estes. Researchers say the lightweight, red-hoodied robot was built to “transform the shopping experience.”

Here, Estes explains how the “mechanized messenger” works:

“With the help of a video camera and an onboard computer that combines image-processing with machine learning algorithms, it can patrol the aisles counting stock and scanning for misplaced items … The data from the inventory scans are all sent to a large touchscreen, where customers can browse through what’s available in the store.”
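The inventory-scan idea in the quote above can be sketched as a simple comparison between what the vision system reports for each shelf slot and what the store’s planogram expects there. The slot labels, item names, and planogram layout below are invented for illustration; this is not AndyVision’s actual pipeline.

```python
# Toy sketch: flag out-of-stock and misplaced items from a shelf scan.

def scan_report(planogram, detections):
    """Both arguments map shelf slot -> item label; detections may omit
    slots the camera saw as empty."""
    out_of_stock, misplaced = [], []
    for slot, expected in planogram.items():
        seen = detections.get(slot)          # None means an empty slot
        if seen is None:
            out_of_stock.append((slot, expected))
        elif seen != expected:
            misplaced.append((slot, seen, expected))
    return out_of_stock, misplaced

planogram = {"A1": "cereal", "A2": "oatmeal", "A3": "granola"}
detections = {"A1": "cereal", "A3": "oatmeal"}   # A2 empty, A3 wrong item
oos, misp = scan_report(planogram, detections)
```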

Read more >

MIT creates intelligent car co-pilot that only interferes if you’re about to crash


July 13, 2012

Mechanical engineers and roboticists working at MIT have developed an intelligent automobile co-pilot that sits in the background and only interferes if you’re about to have an accident. If you fall asleep, for example, the co-pilot activates and keeps you on the road until you wake up again.

Like other autonomous and semi-autonomous solutions, the MIT co-pilot [research paper] uses an on-board camera and laser rangefinder to identify obstacles. These obstacles are then combined with various data points — such as the driver’s performance, and the car’s speed, stability, and physical characteristics — to create constraints. The co-pilot stays completely silent unless you come close to violating one of these constraints — which might be triggered by something as simple as the car in front braking quickly, or as complex as taking a corner too fast. When this happens, a ton of robotics under the hood takes over, only passing control back to the driver when the car is safe.
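The “silent unless a constraint is about to be violated” behavior can be sketched in a few lines: predict the gap to the car ahead over a short horizon, and override the driver only if the prediction crosses a safety margin. All numbers and the constant-speed lead-car assumption are illustrative, not MIT’s actual controller.

```python
# Toy sketch: pass the driver's command through unless it is predicted
# to shrink the gap below a safety margin within the lookahead horizon.

def copilot(gap, speed, lead_speed, driver_accel,
            horizon=2.0, margin=5.0, brake=-6.0):
    """Return the acceleration to apply (m/s^2): the driver's, unless unsafe."""
    closing = speed - lead_speed                 # m/s closing rate
    # constant-acceleration prediction of the gap at the horizon,
    # assuming the lead car holds its current speed
    predicted_gap = gap - closing * horizon - 0.5 * driver_accel * horizon**2
    if predicted_gap < margin:
        return brake                             # co-pilot takes over
    return driver_accel                          # silent otherwise

# wide gap, matched speeds: the co-pilot never interferes
safe = copilot(gap=50.0, speed=25.0, lead_speed=25.0, driver_accel=0.0)
# closing fast on a slower car while accelerating: the co-pilot brakes
unsafe = copilot(gap=20.0, speed=30.0, lead_speed=20.0, driver_accel=1.0)
```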

Read more >

Autonomous Underwater robots – another very active market area for robotics


With the ultimate goal of designing completely autonomous robots that can navigate and map cloudy underwater environments without any prior knowledge of the environment and detect mines as small as 10 cm in diameter, researchers at HoverGroup (MIT) have come up with algorithms to program a robot called the Hovering Autonomous Underwater Vehicle (HAUV).

To provide a detailed sweep of a ship’s hull, the researchers took a two-stage approach. Firstly, the robot is programmed to swim in a square around the ship’s hull at a safe distance of 10 meters (33 ft), using its sonar camera to gather data that is used to produce a grainy point cloud. Although a ship’s large propeller can be identified at this low resolution, it isn’t detailed enough to make out a small mine.

Additionally, the point cloud may not necessarily tell the robot where a ship’s structure begins and ends – a problem if it wants to avoid colliding with a ship’s propellers. To generate a three-dimensional, “watertight” mesh model of the ship, the researchers translated this point cloud into a solid structure by adapting computer-graphics algorithms to the sonar data.

Once the robot has a solid structure to work with, it moves on to the second stage. Here the robot is programmed to swim closer to the ship, with the idea of covering every point in the mesh at 10-centimeter intervals.
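The second-stage coverage idea can be sketched very simply: bucket the mesh sample points into 10 cm cells and inspect one waypoint per cell, so every point lies within the inspection spacing. Snapping to a grid is a crude stand-in for the researchers’ real coverage planner, and the flat “hull patch” below is invented test data.

```python
# Minimal sketch: choose inspection waypoints so every mesh point is
# within the 10 cm spacing of some waypoint.

SPACING = 0.10  # meters

def coverage_waypoints(points, spacing=SPACING):
    """Bucket mesh points into spacing-sized cells; one waypoint per cell."""
    cells = {}
    for x, y, z in points:
        key = (round(x / spacing), round(y / spacing), round(z / spacing))
        cells.setdefault(key, []).append((x, y, z))
    # use each cell's centroid as the inspection waypoint
    waypoints = []
    for pts in cells.values():
        n = len(pts)
        waypoints.append(tuple(sum(c) / n for c in zip(*pts)))
    return waypoints

# toy hull patch: a 1 m x 0.5 m plane sampled every 2 cm
patch = [(i * 0.02, j * 0.02, 0.0) for i in range(51) for j in range(26)]
wps = coverage_waypoints(patch)   # far fewer waypoints than sample points
```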

Read more >

US Navy is also developing autonomous underwater hull-cleaning robots. The Robotic Hull Bio-inspired Underwater Grooming tool, or Hull BUG, is being developed by the US Office of Naval Research (ONR) and SeaRobotics.

The Hull BUG has four wheels, and attaches itself to the underside of ships using a negative pressure device that creates a vortex between the BUG and the hull. Much like a robotic vacuum cleaner, lawnmower or floor cleaner, the idea is that once it’s put in place, it can set about getting the job done without any outside control.

Onboard sensors allow it to steer around obstacles, and a fluorometer lets it detect biofilm, the goop in which barnacles and other greeblies settle. Once it detects biofilm, powerful brushes on its underside are activated, and the film is scrubbed off. In this way, it is intended more for the prevention of barnacles than for their removal. Initial tests have shown it to be very effective.
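The “detect biofilm, then scrub” behavior is essentially a threshold controller, and can be sketched as one with hysteresis so the brushes don’t chatter on noisy readings. The thresholds and the readings below are invented for the sketch; the real Hull BUG controller is not public in this detail.

```python
# Toy sketch: map a stream of fluorometer readings to brush on/off states.

ON_THRESHOLD = 0.6   # fluorescence level taken to indicate biofilm
OFF_THRESHOLD = 0.4  # lower level required before the brushes stop

def brush_states(readings):
    """Return the brush state (True = scrubbing) after each reading."""
    on = False
    states = []
    for level in readings:
        if not on and level >= ON_THRESHOLD:
            on = True                 # biofilm detected: start scrubbing
        elif on and level <= OFF_THRESHOLD:
            on = False                # surface clean again: stop
        states.append(on)
    return states

states = brush_states([0.2, 0.5, 0.7, 0.5, 0.3, 0.8])
# brushes come on at 0.7, stay on through 0.5, stop at 0.3, restart at 0.8
```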

Read more >

Biologically accurate robotic legs get the gait right


Very impressive video of the biologically accurate robotic legs in action.

July 10, 2012

The machine comprises simplified versions of the human neural, musculoskeletal and sensory feedback systems.

The robotic legs are unique in that they are controlled by a crude equivalent of the central pattern generator (CPG) – a neural network located in the spinal cord at the abdominal level and responsible for generating rhythmic muscle signals. These signals are modulated by the CPG as it gathers information from different body parts responding to external stimuli. As a result, we are able to walk without ever giving the activity much thought.

The most basic form of a CPG is called a half center and is made up of two neurons rhythmically alternating in producing a signal. An artificial version of a half center produces signals and gathers feedback from sensors in the robotic limbs, such as load sensors that notice when the angle of the walking surface has shifted.
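The half-center idea above can be sketched as two mutually inhibiting neurons with adaptation (a Matsuoka-style oscillator): each neuron suppresses the other, and its own fatigue eventually hands activity back, producing the alternating rhythm. All constants and the Euler integration here are illustrative assumptions, not the controller actually used in the robotic legs.

```python
# Minimal half-center sketch: two neurons, mutual inhibition, adaptation.

def half_center(steps=4000, dt=0.01, tau=0.25, tau_a=0.5,
                beta=2.5, w=2.5, drive=1.0):
    """Simulate two mutually inhibiting neurons; return their output pairs."""
    x = [0.1, 0.0]   # membrane states (slightly asymmetric start)
    a = [0.0, 0.0]   # adaptation (fatigue) states
    out = []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]          # rectified firing rates
        for i in range(2):
            j = 1 - i
            # each neuron is driven, fatigues, and is inhibited by the other
            dx = (-x[i] - beta * a[i] - w * y[j] + drive) / tau
            da = (y[i] - a[i]) / tau_a
            x[i] += dx * dt
            a[i] += da * dt
        out.append((y[0], y[1]))
    return out

signal = half_center()
# after a transient, the two outputs alternate: while one neuron fires,
# the other stays (nearly) silent, then they swap
```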

Read more >

How NASA plans to land a 2,000-pound rover on Mars


A month from now, the Mars Science Laboratory (Curiosity) rover is set to touch down on the surface of the Red Planet and begin its mission to learn more about the possible existence of life – past or present. Curiosity will attempt to touch down using a complex and unusual landing sequence unlike any other used for previous Mars rovers … here’s how the plan will unfold.

The entire process will be executed completely autonomously, with no human intervention.

The technology behind the landing is an interplay of hardware and software. On the software side, the computer algorithms that guide each part of the craft can be tested from Earth, simulations can be run, and new software updates can be installed – the final stable version was uploaded in the last few days of May.

Testing the hardware was not nearly as easy, since the right conditions can’t be recreated on Earth…

July 5, 2012

Read more >

Shimi the dancing robotic smartphone dock


Researchers at Georgia Tech’s Center for Music Technology have developed a one-foot-tall (30 cm) smartphone-enabled robot called Shimi, which they describe as an interactive “musical buddy.”

Shimi is going to be unveiled tomorrow (June 28, 2012) at the Google I/O conference in San Francisco.

Shimi can analyze a beat clapped by a user and scan the phone’s musical library to play the song that best matches the rhythm and tempo. The robot will then dance, tapping its foot and moving its head in time with the beat. With the speakers positioned as Shimi’s ears, the robot can also use the connected phone’s camera and face-detection software to move its head so that the sound follows the listener around the room.
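The beat-matching behavior described above can be illustrated with a small sketch: estimate tempo from the timestamps of the user’s claps, then pick the library song with the nearest tempo. The clap times, song library, and median-interval method are invented for illustration; they are not Shimi’s actual algorithm.

```python
# Toy sketch: clapped rhythm -> tempo estimate -> nearest-tempo song.
from statistics import median

def claps_to_bpm(clap_times):
    """Estimate tempo (beats per minute) from clap timestamps in seconds."""
    intervals = [b - a for a, b in zip(clap_times, clap_times[1:])]
    return 60.0 / median(intervals)  # median is robust to one sloppy clap

def best_match(bpm, library):
    """Return the (title, tempo) pair whose tempo is nearest the estimate."""
    return min(library, key=lambda song: abs(song[1] - bpm))

library = [("Slow Ballad", 72), ("Mid Groove", 100), ("Fast Punk", 170)]
bpm = claps_to_bpm([0.0, 0.5, 1.0, 1.52, 2.0])   # roughly 120 BPM clapping
song = best_match(bpm, library)
```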

Future apps in the works will allow users to shake their head when they don’t like the currently playing song and tell Shimi to skip to the next track with a wave of a hand. Again, these gestures are picked up using the phone’s built-in camera. Shimi will also be able to recommend new music based on the user’s song choices.

Shimi was created by Professor Gil Weinberg, director of Georgia Tech’s Center for Music Technology, who hopes third party developers will get on board to expand Shimi’s capabilities further by creating their own apps. He developed the robot in collaboration with Professor Guy Hoffmann from MIT’s Media Lab and IDC in Israel, entrepreneur Ian Campbell and robot designer Roberto Aimi.

“We’ve packed a lot of exciting robotics technology into Shimi,” says Weinberg. “Shimi is actually the product of nearly a decade of musical robotics research.”

June 27, 2012

Read more >

GM studies driver attention in semi-autonomous cars


General Motors researchers, such as Innovation Program Manager Jeremy Salinger, are studying driver behavior in semi-autonomous driving situations. He points out that even in semi-autonomous cars, it is necessary to remain focused on driving and on the road.

However, when the driving process requires less of our active attention, it can become boring simply to watch the semi-autonomous system operate the car.

Self-driving features are moving from concept vehicles to the production line. The 2013 models of the Cadillac XTS and ATS sedans will include a Driver Assist Package, which includes features such as full-speed range adaptive cruise control and automatic emergency braking.

“Driver assist features such as adaptive cruise control and automatic emergency braking are paving the way to self-driving automobiles,” says Salinger. “Some things are coming out this year that are basically the precursors to allowing cars to drive themselves.” These technologies focus on safety features, warning systems and crash avoidance and are the stepping stones that will allow future cars to drive autonomously.

June 21, 2012

Read more >

 

Written by Teresa Escrig

June 27th, 2012 at 3:55 pm