Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘Artificial’ tag

Crucial Technology for AI and Robotics: a Kinect-like sensor is included in a smart-phone

leave a comment

3D model of reality created in Project Tango at Google.

The Kinect sensor was a revolution for the robotics industry, mainly because it provided a relatively inexpensive way to achieve 3D obstacle detection. It returned a set of distances from the sensor's position to the objects in the world.

The person responsible at Microsoft for the development of the Kinect sensor is now in charge of Project Tango at Google. Project Tango integrates a Kinect-like sensor into a smartphone (alongside all the other sensors already included in the smartphone), providing a 3D model of reality. Crucial technology for AI and Robotics.

And also, can you imagine having instant access to wearable extended virtual reality? Instant access to the structure of the world in front of you – where does this road go? What is the structure of this building? Or even: show me where I can buy my favorite pair of jeans in this shopping mall?

And even further: create a 3D model of your body, use it to virtually try on different clothes online (also as 3D models), check the look and fit, make a purchasing decision, drop it into a shopping cart, and have it delivered to your door.

Mmmm… my imagination flies. I'd love to hear where yours goes… Leave comments.

Here is the article (check out the amazing video):

Google announces Project Tango smartphone with Kinect-like 3D imaging sensors [VIDEO]

by Chris Chavez

Google was able to throw everyone a curve ball today with the announcement of Project Tango, their new in-house smartphone prototype outfitted with Kinect-like sensors.

The 5-inch smartphone is being developed by Google’s Advanced Technology and Projects group (ATAP), the same people behind Project Ara. Project Tango is led by Johnny Lee — a man who helped make the Microsoft Kinect possible (makes sense, right?). The goal of Project Tango is to ultimately give mobile devices a “human-scale understanding” of space and motion, allowing users to map the world around them in ways they never thought possible.

Continue reading…

 

Google has given an early prototype of the device to Matterport, which makes computer vision and perceptual computing solutions, like software that maps and creates 3D reconstructions of indoor spaces. Don’t miss the video of the 3D map result in this link! It’s amazing!

 

Open-source humanoid platform from NimbRo to compete in RoboCup’s TeenSize league

leave a comment

Once upon a time, when I finished my PhD dissertation, I went to the IJCAI conference in Kyoto, Japan, where the RoboCup competition was taking place in the same venue. I absolutely fell in love with the Aibo dog and cat robots from Sony that were exhibited at the competition (before they were widely used in the competition itself).

At that event I decided that I wanted to apply the results of my PhD to bring intelligence to robots. And that is what I did. I started a research group at Jaume I University, and my students played with the Aibos for years. While working with my students on one of the challenges of the RoboCup competition, I put all the dots together, and 10 years after my PhD was finished, the seed of Cognitive Robots was born. That technology became patent-pending for our company and, as far as we know, is still ahead of the rest of the technology that brings intelligence to robots.

I have great memories of the RoboCup competition. I agree that it is a great playground for integrating and testing technologies in the areas of AI and Robotics. And it is certainly much more than a toy test.

October 8, 2012

University of Bonn’s Team NimbRo is commercializing a humanoid platform, the NimbRo-OP, for €20,000 (US$26,000) to compete in RoboCup‘s TeenSize league. It sounds rather expensive, but it will save teams the trouble of prototyping their own, along with the untold hours of research and development that would normally require.

Read more >

Human-Computer (or Robot) interface through Rough Sketches

leave a comment

A team from Rhode Island’s Brown University and the Technical University of Berlin has created software that analyzes users’ crude, cartoony sketches and figures out what it is they’re trying to draw.

To develop the system, the researchers started with a database made up of 250 categories of annotated photographs. Then, using Amazon’s Mechanical Turk crowd-sourcing service, they hired people to make rough sketches of objects from each of those categories. The resulting 20,000 sketches were then subjected to recognition and machine learning algorithms, in order to teach the system what general sort of sketches could be attributed to which categories. After seeing numerous examples of how various people drew a rabbit, for instance, it would learn that combinations of specific shapes usually meant “rabbit.”
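The idea behind the trained system can be sketched in a few lines: represent each drawing as a feature vector, then assign the category whose examples lie closest. The feature values and categories below are invented for illustration; the team's actual system learned from 20,000 crowd-sourced sketches across 250 categories.

```python
import math

# Toy "feature vectors" standing in for sketch descriptors (hypothetical
# values; a real system would extract these from the drawn strokes).
training_data = {
    "rabbit": [[0.9, 0.1, 0.8], [0.85, 0.2, 0.75]],
    "car":    [[0.1, 0.9, 0.2], [0.2, 0.8, 0.3]],
}

def classify(sketch_features):
    """Return the category whose training examples are closest on average."""
    def avg_dist(examples):
        return sum(math.dist(sketch_features, e) for e in examples) / len(examples)
    return min(training_data, key=lambda cat: avg_dist(training_data[cat]))

print(classify([0.88, 0.15, 0.7]))  # a rabbit-like sketch -> "rabbit"
```

With enough labeled examples per category, even this simple nearest-neighbor idea captures the intuition that "combinations of specific shapes usually mean rabbit."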

Check out the video showing the performance of the application. It is amazing! This technology has a broad and very deep implication in many areas, robotics is just one.

The research is available online, together with a library of sample sketches and other materials. The team is currently considering a ‘Pictionary’-type open-source game to expand the system’s drawing reference library.

Read More:

Will elderly embrace robot health care?

one comment

By THOMAS ROGERS, 08/20/2012

“Full robots with arms are still very expensive,” says Ashutosh Saxena, a professor in the department of computer science at Cornell, “but they are getting cheaper by the day.” He predicts that armless robots — capable of communicating verbally with the elderly and observing them in case of accidents — will hit the market within the next five years.

There’s just one hiccup: the elderly themselves.

Despite manufacturers’ hopes, robotic technology has proven to be alienating for many older people — even, the BBC reports, in Japan, a country with an intense, long-term love of all things robotic.

Alexander Libin, scientific director of simulation and education research at MedStar Health Research Institute, argues that one of the biggest challenges is that the elderly need to be able to communicate easily with robots. Although many robots (and mobile phones) can now recognize voice commands, nonverbal cues pose a much bigger challenge. Libin, who has worked extensively on robot-patient interaction, believes that touch-sensitive technology — like that used by Paro, the therapeutic seal robot — will play a large role in making robots palatable to seniors.

“The Japanese want robots to be like them,” says Libin, noting Japan’s long tradition of treating inanimate objects like living beings. In the United States, we’re more comfortable treating machines as machines. “We want things we can control.”

The path toward robot acceptance may also require patience. Like other forms of social change, robot acceptance may simply require one generation to replace the previous one.

Read more >

Surfing Robot Tells Scientists Where the Sharks Are

one comment

Researchers at Stanford University have developed a Wave Glider robot which tracks the migratory patterns of great white sharks off the California coast, near San Francisco.

Stanford marine scientists have spent the past 12 years tracking the migratory patterns of sharks by placing acoustic tags on the animals that send a signal to a receiver when they pass within 1,500 feet.
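The tag-and-receiver setup described above boils down to a proximity check: a tagged shark registers when it comes within acoustic range. A minimal sketch of that logic (coordinates and positions are invented for illustration):

```python
import math

DETECTION_RANGE_FT = 1500  # acoustic tags register within this range

def tag_detected(receiver_xy, shark_xy):
    """True if a tagged shark is close enough for its ping to register."""
    return math.dist(receiver_xy, shark_xy) <= DETECTION_RANGE_FT

print(tag_detected((0, 0), (900, 1000)))   # ~1,345 ft away -> True
print(tag_detected((0, 0), (1200, 1200)))  # ~1,697 ft away -> False
```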


Their goal is to use revolutionary technology that increases our capacity to observe our oceans and census populations, improve fisheries management models, and monitor animal responses to climate change.

The surfing robot receives acoustic signals from the sharks’ tags and then propels itself through the water to follow the animals unobtrusively. The surfboard section acts like a WiFi hotspot, pinging the research team with the latest data about the sharks’ movements.

The Stanford team has released a new iPhone and iPad app called Shark Net to model the sharks’ patterns and offer real-time notifications when the robot crosses paths with certain sharks. The idea behind the app is to allow everyone to explore the places where these sharks live, and to get to know them just like their friends on Facebook.

Read more >


Willow Garage’s PR2 robot giving the disabled independence

leave a comment

Great job from Willow Garage. This is a nice example of the utility of robots in the near future. The PR2 is too expensive to be acquired by a regular disabled citizen, but you get the idea…

Read more >

 

Cloud Robotics: benefits to adopt, drawbacks to solve

22 comments

For us humans, with our non-upgradeable, offline meat brains, the possibility of acquiring new skills by connecting our heads to a computer network is still science fiction. It is a reality for robots.

Cloud robotics allows a robot to access vast amounts of processing power and data, offload compute-intensive tasks like image processing and voice recognition, and even download new skills instantly, Matrix-style.
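The core offloading decision can be illustrated with a toy dispatcher: heavy jobs go to the cloud, light ones stay on the robot's onboard CPU. The task names, cost estimates, and threshold below are all hypothetical, not taken from any real robotics stack.

```python
# Offload only when a task is expensive enough to justify the network hop.
CLOUD_THRESHOLD_MFLOPS = 100

TASK_COST_MFLOPS = {        # rough per-task compute estimates (made up)
    "image_processing": 5000,
    "voice_recognition": 800,
    "bump_sensor_poll": 1,
}

def dispatch(task):
    """Route a task to the cloud or the onboard CPU by estimated cost."""
    cost = TASK_COST_MFLOPS[task]
    return "cloud" if cost > CLOUD_THRESHOLD_MFLOPS else "onboard"

print(dispatch("image_processing"))  # cloud
print(dispatch("bump_sensor_poll"))  # onboard
```

A real system would also weigh network latency and availability, which is exactly where the drawbacks discussed in this post come in.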

There is an excellent post at IEEE Spectrum about cloud robotics that I absolutely recommend reading for anyone who wants to know what is next in the robotics world.

Here are the benefits I see in using cloud-enabled robots: Read the rest of this entry »

Hanson Robokind unveils latest version of its Zeno humanoid robot

leave a comment

August 2, 2012

Built by Hanson Robotics, Zeno features open-platform software that allows custom tinkering by the purchaser, but the robot currently comes programmed for a number of functions, as well as speaking 26 languages. In the video, it asserts that it can carry on “conversations” and show “compassion.” It can also “deliver education curricula,” provide autism treatment therapy and answer questions. It demonstrated the last of these by fielding spoken questions on astronomy, sports and films.

Zeno will be joined by a “female” counterpart called Alice in August of 2012. Neither, however, will be selling for the US$300 that Hanson had hoped for five years ago. Though no price has been set, current Hanson RoboKind robots are valued on its website at up to US$16,750. However, the company is still keen on breaking into the mass market and plans to roll out smaller, cheaper “cousins” for Zeno sometime in 2013.

Read more >

There is a huge amount of work done in this platform. Congratulations to the team. This platform brings robotics closer to the public.

 

RP-VITA, the new iRobot Telepresence robot doctor

leave a comment

July 26, 2012

iRobot and InTouch Health are working under a partnership and joint development and licensing agreement to develop the RP-VITA, which will allow doctors and other health specialists to not only visit patients remotely, but to robotically navigate through wards, access patient records and even carry out examinations.

The RP-VITA is a combination of iRobot’s Ava mobile robotics platform and the InTouch Telemedicine System. This produces what the partners refer to as an “expandable telemedicine technology platform.”

It’s controlled by a simple iPad interface and has an enhanced autonomous navigation capability. That means it can be sent where needed with a single click. Using its Obstacle Detection Obstacle Avoidance (ODOA) system, the robot can proceed to its location on its own, navigating the hospital quickly, safely and accurately.

The robot allows doctors and staff real-time access to important clinical data from the patient’s online files, but it also can transmit live information by means of its built-in electronic stethoscope or by linking to diagnostic devices such as otoscopes and ultrasound machines.

The RP-VITA is being unveiled to the public at the InTouch Health 7th Annual Clinical Innovations Forum (July 26-28, 2012) in Santa Barbara, CA.

Read more >

 

Cognitive Robots includes Common-Sense Knowledge and Reasoning into their Robotics and Computer Vision solutions

5 comments

Representation, reasoning and learning are the basic principles of human intelligence. The emulation of human intelligence has been the aim of Artificial Intelligence since its origins in 1956.

In fact, converting raw data into information (data in the context of other data) and hence into knowledge (information in the context of other information) is critical for understanding activities, behaviors, and, in general, the world we try to model. In both the Robotics and Computer Vision areas we try to model the real world in which humans operate.

The type of knowledge that Robotics and Computer Vision need to capture is common-sense knowledge. Counterintuitively, common-sense knowledge is more difficult to model than expert knowledge, which can be modeled quite easily by expert systems (a more or less closed research area since the 1970s).

In both the Robotics and Computer Vision areas, probabilistic and Bayesian models have historically been used as the way to represent, reason about and learn from the world. These methods have provided very good initial results. The problem is that they have never been scalable. That is why there is still no commercial intelligent robot with the full ability to serve people. Although many preliminary solutions involving artificial vision exist, the percentage of false positives and negatives is still too high to consider them completely reliable, and therefore artificial vision remains an open research area.
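To make the probabilistic approach concrete, here is the single building block those methods repeat millions of times: a Bayesian update of belief after one sensor reading. The sensor-model numbers are invented for illustration.

```python
# One Bayesian sensor update: revise the belief that a grid cell is an
# obstacle after the range sensor reports a hit. All probabilities are
# hypothetical illustration values.
prior_occupied = 0.5          # prior belief the cell is an obstacle
p_hit_given_occupied = 0.9    # sensor fires if the cell is occupied
p_hit_given_free = 0.2        # false-positive rate on free cells

# Bayes' rule: P(occupied | hit) = P(hit | occupied) P(occupied) / P(hit)
evidence = (p_hit_given_occupied * prior_occupied
            + p_hit_given_free * (1 - prior_occupied))
posterior_occupied = p_hit_given_occupied * prior_occupied / evidence

print(round(posterior_occupied, 3))  # 0.818
```

Each update is cheap, but maintaining and combining such beliefs over every cell, object and hypothesis in a rich environment is where the scalability problem mentioned above shows up.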

The problems detected in the probabilistic approaches have been twofold: Read the rest of this entry »