Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘cognitive vision’ tag

What are the benefits of Artificial Intelligence in Robotics?

one comment

Happy New Year to all!  It’s been a while since my last post. Too busy. Now, I’m back.

————————————————————————————-

Robotics is not only a research field within artificial intelligence, but a field of application, one where all areas of artificial intelligence can be tested and integrated into a final result.

Amazing humanoid robots exhibit elegant and smooth motion capable of walking, running, and going up and down stairs.  They use their hands to protect themselves when falling, and to get up afterward.  They’re an example of the tremendous financial and human capital that is being devoted to research and development in the field of electronics, control and the design of robots.

Very often, the behavior of these robots consists of a fixed number of pre-programmed instructions that are repeated regardless of any changes in the environment. These robots have no autonomy and no adaptation to the changing environment, and therefore do not show intelligent behavior. We are amazed by the technology they provide, which is fantastic! But we cannot infer that, just because these robots are physically so realistic and their movements so precise and gentle, they are able to do what we (people) do. Read the rest of this entry »

Cognitive Robots includes Common-Sense Knowledge and Reasoning into their Robotics and Computer Vision solutions

5 comments

Representation, reasoning and learning are the basic principles of human intelligence. The emulation of human intelligence has been the aim of Artificial Intelligence since its origins in 1956.

In fact, converting raw data into information (data in the context of other data), and hence into knowledge (information in the context of other information), is critical for understanding activities, behaviors, and, in general, the world we try to model. In both Robotics and Computer Vision, we try to model the real world in which humans operate.

The type of knowledge that Robotics and Computer Vision need to obtain is common-sense knowledge. Counterintuitively, common-sense knowledge is more difficult to model than expert knowledge, which can be modeled fairly easily by expert systems (a more or less closed research area since the 1970s).

In both Robotics and Computer Vision, probabilistic and Bayesian models have historically been used to represent, reason about, and learn from the world. These methods have provided very good initial results. The problem is that they have never been scalable. That is why there is still no commercial intelligent robot with the full ability to serve people. Although many preliminary solutions including artificial vision exist, the percentage of false positives and negatives is still too high to consider them completely reliable, and artificial vision therefore remains an open research area.
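To make the probabilistic approach concrete, here is a minimal sketch (not from the post; the sensor-model numbers are illustrative assumptions) of the kind of Bayesian belief update such systems rest on: a robot revising its belief that an object is present from a sequence of noisy detections.

```python
# Minimal Bayesian update sketch: P(present | observation) from a noisy
# detector. The detection probabilities below are assumed for illustration.

def bayes_update(prior, p_detect_if_present, p_detect_if_absent, detected):
    """Return the posterior probability that the object is present."""
    if detected:
        like_present, like_absent = p_detect_if_present, p_detect_if_absent
    else:
        like_present = 1.0 - p_detect_if_present
        like_absent = 1.0 - p_detect_if_absent
    numerator = like_present * prior
    evidence = numerator + like_absent * (1.0 - prior)
    return numerator / evidence

belief = 0.5  # uninformative prior
for observation in [True, True, False, True]:
    belief = bayes_update(belief, 0.9, 0.2, observation)
print(round(belief, 3))  # → 0.919
```

Each update is cheap, but note how the model already needs hand-specified sensor probabilities; scaling this to full scene understanding is exactly where the post says these methods struggle.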

The problems detected in the probabilistic approaches have been twofold: Read the rest of this entry »

This Little Robot Could Totally Transform The Way Humanity Shops

leave a comment

by Jill Krasny  Jul. 20, 2012

AndyVision is the Future of Retail Project at Carnegie Mellon University. The project involves in-store digital signage that lets customers browse the store’s 3D planograms, as well as an autonomous store-operations robot that assists in inventory management, including out-of-stock detection.

AndyVision manages inventory, but his influence might go further than that, reports Motherboard’s Adam Clark Estes. Researchers say the lightweight, red-hoodied robot was built to “transform the shopping experience.”

Here, Estes explains how the “mechanized messenger” works:

“With the help of a video camera and an onboard computer that combines image-processing with machine learning algorithms, it can patrol the aisles counting stock and scanning for misplaced items … The data from the inventory scans are all sent to a large touchscreen, where customers can browse through what’s available in the store.”
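The scan-and-compare idea in the quote can be sketched in a few lines. This is a hypothetical illustration (the slot/product names and the `audit_shelf` function are invented, not from the CMU project): compare what the robot recognizes on a shelf against the planogram to flag out-of-stock and misplaced items.

```python
# Hypothetical inventory audit: planogram says what SHOULD be in each shelf
# slot; detected says what the vision system actually recognized there.

def audit_shelf(planogram, detected):
    """Each argument maps shelf slot -> product name (None if empty)."""
    out_of_stock = [slot for slot, item in planogram.items()
                    if item is not None and detected.get(slot) is None]
    misplaced = [slot for slot, item in detected.items()
                 if item is not None and planogram.get(slot) != item]
    return out_of_stock, misplaced

oos, misplaced = audit_shelf(
    {"A1": "cereal", "A2": "soup"},   # planogram
    {"A1": None, "A2": "cereal"},     # what the camera saw
)
# oos == ["A1"], misplaced == ["A2"]
```

The hard part, of course, is the recognition step that fills in `detected`; the bookkeeping afterward is straightforward.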

Read more >

Autonomous Underwater robots – another very active market area for robotics

leave a comment

With the ultimate goal of designing completely autonomous robots that can navigate and map cloudy underwater environments without any prior knowledge of the environment, and detect mines as small as 10 cm in diameter, researchers at HoverGroup (MIT) have come up with algorithms to program a robot called the Hovering Autonomous Underwater Vehicle (HAUV).

To provide a detailed sweep of a ship’s hull, the researchers took a two-stage approach. First, the robot is programmed to swim in a square around the ship’s hull at a safe distance of 10 meters (33 ft), using its sonar camera to gather data that is used to produce a grainy point cloud. Although a ship’s large propeller can be identified at this low resolution, the data isn’t detailed enough to make out a small mine.

Additionally, the point cloud may not necessarily tell the robot where a ship’s structure begins and ends – a problem if it wants to avoid colliding with a ship’s propellers. To generate a three-dimensional, “watertight” mesh model of the ship, the researchers translated this point cloud into a solid structure by adapting computer-graphics algorithms to the sonar data.

Once the robot has a solid structure to work with, the robot moves onto the second stage. This sees the robot programmed to swim closer to the ship, with the idea of covering every point in the mesh at spaces of 10 centimeters apart.
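The second-stage goal of covering every point of the mesh at 10 cm spacing amounts to a coverage-path-planning problem. A toy sketch of the idea, under assumed simplifications (a flat rectangular hull patch and a lawnmower sweep; none of this is from the MIT algorithms):

```python
# Toy coverage planner: waypoints over a flat hull patch so every point
# lies within one 10 cm grid step of an inspection waypoint.

def coverage_waypoints(width_m, height_m, spacing_m=0.10):
    """Return (x, y) waypoints sweeping the patch in alternating rows."""
    waypoints = []
    y, row = 0.0, 0
    while y <= height_m + 1e-9:
        xs = [i * spacing_m for i in range(int(width_m / spacing_m) + 1)]
        if row % 2 == 1:          # reverse every other row (lawnmower pattern)
            xs = xs[::-1]
        waypoints.extend((x, y) for x in xs)
        y += spacing_m
        row += 1
    return waypoints

path = coverage_waypoints(1.0, 0.5)  # a 1 m x 0.5 m patch
```

On a real curved hull the waypoints would be generated over the reconstructed 3D mesh rather than a flat grid, which is why the watertight model from stage one matters.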

Read more >

US Navy is also developing autonomous underwater hull-cleaning robots. The Robotic Hull Bio-inspired Underwater Grooming tool, or Hull BUG, is being developed by the US Office of Naval Research (ONR) and SeaRobotics.

The Hull BUG has four wheels, and attaches itself to the underside of ships using a negative pressure device that creates a vortex between the BUG and the hull. Much like a robotic vacuum cleaner, lawnmower or floor cleaner, the idea is that once it’s put in place, it can set about getting the job done without any outside control.

Onboard sensors allow it to steer around obstacles, and a fluorometer lets it detect biofilm, the goop in which barnacles and other greeblies settle. Once it detects biofilm, powerful brushes on its underside are activated, and the film is scrubbed off. In this way, it is intended more for the prevention of barnacles, than for their removal. Initial tests have shown it to be very effective.
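The biofilm-triggered scrubbing described above is, at its core, a simple sense-act loop. A toy sketch (the threshold value and function names are assumptions, not from the ONR/SeaRobotics design):

```python
# Toy sense-act loop for the Hull BUG idea: brushes activate only where the
# fluorometer reading indicates biofilm. Threshold is hypothetical.

BIOFILM_THRESHOLD = 0.4

def scrub_plan(fluorometer_readings, threshold=BIOFILM_THRESHOLD):
    """Return a brush on/off command for each sampled hull position."""
    return [reading > threshold for reading in fluorometer_readings]

commands = scrub_plan([0.1, 0.55, 0.7, 0.2])
# commands == [False, True, True, False]
```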

Read more >

How NASA plans to land a 2000 pound rover on Mars

leave a comment

A month from now, the Mars Science Laboratory (Curiosity) rover is set to touch down on the surface of the Red Planet and begin its mission to learn more about the possible existence of life – past or present. Curiosity will attempt to touch down using a complex and unusual landing sequence unlike any other used for previous Mars rovers … here’s how the plan will unfold.

The entire process will be executed completely autonomously, with no human intervention.

The technology behind the landing is an interplay of hardware and software. On the software side, the computer algorithms that guide each part of the craft can be tested from Earth, simulations can be run, and new software updates can be installed – the final stable version was uploaded in the last few days of May.

Testing the hardware was not nearly as easy, since the right conditions can’t be recreated on Earth…

July 5, 2012

Read more >

The rapidly evolving world of robotic technology

leave a comment

June 25 (Bloomberg) — Stanford University’s Marina Gorbis discusses the rapidly evolving world of robotic technology and how humans will interact with robots, and learn from them, over the next five to ten years. She speaks with Adam Johnson on Bloomberg Television’s “Bloomberg Rewind.” (Source: Bloomberg)

Marina Gorbis is the Executive Director of Institute for the Future.

Marina’s biography – During her tenure at IFTF, and previously with SRI International, Marina has worked with hundreds of organizations in business, education, government, and philanthropy, bringing a future perspective to improve innovation capacity, develop strategies, and design new products and services. A native of Odessa, Ukraine, Marina is particularly suited to see things from a global perspective. She has worked all over the world and feels equally at home in Silicon Valley, Europe, India, or Kazakhstan. Before becoming IFTF’s Executive Director in 2006, Marina created the Global Innovation Forum, a project comparing innovation strategies in different regions, and she founded the Global Ethnographic Network (GEN), a multi-year ethnographic research program aimed at understanding the daily lives of people in Brazil, Russia, India, China, and Silicon Valley. She also led IFTF’s Technology Horizons Program, focusing on the interaction between technology and social organizations. She has been a guest blogger on BoingBoing.net and writes for IFTF and major media outlets. She is a frequent speaker on future organizational, technology, and social issues. Marina holds a Master’s Degree from the Graduate School of Public Policy at UC Berkeley.

DARPA looks at developing robots to sew uniforms

leave a comment

U.S. military uniforms may not be the most fashionable of clothes, but there are a lot of them. Every year, the Pentagon spends US$4 billion on uniforms, and over 50,000 people are employed in their production. In an effort to cut costs and increase efficiency, DARPA has awarded a US$1.25 million contract to SoftWear Automation, Inc. to develop “complete production facilities that produce garments with zero direct labor” – in other words, a robot factory that can make uniforms from beginning to end without human operators.

Sewing is a very complex task. I would love to know how they are going to do it!

 

June 18, 2012

Read more >

The Future of Robotics: personal point of view

2 comments

The future of robotics is advancing towards the incorporation of increasing intelligence.

Intelligence includes, among other things, perception (interpreting the environment and extracting the most relevant information from it), reasoning (inferring new knowledge from what we perceive, e.g. if we know that A implies B, and B implies C, then we can infer that A implies C), learning (as many people have pointed out in this thread already), and decision making to implement solutions for particular applications (such as security, companion, and tele-presence robots, autonomous scrubber machines, vacuum cleaners, etc.).
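The transitivity example above (A implies B, B implies C, therefore A implies C) can be sketched as a tiny forward-chaining step: repeatedly derive new implications from known ones until nothing new appears. This is a minimal illustration, not the method used in the CR-B100:

```python
# Minimal transitive closure over implication rules: from (A -> B) and
# (B -> C), derive (A -> C), and keep going until a fixed point is reached.

def transitive_closure(rules):
    """Expand a set of (premise, conclusion) pairs until no new pair appears."""
    known = set(rules)
    changed = True
    while changed:
        changed = False
        for a, b in list(known):
            for c, d in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))
                    changed = True
    return known

facts = transitive_closure({("A", "B"), ("B", "C")})
# facts now also contains ("A", "C")
```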

At Cognitive Robots, we have developed an embryonic brain called the “Cognitive Brain for Service Robotics” – CR-B100 – which integrates all four of these aspects in patent-pending software.

We have tested the “brain” in several “bodies” with excellent results.

Please, check this post for more information.

We are actively looking for partnerships and investment capital to bring our company Cognitive Robots to the next level.

If you know of a visionary mind with capital to invest, please, pass that person my email: mtescrig@c-robots.com

We are planning on going to crowdfunding resources like KickStarter and offering our own robotic platform (brain and body) for research and a smaller version for education. What are your thoughts on that?

Cognitive Robots enhances Kompai’s capabilities by incorporating its “Cognitive Brain for Service Robotics”

leave a comment

Since February 2011, Cognitive Robots and Robosoft have been collaborating within the framework of a European project, ECHORD C-Kompai. The objective of the project is to enhance the companion robot Kompai with the cognitive capabilities provided by the “Cognitive Brain for Service Robotics ®” – CR-B100 – of Cognitive Robots.

The intent behind the improvement of the Kompai platform is to better serve the users – the elderly.

We have identified 3 aspects of the Kompai’s functionality to be improved in this project:

Read the rest of this entry »

Intelligent Cutting and Deboning System

leave a comment

The Georgia Tech Research Institute (GTRI) has developed an Intelligent Cutting and Deboning System. Using 3D imaging technology, this robot can debone an entire chicken with the skill of a human butcher, and it has the potential to save the poultry industry millions of dollars by reducing costs and waste.

Not very idyllic, but very practical.

Read more >