Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘Artificial’ tag

This Little Robot Could Totally Transform The Way Humanity Shops


by Jill Krasny, Jul. 20, 2012

AndyVision is part of the Future of Retail Project at Carnegie Mellon University. The project combines in-store digital signage, which lets customers browse the store’s 3D planograms, with an autonomous store-operations robot that assists with inventory management, including out-of-stock detection.

AndyVision manages inventory, but its influence might extend further than that, reports Motherboard’s Adam Clark Estes. Researchers say the lightweight, red-hoodied robot was built to “transform the shopping experience.”

Here, Estes explains how the “mechanized messenger” works:

“With the help of a video camera and an onboard computer that combines image-processing with machine learning algorithms, it can patrol the aisles counting stock and scanning for misplaced items … The data from the inventory scans are all sent to a large touchscreen, where customers can browse through what’s available in the store.”
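
To make the inventory-scan step concrete, here is a minimal sketch of how detected product labels might be checked against a planogram to flag out-of-stock, low-stock, or misplaced items. The planogram data, product names, and scan_shelf function are illustrative assumptions, not CMU’s actual code.

```python
# Hypothetical AndyVision-style stock check: a classifier has already
# labeled shelf-image regions; we compare label counts against the
# store's planogram. All data and names here are invented stand-ins.

from collections import Counter

PLANOGRAM = {"shelf_3A": {"cereal_x": 12, "cereal_y": 8}}  # expected counts

def scan_shelf(shelf_id, detections):
    """detections: product labels predicted for each shelf region."""
    counts = Counter(detections)
    report = []
    for product, expected in PLANOGRAM[shelf_id].items():
        seen = counts.get(product, 0)
        if seen == 0:
            report.append((product, "out of stock"))
        elif seen < expected:
            report.append((product, f"low stock: {seen}/{expected}"))
    # anything detected that the planogram doesn't list is misplaced
    for product in counts:
        if product not in PLANOGRAM[shelf_id]:
            report.append((product, "misplaced"))
    return report

print(scan_shelf("shelf_3A", ["cereal_x"] * 5 + ["soup_z"]))
```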

Read more >

Autonomous Underwater robots – another very active market area for robotics


With the ultimate goal of designing completely autonomous robots that can navigate and map murky underwater environments without any prior knowledge, and that can detect mines as small as 10 cm in diameter, researchers at HoverGroup (MIT) have come up with algorithms to program a robot called the Hovering Autonomous Underwater Vehicle (HAUV).

To provide a detailed sweep of a ship’s hull, the researchers took a two-stage approach. First, the robot is programmed to swim in a square around the ship’s hull at a safe distance of 10 meters (33 ft), using its sonar camera to gather data that is used to produce a grainy point cloud. Although a ship’s large propeller can be identified at this low resolution, the scan isn’t detailed enough to make out a small mine.

Additionally, the point cloud may not necessarily tell the robot where a ship’s structure begins and ends – a problem if it wants to avoid colliding with a ship’s propellers. To generate a three-dimensional, “watertight” mesh model of the ship, the researchers translated this point cloud into a solid structure by adapting computer-graphics algorithms to the sonar data.

Once the robot has a solid structure to work with, it moves on to the second stage: it is programmed to swim closer to the ship, covering every point in the mesh at intervals of 10 centimeters.
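
To give a rough feel for that coverage requirement, the sketch below generates a boustrophedon (“lawnmower”) sweep over a hull patch, flattened to 2D for simplicity, with 10 cm track spacing. The coverage_path function and patch dimensions are assumptions for illustration; the MIT planner works over the full 3D mesh.

```python
# Minimal coverage-path sketch: sweep a flat width x height patch in
# alternating rows spaced 10 cm apart, so every point is inspected.

def coverage_path(width_m, height_m, spacing_m=0.10):
    """Boustrophedon sweep over a width x height patch at given spacing."""
    nx = round(width_m / spacing_m) + 1
    ny = round(height_m / spacing_m) + 1
    xs = [i * spacing_m for i in range(nx)]
    path = []
    for row in range(ny):
        cols = xs if row % 2 == 0 else xs[::-1]   # alternate direction
        path.extend((x, row * spacing_m) for x in cols)
    return path

waypoints = coverage_path(2.0, 1.0)   # a 2 m x 1 m patch of hull
print(len(waypoints), "waypoints; first three:", waypoints[:3])
```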

Read more >

The US Navy is also developing autonomous underwater hull-cleaning robots. The Robotic Hull Bio-inspired Underwater Grooming tool, or Hull BUG, is being developed by the US Office of Naval Research (ONR) and SeaRobotics.

The Hull BUG has four wheels and attaches itself to the underside of ships using a negative-pressure device that creates a vortex between the BUG and the hull. The idea is that, much like a robotic vacuum cleaner, lawnmower or floor cleaner, once it’s put in place it can set about getting the job done without any outside control.

Onboard sensors allow it to steer around obstacles, and a fluorometer lets it detect biofilm, the goop in which barnacles and other greeblies settle. Once it detects biofilm, powerful brushes on its underside are activated and the film is scrubbed off. In this way, it is intended more for the prevention of barnacles than for their removal. Initial tests have shown it to be very effective.
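
As a rough sketch of that behavior, a grooming loop might alternate between steering, brushing, and advancing based on the fluorometer reading. Everything here, sensor stubs, threshold, and function names, is an invented placeholder; the actual ONR/SeaRobotics software is not public.

```python
# Toy control loop in the spirit of the Hull BUG's grooming behavior.

import random

BIOFILM_THRESHOLD = 0.4           # assumed normalized fluorometer reading

def read_fluorometer():           # placeholder for the real sensor
    return random.random()

def obstacle_ahead():             # placeholder for onboard obstacle sensing
    return random.random() < 0.1

def grooming_step():
    if obstacle_ahead():
        return "turn"             # steer around the obstacle
    if read_fluorometer() > BIOFILM_THRESHOLD:
        return "brush"            # biofilm detected: activate the brushes
    return "advance"              # keep sweeping the hull

random.seed(0)
print([grooming_step() for _ in range(8)])
```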

Read more >

Biologically accurate robotic legs get the gait right


A very impressive video of the biologically accurate robotic legs in action.

July 10, 2012

The machine comprises simplified versions of the human neural, musculoskeletal and sensory feedback systems.

The robotic legs are unique in that they are controlled by a crude equivalent of the central pattern generator (CPG) – a neural network located in the spinal cord at the abdominal level and responsible for generating rhythmic muscle signals. These signals are modulated by the CPG as it gathers information from different body parts responding to external stimuli. As a result, we are able to walk without ever giving the activity much thought.

The most basic form of a CPG is called a half center and is made up of two neurons rhythmically alternating in producing a signal. An artificial version of a half center produces signals and gathers feedback from sensors in the robotic limbs, such as load sensors that detect when the angle of the walking surface has shifted.
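
For readers curious what a half center looks like computationally, here is a minimal sketch using a Matsuoka-style pair of mutually inhibiting neurons with adaptation, a standard model of this circuit. The parameters are commonly published CPG values, not those of the robot described above.

```python
# Minimal half-center sketch: two mutually inhibiting neurons with
# self-adaptation produce an alternating rhythmic output signal.

def half_center(steps=4000, dt=0.001, tau=0.05, tau_a=0.6,
                beta=3.0, w=2.0, drive=1.0):
    x = [0.1, 0.0]   # membrane states (slightly asymmetric start)
    v = [0.0, 0.0]   # self-inhibition (adaptation) states
    out = []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]            # firing rates
        for i in range(2):
            j = 1 - i                             # the opposing neuron
            x[i] += dt * (-x[i] - beta * v[i] - w * y[j] + drive) / tau
            v[i] += dt * (-v[i] + y[i]) / tau_a
        out.append((round(y[0], 2), round(y[1], 2)))
    return out

# Sampled every half second: activity flips between the two neurons.
print(half_center()[::500])
```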

Read more >

Shimi the dancing robotic smartphone dock


Researchers at Georgia Tech’s Center for Music Technology have developed a one-foot-tall (30 cm) smartphone-enabled robot called Shimi, which they describe as an interactive “musical buddy.”

Shimi is going to be unveiled tomorrow (June 28, 2012) at the Google I/O conference in San Francisco.

Shimi can analyze a beat clapped by a user and scan the phone’s musical library to play the song that best matches the rhythm and tempo. The robot will then dance, tapping its foot and moving its head in time with the beat. With the speakers positioned as Shimi’s ears, the robot can also use the connected phone’s camera and face-detection software to move its head so that the sound follows the listener around the room.
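
To make the beat-matching idea concrete, here is a hedged sketch: estimate tempo from the timestamps of a user’s claps, then pick the library song whose BPM is closest. The song library, BPM tags, and function names are illustrative; Georgia Tech’s implementation is not public.

```python
# Toy beat matching: median inter-clap interval -> tempo in BPM,
# then nearest-BPM lookup in a pretend music library.

def clap_tempo_bpm(clap_times):
    """Estimate beats per minute from a list of clap timestamps."""
    intervals = sorted(b - a for a, b in zip(clap_times, clap_times[1:]))
    median = intervals[len(intervals) // 2]
    return 60.0 / median

LIBRARY = {"song_a": 92.0, "song_b": 120.0, "song_c": 128.0}  # BPM tags

def best_match(clap_times):
    bpm = clap_tempo_bpm(clap_times)
    return min(LIBRARY, key=lambda s: abs(LIBRARY[s] - bpm)), bpm

song, bpm = best_match([0.0, 0.5, 1.0, 1.5, 2.0])  # claps 0.5 s apart
print(song, round(bpm))  # -> song_b 120
```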

Future apps in the works will allow users to shake their head when they don’t like the currently playing song, or tell Shimi to skip to the next track with a wave of a hand. Again, these gestures are picked up using the phone’s built-in camera. Shimi will also be able to recommend new music based on the user’s song choices.

Shimi was created by Professor Gil Weinberg, director of Georgia Tech’s Center for Music Technology, who hopes third party developers will get on board to expand Shimi’s capabilities further by creating their own apps. He developed the robot in collaboration with Professor Guy Hoffmann from MIT’s Media Lab and IDC in Israel, entrepreneur Ian Campbell and robot designer Roberto Aimi.

“We’ve packed a lot of exciting robotics technology into Shimi,” says Weinberg. “Shimi is actually the product of nearly a decade of musical robotics research.”

June 27, 2012

Read more >

The rapidly evolving world of robotic technology


June 25 (Bloomberg) — Stanford University’s Marina Gorbis discusses the rapidly evolving world of robotic technology, and how humans will interact with and learn from robots over the next five to ten years. She speaks with Adam Johnson on Bloomberg Television’s “Bloomberg Rewind.” (Source: Bloomberg)

Marina Gorbis is the Executive Director of the Institute for the Future (IFTF).

Marina’s biography – During her tenure at IFTF, and previously with SRI International, Marina has worked with hundreds of organizations in business, education, government, and philanthropy, bringing a future perspective to improve innovation capacity, develop strategies, and design new products and services. A native of Odessa, Ukraine, Marina is particularly suited to seeing things from a global perspective. She has worked all over the world and feels equally at home in Silicon Valley, Europe, India, or Kazakhstan. Before becoming IFTF’s Executive Director in 2006, Marina created the Global Innovation Forum, a project comparing innovation strategies in different regions, and founded the Global Ethnographic Network (GEN), a multi-year ethnographic research program aimed at understanding the daily lives of people in Brazil, Russia, India, China, and Silicon Valley. She also led IFTF’s Technology Horizons Program, which focuses on the interaction between technology and social organizations. She has been a guest blogger on BoingBoing.net and writes for IFTF and major media outlets. She is a frequent speaker on future organizational, technology, and social issues. Marina holds a master’s degree from the Graduate School of Public Policy at UC Berkeley.

ESA tests autonomous rover in Chilean desert ahead of ExoMars mission


With remote control of rovers on Mars out of the question due to radio signals taking up to 40 minutes to make the round trip to and from the Red Planet, the European Space Agency (ESA) has developed a vehicle that is able to carry out instructions fully autonomously.

With Mars lacking any GPS satellites to help with navigation, the rover must determine how far it has moved relative to its starting point. However, as ESA’s Gianfranco Visentin points out, any errors in this “dead reckoning” method can “build up into risky uncertainties.”

To minimize these uncertainties, the team sought to fix the rover’s position on a map to an accuracy of one meter (3.28 ft). The rover, called Seeker, relied on its stereo vision to build a 3D map of its surroundings, assess how far it had traveled, and plan the most efficient route around obstacles.
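
A toy simulation shows why those dead-reckoning errors “build up into risky uncertainties”: with a small random error on each odometry step, the position estimate drifts from the truth as the traverse gets longer. The 2% noise figure is an invented illustration, not an ESA number.

```python
# Dead-reckoning drift in one dimension: each commanded step is
# measured with a small random scale error, and the accumulated
# position estimate wanders away from ground truth.

import random

random.seed(1)
true_x = est_x = 0.0
step = 0.5                       # meters commanded per step
for k in range(1, 201):          # a 100 m traverse
    true_x += step
    est_x += step * (1 + random.gauss(0, 0.02))   # 2% odometry noise
    if k % 50 == 0:
        print(f"{true_x:5.1f} m driven, error {abs(est_x - true_x):.2f} m")
```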

“We managed 5.1 km (3.16 miles), somewhat short of our 6 km goal, but an excellent result considering the variety of terrain crossed, the changes in lighting conditions experienced and, most of all, that this was ESA’s first large-scale rover test – though definitely not our last,” says Visentin.

“The difficulty comes with follow-on missions, which will require daily traverses of five to ten times longer,” he says. “With longer journeys, the rover progressively loses sense of where it is.”

June 19, 2012

Read more >

The Future of Robotics: personal point of view


The future of robotics lies in the incorporation of ever-increasing intelligence.

Intelligence includes, among other things:

  • Perception: interpreting the environment and extracting the most relevant information from it.
  • Reasoning: inferring new knowledge from what we perceive. For example, if we know that A implies B, and B implies C, then we can infer that A implies C (see the sketch after this list).
  • Learning, as many people have already pointed out in this thread.
  • Decision making: implementing solutions for particular applications, such as security, companion, and tele-presence robots, autonomous scrubber machines, vacuum cleaners, etc.
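
Here is the minimal sketch promised above: deriving new implications by transitivity until no new facts appear, a tiny forward-chaining closure.

```python
# Tiny forward-chaining closure over implication pairs:
# if (a -> b) and (b -> c) are known, add (a -> c), and repeat.

def transitive_closure(rules):
    """rules: set of (antecedent, consequent) implication pairs."""
    known = set(rules)
    changed = True
    while changed:
        changed = False
        for a, b in list(known):
            for c, d in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))   # infer a new implication
                    changed = True
    return known

print(transitive_closure({("A", "B"), ("B", "C")}))
# -> {('A', 'B'), ('B', 'C'), ('A', 'C')}
```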

At Cognitive Robots, we have developed the first embryonic brain, the “Cognitive Brain for Service Robotics” (CR-B100), which integrates all four of these aspects in patent-pending software.

We have tested the “brain” in several “bodies” with excellent results.

Please check this post for more information.

We are actively looking for partnerships and investment capital to bring our company Cognitive Robots to the next level.

If you know of a visionary mind with capital to invest, please pass that person my email: mtescrig@c-robots.com

We are planning on going to crowdfunding resources like KickStarter and offering our own robotic platform (brain and body) for research and a smaller version for education. What are your thoughts on that?

Cognitive Robots enhances Kompai’s capabilities by incorporating its “Cognitive Brain for Service Robotics”


Since February 2011, Cognitive Robots and Robosoft have been collaborating within the framework of a European project, ECHORD C-Kompai. The objective of the project is to enhance the companion robot Kompai with the cognitive capabilities provided by the “Cognitive Brain for Service Robotics®” (CR-B100) of Cognitive Robots.

The intent behind the improvement of the Kompai platform is to better serve the users – the elderly.

We have identified three aspects of the Kompai’s functionality to be improved in this project:

Read the rest of this entry »

Research at Stanford may lead to computers that understand humans


A new trend has emerged in the past few years and has led to the development of technologies like Siri, the iPhone’s “personal assistant.” It entails using mathematical tools, namely probability and statistics, to try to model how people use language to communicate in social situations. The work at Stanford builds directly on this branch of research.
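
The simplest concrete instance of this statistical approach is a bigram model, estimating word-transition probabilities from counts. Real systems like Siri are vastly richer; this sketch, with its toy corpus, only illustrates “probability and statistics” applied to language.

```python
# Bigram language model in miniature: count adjacent word pairs in a
# toy corpus and estimate P(next word | current word) from the counts.

from collections import Counter, defaultdict

corpus = "could you pass the salt could you open the door".split()

pair_counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    pair_counts[w1][w2] += 1

def p_next(w1, w2):
    """P(w2 | w1) estimated from corpus counts."""
    total = sum(pair_counts[w1].values())
    return pair_counts[w1][w2] / total if total else 0.0

print(p_next("could", "you"))   # -> 1.0
print(p_next("the", "salt"))    # -> 0.5
```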

Although statistics provides an initial solution to these problems, in my opinion it is very primitive and has considerable limitations: it uses the brute force of the computer and no cognition. Other techniques, such as qualitative models, have been demonstrated to be much more useful for extracting the relevant information from a system and then processing that information to make decisions. That is the technology used in the “Cognitive Brain for Service Robotics®” of Cognitive Robots. You can find a link to my book, which explains the basics, here.

June 6, 2012

Read more >

Cognitive Robots’ corporate video


Cognitive Robots has successfully developed the world’s first truly autonomous Cognitive Brain for Service Robotics®, the CR-B100. Our mission is to provide an integrated solution for the automation of service vehicles, using state-of-the-art cognitive processes that mimic the human brain.

Our Cognitive Brain incorporates four aspects of human intelligence: perception (object recognition), reasoning, learning and decision-making. This advanced level of artificial intelligence enables adaptation when uncertainty and unknown situations occur.

We’re actively seeking technical partnerships and investment capital.

Here you can see our corporate video:

Current accomplishments and activities of Cognitive Robots include:

  • CR-B100 has been adapted to commercial floor scrubbers (beta state).
  • CR-B100 has been fully incorporated into a Pioneer (Adept) research platform to prove out the full capabilities of the brain.
  • CR-B100 is currently being incorporated into Robosoft’s companion robot Kompai to enhance the Kompai’s capabilities with intelligence. This allows it to perceive the landmarks in the environment, automatically create its own map, avoid obstacles in 3D, clean the home intelligently, and make decisions to engage the elderly.
  • Cognitive Robots is about to launch its own Service Robotics platform using the CR-B100.
  • Another product of Cognitive Robots, the CR-B50 (Manual Assisted Driver), has been successfully incorporated into commercial forklifts to increase safety.
  • CR-B50 is now being incorporated into commercial buses.