Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘research’ tag

Shimi the dancing robotic smartphone dock

leave a comment

Researchers at Georgia Tech’s Center for Music Technology have developed a one-foot-tall (30 cm) smartphone-enabled robot called Shimi, which they describe as an interactive “musical buddy.”

Shimi is going to be unveiled tomorrow (June 28th, 2012) at the Google I/O conference in San Francisco.

Shimi can analyze a beat clapped by a user and scan the phone’s musical library to play the song that best matches the rhythm and tempo. The robot will then dance, tapping its foot and moving its head in time with the beat. With the speakers positioned as Shimi’s ears, the robot can also use the connected phone’s camera and face-detection software to move its head so that the sound follows the listener around the room.
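A hypothetical sketch of the kind of tempo matching described above: estimate beats per minute from the clap timings, then pick the library track with the nearest tempo. The function names and the toy song library are invented for illustration; Shimi's actual audio analysis is certainly more sophisticated.

```python
# Toy sketch (not Shimi's code): match a clapped rhythm to a song by tempo.

def estimate_bpm(clap_times):
    """Median inter-clap interval (seconds) -> beats per minute."""
    intervals = sorted(b - a for a, b in zip(clap_times, clap_times[1:]))
    median = intervals[len(intervals) // 2]
    return 60.0 / median

def best_match(clap_times, library):
    """library: list of (title, bpm) pairs; returns the closest-tempo title."""
    bpm = estimate_bpm(clap_times)
    return min(library, key=lambda song: abs(song[1] - bpm))[0]

claps = [0.0, 0.5, 1.0, 1.5, 2.0]   # clapping at ~120 BPM
songs = [("Slow Tune", 70), ("Dance Track", 122), ("Ballad", 90)]
print(best_match(claps, songs))      # -> Dance Track
```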

Future apps in the works will allow users to shake their head when they don’t like the currently playing song and tell Shimi to skip to the next track with a wave of a hand. Again, these gestures are picked up using the phone’s built in camera. Shimi will also be able to recommend new music based on the user’s song choices.

Shimi was created by Professor Gil Weinberg, director of Georgia Tech’s Center for Music Technology, who hopes third party developers will get on board to expand Shimi’s capabilities further by creating their own apps. He developed the robot in collaboration with Professor Guy Hoffmann from MIT’s Media Lab and IDC in Israel, entrepreneur Ian Campbell and robot designer Roberto Aimi.

“We’ve packed a lot of exciting robotics technology into Shimi,” says Weinberg. “Shimi is actually the product of nearly a decade of musical robotics research.”

June 27, 2012

Read more >

The rapidly evolving world of robotic technology

leave a comment

June 25 (Bloomberg) — The Institute for the Future’s Marina Gorbis discusses the rapidly evolving world of robotic technology, how humans will interact with robots, and what we will learn from them over the next five to ten years. She speaks with Adam Johnson on Bloomberg Television’s “Bloomberg Rewind.” (Source: Bloomberg)

Marina Gorbis is the Executive Director of the Institute for the Future.

Marina’s biography – During her tenure at IFTF, and previously with SRI International, Marina has worked with hundreds of organizations in business, education, government, and philanthropy, bringing a future perspective to improve innovation capacity, develop strategies, and design new products and services. A native of Odessa, Ukraine, Marina is particularly suited to see things from a global perspective. She has worked all over the world and feels equally at home in Silicon Valley, Europe, India, or Kazakhstan. Before becoming IFTF’s Executive Director in 2006, Marina created the Global Innovation Forum, a project comparing innovation strategies in different regions, and she founded the Global Ethnographic Network (GEN), a multi-year ethnographic research program aimed at understanding the daily lives of people in Brazil, Russia, India, China, and Silicon Valley. She also led IFTF’s Technology Horizons Program, focusing on the interaction between technology and social organizations. She has been a guest blogger on BoingBoing.net and writes for IFTF and major media outlets. She is a frequent speaker on future organizational, technology, and social issues. Marina holds a Master’s Degree from the Graduate School of Public Policy at UC Berkeley.

DARPA looks at developing robots to sew uniforms

leave a comment

U.S. military uniforms may not be the most fashionable of clothes, but there are a lot of them. Every year, the Pentagon spends US$4 billion on uniforms, and over 50,000 people are employed in their production. In an effort to cut costs and increase efficiency, DARPA has awarded a US$1.25 million contract to SoftWear Automation, Inc. to develop “complete production facilities that produce garments with zero direct labor” – in other words, a robot factory that can make uniforms from beginning to end without human operators.

Sewing is a very complex task. I would love to know how they are going to do it!

 

June 18, 2012

Read more >

ESA tests autonomous rover in Chilean desert ahead of ExoMars mission

leave a comment

With remote control of rovers on Mars out of the question, since radio signals take up to 40 minutes to make the round trip to and from the Red Planet, the European Space Agency (ESA) has developed a rover, called Seeker, that can carry out instructions fully autonomously.

With Mars lacking any GPS satellites to help with navigation, the rover must determine how far it has moved relative to its starting point. However, as ESA’s Gianfranco Visentin points out, any errors in this “dead reckoning” method can “build up into risky uncertainties.”

To minimize these uncertainties, the team sought to fix the rover’s position on a map to an accuracy of one meter (3.28 ft). Seeker relied on its stereo vision to build a 3D map of its surroundings, assess how far it had traveled, and plan the most efficient route around obstacles.
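The drift problem can be illustrated with a toy simulation (not ESA code): if each one-meter step of odometry carries a tiny random heading error, the dead-reckoned position wanders further from the truth the longer the traverse. The error magnitude here is an invented illustration, not Seeker's measured performance.

```python
# Toy dead-reckoning drift simulation: small per-step heading errors
# compound over distance, so position uncertainty grows with the traverse.
import math
import random

random.seed(1)
x = y = heading = 0.0
for _ in range(1000):                     # 1000 odometry steps of 1 m each
    heading += random.gauss(0.0, 0.002)   # assumed small heading error/step
    x += math.cos(heading)
    y += math.sin(heading)

drift = math.hypot(x - 1000.0, y)         # offset from the true endpoint
print(f"position error after 1 km: {drift:.1f} m")
```

This is exactly why the team wanted an independent fix on a map: vision-based localization bounds the error, while pure odometry lets it grow without limit.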

“We managed 5.1 km (3.16 miles), somewhat short of our 6 km goal, but an excellent result considering the variety of terrain crossed, changes in lighting conditions experienced and, most of all, that this was ESA’s first large-scale rover test – though definitely not our last,” says Visentin.

“The difficulty comes with follow-on missions, which will require daily traverses of five to ten times longer,” he says. “With longer journeys, the rover progressively loses sense of where it is.”

June 19, 2012

Read more >

The Future of Robotics: personal point of view

2 comments

The future of robotics is advancing towards the incorporation of increasing intelligence.

Intelligence includes, among other things:

  • Perception – interpreting the environment and extracting the most relevant information from it.
  • Reasoning – inferring new knowledge from what we perceive; for example, if we know that A implies B, and B implies C, then we can infer that A implies C.
  • Learning – as many people have pointed out in this thread already.
  • Decision making – implementing solutions for particular applications, such as security, companion and tele-presence robots, autonomous scrubber machines, vacuum cleaners, etc.
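The transitive-reasoning step mentioned above (A implies B, B implies C, therefore A implies C) can be sketched in a few lines. This is a toy forward-chaining loop, not Cognitive Robots' implementation:

```python
# Toy forward chaining: repeatedly compose implication rules
# until no new ones can be derived (the transitive closure).

def closure(rules):
    """rules: set of (antecedent, consequent) pairs."""
    facts = set(rules)
    changed = True
    while changed:
        changed = False
        for a, b in list(facts):
            for c, d in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))     # chain a->b with b->d
                    changed = True
    return facts

rules = {("A", "B"), ("B", "C")}
print(("A", "C") in closure(rules))   # -> True
```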

At Cognitive Robots, we have developed a first embryonic brain, the “Cognitive Brain for Service Robotics” (CR-B100), which integrates all four of these aspects in patent-pending software.

We have tested the “brain” in several “bodies” with excellent results.

Please, check this post for more information.

We are actively looking for partnerships and investment capital to bring our company Cognitive Robots to the next level.

If you know of a visionary mind with capital to invest, please, pass that person my email: mtescrig@c-robots.com

We are planning on going to crowdfunding resources like KickStarter and offering our own robotic platform (brain and body) for research and a smaller version for education. What are your thoughts on that?

Intelligent Cutting and Deboning System

leave a comment

The Georgia Tech Research Institute (GTRI) has developed an Intelligent Cutting and Deboning System. Using 3D imaging technology, this robot can debone an entire chicken with the skill of a human butcher, and it has the potential to save the poultry industry millions of dollars by reducing costs and waste.

Not very idyllic, but very practical.

Read more >

Research at Stanford may lead to computers that understand humans

leave a comment

A new trend has emerged in the past few years and has led to the development of technologies like Siri, iPhone’s “personal assistant.” It entails using mathematical tools, namely probability and statistics, to try and model how people use language to communicate in social situations. The work at Stanford builds directly on this branch of research.

Although statistics provides an initial solution to these problems, in my opinion it is very primitive and has considerable limitations: it uses the brute force of the computer and no cognition. Other techniques, like qualitative models, have been demonstrated to be much more useful for extracting relevant information from a system and then processing that information to make decisions. That is the technology used in the “Cognitive Brain for Service Robotics®” of Cognitive Robots. You can find a link to my book that explains the basics here.

June 6, 2012

Read more >

Cognitive Robots’ corporate video

leave a comment

Cognitive Robots has successfully developed the world’s first truly autonomous Cognitive Brain for Service Robotics®, the CR-B100. Our mission is to provide an integrated solution for the automation of service vehicles, using state of the art cognitive processes that mimic the human brain.

Our Cognitive Brain incorporates four aspects of human intelligence: perception (object recognition), reasoning, learning and decision-making. This advanced level of artificial intelligence enables adaptation when uncertainty and unknown situations occur.

We’re actively seeking technical partnerships and investment capital.

Here you can see our corporate video:

Current accomplishments and activities of Cognitive Robots include:

  • CR-B100 has been adapted to commercial floor scrubbers (beta state).
  • CR-B100 has been fully incorporated into a Pioneer (Adept) research platform to prove out the full capabilities of the brain.
  • CR-B100 is currently being incorporated into Robosoft’s companion robot Kompai to enhance its capabilities with intelligence. This allows the Kompai to perceive landmarks in the environment, automatically create its own map, avoid obstacles in 3D, clean the home intelligently, and make decisions to engage the elderly.
  • Cognitive Robots is about to launch its own Service Robotics platform using the CR-B100.
  • Another Cognitive Robots product, the CR-B50 (Manual Assisted Driver), has been successfully incorporated into commercial forklifts to increase safety.
  • CR-B50 is now being incorporated into commercial buses.

Intelligent goggles for partly-sighted people

leave a comment

“Intelligent” goggles for partly-sighted people have been developed at Universidad Carlos III in Madrid, Spain. The system consists of a pair of stereoscopic digital cameras mounted on either side of a virtual reality headset, with two digital screens in front of the wearer’s eyes in place of lenses. The cameras scan the field of vision in front of the headset, convert it to digital code and feed this to a separate computer package. The computer then runs an algorithm developed by the team that determines the distance and outline of any objects seen. What the cameras see is displayed on the headset’s screens, and information about the objects is conveyed to the wearer by overlaying them with color-coded silhouettes.
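The team's algorithm is not published here, but stereo systems like this typically recover distance from the disparity between the two camera images. A minimal sketch of that standard relation, with illustrative (assumed) numbers for focal length and camera baseline:

```python
# Standard stereo triangulation: depth = focal_length * baseline / disparity.
# The calibration numbers below are invented for illustration.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance (meters) to a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or unmatched pixel")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 6 cm between cameras, 21 px disparity:
print(depth_from_disparity(700, 0.06, 21))   # -> 2.0 (meters)
```

Nearby objects produce large disparities and precise depth estimates; distant ones produce small disparities, which is why depth information degrades with range and why the headset's two cameras must be carefully calibrated.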

“It detects objects and people who move within the visual field that a person with no visual pathologies would have,” said Professor Vergaz, leader of the research team that developed the “intelligent” goggles. “Very often the patient does not detect them due to problems of contrast. The information regarding depth is what is most missed by patients who use this type of technical aid.”

May 30, 2012

Read more >

Google moves closer to becoming an Artificial Intelligence Engine

leave a comment

Are we going to see improvements in our internet search soon?

I was thinking that Google couldn’t change or improve because it was so big, well-established and essentially a monopoly. Perhaps it still can offer new solutions…

Thursday, May 17, 2012

Google began rolling out a feature that gives searchers in the United States the potential to access more relevant and in-depth answers without leaving the page. The concept is built on something the company calls the “knowledge graph,” which ties words together to create relationships.

There are a multitude of sources behind this data. The search results page displays a variety of content related to keyword queries, bringing up a list of facts, photos, and landmarks, as well as quick links to other popular uses for the search term. Think of a Web beneath the user interface layer of the Internet that ties together all information across the Web.

Rob Garner, vice president of strategy at agency iCrossing, said Google’s knowledge graph takes another step in the company’s long transition toward an artificial intelligence engine: semantic search. “It’s something Google’s doing in parallel to Schema.org in terms of relating objects, places and people,” he said. “Looking at the schema for a person, you can actually define the relationship with other people using schema vocabulary.”

For example, someone looking for information on Marie Curie will see her birth and death dates, but also details on her education and scientific discoveries. The search engine understands much more…
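The Marie Curie example can be sketched as subject–relation–object triples, the usual building block of a knowledge graph. This is a toy illustration with a hand-written fact list, not Google's implementation:

```python
# Toy knowledge graph: facts stored as (subject, relation, object) triples,
# so one query can gather everything linked to an entity.

triples = [
    ("Marie Curie", "born", "1867"),
    ("Marie Curie", "died", "1934"),
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "field", "chemistry"),
]

def query(subject):
    """Collect all relations and values the graph knows about one entity."""
    return {rel: [o for s, r, o in triples if s == subject and r == rel]
            for _, rel, _ in triples}

print(query("Marie Curie")["field"])   # -> ['physics', 'chemistry']
```

Because facts are linked to an entity rather than to a keyword string, the search engine can answer "when was she born?" and "what did she work on?" from the same structure.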

Read more >