Teresa Escrig

News and opinion about Cognitive AI & Robotics

Archive for the ‘artificial vision’ tag

Crucial Technology for AI and Robotics: a Kinect-like sensor is included in a smartphone

leave a comment

3D model of reality created in Project Tango at Google.

The Kinect sensor was a revolution for the robotics industry, mainly because it was a relatively inexpensive way to achieve 3D obstacle detection. It provided a set of distances from the Kinect's position to the objects in the world.
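
To make the idea concrete, here is a minimal sketch (mine, not from the article) of how a Kinect-like depth image is typically turned into a 3D point cloud using the pinhole camera model; the intrinsics fx, fy, cx, cy below are placeholder values in the usual Kinect range.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3D point cloud.

    depth: 2D array where depth[v, u] is the distance along the optical axis.
    fx, fy, cx, cy: pinhole intrinsics of the depth sensor (placeholder values).
    Returns an (N, 3) array of [X, Y, Z] points in the camera frame.
    """
    v, u = np.indices(depth.shape)              # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                       # back-project along X
    y = (v - cy) * z / fy                       # back-project along Y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]             # drop pixels with no depth reading

# Example with a synthetic 640x480 depth image of a flat wall 2 m away.
cloud = depth_to_points(np.full((480, 640), 2.0),
                        fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```

Each distance reading becomes one 3D point, which is what turns a flat depth image into the kind of 3D model of reality described below.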

The person responsible at Microsoft for the development of the Kinect sensor is now in charge of Project Tango at Google. Project Tango integrates a Kinect-like sensor into a smartphone (alongside all the other sensors a smartphone already includes), providing a 3D model of reality. Crucial technology for AI and Robotics.

And also, can you imagine having instant access to wearable extended virtual reality? Instant access to the structure of the world in front of you: where does this road go? What is the structure of this building? Or even: show me where I can buy my favorite pair of jeans in this shopping mall.

And even further: create a 3D model of your body, use it to virtually try on different clothes online (also modeled in 3D), check the look and fit, make a purchasing decision, drop it into a shopping cart, and have it delivered to your door.

Mmmm… my imagination flies. I'd love to hear where yours goes… Leave comments.

Here is the article (check out the amazing video):

Google announces Project Tango smartphone with Kinect-like 3D imaging sensors [VIDEO]

by Chris Chavez

Google was able to throw everyone a curve ball today with the announcement of Project Tango, their new in-house smartphone prototype outfitted with Kinect-like sensors.

The 5-inch smartphone is being developed by Google's Advanced Technology and Projects group (ATAP), the same people behind Project Ara. Project Tango is led by Johnny Lee, a man who helped make the Microsoft Kinect possible (makes sense, right?). The goal of Project Tango is ultimately to give mobile devices a "human-scale understanding" of space and motion, allowing users to map the world around them in ways they never thought possible.

Continue reading…

 

Google has given an early prototype of the device to Matterport, which makes computer vision and perceptual computing solutions, like software that maps and creates 3D reconstructions of indoor spaces. Don’t miss the video of the 3D map result in this link! It’s amazing!

 

Is the long anticipated shift in robotics finally happening?

2 comments

Whew… with so many exciting things happening in the robotics field lately, I just couldn’t remain silent anymore…

Kiva robots carrying shelves in a warehouse.

We were all wowed by Amazon's 2012 acquisition of Kiva Systems for $775 million. Kiva's clever self-propelled robots scoot around warehouses in a numerically controlled dance to retrieve and carry entire shelf units of items to their proper packaging point.

In December 2013 and January 2014, Google bought seven robotics companies, investing an unknown amount of money. The Internet giant and pioneer of self-driving cars is serious about a robot-filled future. However, we don't know much about Google's intent with all these acquisitions. They are all part of the Google X division, which is top secret by definition. Most of these companies have closed down their websites and retreated into stealth mode. My guess is that they are grouping up to decide the direction they'll take to serve Google's goals.

The robotics team is led by Andy Rubin, who recently stepped down as head of Android.

Here is a brief summary of all of Google's acquisitions (and a bunch of links to dig deeper):

Arm manipulator of Industrial Perception, Inc.

The biped robot at Schaft, Inc.

  • Industrial Perception, Inc. (IPI) – spun off from the Menlo Park robotics company Willow Garage. They have a 3D vision-guided robot to be used in manufacturing and logistics.
  • Schaft Inc. – the Japanese team that got its start at Tokyo University. They took the top prize at DARPA's Robotics Challenge Trials with their bipedal robot.
  • Redwood Robotics – started as a joint venture between Meka Robotics, SRI International, and Willow Garage (IPI's parent). Redwood wants to build the "next generation arm" for robots.
  • Meka Robotics – a very nice torso robot with very sophisticated hands on a mobile platform with wheels.
  • Bot & Dolly – a design and engineering studio that specializes in automation, robotics, and filmmaking. They use robots to help film commercials and movies like Gravity.
  • Holomini – the only thing we know about them is that they are the creators of high-tech wheels for omnidirectional motion.

Bot & Dolly arm with camera.

Holomini's wheels.

  • Boston Dynamics – the most high-profile of all the robotics companies that Google has acquired so far. Their best-known robots include ATLAS, a sophisticated humanoid, and the quadrupeds BigDog and Cheetah, the latter of which can reach 28 mph.
ATLAS robot from Boston Dynamics.

BigDog from Boston Dynamics.

In the middle of January 2014, Google acquired Nest for $3.2 billion.

  • Nest – a home automation startup whose product is a smoke and carbon monoxide (CO) alarm that talks.

And at the end of January, Google acquired DeepMind for more than $500 million (after having beaten out Facebook):

  • DeepMind – an AI research company based in London, founded by neuroscientist Demis Hassabis, Skype developer Jaan Tallinn, and researcher Shane Legg. They use the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms.

In 2012 Google hired Ray Kurzweil to work on machine learning and language processing, to actually understand the content of web pages and provide a better way to rank them besides counting how many times a site is mentioned by other sites. According to Dr. Kurzweil, you will be able to "ask it more complex questions that might be a whole paragraph… It might engage in a dialogue with you to find out what you need… It might come back in two months if it finds something useful."

The butler robot from the Imperial College London Robotics Lab.

And now Sir James Dyson (inventor of the bagless vacuum cleaner) is investing £5 million in Imperial College London to develop a new generation of "intelligent domestic robots" (an "Iron Man"-style robot), with a further £3 million investment from various sources over the next five years.

Dyson remains frustrated at his prototypes' inability to navigate simple household obstacles after working on a robotic vacuum cleaner, to go along with his company's famous bagless line, for as long as a decade. Indeed, even the greatest Roomba finds itself at a loss under a tangle of dining room chairs, and would shrug its shoulders when faced with a flight of stairs.

Is the tide finally turning in robotics?

What are the benefits of Artificial Intelligence in Robotics?

one comment

Happy New Year to all!  It’s been a while since my last post. Too busy. Now, I’m back.

————————————————————————————-

Robotics is not only a research field within artificial intelligence, but also a field of application, one where all areas of artificial intelligence can be tested and integrated into a final result.

Amazing humanoid robots exhibit elegant and smooth motion, capable of walking, running, and going up and down stairs. They use their hands to protect themselves when falling, and to get up afterward. They are an example of the tremendous financial and human capital being devoted to research and development in the electronics, control, and design of robots.

Very often, the behavior of these robots consists of a fixed number of pre-programmed instructions that are repeated regardless of any changes in the environment. These robots have no autonomy and no adaptation to a changing environment, and therefore do not show intelligent behavior. We are amazed by the technology they provide, which is fantastic! But we cannot infer, just because the robots are physically so realistic and their movements so precise and gentle, that they are able to do what we (people) do. Read the rest of this entry »

Fiona, a community robotic project to create an artificial mind

leave a comment

Adele Robotics has launched Fiona, a project for the robotics community to create an artificial mind.

This is another example of Cloud Robotics and reproducing the Apps economy for the robotics industry, the future of robotics.

Congratulations Adele!

Cognitive Robots’ Cognitive Brain for Service Robotics has been successfully incorporated into Robosoft’s Kompai companion robot

leave a comment

Last week the results of the ECHORD C-Brain experiment were presented at the IROS'12 conference in Portugal.

The overall goal of the project is to enhance the Kompai companion robotic platform from Robosoft (pictured on the left) with the Cognitive Brain for Service Robotics® (CBRAIN) from Cognitive Robots (pictured on the right). The existing functionalities of the Kompai platform will remain and be enhanced with the cognitive capabilities of the CBRAIN.

The original capabilities of the Kompai at the beginning of the project were:

  1. Autonomous navigation based on traditional techniques such as laser-based SLAM (Simultaneous Localization and Mapping).
  2. Linear obstacle detection at the height of the laser (see the sketch after this list).
  3. Advanced dialog: the robot can receive verbal commands and give verbal responses.
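
As a rough illustration (my own sketch, not from the project), this is what "linear" obstacle detection from a 2D laser typically amounts to: every reading lies in the single horizontal plane of the laser, so anything above or below that plane is invisible. The corridor width and look-ahead distance below are placeholder values.

```python
import math

def obstacles_in_path(ranges, angle_min, angle_increment,
                      half_width=0.35, max_range=1.0):
    """Flag laser returns that fall inside a forward corridor.

    ranges: distances (meters) from a single 2D laser scan.
    angle_min, angle_increment: scan geometry in radians.
    half_width: half the robot's width; max_range: look-ahead distance.
    Returns (x, y) points, in the laser plane only, that block the path.
    """
    hits = []
    for i, r in enumerate(ranges):
        if not (0.0 < r < max_range):
            continue                        # skip invalid or far readings
        angle = angle_min + i * angle_increment
        x, y = r * math.cos(angle), r * math.sin(angle)
        if x > 0 and abs(y) < half_width:   # inside the forward corridor
            hits.append((x, y))
    return hits                             # empty list: path clear at laser height

# Example: a 180-degree scan with one close reading straight ahead.
scan = [2.0] * 90 + [0.6] + [2.0] * 90
print(obstacles_in_path(scan, angle_min=-math.pi / 2, angle_increment=math.pi / 180))
```

Everything here happens in one horizontal slice of the world, which is exactly why a table top or an overhanging shelf can go undetected — the 3D limitation listed below.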

The initial limitations that were identified in the Kompai platform and addressed in this project were:

  • No automatic map building. A technician needs to manually create the map of each new environment (half a day of work). Every time the layout of the home is changed, the technician needs to go back to re-learn the map of the environment for the robot.
  • No 3D obstacle avoidance. The current sensor of the Kompai is a laser, which provides linear distance measurements of the obstacles at the height of the laser. Read the rest of this entry »

Open-source humanoid platform from NimbRo to compete in RoboCup’s TeenSize league

leave a comment

Once upon a time, when I finished my PhD dissertation, I went to the IJCAI conference in Kyoto, Japan, where the RoboCup competition was taking place in the same venue. I absolutely fell in love with the Aibo dog and cat robots from Sony that were exhibited at the competition (before they were widely used in the competition itself).

At that event I decided that I wanted to apply the results of my PhD to bring intelligence to robots. And that is what I did. I started a research group at Jaume I University. My students played with the Aibos for years. And while working on one of the challenges of the RoboCup competition with my students, I put all the dots together, and after 10 years of research since my PhD was finished, the seed of Cognitive Robots was born. That technology became a patent pending for our company and, as far as we know, is still ahead of the rest of the technology that brings intelligence to robots.

I have great memories of the RoboCup competition. I agree that it is a great playground to integrate and test technologies in the areas of AI and Robotics. And it is certainly much more than a toy test.

By , October 8, 2012

The University of Bonn's Team NimbRo is commercializing a humanoid platform, the NimbRo-OP, for €20,000 (US$26,000) to compete in RoboCup's TeenSize league. It sounds rather expensive, but it will save teams the trouble of prototyping their own, and the untold hours of research and development that would normally require.

Read more >

Shoal, the robo-fish that monitors oxygen levels and salinity of waters north of Spain

leave a comment

By , October 1, 2012

A five-foot-long (1.5 meter) robo-fish prototype that monitors oxygen levels and salinity is currently being tested in waters north of Spain as part of the EU-funded Shoal Consortium project.

The idea is to have real-time monitoring of pollution, so that if someone is dumping chemicals or something is leaking, it can be detected straight away, the cause identified, and a stop put to it.

Traditional underwater robots use propellers or thrusters for propulsion; the Shoal robo-fish instead uses a fish-like fin to propel itself through the water.

The Shoal robo-fish costs US$32,000, and it operates for just eight hours before needing to be recharged. However, there's no doubt that if this problem can be overcome (with, perhaps, some sort of underwater charging station), the robo-fish will find homes in coastal waters around the world.

Read more >  And more >

AISOY1 II, a programmable inexpensive robot with emotions

leave a comment

By , September 19, 2012

Spanish start-up Aisoy Robotics is marketing a new robot that, while it may look similar to the famous Furby, is actually a fully programmable research and development platform.

The Aisoy1 II robot comes with a variety of sensors (touch, light, position, temperature, and a camera), a microphone and speaker, RGB LEDs in its body, and a 70 mini-LED matrix display (for animated lips). Four servos control the robot's neck rotation, eyelids, and eyebrows. The platform doesn't move.

The package includes a dialogue system for speech recognition and synthesis, as well as computer vision software for tasks like face and object recognition, all running on the Linux operating system. The company claims even complete novices can take advantage of these functions without having to learn how to code, thanks to DIA, its visual programming tool. The program runs in HTML5-compatible browsers, allowing you to select nodes that control the robot's various sensors and behaviors.

Read more >

As with the Thymio II, a specific non-standard programming language goes against the robotics community's efforts towards standardization. However, the fact that it is HTML5-compatible contributes to the creation of the Robotics App Economy.

The most important feature of the Aisoy1 II, which is not mentioned in the article above, is its emotional engine, a very interesting AI feature at the service of developers for a very low price. As its creators say: "humans would not take decisions without emotions". This emotional engine can be a key factor for the development of the robotics industry.

Very cute, inexpensive little robots that can help promote robotics education in schools and colleges.

Baxter, the new Arm Manipulator with behavioral robotics from Rethink Robotics

leave a comment

This is the company and the robot that Amazon has been contemplating acquiring to provide a complete automation solution for the retail industry. The last piece of the puzzle after Amazon's $775 million acquisition of Kiva.

By , September 18, 2012

Baxter, the first product of Rethink Robotics, an ambitious start-up company in a revived manufacturing district, is a significant bet that robots in the future will work directly with humans in the workplace.

Here in a brick factory that was once one of the first electrified manufacturing sites in New England, Rodney A. Brooks, the legendary roboticist who is Rethink’s founder, proves its safety by placing his head in the path of Baxter’s arm while it moves objects on an assembly line.

The $22,000 robot that Rethink will begin selling in October is the clearest evidence yet that robotics is more than a laboratory curiosity or a tool only for large companies with vast amounts of capital.

Baxter will come equipped with a library of simple tasks or behaviors.

Rethink itself has made a significant effort to design a robot that mimics biological systems. The concept is called behavioral robotics, a design approach that was pioneered by Dr. Brooks in the 1990s and was used by NASA to build an early generation of vehicles that explored Mars.

Dr. Brooks first proposed the idea in 1989 in a paper titled “Fast, Cheap and Out of Control: A Robot Invasion of the Solar System.” Rather than sending a costly system that had a traditional and expensive artificial intelligence based control system, fleets of inexpensive systems could explore like insects. It helped lead to Sojourner, an early Mars vehicle.
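
To give a flavor of behavioral robotics (a minimal sketch of my own, not Rethink's or NASA's code), the approach layers simple behaviors and lets the highest-priority behavior that wants to act take control at each step:

```python
# Minimal behavior-based (subsumption-style) arbitration sketch.
# The sensor fields and command values are illustrative placeholders.

def avoid(sensors):
    """Highest priority: back away from anything too close."""
    if sensors["front_distance"] < 0.3:        # meters
        return {"linear": -0.1, "angular": 0.8}
    return None                                # behavior stays silent

def seek_goal(sensors):
    """Lower priority: steer toward the goal bearing."""
    return {"linear": 0.2, "angular": 0.5 * sensors["goal_bearing"]}

BEHAVIORS = [avoid, seek_goal]                 # ordered by priority

def control_step(sensors):
    """Return the command of the highest-priority active behavior."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return {"linear": 0.0, "angular": 0.0}     # nothing to do: stop

print(control_step({"front_distance": 0.2, "goal_bearing": -0.4}))
```

The insect-like exploration Dr. Brooks described works in this spirit: robust behavior emerges from cheap, layered reflexes rather than from a single expensive central planner.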

The next generation of robots will increasingly function as assistants to human workers, freeing them for functions like planning, design and troubleshooting.

Rethink's strategy calls for the robot to double as a "platform," a computerized system to which other developers can add both hardware devices and software applications for particular purposes. It is based on open-source software efforts — including the Robot Operating System, or ROS, developed by the Silicon Valley company Willow Garage, and a separate project called OpenCV, or Open Source Computer Vision Library.

That will make it possible for independent developers to extend the system in directions that Rethink hasn’t considered, much in the same way the original Apple II computer had slots for additional peripheral cards.

"We will publish an interface for the end of the wrist," Dr. Brooks said. That means that while Baxter comes with a simple hand, or "end effector," users will be able to adapt the system with more complex and capable hands to perform tasks that require greater dexterity.
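
As a rough illustration of what extending a ROS-based platform looks like (a generic sketch of mine; the topic name "/robot/joint_states" is a placeholder rather than Rethink's documented interface), a third-party extension is typically just a node that subscribes to the data the robot publishes:

```python
#!/usr/bin/env python
# Generic ROS node sketch; the topic name is an assumed placeholder.
import rospy
from sensor_msgs.msg import JointState

def on_joint_state(msg):
    # React to the arm's published state, e.g. log the first joint angle.
    rospy.loginfo("joint %s at %.3f rad", msg.name[0], msg.position[0])

if __name__ == "__main__":
    rospy.init_node("third_party_extension")
    rospy.Subscriber("/robot/joint_states", JointState, on_joint_state)
    rospy.spin()   # keep the node alive, processing incoming messages
```

Publishing commands back to the robot works the same way, which is what makes the "slots for peripheral cards" analogy apt.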

Read more >

 

Cognitive Robots includes Common-Sense Knowledge and Reasoning into their Robotics and Computer Vision solutions

5 comments

Representation, reasoning and learning are the basic principles of human intelligence. The emulation of human intelligence has been the aim of Artificial Intelligence since its origins in 1956.

In fact, converting raw data into information (data in the context of other data) and hence into knowledge (information in the context of other information) is critical for understanding activities, behaviors, and in general the world we try to model. In both Robotics and Computer Vision we try to model the real world where humans operate.

The type of knowledge that Robotics and Computer Vision need to capture is common-sense knowledge. Counter-intuitively, common-sense knowledge is more difficult to model than expert knowledge, which can be modeled quite easily by expert systems (a more or less closed research area since the 1970s).

In both Robotics and Computer Vision, probabilistic and Bayesian models have historically been used to represent, reason about, and learn from the world. These methods have provided very good initial results. The problem is that they have never been scalable. That is why there is still no commercial intelligent robot with the full ability to serve people. Although many preliminary solutions exist, including artificial vision, the percentages of false positives and false negatives are still too high to consider them completely reliable, and therefore artificial vision remains an open research area.
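
For readers unfamiliar with the probabilistic approach mentioned here, this is a minimal sketch (my own, with made-up sensor probabilities) of a Bayesian belief update for "an obstacle is present" given repeated noisy detections:

```python
def bayes_update(prior, detected, p_hit=0.8, p_false=0.2):
    """One Bayesian belief update for 'an obstacle is present'.

    prior:    current P(obstacle)
    detected: True if the (noisy) sensor reported an obstacle
    p_hit:    P(detection | obstacle)      -- made-up sensor model
    p_false:  P(detection | no obstacle)
    """
    if detected:
        likelihood_obst, likelihood_free = p_hit, p_false
    else:
        likelihood_obst, likelihood_free = 1 - p_hit, 1 - p_false
    evidence = likelihood_obst * prior + likelihood_free * (1 - prior)
    return likelihood_obst * prior / evidence

belief = 0.5                         # start undecided
for reading in [True, True, False, True]:
    belief = bayes_update(belief, reading)
print(round(belief, 3))              # belief rises overall with repeated detections
```

Updates like this work well for a single, well-defined question; the argument of this post is that scaling them to the open-ended, structured knowledge a service robot needs is where purely probabilistic approaches struggle.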

The problems detected in the probabilistic approaches have been twofold: Read the rest of this entry »