Robot Learns to Cook by Watching YouTube
Well, sort of: a robot learned how to use kitchen tools by watching videos on YouTube. But imagine a future version of the robot that doesn't need any help figuring out how to make the perfect omelet, because it learned all the necessary steps by watching YouTube. It might sound like science fiction, but a team at the University of Maryland has made a significant breakthrough that brings this scenario one step closer to reality.
Researchers at the University of Maryland Institute for Advanced Computer Studies (UMIACS) and the National Information Communications Technology Research Centre of Excellence in Australia (NICTA) are developing robotic systems that can teach themselves. By watching online cooking videos, the robots learn the intricate grasping and manipulation movements required for cooking. The key breakthrough is that the robots can "think" for themselves, determining the best combination of observed motions that will allow them to efficiently accomplish a given task.
Humanoid Robots Learn How To Do Human Jobs
The study, “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web,” which will be presented on Jan. 29, 2015, at the Association for the Advancement of Artificial Intelligence Conference in Austin, Texas, is freely available online.
Computer science professor Yiannis Aloimonos, Director of the Computer Vision Lab at the University of Maryland, said:
“By having flexible robots, we’re contributing to the next phase of automation. This will be the next industrial revolution. We will have smart manufacturing environments and completely automated warehouses. It would be great to use autonomous robots for dangerous work—to defuse bombs and clean up nuclear disasters such as the Fukushima event. We have demonstrated that it is possible for humanoid robots to do our human jobs.”
The researchers achieved this milestone by combining approaches from three distinct research areas: artificial intelligence, or the design of computers that can make their own decisions; computer vision, or the engineering of systems that can accurately identify shapes and movements; and natural language processing, or the development of robust systems that can understand spoken commands. Combining these methods makes it possible to build robots that learn by watching others, the same way humans do. In this case, the robot learned a "vocabulary" of actions that it can then string together in a way that achieves a given goal.
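To give a rough intuition for the idea of an action "vocabulary," here is a minimal, hypothetical sketch in Python. The action names, precondition/effect encoding, and the toy breadth-first planner are all illustrative assumptions for this example, not the system described in the paper, which learns its action grammar from video.

```python
from collections import deque

# Hypothetical action vocabulary: each learned "word" maps a set of
# preconditions to the new facts it makes true (its effect).
ACTIONS = {
    "grasp_whisk":   ({"whisk_on_table"}, {"holding_whisk"}),
    "crack_egg":     ({"egg_whole"}, {"egg_in_bowl"}),
    "whisk_egg":     ({"holding_whisk", "egg_in_bowl"}, {"egg_beaten"}),
    "pour_into_pan": ({"egg_beaten"}, {"egg_in_pan"}),
}

def plan(start, goal):
    """Breadth-first search for a sequence of actions reaching the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, sequence = frontier.popleft()
        if goal <= state:          # every goal fact is already true
            return sequence
        for name, (pre, effect) in ACTIONS.items():
            if pre <= state:       # the action's preconditions hold
                nxt = frozenset(state | effect)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, sequence + [name]))
    return None                    # goal unreachable with this vocabulary

print(plan({"whisk_on_table", "egg_whole"}, {"egg_in_pan"}))
# → ['grasp_whisk', 'crack_egg', 'whisk_egg', 'pour_into_pan']
```

The point of the sketch is the composition step: once the individual actions exist as reusable units, reaching a new goal is a search problem over how to string them together, rather than something that must be demonstrated end to end.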
This work suggests that Artificial Intelligence (AI) and robotics may soon make it possible to build robots that can learn how to do human tasks. We recently reported that a legal consulting firm believes that Artificial Intelligence could replace lawyers by 2030, and AI programs are learning how to analyze the genome more deeply than previous approaches. The march of robotics and AI seems unstoppable, fueling widespread fears that autonomous technology might become too difficult to control.
Images from John T. Consoli, University of Maryland and Shutterstock.