Robots teach other robots
May 10, 2017
by Adam Conner-Simons
MIT doctoral candidate Claudia Pérez-D'Arpino discusses her
work teaching the Optimus robot to perform various tasks, including
picking up a bottle. Credit: Jason Dorfman/MIT CSAIL
Most robots are programmed using one of two methods: learning from
demonstration, in which they watch a task being done and then replicate
it, or motion-planning techniques such as optimization or sampling,
which require a programmer to explicitly specify a task's goals and
constraints.
Both methods have drawbacks. Robots that learn from demonstration can't
easily transfer a skill they've learned to a new situation while
remaining accurate. Motion-planning systems that use sampling or
optimization, on the other hand, can adapt to such changes, but they are
time-consuming, since the task's goals and constraints usually have to
be hand-coded by expert programmers.
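A toy sketch can make that tradeoff concrete. The 2D point robot, the bottle positions, and the function names below are illustrative assumptions, not code from the CSAIL work: pure replay of a recorded demonstration breaks when the target moves, while an explicitly specified planner adapts, but only because a programmer wrote out the goal by hand.

```python
# Hypothetical 2D point-robot example; not code from the CSAIL work.
import numpy as np

# --- Style 1: learning from demonstration (record and replay) ---
# The robot stores the demonstrated end-effector path verbatim.
demo_path = np.array([[0.0, 0.0], [0.2, 0.1], [0.4, 0.3], [0.5, 0.5]])  # bottle demoed at (0.5, 0.5)

def replay(path):
    """Replay works only while the world matches the demonstration exactly."""
    return path.copy()

# --- Style 2: motion planning with explicit goals (hand-coded) ---
def plan_to(goal, start=np.zeros(2), steps=20):
    """Straight-line waypoints toward an explicitly specified goal. This adapts
    when the goal moves, but the goal itself had to be written by a programmer."""
    return np.linspace(start, goal, steps)

if __name__ == "__main__":
    moved_bottle = np.array([0.7, 0.4])                     # bottle is no longer where it was demoed
    print("replayed end point:", replay(demo_path)[-1])     # still (0.5, 0.5): the grasp misses
    print("planned end point:", plan_to(moved_bottle)[-1])  # (0.7, 0.4): reaches the new pose
```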
Researchers from MIT's Computer Science and Artificial Intelligence
Laboratory (CSAIL) have recently developed a system that aims to bridge
the two techniques: C-LEARN, which allows noncoders to teach robots a
range of tasks simply by providing some information about how objects
are typically manipulated and then showing the robot a single demo of the task.
Importantly, this enables users to teach robots skills that can be
automatically transferred to other robots that have different ways of
moving—a key time- and cost-saving measure for companies that want a
range of robots to perform similar actions.
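The description above suggests the general shape of such a system: a small knowledge base of how object classes are typically manipulated, combined with a single demonstrated keyframe, yields a robot-agnostic goal that each robot's own motion planner can then be asked to reach. The Python sketch below is a hypothetical illustration of that idea; the constraint format, knowledge-base entries, and geometry are invented for the example and are not C-LEARN's actual representation.

```python
# Illustrative sketch only; the data structures are assumptions, not the authors' implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Constraint:
    object_type: str            # e.g. "bottle"
    approach_axis: np.ndarray   # direction the gripper typically approaches from
    standoff: float             # distance to hold before closing the gripper

# Knowledge base: how object classes are *typically* manipulated.
KNOWLEDGE_BASE = {
    "bottle": Constraint("bottle", approach_axis=np.array([0.0, 0.0, 1.0]), standoff=0.05),
    "handle": Constraint("handle", approach_axis=np.array([1.0, 0.0, 0.0]), standoff=0.03),
}

def infer_goal(object_type, object_pose, demo_keyframe):
    """Combine the knowledge-base constraint with one demonstrated keyframe to
    produce a goal pose that any robot's planner can be asked to reach."""
    c = KNOWLEDGE_BASE[object_type]
    # The demo fixes which side of the object was approached; the constraint
    # fixes the approach direction and the standoff distance.
    side = np.sign(np.dot(demo_keyframe - object_pose, c.approach_axis)) or 1.0
    return object_pose + side * c.standoff * c.approach_axis

def plan_with(robot_planner, goal):
    """The inferred goal is robot-agnostic: hand it to each robot's own planner
    (stand-in callables here)."""
    return robot_planner(goal)

if __name__ == "__main__":
    bottle_pose = np.array([0.6, 0.1, 0.9])
    demo_keyframe = np.array([0.6, 0.1, 1.0])    # one demonstrated pre-grasp pose
    goal = infer_goal("bottle", bottle_pose, demo_keyframe)
    # Two different "robots", each with its own trivial planner:
    straight_line = lambda g: np.linspace(np.zeros(3), g, 10)
    lift_then_reach = lambda g: np.vstack([np.linspace(np.zeros(3), [0.0, 0.0, g[2]], 5),
                                           np.linspace([0.0, 0.0, g[2]], g, 5)])
    for planner in (straight_line, lift_then_reach):
        print("reaches:", plan_with(planner, goal)[-1])   # both end at the same goal pose
```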
"By combining the intuitiveness of learning from demonstration with
the precision of motion-planning algorithms, this approach can help
robots do new types of tasks that they haven't been able to learn
before, like multistep assembly using both of their arms," says Claudia
Pérez-D'Arpino, a PhD student who wrote a paper on C-LEARN with MIT
Professor Julie Shah.