Semantic Understanding of Task Outcomes: Visually Identifying Failure Modes Autonomously Discovered in Simulation

Published in the ICRA Workshop on Representing a Complex World, 2018

J. Bowkett, R. Detry, and L.H. Matthies


We present a model for identifying task success and recognizing distinct modes of task failure in robot manipulation applications. Our model leverages physics simulation and clustering to learn symbolic failure modes, and a deep network to extract visual signatures for each mode and to guide failure recovery. We present an early experiment in which we apply our model to the archetypal manipulation task of placing objects into a container. A CNN is trained on synthetic depth images generated and labeled in simulation, and we demonstrate the network's ability to classify task outcomes in both synthetic and real depth images.
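To make the pipeline concrete, the sketch below shows one way such an outcome classifier could look: a small CNN trained with cross-entropy loss to map a single-channel depth image to one of several outcome classes (success plus the failure modes discovered by clustering simulated rollouts). This is an illustrative assumption, not the paper's implementation; the class name `OutcomeCNN`, the class count, the input resolution, and the layer sizes are all hypothetical.

```python
# Minimal sketch of a task-outcome classifier over depth images.
# Assumptions (not from the paper): 4 outcome classes, 64x64 depth input,
# and the specific layer sizes below.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_OUTCOMES = 4   # assumption: 1 success class + 3 discovered failure modes
DEPTH_RES = 64     # assumption: 64x64 synthetic depth images

class OutcomeCNN(nn.Module):
    def __init__(self, num_outcomes=NUM_OUTCOMES):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)
        # Two 2x2 poolings shrink DEPTH_RES by a factor of 4 per side.
        self.fc1 = nn.Linear(32 * (DEPTH_RES // 4) ** 2, 128)
        self.fc2 = nn.Linear(128, num_outcomes)

    def forward(self, depth):
        x = F.max_pool2d(F.relu(self.conv1(depth)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = torch.flatten(x, start_dim=1)
        return self.fc2(F.relu(self.fc1(x)))

# One training step on simulation-labeled data (random stand-ins here).
model = OutcomeCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

depth_batch = torch.randn(8, 1, DEPTH_RES, DEPTH_RES)  # rendered depth images
labels = torch.randint(0, NUM_OUTCOMES, (8,))          # outcome labels from simulation

optimizer.zero_grad()
loss = criterion(model(depth_batch), labels)
loss.backward()
optimizer.step()

# At test time, the predicted outcome is the argmax over classes.
predicted_outcome = model(depth_batch).argmax(dim=1)
```

Because the labels come for free from the simulator, a network like this can be trained entirely on synthetic depth images and then evaluated on real ones, as in the experiment described above.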

Recommended citation: Bowkett, J., Detry, R., and Matthies, L.H. "Semantic Understanding of Task Outcomes: Visually Identifying Failure Modes Autonomously Discovered in Simulation." ICRA 2018 Workshop on Representing a Complex World, 2018.