TR2009-071

Vision Guided Robot System for Picking Objects by Casting Shadows


    •  Agrawal, A.K., Sun, Y., Barnwell, J.C., Raskar, R., "Vision Guided Robot System for Picking Objects by Casting Shadows", International Journal of Robotics Research, DOI: 10.1177/0278364909353955, Vol. 29, No. 2-5, February 2010.
      BibTeX:
      @article{Agrawal2010feb,
        author = {Agrawal, A.K. and Sun, Y. and Barnwell, J.C. and Raskar, R.},
        title = {Vision Guided Robot System for Picking Objects by Casting Shadows},
        journal = {International Journal of Robotics Research},
        year = 2010,
        volume = 29,
        number = {2-5},
        month = feb,
        doi = {10.1177/0278364909353955},
        url = {https://www.merl.com/publications/TR2009-071}
      }
  • Research Areas:

    Computer Vision, Control, Robotics

Abstract:

We present a complete vision guided robot system for model-based 3D pose estimation and picking of singulated 3D objects. Our system employs a novel vision sensor consisting of a video camera surrounded by eight flashes (light emitting diodes). By capturing images under the different flashes and observing the cast shadows, we obtain the depth edges, or silhouettes, in the scene. The silhouettes are segmented into individual objects, and each silhouette is matched against a database of object silhouettes in different poses to find a coarse 3D pose. The database is pre-computed using a Computer Aided Design (CAD) model of the object. The pose is then refined using a fully projective formulation [ACB98] of Lowe's model-based pose estimation algorithm [Low91, Low87]. Finally, the estimated pose is transferred to the robot coordinate system using the hand-eye and camera calibration parameters, which allows the robot to pick the object.

Our system outperforms conventional systems that use 2D sensors with intensity-based features, as well as systems based on 3D sensors. We handle complex ambient illumination, challenging specular backgrounds, and diffuse, specular, and texture-less objects, on which traditional systems usually fail. Our vision sensor computes depth edges in real time and is low-cost. Our approach is simple and fast, making it practical to implement. We present real experimental results, using our custom-designed sensor mounted on a robot arm, that demonstrate the effectiveness of our technique.
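
The core sensing step described in the abstract, extracting depth edges from the multi-flash images, relies on the fact that the shadow cast under each flash abuts a depth edge on the side facing away from that flash. The following is a minimal illustrative sketch of that step in Python/NumPy; the function name depth_edges_multiflash, the threshold value, and the approximation of each flash by a single image-plane direction vector are assumptions made here for illustration, not the authors' implementation.

    import numpy as np

    def depth_edges_multiflash(flash_images, ambient, flash_dirs, grad_thresh=0.2):
        """Sketch of depth-edge detection from multi-flash images.

        flash_images : list of HxW float arrays, one image per flash (LED)
        ambient      : HxW float array captured with all flashes off
        flash_dirs   : list of (dy, dx) unit vectors, each pointing in the image
                       plane away from the corresponding flash (a rough
                       approximation, assuming the flashes sit close to the lens)
        Returns a boolean HxW map that is True at candidate depth edges.
        """
        # Remove the ambient contribution so only flash-lit intensity remains.
        lit = [np.clip(f.astype(float) - ambient, 0.0, None) for f in flash_images]

        # The per-pixel maximum over all flashes is (nearly) shadow-free.
        max_img = np.maximum.reduce(lit) + 1e-6

        edges = np.zeros(ambient.shape, dtype=bool)
        for img, (dy, dx) in zip(lit, flash_dirs):
            ratio = img / max_img        # ~1 where lit, ~0 inside the cast shadow
            gy, gx = np.gradient(ratio)  # image-plane gradient of the ratio image
            # Moving away from the flash, a depth edge appears as a sharp
            # lit-to-shadow (negative) transition in the ratio image.
            directional = dy * gy + dx * gx
            edges |= directional < -grad_thresh
        return edges

In the full pipeline, such edge maps would then be linked into closed silhouettes, segmented per object, matched against the pre-rendered CAD silhouette database for a coarse pose, and finally refined and transferred to robot coordinates as described above.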

 
