A Theory of Minimal 3D Point to 3D Plane Registration and Its Generalization

    •  Ramalingam, S.; Taguchi, Y., "A Theory of Minimal 3D Point to 3D Plane Registration and Its Generalization", International Journal of Computer Vision, DOI: 10.1007/s11263-012-0576-x, September 2012.
      @article{Ramalingam2012sep,
        author  = {Ramalingam, S. and Taguchi, Y.},
        title   = {A Theory of Minimal 3D Point to 3D Plane Registration and Its Generalization},
        journal = {International Journal of Computer Vision},
        year    = 2012,
        month   = sep,
        doi     = {10.1007/s11263-012-0576-x}
      }
  Research Area: Computer Vision

Registration of 3D data is a key problem in many applications in computer vision, computer graphics, and robotics. This paper provides a family of minimal solutions for the 3D-to-3D registration problem in which the 3D data are represented as points and planes. Such scenarios occur frequently when a 3D sensor provides 3D points and our goal is to register them to a 3D object represented by a set of planes. In order to compute the 6 degrees-of-freedom transformation between the sensor and the object, we need at least six points on three or more planes. We systematically investigate and develop pose estimation algorithms for several configurations, including all minimal configurations, that arise from the distribution of points on planes. We also identify the degenerate configurations in such registrations. The underlying algebraic equations used in many registration problems are the same, and we show that many 2D-to-3D and 3D-to-3D pose estimation/registration algorithms involving points, lines, and planes can be mapped to the proposed framework. We validate our theory in simulations as well as in three real-world applications: registration of a robotic arm with an object using a contact sensor, registration of planar city models with 3D point clouds obtained using multi-view reconstruction, and registration between depth maps generated by a Kinect sensor.
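To make the point-to-plane constraint concrete, the sketch below shows a standard linearized least-squares formulation (not the paper's minimal solver): each correspondence between a sensor point p and a plane n·x + d = 0 contributes one scalar equation, so six points distributed over three or more planes suffice to constrain the 6-DoF transform. The function name, the small-angle approximation R ≈ I + [w]×, and the synthetic three-plane setup are illustrative assumptions.

```python
import numpy as np

def register_points_to_planes(points, normals, ds):
    """One linearized least-squares step: find (R, t) so that each
    transformed point R p + t lies on its assigned plane n.x + d = 0.
    Uses the small-angle approximation R ~ I + [w]_x; in practice one
    would iterate and re-project onto SO(3). Needs at least six
    point-plane correspondences on three or more planes."""
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    ds = np.asarray(ds, float)
    # Residual: n.(p + w x p + t) + d = (p x n).w + n.t + (n.p + d),
    # so each correspondence gives one row of a linear system in (w, t).
    A = np.hstack([np.cross(points, normals), normals])  # (N, 6) Jacobian
    b = -(np.sum(points * normals, axis=1) + ds)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    K = np.array([[0., -w[2], w[1]],
                  [w[2], 0., -w[0]],
                  [-w[1], w[0], 0.]])
    R = np.eye(3) + K  # first-order rotation update
    return R, t

# Synthetic check: two points on each of three orthogonal planes
# (a minimal six-point configuration, as in the paper's setting).
normals = np.array([[1., 0, 0], [1., 0, 0],
                    [0, 1., 0], [0, 1., 0],
                    [0, 0, 1.], [0, 0, 1.]])
ds = np.zeros(6)
q = np.array([[0., 1, 2], [0., -1, 3],   # on plane x = 0
              [2., 0, 1], [-1., 0, 2],   # on plane y = 0
              [1., 2, 0], [3., -1, 0]])  # on plane z = 0
# Displace the points by the inverse of a small ground-truth motion.
w_true = np.array([0.01, -0.02, 0.015])
K_true = np.array([[0., -w_true[2], w_true[1]],
                   [w_true[2], 0., -w_true[0]],
                   [-w_true[1], w_true[0], 0.]])
R_true = np.eye(3) + K_true
t_true = np.array([0.05, -0.03, 0.02])
p = (np.linalg.inv(R_true) @ (q - t_true).T).T
R, t = register_points_to_planes(p, normals, ds)
residuals = np.abs(np.sum((p @ R.T + t) * normals, axis=1) + ds)
```

With six constraints on three planes the 6x6 system is full rank, so the estimate is unique; degenerate point distributions (e.g., all points on parallel planes) would make A rank-deficient, which is exactly the kind of configuration the paper characterizes.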