TR2007-025

Depth Estimation for View Synthesis in Multiview Video Coding


    •  Ince, S., Martinian, E., Yea, S., Vetro, A., "Depth Estimation for View Synthesis in Multiview Video Coding", 3DTV-Conference (3DTV-CON), May 2007.
      @inproceedings{Ince2007may,
        author = {Ince, S. and Martinian, E. and Yea, S. and Vetro, A.},
        title = {Depth Estimation for View Synthesis in Multiview Video Coding},
        booktitle = {3DTV-Conference (3DTV-CON)},
        year = 2007,
        month = may,
        url = {https://www.merl.com/publications/TR2007-025}
      }
Research Area: Digital Video

Abstract:

Compression of multiview video is required in an end-to-end 3D system to reduce the amount of visual information. Since multiple cameras usually share a common field of view, high compression ratios can be achieved if both temporal and inter-view redundancy are exploited. View synthesis prediction is a new coding tool for multiview video that generates virtual views of a scene from the images of neighboring cameras and estimated depth values. In this work, we consider depth estimation for view synthesis in multiview video encoding, focusing on generating smooth and accurate depth maps that can be efficiently coded. We present several improvements to the reference block-based depth estimation approach and demonstrate that the proposed method is not only effective for view synthesis prediction, but also produces depth maps that require far fewer bits to code.
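To illustrate the baseline that the paper improves upon, the following is a minimal sketch of block-based disparity estimation via sum-of-absolute-differences (SAD) matching between two rectified views; for parallel cameras, disparity is inversely proportional to depth. This is not the paper's refined method: the function name, block size, and search range are illustrative assumptions.

```python
import numpy as np

def block_disparity_map(left, right, block=8, max_disp=16):
    """Estimate one disparity per block by SAD search along the scanline.

    Assumes rectified grayscale views: a pixel at column x in `left`
    appears at column x - d in `right`, where d is the disparity.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.int32)
            best_sad, best_d = None, 0
            # Search candidate disparities, clipped at the image border.
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                sad = np.abs(ref - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by, bx] = best_d
    return disp

# Synthetic sanity check: shift a random texture by 4 pixels and recover it.
tex = np.random.default_rng(0).integers(0, 255, (32, 64))
shifted = np.roll(tex, -4, axis=1)  # left[y, x] == right[y, x - 4]
d = block_disparity_map(tex, shifted)
```

Per-block winner-take-all SAD like this tends to produce noisy, blocky depth maps; the smoothness and coding-efficiency improvements discussed in the paper address exactly that weakness.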
