TALK  |  User-guided 2D-to-3D Conversion

Date released: Feb 21, 2012

  • Date & Time:

    Tuesday, February 21, 2012; 12:00 PM

  • Abstract:

    The problem of converting monoscopic footage into stereoscopic or multi-view content is inherently difficult and ill-posed. On the surface, this does not appear to be the case, as the problem may be summed up as, "Given a single-view image or video, create one or more views as if they had been taken from a different camera position." However, capturing a three-dimensional scene as a two-dimensional image is a lossy process, and any information regarding the distance of objects from the camera is lost. Methods exist for extracting depth information from a monoscopic view, and it is possible to obtain metrically correct depth estimates under certain conditions. But since conversion is primarily used as a post-processing stage in film production, the user requires a degree of control over the results. This, in turn, makes the problem ill-posed, as there is no way to know ahead of time what the user wants from the conversion. In this talk we will present the work being done at Ryerson University on user-guided 2D-to-3D conversion. In particular, we will focus on how existing image segmentation techniques may be combined to produce reasonable depth maps for conversion while still providing complete control to the user. We will also discuss how our research can be applied to both images and video without any significant alterations to our methods.
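    To make the conversion pipeline concrete, once a depth map has been produced (user-guided or estimated), a second view is typically synthesized by depth-image-based rendering: each pixel is shifted horizontally by a disparity proportional to its depth. The sketch below is illustrative only and is not the speakers' method; the function name, the linear depth-to-disparity mapping, and the far-to-near painting order are all assumptions made for this example.

    ```python
    import numpy as np

    def synthesize_view(image, depth, max_disparity=16):
        """Naive depth-image-based rendering (DIBR) sketch.

        image: (H, W) or (H, W, C) array.
        depth: (H, W) array in [0, 1], where 1.0 is nearest to the
               camera and therefore receives the largest shift.
        Returns the shifted view and a mask of filled pixels
        (unfilled pixels are disocclusion holes that a real system
        would inpaint).
        """
        h, w = depth.shape
        out = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        # Integer disparity, linear in depth (an assumption; real
        # systems use a calibrated depth-to-disparity mapping).
        disp = np.round(depth * max_disparity).astype(int)
        # Paint far-to-near so nearer pixels overwrite farther ones,
        # giving correct occlusion handling.
        order = np.argsort(depth, axis=None)  # ascending = far first
        ys, xs = np.unravel_index(order, depth.shape)
        for y, x in zip(ys, xs):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
        return out, filled
    ```

    The holes left where foreground objects shift away from the background are exactly why reasonable depth maps matter: segmentation-aligned depth keeps disocclusion artifacts confined to object boundaries, where they are easiest to fill.
    
    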

  • Speaker:

    Dimitri Androutsos, Richard Rzeszutek
    Ryerson University

  • MERL Host:

    Anthony Vetro

  • Research Area: