TR2012-030

Variable Focus Video: Reconstructing Depth and Video for Dynamic Scenes


    •  Shroff, N.; Veeraraghavan, A.; Taguchi, Y.; Tuzel, O.; Agrawal, A.; Chellappa, R., "Variable Focus Video: Reconstructing Depth and Video for Dynamic Scenes", IEEE International Conference on Computational Photography (ICCP), April 2012.
      @inproceedings{Shroff2012apr,
        author = {Shroff, N. and Veeraraghavan, A. and Taguchi, Y. and Tuzel, O. and Agrawal, A. and Chellappa, R.},
        title = {Variable Focus Video: Reconstructing Depth and Video for Dynamic Scenes},
        booktitle = {IEEE International Conference on Computational Photography (ICCP)},
        year = 2012,
        month = apr,
        url = {http://www.merl.com/publications/TR2012-030}
      }
  Research Areas: Computational Photography, Computer Vision


Traditional depth from defocus (DFD) algorithms assume that the camera and the scene are static during acquisition time. In this paper, we examine the effects of camera and scene motion on DFD algorithms. We show that, given accurate estimates of optical flow (OF), one can robustly warp the focal stack (FS) images to obtain a virtual static FS and apply traditional DFD algorithms on the static FS. Acquiring accurate OF in the presence of varying focal blur is a challenging task. We show how defocus blur variations cause inherent biases in the estimates of optical flow. We then show how to robustly handle these biases and compute accurate OF estimates in the presence of varying focal blur. This leads to an architecture and an algorithm that converts a traditional 30 fps video camera into a co-located 30 fps image and range sensor. Further, the ability to extract image and range information allows us to render images with artistic depth of field effects, both extending and reducing the depth of field of the captured images. We demonstrate experimental results on challenging scenes captured using a camera prototype.
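The warp-then-DFD pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration in pure NumPy, not the authors' implementation: `warp_with_flow` backward-warps a moving focal-stack frame onto the reference frame using a given dense optical flow field, and `depth_from_focus` stands in for the DFD step with a simple Laplacian focus measure applied to the virtually static stack.

```python
import numpy as np

def warp_with_flow(image, flow):
    """Backward-warp a grayscale `image` (H x W) onto the reference frame
    using a dense flow field `flow` (H x W x 2), where flow[y, x] gives the
    (dx, dy) displacement to the source pixel. Bilinear interpolation;
    samples are clamped at the image border. Hypothetical helper."""
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    sx = np.clip(xs + flow[..., 0], 0, W - 1)
    sy = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    wx, wy = sx - x0, sy - y0
    return ((1 - wy) * ((1 - wx) * image[y0, x0] + wx * image[y0, x1])
            + wy * ((1 - wx) * image[y1, x0] + wx * image[y1, x1]))

def depth_from_focus(stack):
    """For each pixel, return the index of the stack frame with the highest
    local contrast (Laplacian magnitude). A toy stand-in for DFD on the
    virtual static focal stack; the paper's actual DFD algorithm differs."""
    measures = []
    for img in stack:
        lap = np.abs(4.0 * img
                     - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
                     - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1))
        measures.append(lap)
    return np.argmax(np.stack(measures), axis=0)
```

With accurate flow, each frame of the stack is registered to a common time instant, after which any static-scene DFD method can be applied per pixel; the per-frame flow fields are what the paper estimates robustly despite the varying focal blur.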