Shape from Sharp and Motion-Blurred Image Pair

Published in: International Journal of Computer Vision

Abstract

Motion blur due to camera shake is a common occurrence. During image capture, the apparent motion of a scene point in the image plane varies with both the camera motion and the scene structure. Our objective is to infer the camera motion and the depth map of a static scene using motion blur as a cue. To this end, we use an unblurred–blurred image pair. We first develop a technique to estimate the transformation spread function (TSF), which represents the camera shake; this technique uses blur kernels estimated at different points across the image. Based on the estimated TSF, we then recover the complete depth map of the scene within a regularization framework.
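The imaging model underlying this approach can be sketched as follows: a motion-blurred image is a weighted average of transformed copies of the sharp image, where the weights form the TSF and the per-pixel displacement depends on scene depth. A minimal toy illustration in Python, assuming a TSF restricted to in-plane integer translations and a constant depth per call (the paper handles general camera shake and a full per-pixel depth map; `blur_with_tsf` and its arguments are hypothetical names for this sketch, not the authors' implementation):

```python
import numpy as np

def blur_with_tsf(sharp, tsf, depth):
    """Synthesize a motion-blurred image from a sharp one.

    tsf   : list of ((tx, ty), weight) pairs -- a toy transformation
            spread function restricted to in-plane translations, with
            weights summing to 1.
    depth : scene depth; the image-plane shift of a scene point is
            inversely proportional to its depth, which is why blur
            carries depth information. A constant depth is assumed
            here for simplicity (the paper recovers a full depth map).
    """
    blurred = np.zeros_like(sharp, dtype=float)
    for (tx, ty), w in tsf:
        # Image-plane displacement scales as 1/depth; round to an
        # integer pixel shift so np.roll can apply it.
        dx = int(round(tx / depth))
        dy = int(round(ty / depth))
        blurred += w * np.roll(np.roll(sharp, dy, axis=0), dx, axis=1)
    return blurred
```

With a degenerate TSF (a single zero shift with weight 1) the model returns the sharp image unchanged; spreading the weight over several shifts produces the blur, and the same camera shake blurs near points more than far ones because of the 1/depth scaling.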



Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments and suggestions. They gratefully acknowledge the support given by the Department of Atomic Energy Science Research Council, India.

Author information

Corresponding author

Correspondence to C. Paramanand.

Cite this article

Paramanand, C., Rajagopalan, A.N. Shape from Sharp and Motion-Blurred Image Pair. Int J Comput Vis 107, 272–292 (2014). https://doi.org/10.1007/s11263-013-0685-1
