Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring

Published in: International Journal of Computer Vision

Abstract

The classical approach to depth from defocus (DFD) captures images through lenses with circular apertures. We show in this paper that using a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. Optimizing this criterion with a genetic algorithm followed by gradient descent search yields a pair of high-resolution apertures. The two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them not only to recover depth with greater fidelity but also to obtain a high-quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.
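The complementary-spectrum property described in the abstract lends itself to joint deconvolution: a frequency attenuated by one aperture can be recovered from the image taken with the other. The sketch below illustrates that general idea with a joint Wiener filter; it is not the paper's actual estimator, and the two box kernels are purely illustrative stand-ins for the defocus blurs of the optimized aperture pair.

```python
import numpy as np

def joint_wiener_deblur(b1, b2, k1, k2, c2=1e-3):
    """Recover a sharp image from two observations blurred by different
    kernels, via a joint Wiener filter in the Fourier domain:

        F_hat = (conj(K1)*B1 + conj(K2)*B2) / (|K1|^2 + |K2|^2 + c2)

    A frequency suppressed by one kernel can still be recovered as long
    as the other kernel preserves it -- the motivation for choosing
    apertures with complementary spectra.  c2 is a small regularization
    constant standing in for the noise-to-signal power ratio.
    """
    K1 = np.fft.fft2(k1, b1.shape)  # zero-pad kernels to image size
    K2 = np.fft.fft2(k2, b1.shape)
    num = np.conj(K1) * np.fft.fft2(b1) + np.conj(K2) * np.fft.fft2(b2)
    den = np.abs(K1) ** 2 + np.abs(K2) ** 2 + c2
    return np.real(np.fft.ifft2(num / den))
```

For instance, a horizontal and a vertical two-tap average blur are complementary in exactly this sense: each zeroes out a Nyquist line in the spectrum that the other preserves, so their joint denominator stays well-conditioned almost everywhere.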


References

  • Campos, J., & Yzuel, M. (1989). Axial and extra-axial responses in aberrated optical systems with apodizers. Journal of Modern Optics, 36(6), 733–749.

  • Caroli, E., Stephen, J., Cocco, G., Natalucci, L., & Spizzichino, A. (1987). Coded aperture imaging in X- and gamma-ray astronomy. Space Science Reviews, 45, 349–403.

  • Dowski, E. (1993). Passive ranging with an incoherent optical system. PhD thesis, University of Colorado, Boulder, CO.

  • Dowski, E., & Johnson, G. (1999). Wavefront coding: A modern method of achieving high performance and/or low cost imaging systems. Proceedings of SPIE, 3779, 137–145.

  • Farid, H., & Simoncelli, E. (1998). Range estimation by optical differentiation. Journal of the Optical Society of America A, 15(7), 1777–1786.

  • Favaro, P., & Soatto, S. (2000). Shape and radiance estimation from the information divergence of blurred images. In ECCV (pp. 755–768).

  • Favaro, P., & Soatto, S. (2005). A geometric approach to shape from defocus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3), 406–417.

  • Girod, B., & Adelson, E. (1990). System for ascertaining direction of blur in a range-from-defocus camera. US Patent 4,965,442.

  • Gottesman, S., & Fenimore, E. (1989). New family of binary arrays for coded aperture imaging. Applied Optics, 28(20), 4344–4352.

  • Greengard, A., Schechner, Y., & Piestun, R. (2006). Depth from diffracted rotation. Optics Letters, 31(2), 181–183.

  • Hasinoff, S., & Kutulakos, K. (2009). Confocal stereo. International Journal of Computer Vision, 81(1), 82–104.

  • Hausler, G. (1972). A method to increase the depth of focus by two step image processing. Optics Communications, 6, 38–42.

  • Hiura, S., & Matsuyama, T. (1998). Depth measurement by the multi-focus camera. In CVPR (pp. 953–959).

  • Klarquist, W., Geisler, W., & Bovik, A. (1995). Maximum-likelihood depth-from-defocus for active vision. In IEEE/RSJ international conference on intelligent robots and systems (pp. 374–379).

  • Levin, A., Fergus, R., Durand, F., & Freeman, W. (2007). Image and depth from a conventional camera with a coded aperture. ACM Transactions on Graphics, 26(3), 70.

  • Levin, A., Hasinoff, S., Green, P., Durand, F., & Freeman, W. (2009). 4D frequency analysis of computational cameras for depth of field extension. ACM Transactions on Graphics, 28(3), 97.

  • Liang, C., Lin, T., Wong, B., Liu, C., & Chen, H. (2008). Programmable aperture photography: multiplexed light field acquisition. ACM Transactions on Graphics, 27(3), 1–10.

  • Mills, J., & Thompson, B. (1986). Effect of aberrations and apodization on the performance of coherent optical systems. Journal of the Optical Society of America A, 3(5), 694–703.

  • Mino, M., & Okano, Y. (1971). Improvement in the OTF of a defocused optical system through the use of shaded apertures. Applied Optics, 10, 2219–2225.

  • Nagahara, H., Kuthirummal, S., Zhou, C., & Nayar, S. (2008). Flexible depth of field photography. In ECCV (Vol. 3).

  • Nayar, S., Watanabe, M., & Noguchi, M. (1996). Real-time focus range sensor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(12), 1186–1198.

  • Ojeda-Castaneda, J., Andres, P., & Diaz, A. (1986). Annular apodizers for low sensitivity to defocus and to spherical aberration. Optics Letters, 11(8), 487–489.

  • Ojeda-Castañeda, J., Berriel-Valdos, L., & Montes, E. (1987). Bessel annular apodizers: imaging characteristics. Applied Optics, 26, 2770–2772.

  • Ojeda-Castaneda, J., Ramos, R., & Noyola-Isgleas, A. (1988). High focal depth by apodization and digital restoration. Applied Optics, 27(12), 2583–2586.

  • Pentland, A. (1987). A new sense for depth of field. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(4), 423–430.

  • Piana, M., & Bertero, M. (1996). Regularized deconvolution of multiple images of the same object. Journal of the Optical Society of America A, 13(7), 1516–1523.

  • Rajagopalan, A., & Chaudhuri, S. (1997a). A variational approach to recovering depth from defocused images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(10), 1159.

  • Rajagopalan, A., & Chaudhuri, S. (1997b). Optimal selection of camera parameters for recovery of depth from defocused images. In CVPR (pp. 219–224).

  • Rajagopalan, A., & Chaudhuri, S. (1999). An MRF model-based approach to simultaneous recovery of depth and restoration from defocused images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(7), 577.

  • Rajan, D., & Chaudhuri, S. (2003). Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9), 1102–1117.

  • Rav-Acha, A., & Peleg, S. (2005). Two motion-blurred images are better than one. Pattern Recognition Letters, 26(3), 311–318.

  • Schechner, Y., & Kiryati, N. (1993). The optimal axial interval in estimating depth from defocus. In ICCV (pp. 843–848).

  • Schechner, Y., & Kiryati, N. (1998). Depth from defocus vs. stereo: How different really are they? In International conference on pattern recognition (pp. 1784–1786).

  • Subbarao, M., & Gurumoorthy, N. (1988). Depth recovery from blurred edges. In CVPR (pp. 498–503).

  • Subbarao, M., & Surya, G. (1994). Depth from defocus: A spatial domain approach. International Journal of Computer Vision, 13(3), 271–294.

  • Subbarao, M., & Tyan, J. (1997). Noise sensitivity analysis of depth-from-defocus by a spatial-domain approach. Proceedings of SPIE, 3174, 174.

  • Van der Schaaf, A., & Van Hateren, J. (1996). Modelling the power spectra of natural images: statistics and information. Vision Research, 36(17), 2759–2770.

  • Varamit, C., & Indebetouw, G. (1985). Imaging properties of defocused partitioned pupils. Journal of the Optical Society of America A, 2(6), 799–802.

  • Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., & Tumblin, J. (2007). Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Transactions on Graphics, 26(3), 69.

  • Watanabe, M., & Nayar, S. (1998). Rational filters for passive depth from defocus. International Journal of Computer Vision, 27(3), 203–225.

  • Weiss, Y., & Freeman, W. (2007). What makes a good model of natural images? In CVPR (pp. 1–8).

  • Welford, W. (1960). Use of annular apertures to increase focal depth. Journal of the Optical Society of America, 50(8), 749–753.

  • Xiong, Y., & Shafer, S. (1993). Depth from focusing and defocusing. In CVPR (pp. 68–73).

  • Zhou, C., & Nayar, S. (2009). What are good apertures for defocus deblurring? In IEEE international conference on computational photography, San Francisco, USA.

  • Zhou, C., Lin, S., & Nayar, S. (2009). Coded aperture pairs for depth from defocus. In ICCV, Kyoto, Japan.


Author information

Corresponding author

Correspondence to Changyin Zhou.

Additional information

This is the extended version of a paper that appeared in Zhou et al. (2009).

Electronic Supplementary Material

Supplementary material is available as a PDF (16.7 MB).


Cite this article

Zhou, C., Lin, S. & Nayar, S.K. Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring. Int J Comput Vis 93, 53–72 (2011). https://doi.org/10.1007/s11263-010-0409-8
