Abstract
The classical approach to depth from defocus (DFD) captures images using lenses with circular apertures. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high-resolution apertures. The two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them not only to recover depth with greater fidelity but also to obtain a high-quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.
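As a rough illustration of the two-image selection principle the abstract describes, the sketch below estimates depth by testing hypothesized blur pairs: for each candidate depth, the two blurred observations are jointly deconvolved (Wiener-style) into a single sharp estimate, and the depth whose PSF pair best re-explains both observations is selected. This is a minimal stand-in, not the paper's method: it uses a 1D signal and Gaussian optical transfer functions whose widths scale with depth in place of the optimized coded-aperture pair, and all names (`psf_pair_otf`, the width factors 0.8 and 1.3) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
f = rng.standard_normal(n)           # unknown sharp scene (1D stand-in)
F = np.fft.fft(f)
w = np.fft.fftfreq(n) * 2 * np.pi    # angular frequencies

def psf_pair_otf(depth):
    # Hypothetical stand-in: two Gaussian OTFs whose widths grow with depth,
    # playing the role of the two (depth-scaled) aperture PSFs.
    s1, s2 = 0.8 * depth, 1.3 * depth
    return np.exp(-0.5 * (s1 * w) ** 2), np.exp(-0.5 * (s2 * w) ** 2)

true_depth = 3.0
K1, K2 = psf_pair_otf(true_depth)
F1, F2 = F * K1, F * K2              # spectra of the two captured (blurred) images

def residual(depth, eps=1e-6):
    # Joint Wiener-style estimate of the sharp spectrum from both images,
    # then the reconstruction error when re-blurring with the hypothesized pair.
    K1h, K2h = psf_pair_otf(depth)
    denom = np.abs(K1h) ** 2 + np.abs(K2h) ** 2 + eps
    Fhat = (np.conj(K1h) * F1 + np.conj(K2h) * F2) / denom
    return np.sum(np.abs(F1 - K1h * Fhat) ** 2 + np.abs(F2 - K2h * Fhat) ** 2)

depths = np.arange(1.0, 6.0, 0.5)
est = depths[np.argmin([residual(d) for d in depths])]
```

Because the two kernels attenuate different frequencies, the residual is near zero only at the true depth; a complementary aperture pair sharpens this minimum, which is the intuition behind the evaluation criterion above.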
References
Campos, J., & Yzuel, M. (1989). Axial and extra-axial responses in aberrated optical systems with apodizers. Journal of Modern Optics, 36(6), 733–749.
Caroli, E., Stephen, J., Cocco, G., Natalucci, L., & Spizzichino, A. (1987). Coded aperture imaging in X- and Gamma-ray astronomy. Space Science Reviews, 349–403.
Dowski, E. (1993). Passive ranging with an incoherent optical system. PhD Thesis, Colorado Univ., Boulder, CO.
Dowski, E., & Johnson, G. (1999). Wavefront coding: A modern method of achieving high performance and/or low cost imaging systems. Proceedings of SPIE, 3779, 137–145.
Farid, H., & Simoncelli, E. (1998). Range estimation by optical differentiation. Journal of the Optical Society of America A, 15(7), 1777–1786.
Favaro, P., & Soatto, S. (2000). Shape and radiance estimation from the information divergence of blurred images. In ECCV (pp. 755–768).
Favaro, P., & Soatto, S. (2005). A geometric approach to shape from defocus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(3), 406–417.
Girod, B., & Adelson, E. (1990). System for ascertaining direction of blur in a range-from-defocus camera. US Patent 4,965,442.
Gottesman, S., & Fenimore, E. (1989). New family of binary arrays for coded aperture imaging. Applied Optics, 28, 4344–4352.
Greengard, A., Schechner, Y., & Piestun, R. (2006). Depth from diffracted rotation. Optics Letters, 31(2), 181–183.
Hasinoff, S., & Kutulakos, K. (2009). Confocal stereo. International Journal of Computer Vision, 81(1), 82–104.
Hausler, G. (1972). A method to increase the depth of focus by two step image processing. Optics Communications, 6, 38–42.
Hiura, S., & Matsuyama, T. (1998). Depth measurement by the multi-focus camera. In CVPR (pp. 953–959).
Klarquist, W., Geisler, W., & Bovik, A. (1995). Maximum-likelihood depth-from-defocus for active vision. In IEEE/RSJ international conference on intelligent robots and systems (pp. 374–379).
Levin, A., Fergus, R., Durand, F., & Freeman, W. (2007). Image and depth from a conventional camera with a coded aperture. Proceedings of ACM SIGGRAPH, 26(3), 70.
Levin, A., Hasinoff, S., Green, P., Durand, F., & Freeman, W. (2009). 4D frequency analysis of computational cameras for depth of field extension. Proceedings of ACM SIGGRAPH, 28(3), 97.
Liang, C., Lin, T., Wong, B., Liu, C., & Chen, H. (2008). Programmable aperture photography: multiplexed light field acquisition. Proceedings of ACM SIGGRAPH, 27(3), 1–10.
Mills, J., & Thompson, B. (1986). Effect of aberrations and apodization on the performance of coherent optical systems. Journal of the Optical Society of America A, 3(5), 694–703.
Mino, M., & Okano, Y. (1971). Improvement in the OTF of a defocused optical system through the use of shaded apertures. Applied Optics, 10, 2219–2225.
Nagahara, H., Kuthirummal, S., Zhou, C., & Nayar, S. (2008). Flexible depth of field photography. In ECCV (Vol. 3).
Nayar, S., Watanabe, M., & Noguchi, M. (1996). Real-time focus range sensor. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(12), 1186–1198.
Ojeda-Castaneda, J., Andres, P., & Diaz, A. (1986). Annular apodizers for low sensitivity to defocus and to spherical aberration. Optics Letters, 11(8), 487–489.
Ojeda-Castañeda, J., Berriel-Valdos, L., & Montes, E. (1987). Bessel annular apodizers: imaging characteristics. Applied Optics, 26, 2770–2772.
Ojeda-Castaneda, J., Ramos, R., & Noyola-Isgleas, A. (1988). High focal depth by apodization and digital restoration. Applied Optics, 27(12), 2583–2586.
Pentland, A. (1987). A new sense for depth of field. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(4), 423–430.
Piana, M., & Bertero, M. (1996). Regularized deconvolution of multiple images of the same object. Journal of the Optical Society of America A, 13(7), 1516–1523.
Rajagopalan, A., & Chaudhuri, S. (1997a). A variational approach to recovering depth from defocused images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(10), 1159.
Rajagopalan, A., & Chaudhuri, S. (1997b). Optimal selection of camera parameters for recovery of depth from defocused images. In CVPR (pp. 219–224).
Rajagopalan, A., & Chaudhuri, S. (1999). An MRF model-based approach to simultaneous recovery of depth and restoration from defocused images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(7), 577.
Rajan, D., & Chaudhuri, S. (2003). Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9), 1102–1117.
Rav-Acha, A., & Peleg, S. (2005). Two motion-blurred images are better than one. Pattern Recognition Letters, 26(3), 311–318.
Schechner, Y., & Kiryati, N. (1993). The optimal axial interval in estimating depth from defocus. In ICCV (pp. 843–848).
Schechner, Y., & Kiryati, N. (1998). Depth from defocus vs. stereo: How different really are they? In ICPR (pp. 1784–1786).
Subbarao, M., & Gurumoorthy, N. (1988). Depth recovery from blurred edges. In CVPR (pp. 498–503).
Subbarao, M., & Surya, G. (1994). Depth from defocus: A spatial domain approach. International Journal of Computer Vision, 13(3), 271–294.
Subbarao, M., & Tyan, J. (1997). Noise sensitivity analysis of depth-from-defocus by a spatial-domain approach. Proceedings of SPIE, 3174, 174.
Van der Schaaf, A., & Van Hateren, J. (1996). Modelling the power spectra of natural images: statistics and information. Vision Research, 36(17), 2759–2770.
Varamit, C., & Indebetouw, G. (1985). Imaging properties of defocused partitioned pupils. Journal of the Optical Society of America A, 2(6), 799–802.
Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., & Tumblin, J. (2007). Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Transactions on Graphics, 26(3), 69.
Watanabe, M., & Nayar, S. (1998). Rational filters for passive depth from defocus. International Journal of Computer Vision, 27(3), 203–225.
Weiss, Y., & Freeman, W. (2007). What makes a good model of natural images. In CVPR (pp. 1–8).
Welford, W. (1960). Use of annular apertures to increase focal depth. Journal of the Optical Society of America, 50, 749–753.
Xiong, Y., & Shafer, S. (1993). Depth from focusing and defocusing. In CVPR (p. 68).
Zhou, C., & Nayar, S. (2009). What are good apertures for defocus deblurring. In International conference of computational photography, San Francisco, USA.
Zhou, C., Lin, S., & Nayar, S. (2009). Coded aperture pairs for depth from defocus. In ICCV, Kyoto, Japan.
Additional information
This is the extended version of a paper that appeared in Zhou et al. (2009).
Zhou, C., Lin, S. & Nayar, S.K. Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring. Int J Comput Vis 93, 53–72 (2011). https://doi.org/10.1007/s11263-010-0409-8