
You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network

International Journal of Computer Vision

Abstract

In this paper, we study two challenging and less-explored problems in single image dehazing: how to make deep learning achieve image dehazing without training on ground-truth clean images (unsupervised) and without training on an image collection (untrained). An unsupervised model avoids the intensive labor of collecting hazy-clean image pairs, and an untrained model is a “real” single image dehazing approach that removes haze based on the observed hazy image alone, with no extra images. Motivated by layer disentanglement, we propose a novel method, called You Only Look Yourself (YOLY), which could be one of the first unsupervised and untrained neural networks for image dehazing. In brief, YOLY employs three joint subnetworks to separate the observed hazy image into several latent layers, i.e., a scene radiance layer, a transmission map layer, and an atmospheric light layer. These three layers are then recomposed into the hazy image in a self-supervised manner. Thanks to its unsupervised and untrained characteristics, YOLY bypasses the conventional paradigm of training deep models on hazy-clean pairs or a large-scale dataset, thus avoiding labor-intensive data collection and the domain shift issue. Moreover, thanks to its layer disentanglement mechanism, our method also provides an effective learning-based haze transfer solution. Extensive experiments show the promising performance of our method in image dehazing compared with 14 methods on six databases. The code can be accessed at www.pengxi.me.
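
To make the decomposition-and-recomposition idea concrete, the sketch below shows how three subnetworks could split a hazy image into a scene radiance layer J, a transmission map layer t, and an atmospheric light layer A, and then recompose them under the standard atmospheric scattering model I = J · t + A · (1 − t), optimizing on the single observed image alone. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation: `conv_block`, `YOLYSketch`, and `dehaze_single_image` are hypothetical names, the subnetwork architectures are placeholders, and only a plain reconstruction loss is used, whereas YOLY's actual losses and network designs are available at www.pengxi.me.

```python
# A minimal sketch, assuming a PyTorch setup. The names and architectures
# below (conv_block, YOLYSketch, dehaze_single_image) are hypothetical
# placeholders, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Small conv-BN-ReLU stack used as a placeholder building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class YOLYSketch(nn.Module):
    """Three joint subnetworks that disentangle a hazy image into latent layers."""

    def __init__(self):
        super().__init__()
        self.j_net = nn.Sequential(conv_block(3, 16), conv_block(16, 16),
                                   nn.Conv2d(16, 3, 1), nn.Sigmoid())  # scene radiance
        self.t_net = nn.Sequential(conv_block(3, 16), conv_block(16, 16),
                                   nn.Conv2d(16, 1, 1), nn.Sigmoid())  # transmission map
        self.a_net = nn.Sequential(conv_block(3, 16), conv_block(16, 16),
                                   nn.Conv2d(16, 3, 1), nn.Sigmoid())  # atmospheric light

    def forward(self, hazy):
        J = self.j_net(hazy)
        t = self.t_net(hazy)
        A = self.a_net(hazy)
        # Recompose the three layers into a hazy image via the scattering model.
        recomposed = J * t + A * (1.0 - t)
        return J, t, A, recomposed


def dehaze_single_image(hazy, steps=500, lr=1e-3):
    """Fit the model to one observed hazy image (no clean targets, no dataset).

    hazy: a (1, 3, H, W) tensor with values in [0, 1].
    """
    model = YOLYSketch()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        _, _, _, recomposed = model(hazy)
        # Self-supervised objective: the recomposed image should match the input.
        loss = F.mse_loss(recomposed, hazy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        J, _, _, _ = model(hazy)
    return J  # the estimated scene radiance serves as the dehazed output
```

Under the same assumptions, haze transfer would amount to recomposing the scene radiance estimated from one image with the transmission map and atmospheric light estimated from another hazy image.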



Acknowledgements

The authors would like to thank the anonymous reviewers for their constructive comments and valuable suggestions, which greatly improved this work. This work was supported in part by the NSFC under Grants U19A2081, U19A2078, 61625204, and 61836006; in part by the Fundamental Research Funds for the Central Universities under Grant YJ201949; in part by the Fund of Sichuan University Tomorrow Advancing Life; and in part by A*STAR AME Programmatic under Grant A18A1b0045.

Author information


Corresponding author

Correspondence to Xi Peng.

Additional information

Communicated by Vishal Patel.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Boyun Li and Yuanbiao Gou have contributed equally to this work.


About this article


Cite this article

Li, B., Gou, Y., Gu, S. et al. You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network. Int J Comput Vis 129, 1754–1767 (2021). https://doi.org/10.1007/s11263-021-01431-5

