
AIM 2020: Scene Relighting and Illumination Estimation Challenge

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNIP,volume 12537))

Abstract

We review the AIM 2020 challenge on virtual image relighting and illumination estimation. This paper presents the novel VIDIT dataset used in the challenge, the proposed solutions, and the final evaluation results across the three challenge tracks. The first track considered one-to-one relighting: the objective was to relight an input photo of a scene with a different color temperature and illuminant orientation (i.e., light source position). The goal of the second track was to estimate illumination settings, namely the color temperature and orientation, from a given image. Lastly, the third track dealt with any-to-any relighting, a generalization of the first track: the target color temperature and orientation, rather than being pre-determined, are given by a guide image. Participants were allowed to reuse their track 1 and track 2 solutions for track 3. The tracks had 94, 52, and 56 registered participants, respectively, leading to 20 confirmed submissions in the final competition stage.
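The track structure above can be made concrete with a small sketch. The specific values below (five color temperatures and eight azimuthal light directions, giving 40 discrete illumination settings) follow the VIDIT dataset description and should be treated as assumptions here; the function names are purely illustrative, not the organizers' API.

```python
# Illustrative sketch of the discrete illumination-setting space used by the
# challenge tracks. Values are assumptions based on the VIDIT description:
# 5 color temperatures x 8 light directions = 40 settings.
from itertools import product

COLOR_TEMPS_K = [2500, 3500, 4500, 5500, 6500]             # assumed temperatures (Kelvin)
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]  # assumed 8 azimuthal orientations

# Every (temperature, direction) pair is one illumination setting.
SETTINGS = list(product(COLOR_TEMPS_K, DIRECTIONS))

def one_to_one_target():
    """Track 1: the target setting is fixed in advance for all inputs
    (the specific pair here is a made-up example)."""
    return (4500, "E")

def estimate_settings(image):
    """Track 2: an estimator maps an input image to one of the 40 settings.
    Stub only; real solutions in the challenge were trained networks."""
    raise NotImplementedError

def any_to_any_target(guide_image):
    """Track 3: the target setting is whatever illumination the guide image
    exhibits, i.e. the output of a track-2-style estimator on the guide."""
    return estimate_settings(guide_image)

print(len(SETTINGS))  # 40 discrete illumination settings
```

This framing also makes the stated relationship between the tracks explicit: track 3 composes the track 2 estimation task (applied to the guide image) with the track 1 relighting task.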

M. El Helou, R. Zhou, S. Süsstrunk, and R. Timofte are the challenge organizers, and the other authors are challenge participants.

Appendix A lists all the teams and affiliations.

The VIDIT dataset is available at https://github.com/majedelhelou/VIDIT.



Acknowledgements

We thank all AIM 2020 sponsors: Huawei, MediaTek, NVIDIA, Qualcomm, Google and CVL, ETH Zurich (https://data.vision.ee.ethz.ch/cvl/aim20/). We also note that all tracks were supported by the CodaLab infrastructure (https://competitions.codalab.org).

Author information

Authors and Affiliations


Corresponding author

Correspondence to Majed El Helou.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1322 KB)

A Teams and Affiliations


AIM challenge organizers

Members: Majed El Helou, Ruofan Zhou, Sabine Süsstrunk ({majed.elhelou, ruofan.zhou, sabine.susstrunk}@epfl.ch, EPFL, Switzerland), and Radu Timofte (radu.timofte@vision.ee.ethz.ch, ETH Zürich, Switzerland).

– AiRiA_CG –

Members: Yu Zhu (zhuyu.cv@gmail.com), Liping Dong, Zhuolong Jiang, Chenghua Li, Cong Leng, Jian Cheng

Affiliation: Nanjing Artificial Intelligence Chip Research, Institute of Automation, Chinese Academy of Sciences (AiRiA); MAICRO.

– CET_CVLab –

Members: Densen Puthussery (puthusserydensen@gmail.com), Hrishikesh P S, Melvin Kuriakose, Jiji C V

Affiliation: College of Engineering, Trivandrum, India.

– debut_kele –

Members: Kele Xu (kelele.xu@gmail.com), Hengxing Cai, Yuzhong Liu

Affiliation: National University of Defense Technology, China.

– DeepRelight –

Members: Li-Wen Wang\(^1\) (liwen.wang@connect.polyu.hk), Zhi-Song Liu\(^{1,2}\), Chu-Tak Li\(^1\), Wan-Chi Siu\(^1\), Daniel P. K. Lun\(^1\)

Affiliation: \(^1\)Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, \(^2\)CS laboratory at the Ecole Polytechnique (Palaiseau).

– Hertz –

Members: Sourya Dipta Das\(^1\) (dipta.juetce@gmail.com), Nisarg A. Shah\(^2\), Akashdeep Jassal\(^3\)

Affiliation: \(^1\)Jadavpur University, Kolkata, India, \(^2\)Indian Institute of Technology Jodhpur, India, \(^3\)Punjab Engineering College (PEC), Chandigarh, India.

– Image Lab –

Members: Sabari Nathan\(^1\) (sabarinathantce@gmail.com), M. Parisa Beham\(^2\), R. Suganya\(^3\)

Affiliation: \(^1\)Couger Inc, Tokyo, Japan, \(^2\)Sethu Institute of Technology, India, \(^3\)Thiagarajar College of Engineering, India.

– IPCV_IITM –

Members: Maitreya Suin (maitreyasuin21@gmail.com), Kuldeep Purohit, A. N. Rajagopalan

Affiliation: Indian Institute of Technology Madras, India.

– lyl –

Members: Tongtong Zhao\(^1\) (daitoutiere@gmail.com), Shanshan Zhao\(^2\)

Affiliation: \(^1\)Dalian Maritime University, \(^2\)China Everbright Bank.

– NPU-CVPG –

Members: Zhongyun Hu (zy_h@mail.nwpu.edu.cn), Xin Huang, Yaning Li, Qing Wang

Affiliation: Computer Vision and Computational Photography Group, School of Computer Science, Northwestern Polytechnical University.

– RGETH –

Members: George Chogovadze (chogeorg@student.ethz.ch), Rémi Pautrat

Affiliation: ETH Zurich, Switzerland.

– YorkU –

Members: Mahmoud Afifi (mafifi@eecs.yorku.ca), Michael S. Brown

Affiliation: EECS, York University, Toronto, ON, Canada.

Rights and permissions

Reprints and permissions

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

El Helou, M. et al. (2020). AIM 2020: Scene Relighting and Illumination Estimation Challenge. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12537. Springer, Cham. https://doi.org/10.1007/978-3-030-67070-2_30

  • DOI: https://doi.org/10.1007/978-3-030-67070-2_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67069-6

  • Online ISBN: 978-3-030-67070-2

  • eBook Packages: Computer Science (R0)
