A Deep Journey into Super-resolution: A Survey

Abstract

Deep convolutional network-based super-resolution is a fast-growing field with numerous practical applications. In this survey, we extensively compare more than 30 state-of-the-art super-resolution Convolutional Neural Networks (CNNs) over three classical and three recently introduced challenging datasets to benchmark single image super-resolution. We introduce a taxonomy for deep learning-based super-resolution networks that groups existing methods into nine categories, including linear, residual, multi-branch, recursive, progressive, attention-based, and adversarial designs. We also compare the models in terms of network complexity, memory footprint, model input and output, learning details, the types of network losses, and important architectural differences (e.g., depth, skip connections, filters). The extensive evaluation shows consistent and rapid growth in accuracy over the past few years, accompanied by a corresponding increase in model complexity and the availability of large-scale datasets. It also shows that the pioneering methods identified as benchmarks have been significantly outperformed by the current contenders. Despite this progress, we identify several shortcomings of existing techniques and outline future research directions toward the solution of these open problems. The datasets and code used for evaluation are publicly available at https://github.com/saeed-anwar/SRsurvey.
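To ground these comparisons, the sketch below illustrates, under an assumed PyTorch setup, two recurring ingredients of such benchmarks: a minimal "linear design" network in the spirit of early SRCNN-style models (a plain stack of convolutions applied to a bicubic-upsampled input) and the PSNR fidelity measure commonly reported on super-resolution test sets. The class name LinearSRNet and the toy tensors are hypothetical and are not taken from the survey's released code.

```python
# Illustrative sketch only; not the survey's evaluation code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearSRNet(nn.Module):
    """A plain feed-forward (non-residual) convolutional stack, the simplest
    category in the survey's taxonomy. It refines an already-upsampled image."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                    # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a super-resolved image and its
    high-resolution ground truth; both tensors share shape and value range."""
    mse = F.mse_loss(sr, hr).item()
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * torch.log10(torch.tensor(max_val ** 2 / mse)).item()


if __name__ == "__main__":
    # Toy example with random stand-in data: upsample a low-resolution patch
    # by 4x with bicubic interpolation, refine it with the (untrained) network,
    # and score the result against a stand-in ground-truth patch.
    lr = torch.rand(1, 3, 32, 32)
    hr = torch.rand(1, 3, 128, 128)
    upsampled = F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)
    sr = LinearSRNet()(upsampled).clamp(0, 1)
    print(f"PSNR vs. ground truth: {psnr(sr, hr):.2f} dB")
```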



Published in

ACM Computing Surveys, Volume 53, Issue 3 (May 2021), 787 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3403423

          Copyright © 2020 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 28 May 2020
          • Online AM: 7 May 2020
          • Revised: 1 March 2020
          • Accepted: 1 March 2020
          • Received: 1 January 2020

