[1] 孙旭,李晓光,李嘉锋,等.基于深度学习的图像超分辨率复原研究进展[J].自动化学报,2017,43(5):697-709.
SUN X,LI X G,LI J F,et al.Review on deep learning based image super-resolution restoration algorithms[J].Acta Automatica Sinica,2017,43(5):697-709.
[2] 杨帅锋.基于学习的超分辨率重建算法研究[D].北京:北京交通大学,2015.
YANG S F.Super-resolution reconstruction algorithms based on learning method[D].Beijing:Beijing Jiaotong University,2015.
[3] HARRIS J L.Diffraction and resolving power[J].Journal of the Optical Society of America,1964,54(7):931-936.
[4] GOODMAN J W.Introduction to Fourier optics[M].[S.l.]:Roberts and Company Publishers,2005.
[5] TSAI R Y,HUANG T S.Multiframe image restoration and registration[J].Advances in Computer Vision and Image Processing,1984,1:317-339.
[6] YANG J,WRIGHT J,HUANG T,et al.Image super-resolution via sparse representation[J].IEEE Transactions on Image Processing,2010,19(11):2861-2873.
[7] DONG C,LOY C C,HE K,et al.Image super-resolution using deep convolutional networks[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2015,38(2):295-307.
[8] 彭潇雨.基于卷积神经网络的单幅图像超分辨率重建[D].南昌:华东交通大学,2021.
PENG X Y.Super-resolution reconstruction of single image based on convolutional neural network[D].Nanchang:East China Jiaotong University,2021.
[9] KIM J,LEE J K,LEE K M.Accurate image super-resolution using very deep convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2016:1646-1654.
[10] HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2016:770-778.
[11] LEDIG C,THEIS L,HUSZÁR F,et al.Photo-realistic single image super-resolution using a generative adversarial network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2017:4681-4690.
[12] 乔昕,魏延.一种改进的SRGAN图像超分辨重建算法[J].计算机时代,2021(1):72-75.
QIAO X,WEI Y.Research on the algorithm of image super-resolution reconstruction with improved SRGAN[J].Computer Era,2021(1):72-75.
[13] WANG X,YU K,WU S,et al.ESRGAN:enhanced super-resolution generative adversarial networks[C]//Proceedings of the European Conference on Computer Vision (ECCV) Workshops.Berlin:Springer,2018:63-79.
[14] GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial networks[C]//Advances in Neural Information Processing Systems,2014:2672-2680.
[15] 王海迪.基于生成对抗网络的超分辨率图像重建方法研究[D].郑州:郑州大学,2021.
WANG H D.Research on super-resolution image reconstruction method based on generative adversarial networks[D].Zhengzhou:Zhengzhou University,2021.
[16] NASH J F.Non-cooperative games[J].Annals of Mathematics,1951,54(2):286-295.
[17] NASH J F.Equilibrium points in n-person games[J].Proceedings of the National Academy of Sciences,1950,36(1):48-49.
[18] MAAS A L,HANNUN A Y,NG A Y.Rectifier nonlinearities improve neural network acoustic models[C]//ICML Workshop on Deep Learning for Audio,Speech and Language Processing,2013.
[19] IOFFE S,SZEGEDY C.Batch normalization:accelerating deep network training by reducing internal covariate shift[C]//International Conference on Machine Learning,2015:448-456.
[20] HE K,ZHANG X,REN S,et al.Delving deep into rectifiers:surpassing human-level performance on ImageNet classification[C]//Proceedings of the IEEE International Conference on Computer Vision,2015:1026-1034.
[21] 张杨忆,林泓,管钰华,等.改进残差块和对抗损失的GAN图像超分辨率重建[J].哈尔滨工业大学学报,2019,51(11):128-137.
ZHANG Y Y,LIN H,GUAN Y H,et al.GAN image super-resolution reconstruction model with improved residual block and adversarial loss[J].Journal of Harbin Institute of Technology,2019,51(11):128-137.
[22] 纪鹏飞.基于卷积神经网络的图像超分辨率重建算法研究与应用[D].哈尔滨:哈尔滨理工大学,2021.
JI P F.Research and application of image super-resolution reconstruction algorithm based on convolutional neural network[D].Harbin:Harbin University of Science and Technology,2021.
[23] JOLICOEUR-MARTINEAU A.The relativistic discriminator:a key element missing from standard GAN[J].arXiv:1807.00734,2018.
[24] KIM J,LEE J K,LEE K M.Deeply-recursive convolutional network for image super-resolution[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,2016:1637-1645.