[1] SHELHAMER E,LONG J,DARRELL T.Fully convolutional networks for semantic segmentation[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,39(4):640-651.
[2] MIRZA M,OSINDERO S.Conditional generative adversarial nets[J].arXiv:1411.1784,2014.
[3] GOODFELLOW I J,POUGET-ABADIE J,MIRZA M,et al.Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems.Cambridge:MIT Press,2014:2672-2680.
[4] LIU Y F,QIN Z C,WAN T,et al.Auto-painter:Cartoon image generation from sketch by using conditional Wasserstein generative adversarial networks[J].Neurocomputing,2018,311:78-87.
[5] ZHANG L M,JI Y,LIN X,et al.Style transfer for anime sketches with enhanced residual U-net and auxiliary classifier GAN[C]//Proceedings of the Asian Conference on Pattern Recognition(ACPR 2017).Los Alamitos,CA:IEEE Computer Society,2018:506-511.
[6] ISOLA P,ZHU J Y,ZHOU T H,et al.Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Los Alamitos:IEEE Computer Society,2017:5967-5976.
[7] WU J,ZHANG Y,WANG K.Skip connection U-net for white matter hyperintensities segmentation from MRI[J].IEEE Access,2019,7:155194-155202.
[8] LIU S,ZHANG X.Image decolorization combining local features and exposure features[J].IEEE Transactions on Multimedia,2019,21(10):2461-2472.
[9] HE K,ZHANG X,REN S,et al.Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Los Alamitos,CA:IEEE Computer Society,2016:770-778.
[10] LIU S,ZHANG X.Automatic grayscale image colorization using histogram regression[J].Pattern Recognition Letters,2012,33:1673-1681.
[11] DENG S,XU J X.Adaptive salt-and-pepper denoising based on deep residual network[J].Journal of Computer-Aided Design & Computer Graphics,2020,32(8):1248-1257.
[12] LUCAS A,KATSAGGELOS A K,LOPEZ-TAPIA S,et al.Generative adversarial networks and perceptual losses for video super-resolution[C]//Proceedings of the 25th IEEE International Conference on Image Processing(ICIP).Los Alamitos,CA:IEEE Computer Society,2018:51-55.
[13] DAI B,FIDLER S,URTASUN R,et al.Towards diverse and natural image descriptions via a conditional GAN[C]//Proceedings of the IEEE International Conference on Computer Vision.Los Alamitos:IEEE Computer Society,2017:2989-2998.
[14] GAN C,GAN Z,HE X D,et al.StyleNet:Generating attractive visual captions with styles[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.Los Alamitos:IEEE Computer Society,2017:3137-3146.
[15] ZHANG R,ISOLA P,EFROS A A.Colorful image colorization[C]//Proceedings of the European Conference on Computer Vision.Heidelberg:Springer,2016:649-666.
[16] WANG Z,BOVIK A C,SHEIKH H R,et al.Image quality assessment:From error visibility to structural similarity[J].IEEE Transactions on Image Processing,2004,13(4):600-612.
[17] ZHANG L,ZHANG L,MOU X,et al.FSIM:A feature similarity index for image quality assessment[J].IEEE Transactions on Image Processing,2011,20(8):2378-2386.
[18] HEUSEL M,RAMSAUER H,UNTERTHINER T,et al.GANs trained by a two time-scale update rule converge to a Nash equilibrium[C]//Advances in Neural Information Processing Systems,2017:6626-6637.