[1] JING Y, YANG Y, FENG Z, et al. Neural style transfer: a review[J]. IEEE Transactions on Visualization and Computer Graphics, 2020, 26(11): 3365-3385.
[2] GATYS L A, ECKER A S, BETHGE M. A neural algorithm of artistic style[J]. arXiv:1508.06576, 2015.
[3] 唐稔为, 刘启和, 谭浩. 神经风格迁移模型综述[J]. 计算机工程与应用, 2021, 57(19): 32-43.
TANG R W, LIU Q H, TAN H. A review of neural style transfer models[J]. Computer Engineering and Applications, 2021, 57(19): 32-43.
[4] JOHNSON J, ALAHI A, FEI-FEI L. Perceptual losses for real-time style transfer and super-resolution[C]//Proceedings of the 14th European Conference on Computer Vision. Cham: Springer, 2016: 694-711.
[5] ULYANOV D, LEBEDEV V, VEDALDI A, et al. Texture networks: feed-forward synthesis of textures and stylized images[C]//Proceedings of the 33rd International Conference on Machine Learning, 2016: 1349-1357.
[6] ULYANOV D, VEDALDI A, LEMPITSKY V. Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 6924-6932.
[7] DUMOULIN V, SHLENS J, KUDLUR M. A learned representation for artistic style[J]. arXiv:1610.07629, 2016.
[8] HUANG X, BELONGIE S. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 1501-1510.
[9] LI Y, FANG C, YANG J, et al. Universal style transfer via feature transforms[C]//Advances in Neural Information Processing Systems 30, 2017.
[10] PARK D Y, LEE K H. Arbitrary style transfer with style-attentional networks[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 5880-5888.
[11] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems 27, 2014.
[12] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv:1411.1784, 2014.
[13] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv:1511.06434, 2015.
[14] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1125-1134.
[15] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 2223-2232.
[16] YI Z, ZHANG H, TAN P, et al. DualGAN: unsupervised dual learning for image-to-image translation[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 2849-2857.
[17] KIM T, CHA M, KIM H, et al. Learning to discover cross-domain relations with generative adversarial networks[C]//Proceedings of the 34th International Conference on Machine Learning, 2017: 1857-1865.
[18] ZHAO Y, WU R, DONG H. Unpaired image-to-image translation using adversarial consistency loss[C]//Proceedings of the 16th European Conference on Computer Vision. Cham: Springer, 2020: 800-815.
[19] 林锦, 陈昭炯, 叶东毅. 局部色彩可控的中国山水画仿真生成方法[J]. 小型微型计算机系统, 2021, 42(9): 1985-1991.
LIN J, CHEN Z J, YE D Y. Simulation generation method for Chinese landscape paintings with locally controllable colors[J]. Journal of Chinese Computer Systems, 2021, 42(9): 1985-1991.
[20] 滕少华, 袁萧勇, 张巍. 生成对抗网络的素描生成方法[J]. 小型微型计算机系统, 2022, 43(4): 852-857.
TENG S H, YUAN X Y, ZHANG W. Sketch generation method based on generative adversarial network[J]. Journal of Chinese Computer Systems, 2022, 43(4): 852-857.
[21] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 3-19.
[22] MA N, ZHANG X, LIU M, et al. Activate or not: learning customized activation[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 8032-8042.
[23] YANG G, ZHAO H, SHI J, et al. SegStereo: exploiting semantic information for disparity estimation[C]//Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 636-651.
[24] LIU Y Q, DU X, SHEN H L, et al. Estimating generalized Gaussian blur kernels for out-of-focus image deblurring[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(3): 829-843.
[25] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018: 7132-7141.
[26] HU J, SHEN L, ALBANIE S, et al. Gather-excite: exploiting feature context in convolutional neural networks[C]//Advances in Neural Information Processing Systems 31, 2018.
[27] XUE A. End-to-end Chinese landscape painting creation using generative adversarial networks[C]//Proceedings of the 2021 IEEE/CVF Winter Conference on Applications of Computer Vision, 2021: 3863-3871.