[1] LI Y, HUANG X, ZHAO G. Joint local and global information learning with single apex frame detection for micro-expression recognition[J]. IEEE Transactions on Image Processing, 2020, 30: 249-263.
[2] 米爱中, 张伟, 乔应旭, 等. 人脸妆容迁移研究综述[J]. 计算机工程与应用, 2022, 58(2): 15-26.
MI A Z, ZHANG W, QIAO Y X, et al. Review of research on facial makeup transfer[J]. Computer Engineering and Applications, 2022, 58(2): 15-26.
[3] LEYVAND T, COHEN-OR D, DROR G, et al. Data-driven enhancement of facial attractiveness[J]. ACM Transactions on Graphics, 2008, 27(3): 1-9.
[4] BLANZ V, BASSO C, POGGIO T, et al. Reanimating faces in images and video[C]//Proceedings of the Computer Graphics Forum (CGF), 2003: 641-650.
[5] VLASIC D, BRAND M, PFISTER H, et al. Face transfer with multilinear models[J]. ACM Transactions on Graphics, 2005, 24(3): 426-433.
[6] ZHAO Y, YANG L, PEI E, et al. Action unit driven facial expression synthesis from a single image with patch attentive GAN[C]//Proceedings of the Computer Graphics Forum (CGF), 2021: 47-61.
[7] 吴宇宁, 金琴. 强度和类型可控的人脸表情生成[J]. 中国科技论文, 2022(3): 246-251.
WU Y N, JIN Q. Facial expression generation based on intensity and type control[J]. China Sciencepaper, 2022(3): 246-251.
[8] TANG H, SEBE N. Facial expression translation using landmark guided GANs[J]. IEEE Transactions on Affective Computing, 2022, 13(4): 1986-1997.
[9] NIE X, DING H, QI M, et al. URCA-GAN: upsample residual channel-wise attention generative adversarial network for image-to-image translation[J]. Neurocomputing, 2021, 443: 75-84.
[10] BLANZ V, VETTER T. A morphable model for the synthesis of 3D faces[C]//Proceedings of the Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 1999: 187-194.
[11] WANG M, BRADLEY D, ZAFEIRIOU S, et al. Facial expression synthesis using a global-local multilinear framework[C]//Proceedings of the Computer Graphics Forum (CGF), 2020: 235-245.
[12] THIES J, ZOLLHÖFER M, STAMMINGER M, et al. Face2Face: real-time face capture and reenactment of RGB videos[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 2387-2395.
[13] CAO C, WENG Y, ZHOU S, et al. FaceWarehouse: a 3D facial expression database for visual computing[J]. IEEE Transactions on Visualization and Computer Graphics, 2014, 20(3): 413-425.
[14] THIES J, ZOLLHÖFER M, THEOBALT C, et al. HeadOn: real-time reenactment of human portrait videos[J]. ACM Transactions on Graphics, 2018, 37(4): 1-13.
[15] AVERBUCH-ELOR H, COHEN-OR D, KOPF J, et al. Bringing portraits to life[J]. ACM Transactions on Graphics, 2017, 36(6): 1-13.
[16] HE Z, ZUO W, KAN M, et al. AttGAN: facial attribute editing by only changing what you want[J]. IEEE Transactions on Image Processing, 2019, 28(11): 5464-5478.
[17] CHEN Y C, XU X, TIAN Z, et al. Homomorphic latent space interpolation for unpaired image-to-image translation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 2408-2416.
[18] WU R L, ZHANG G J, LU S J, et al. Cascade EF-GAN: progressive facial expression editing with local focuses[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 5021-5030.
[19] TANG J S, SHAO Z W, MA L Z. Fine-grained expression manipulation via structured latent space[C]//Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2020: 1-6.
[20] PUMAROLA A, AGUDO A, MARTINEZ A M, et al. GANimation: one-shot anatomically consistent facial animation[J]. International Journal of Computer Vision, 2020, 128(3): 698-713.
[21] TANG H, LIU H, XU D, et al. AttentionGAN: unpaired image-to-image translation using attention-guided generative adversarial networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(4): 1972-1987.
[22] XIA Y F, ZHENG W B, WANG Y M, et al. Local and global perception generative adversarial network for facial expression synthesis[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(3): 1443-1452.
[23] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. arXiv:1409.0473,2014.
[24] YANG Y, QI Y. Image super-resolution via channel attention and spatial graph convolutional network[J]. Pattern Recognition, 2021, 52: 2260-2268.
[25] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144.
[26] 胡晓瑞, 林璟怡, 李东, 等. 基于面部动作编码系统的表情生成对抗网络[J]. 计算机工程与应用, 2020, 56(18): 150-156.
HU X R, LIN J Y, LI D, et al. Facial expression generative adversarial networks based on facial action coding system[J]. Computer Engineering and Applications, 2020, 56(18): 150-156.
[27] 舒祥波, 施成龙, 孙运莲, 等. 基于类别注意实例归一化机制的人脸年龄合成[J]. 软件学报, 2022, 33(7): 2716-2728.
SHU X B, SHI C L, SUN Y L, et al. Class-aware instance normalization mechanism for face age synthesis[J]. Journal of Software, 2022, 33(7): 2716-2728.
[28] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv:1411.1784,2014.
[29] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017: 2223-2232.
[30] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 7132-7141.
[31] HOU Q, ZHOU D, FENG J. Coordinate attention for efficient mobile network design[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 13713-13722.
[32] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 3-19.
[33] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 1125-1134.
[34] LANGNER O, DOTSCH R, BIJLSTRA G, et al. Presentation and validation of the Radboud faces database[J]. Cognition and Emotion, 2010, 24(8): 1377-1388.
[35] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. arXiv:1412.6980,2014.
[36] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6629-6640.
[37] ALMOHAMMAD A, GHINEA G. Stego image quality and the reliability of PSNR[C]//Proceedings of the IEEE International Conference on Image Processing Theory, Tools and Applications (IPTA), 2010: 215-220.
[38] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[39] MEGVII. Face++ research toolkit[EB/OL]. [2022-09-15]. http://www.faceplusplus.com.
[40] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 8789-8797.