[1] 兰健华. “深度伪造”换脸技术在电影中的应用研究[J]. 电影文学, 2023(24): 47-53.
LAN J H. Research on the application of “deepfake” face-swapping technology in films[J]. Film Literature, 2023(24): 47-53.
[2] HUANG X, BELONGIE S. Arbitrary style transfer in real-time with adaptive instance normalization[C]//Proceedings of the IEEE International Conference on Computer Vision, 2017: 1501-1510.
[3] CHEN R, CHEN X, NI B, et al. SimSwap: an efficient framework for high fidelity face swapping[C]//Proceedings of the 28th ACM International Conference on Multimedia, 2020: 2003-2011.
[4] ROSBERG F, AKSOY E E, ALONSO-FERNANDEZ F, et al. FaceDancer: pose- and occlusion-aware high fidelity face swapping[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023: 3454-3463.
[5] KIM J, LEE J, ZHANG B T. Smooth-swap: a simple enhancement for face-swapping with smoothness[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 10779-10788.
[6] SHIOHARA K, YANG X, TAKETOMI T. BlendFace: re-designing identity encoders for face-swapping[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 7634-7644.
[7] LI L, BAO J, YANG H, et al. FaceShifter: towards high fidelity and occlusion aware face swapping[J]. arXiv:1912.13457, 2019.
[8] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 6629-6640.
[9] EGGER B, SMITH W A P, TEWARI A, et al. 3D morphable face models—past, present, and future[J]. ACM Transactions on Graphics, 2020, 39(5): 1-38.
[10] BLANZ V, VETTER T. A morphable model for the synthesis of 3D faces[C]//Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 1999: 187-194.
[11] NIRKIN Y, MASI I, TUAN A T, et al. On face segmentation, face swapping, and face perception[C]//Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition, 2018: 98-105.
[12] NIRKIN Y, KELLER Y, HASSNER T. FSGAN: subject agnostic face swapping and reenactment[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 7184-7193.
[13] ZENG H, ZHANG W, FAN C, et al. FlowFace: semantic flow-guided shape-aware face swapping[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2023: 3367-3375.
[14] ZHAO W, RAO Y, SHI W, et al. DiffSwap: high-fidelity and controllable face swapping via 3D-aware masked diffusion[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 8568-8577.
[15] HO J, JAIN A, ABBEEL P. Denoising diffusion probabilistic models[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020: 6840-6851.
[16] GAO G, HUANG H, FU C, et al. Information bottleneck disentanglement for identity swapping[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 3404-3413.
[17] CHEN J, KAO S, HE H, et al. Run, don't walk: chasing higher FLOPS for faster neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 12021-12031.
[18] REN X, CHEN X, YAO P, et al. Reinforced disentanglement for face swapping without skip connection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 20665-20675.
[19] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1125-1134.
[20] KARRAS T, LAINE S, AILA T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 4401-4410.
[21] DENG J, GUO J, XUE N, et al. ArcFace: additive angular margin loss for deep face recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 4690-4699.
[22] YU F, KOLTUN V. Multi-scale context aggregation by dilated convolutions[J]. arXiv:1511.07122, 2015.
[23] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision, 2018: 3-19.
[24] CHEN D, HE M, FAN Q, et al. Gated context aggregation network for image dehazing and deraining[C]//Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2019: 1375-1383.
[25] LIN S, YANG L, SALEEMI I, et al. Robust high-resolution video matting with temporal guidance[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022: 238-247.
[26] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[27] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009: 248-255.
[28] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[29] GULRAJANI I, AHMED F, ARJOVSKY M, et al. Improved training of Wasserstein GANs[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 5769-5779.
[30] CAO Q, SHEN L, XIE W, et al. VGGFace2: a dataset for recognising faces across pose and age[C]//Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition, 2018: 67-74.
[31] KAZEMI V, SULLIVAN J. One millisecond face alignment with an ensemble of regression trees[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014: 1867-1874.
[32] DENG J, GUO J, VERVERAS E, et al. RetinaFace: single-shot multi-level face localisation in the wild[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 5203-5212.
[33] ROSSLER A, COZZOLINO D, VERDOLIVA L, et al. FaceForensics++: learning to detect manipulated facial images[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 1-11.
[34] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. arXiv:1412.6980, 2014.
[35] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 4700-4708.
[36] WANG H, WANG Y, ZHOU Z, et al. CosFace: large margin cosine loss for deep face recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 5265-5274.
[37] HUANG Y, WANG Y, TAI Y, et al. CurricularFace: adaptive curriculum learning loss for deep face recognition[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 5901-5910.
[38] SANYAL S, BOLKART T, FENG H, et al. Learning to regress 3D face shape and expression from an image without 3D supervision[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 7763-7772.
[39] DeepFakes. Faceswap[EB/OL]. (2023-12-09)[2024-04-12]. https://github.com/deepfakes/faceswap.
[40] KOWALSKI M. FaceSwap[EB/OL]. (2023-01-18)[2024-04-12]. https://github.com/ondyari/FaceForensics/tree/master/dataset/FaceSwapKowalski.
[41] WANG Y, CHEN X, ZHU J, et al. HifiFace: 3D shape and semantic prior guided high fidelity face swapping[J]. arXiv:2106.09965, 2021.