[1] HAN X T, WU Z X, WU Z, et al. VITON: an image-based virtual try-on network[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 7543-7552.
[2] BELONGIE S, MALIK J, PUZICHA J. Shape matching and object recognition using shape contexts[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(4): 509-522.
[3] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems, 2014.
[4] WANG B, ZHENG H, LIANG X, et al. Toward characteristic-preserving image-based virtual try-on network[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 589-604.
[5] YANG H, ZHANG R, GUO X, et al. Towards photo-realistic virtual try-on by adaptively generating-preserving image content[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 7850-7859.
[6] MINAR M R, TUAN T T, AHN H, et al. CP-VTON+: clothing shape and texture preserving image-based virtual try-on[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020: 10-14.
[7] DHARIWAL P, NICHOL A. Diffusion models beat GANs on image synthesis[C]//Advances in Neural Information Processing Systems, 2021: 8780-8794.
[8] SHA T, ZHANG W, SHEN T, et al. Deep person generation: a survey from the perspective of face, pose, and cloth synthesis[J]. ACM Computing Surveys, 2023, 55(12): 1-37.
[9] YANG H, ZHANG R, GUO X, et al. Towards photo-realistic virtual try-on by adaptively generating-preserving image content[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 7850-7859.
[10] GE C, SONG Y, GE Y, et al. Disentangled cycle consistency for highly-realistic virtual try-on[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 16928-16937.
[11] HE S, SONG Y Z, XIANG T. Style-based global appearance flow for virtual try-on[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 3470-3479.
[12] KARRAS T, LAINE S, AILA T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 4401-4410.
[13] GAO Y, WEI F, BAO J, et al. High-fidelity and arbitrary face editing[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 16115-16124.
[14] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1125-1134.
[15] ZHANG L, RAO A, AGRAWALA M. Adding conditional control to text-to-image diffusion models[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 3836-3847.
[16] ROMBACH R, BLATTMANN A, LORENZ D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 10684-10695.
[17] MOU C, WANG X, XIE L, et al. T2I-Adapter: learning adapters to dig out more controllable ability for text-to-image diffusion models[J]. arXiv:2302.08453, 2023.
[18] CANNY J. A computational approach to edge detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1986, 8(6): 679-698.
[19] ISOLA P, ZHU J Y, ZHOU T, et al. Image-to-image translation with conditional adversarial networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 1125-1134.
[20] LIN M, CHEN Q, YAN S. Network in network[J]. arXiv:1312.4400, 2013.
[21] RUMELHART D E, HINTON G E, WILLIAMS R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.
[22] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany, October 5-9, 2015: 234-241.
[23] GATYS L A, ECKER A S, BETHGE M. A neural algorithm of artistic style[J]. arXiv:1508.06576, 2015.
[24] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv:1409.1556, 2014.
[25] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[26] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 586-595.
[27] SALIMANS T, GOODFELLOW I, ZAREMBA W, et al. Improved techniques for training GANs[C]//Advances in Neural Information Processing Systems, 2016.