[1] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems 27, 2014.
[2] MIRZA M, OSINDERO S. Conditional generative adversarial nets[J]. arXiv:1411.1784, 2014.
[3] RADFORD A, METZ L, CHINTALA S. Unsupervised representation learning with deep convolutional generative adversarial networks[J]. arXiv:1511.06434, 2015.
[4] BROCK A, DONAHUE J, SIMONYAN K. Large scale GAN training for high fidelity natural image synthesis[J]. arXiv:1809.11096, 2018.
[5] KARRAS T, LAINE S, AILA T. A style-based generator architecture for generative adversarial networks[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 4401-4410.
[6] TAO M, BAO B K, TANG H, et al. GALIP: generative adversarial CLIPs for text-to-image synthesis[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 14214-14223.
[7] RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]//Proceedings of the 38th International Conference on Machine Learning, 2021: 8748-8763.
[8] KANG M, ZHU J Y, ZHANG R, et al. Scaling up GANs for text-to-image synthesis[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 10124-10134.
[9] SCHUHMANN C, BEAUMONT R, VENCU R, et al. LAION-5B: an open large-scale dataset for training next generation image-text models[C]//Advances in Neural Information Processing Systems 35, 2022: 25278-25294.
[10] WANG X, LI Y, ZHANG H, et al. Towards real-world blind face restoration with generative facial prior[C]//Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 9168-9178.
[11] LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 4681-4690.
[12] COZZOLINO D, GRAGNANIELLO D, VERDOLIVA L. Image forgery detection through residual-based local descriptors and block-matching[C]//Proceedings of the 2014 IEEE International Conference on Image Processing, 2014: 5297-5301.
[13] NATARAJ L, MOHAMMED T M, CHANDRASEKARAN S, et al. Detecting GAN generated fake images using co-occurrence matrices[J]. arXiv:1903.06836, 2019.
[14] LI H, LI B, TAN S, et al. Identification of deep network generated images using disparities in color components[J]. Signal Processing, 2020, 174: 107616.
[15] MATERN F, RIESS C, STAMMINGER M. Exploiting visual artifacts to expose deepfakes and face manipulations[C]//Proceedings of the 2019 IEEE Winter Applications of Computer Vision Workshops, 2019: 83-92.
[16] MCCLOSKEY S, ALBRIGHT M. Detecting GAN-generated imagery using saturation cues[C]//Proceedings of the 2019 IEEE International Conference on Image Processing, 2019: 4584-4588.
[17] ZHANG X, KARAMAN S, CHANG S F. Detecting and simulating artifacts in GAN fake images[C]//Proceedings of the 2019 IEEE International Workshop on Information Forensics and Security, 2019: 1-6.
[18] KINGMA D P, DHARIWAL P. Glow: generative flow with invertible 1×1 convolutions[C]//Advances in Neural Information Processing Systems 31, 2018.
[19] BI X, LIU B, YANG F, et al. Detecting generated images by real images only[J]. arXiv:2311.00962, 2023.
[20] KARRAS T, LAINE S, AITTALA M, et al. Analyzing and improving the image quality of StyleGAN[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 8110-8119.
[21] ZAMIR S W, ARORA A, KHAN S, et al. CycleISP: real image restoration via improved data synthesis[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 2696-2705.
[22] FRANK J, EISENHOFER T, SCHÖNHERR L, et al. Leveraging frequency analysis for deep fake image recognition[C]//Proceedings of the 37th International Conference on Machine Learning, 2020: 3247-3258.
[23] CHAI L, BAU D, LIM S N, et al. What makes fake images detectable? Understanding properties that generalize[C]//Proceedings of the 16th European Conference on Computer Vision, Glasgow, Aug 23-28, 2020. Cham: Springer, 2020: 103-120.
[24] JU Y, JIA S, KE L, et al. Fusing global and local features for generalized AI-synthesized image detection[C]//Proceedings of the 2022 IEEE International Conference on Image Processing, 2022: 3465-3469.
[25] ZHONG N, XU Y, QIAN Z, et al. Rich and poor texture contrast: a simple yet effective approach for AI-generated image detection[J]. arXiv:2311.12397, 2023.
[26] CHIERCHIA G, POGGI G, SANSONE C, et al. A Bayesian-MRF approach for PRNU-based image forgery detection[J]. IEEE Transactions on Information Forensics and Security, 2014, 9(4): 554-567.
[27] SCHERHAG U, DEBIASI L, RATHGEB C, et al. Detection of face morphing attacks based on PRNU analysis[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2019, 1(4): 302-317.
[28] FRIDRICH J, KODOVSKY J. Rich models for steganalysis of digital images[J]. IEEE Transactions on Information Forensics and Security, 2012, 7(3): 868-882.
[29] TAN C, ZHAO Y, WEI S, et al. Learning on gradients: generalized artifacts representation for GAN-generated images detection[C]//Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 12105-12114.
[30] YU N, DAVIS L S, FRITZ M. Attributing fake images to GANs: learning and analyzing GAN fingerprints[C]//Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision, 2019: 7556-7566.
[31] MARRA F, SALTORI C, BOATO G, et al. Incremental learning for the detection and classification of GAN-generated images[C]//Proceedings of the 2019 IEEE International Workshop on Information Forensics and Security, 2019: 1-6.
[32] JAVED K, SHAFAIT F. Revisiting distillation and incremental classifier learning[C]//Proceedings of the 14th Asian Conference on Computer Vision, Perth, Dec 2-6, 2018. Cham: Springer, 2018.
[33] LOPEZ-PAZ D, RANZATO M. Gradient episodic memory for continual learning[C]//Advances in Neural Information Processing Systems 30, 2017: 6467-6476.
[34] REBUFFI S A, KOLESNIKOV A, SPERL G, et al. iCaRL: incremental classifier and representation learning[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2001-2010.
[35] WANG S Y, WANG O, ZHANG R, et al. CNN-generated images are surprisingly easy to spot... for now[C]//Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 8695-8704.
[36] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[37] RICKER J, DAMM S, HOLZ T, et al. Towards the detection of diffusion model deepfakes[J]. arXiv:2210.14571, 2022.
[38] GRAGNANIELLO D, COZZOLINO D, MARRA F, et al. Are GAN generated images easy to detect? A critical analysis of the state-of-the-art[C]//Proceedings of the 2021 IEEE International Conference on Multimedia and Expo, 2021: 1-6.
[39] QIAO T, CHEN Y X, XIE S C, et al. GAN synthetic image detection using fused features in the multi-color channels[J]. Acta Electronica Sinica, 2024, 52(3): 924-936. (in Chinese)
[40] KARRAS T, AILA T, LAINE S, et al. Progressive growing of GANs for improved quality, stability, and variation[J]. arXiv:1710.10196, 2017.
[41] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 2223-2232.
[42] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, 2018: 8789-8797.
[43] PARK T, LIU M Y, WANG T C, et al. Semantic image synthesis with spatially-adaptive normalization[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 2337-2346.
[44] WEST J, BERGSTROM C. Calling bullshit: the art of skepticism in a data-driven world[M]. New York: Random House, 2020.
[45] DHARIWAL P, NICHOL A. Diffusion models beat GANs on image synthesis[C]//Advances in Neural Information Processing Systems 34, 2021: 8780-8794.
[46] NICHOL A, DHARIWAL P, RAMESH A, et al. GLIDE: towards photorealistic image generation and editing with text-guided diffusion models[J]. arXiv:2112.10741, 2021.
[47] HOLZ D. Midjourney[EB/OL]. [2024-05-13]. https://www.midjourney.com/home/.
[48] ROMBACH R, BLATTMANN A, LORENZ D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 10684-10695.
[49] GU S, CHEN D, BAO J, et al. Vector quantized diffusion model for text-to-image synthesis[C]//Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 10696-10706.
[50] MindSpore. Wukong[EB/OL]. [2024-05-13]. https://xihe.mindspore.cn/modelzoo/wukong.
[51] RAMESH A, DHARIWAL P, NICHOL A, et al. Hierarchical text-conditional image generation with CLIP latents[J]. arXiv:2204.06125, 2022.