[1] LONG Y, XIA G S, LI S, et al. On creating benchmark dataset for aerial image interpretation: reviews, guidances, and million-aid[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 4205-4230.
[2] ZHU X X, TUIA D, MOU L, et al. Deep learning in remote sensing: a comprehensive review and list of resources[J]. IEEE Geoscience and Remote Sensing Magazine, 2017, 5(4): 8-36.
[3] LI W, HSU C Y. GeoAI for large-scale image analysis and machine vision: recent progress of artificial intelligence in geography[J]. ISPRS International Journal of Geo-Information, 2022, 11(7): 385.
[4] LEE S, HYUN J, SEONG H, et al. Unsupervised domain adaptation for semantic segmentation by content transfer[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2021: 8306-8315.
[5] XU M, YOON S, FUENTES A, et al. A comprehensive survey of image augmentation techniques for deep learning[J]. arXiv:2205.01491, 2022.
[6] KINGMA D P, WELLING M. Auto-encoding variational Bayes[J]. arXiv:1312.6114, 2013.
[7] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144.
[8] YANG L, ZHANG Z, SONG Y, et al. Diffusion models: a comprehensive survey of methods and applications[J]. arXiv:2209.00796, 2022.
[9] SUN X, WANG P, YAN Z, et al. FAIR1M: a benchmark dataset for fine-grained object recognition in high-resolution remote sensing imagery[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 184: 116-130.
[10] 禹文奇, 程塨, 王美君, 等. MAR20: 遥感图像军用飞机目标识别数据集[J]. 遥感学报, 2023, 27(12): 2688-2696.
YU W Q, CHENG G, WANG M J, et al. MAR20: a benchmark for military aircraft recognition in remote sensing images[J]. National Remote Sensing Bulletin, 2023, 27(12): 2688-2696.
[11] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[12] LIANG W, LIU Z, LIU C. DAWSON: a domain adaptive few shot generation framework[J]. arXiv:2001.00576, 2020.
[13] CLOUÂTRE L, DEMERS M. FIGR: few-shot image generation with Reptile[J]. arXiv:1901.02199, 2019.
[14] TOLSTIKHIN I, BOUSQUET O, GELLY S, et al. Wasserstein auto-encoders[J]. arXiv:1711.01558, 2017.
[15] BARTUNOV S, VETROV D. Few-shot generative modelling with generative matching networks[C]//International Conference on Artificial Intelligence and Statistics, 2018: 670-678.
[16] HONG Y, NIU L, ZHANG J, et al. MatchingGAN: matching-based few-shot image generation[C]//2020 IEEE International Conference on Multimedia and Expo (ICME), 2020: 1-6.
[17] GU Z, LI W, HUO J, et al. LofGAN: fusing local representations for few-shot image generation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 8463-8471.
[18] ANTONIOU A, STORKEY A, EDWARDS H. Data augmentation generative adversarial networks[J]. arXiv:1711.04340, 2017.
[19] HONG Y, NIU L, ZHANG J, et al. DeltaGAN: towards diverse few-shot image generation with sample-specific delta[J]. arXiv:2009.08753, 2020.
[20] DING G, HAN X, WANG S, et al. Attribute group editing for reliable few-shot image generation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 11194-11203.
[21] FRID-ADAR M, DIAMANT I, KLANG E, et al. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification[J]. Neurocomputing, 2018, 321: 321-331.
[22] PEI Z, JIANG H, LI X, et al. Data augmentation for rolling bearing fault diagnosis using an enhanced few-shot Wasserstein auto-encoder with meta-learning[J]. Measurement Science and Technology, 2021, 32(8): 084007.
[23] 张曼, 李杰, 朱新忠, 等. 基于改进DCGAN算法的遥感数据集增广方法[J]. 计算机科学, 2021, 48(6A): 80-84.
ZHANG M, LI J, ZHU X Z, et al. Augmentation technology of remote sensing dataset based on improved DCGAN algorithm[J]. Computer Science, 2021, 48(6A): 80-84.
[24] 陈国炜, 刘磊, 郭嘉逸, 等. 基于生成对抗网络的半监督遥感图像飞机检测[J]. 中国科学院大学学报, 2020, 37(4): 539-546.
CHEN G W, LIU L, GUO J Y, et al. Semi-supervised airplane detection in remote sensing images using generative adversarial networks[J]. Journal of University of Chinese Academy of Sciences, 2020, 37(4): 539-546.
[25] 杨志钢, 杨远兰, 苍思远, 等. 基于GAN的船舶遥感图像数据增广方法[J]. 应用科技, 2022, 49(5): 8-14.
YANG Z G, YANG Y L, CANG S Y, et al. Data augmentation method of ship remote sensing images based on GAN[J]. Applied Science and Technology, 2022, 49(5): 8-14.
[26] 姜雨辰, 朱斌. 少样本条件下基于生成对抗网络的遥感图像数据增强[J]. 激光与光电子学进展, 2021, 58(8): 238-244.
JIANG Y C, ZHU B. Data augmentation for remote sensing image based on generative adversarial networks under condition of few samples[J]. Laser & Optoelectronics Progress, 2021, 58(8): 238-244.
[27] SOHN K, LEE H, YAN X. Learning structured output representation using deep conditional generative models[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems, 2015.
[28] GIRIN L, LEGLAIVE S, BIE X, et al. Dynamical variational autoencoders: a comprehensive review[J]. arXiv:2008.12595, 2020.
[29] VAHDAT A, KAUTZ J. NVAE: a deep hierarchical variational autoencoder[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020: 19667-19679.
[30] VAN DEN OORD A, VINYALS O. Neural discrete representation learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017.
[31] HUANG H, HE R, SUN Z, et al. IntroVAE: introspective variational autoencoders for photographic image synthesis[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018.
[32] DANIEL T, TAMAR A. Soft-IntroVAE: analyzing and improving the introspective variational autoencoder[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 4391-4400.
[33] HEUSEL M, RAMSAUER H, UNTERTHINER T, et al. GANs trained by a two time-scale update rule converge to a local Nash equilibrium[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2017.
[34] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 586-595.