[1] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[2] DING Y, ZHANG Z L, ZHAO X F, et al. Deep hybrid: multi-graph neural network collaboration for hyperspectral image classification[J]. Defence Technology, 2023, 23: 164-176.
[3] LI F, ZENG A, LIU S, et al. Lite DETR: an interleaved multi-scale encoder for efficient DETR[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 18558-18567.
[4] YAN J, LIU Y, SUN J, et al. Cross modal transformer: towards fast and robust 3D object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023: 18268-18278.
[5] ALAYRAC J B, DONAHUE J, LUC P, et al. Flamingo: a visual language model for few-shot learning[C]//Advances in Neural Information Processing Systems, 2022: 23716-23736.
[6] KIM D, KIM J, CHO S, et al. Universal few-shot learning of dense prediction tasks with visual token matching[C]//Proceedings of the Eleventh International Conference on Learning Representations, 2023.
[7] RAVI S, LAROCHELLE H. Optimization as a model for few-shot learning[C]//Proceedings of the International Conference on Learning Representations, 2017.
[8] HOSPEDALES T, ANTONIOU A, MICAELLI P, et al. Meta-learning in neural networks: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 44(9): 5149-5169.
[9] RAJENDRAN J, IRPAN A, JANG E. Meta-learning requires meta-augmentation[C]//Advances in Neural Information Processing Systems, 2020: 5705-5715.
[10] YAO H, WANG Y, WEI Y, et al. Meta-learning with an adaptive task scheduler[C]//Advances in Neural Information Processing Systems, 2021: 7497-7509.
[11] RAMALHO T, GARNELO M. Adaptive posterior learning: few-shot learning with a surprise-based memory module[J]. arXiv:1902.02527, 2019.
[12] 刘兵, 杨娟, 汪荣贵, 等. 结合记忆与迁移学习的小样本学习[J]. 计算机工程与应用, 2022, 58(19): 242-249.
LIU B, YANG J, WANG R G, et al. Memory-based transfer learning for few-shot learning[J]. Computer Engineering and Applications, 2022, 58(19): 242-249.
[13] WANG W, DUAN L, WANG Y, et al. MMT: cross domain few-shot learning via meta-memory transfer[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(12): 15018-15035.
[14] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[C]//Advances in Neural Information Processing Systems, 2017.
[15] SIMON C, KONIUSZ P, NOCK R, et al. Adaptive subspaces for few-shot learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 4136-4145.
[16] ZHOU Y, YU L. Few-shot learning via weighted prototypes from graph structure[J]. Pattern Recognition Letters, 2023, 176: 230-235.
[17] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the International Conference on Machine Learning, 2017: 1126-1135.
[18] ZHANG Y, ZUO X, ZHENG X, et al. Improving metric-based few-shot learning with dynamically scaled softmax loss[J]. Image and Vision Computing, 2023, 140: 104860.
[19] JIA J, FENG X, YU H. Few-shot classification via efficient meta-learning with hybrid optimization[J]. Engineering Applications of Artificial Intelligence, 2024, 127: 107296.
[20] SANTORO A, BARTUNOV S, BOTVINICK M, et al. Meta-learning with memory-augmented neural networks[C]//Proceedings of the International Conference on Machine Learning, 2016: 1842-1850.
[21] YE H J, HU H, ZHAN D C, et al. Few-shot learning via embedding adaptation with set-to-set functions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 8808-8817.
[22] HUANG W, YUAN Z, YANG A, et al. TAE-Net: task-adaptive embedding network for few-shot remote sensing scene classification[J]. Remote Sensing, 2021, 14(1): 111.
[23] LEE S, LEE S, SONG B C. Efficient meta-learning through task-specific pseudo labelling[J]. Electronics, 2023, 12(13): 2757.
[24] ZHANG H, QUE H, REN J, et al. Transductive semantic knowledge graph propagation for zero-shot learning[J]. Journal of the Franklin Institute, 2023, 360(17): 13108-13125.
[25] AHMED M, SERAJ R, ISLAM S M S. The k-means algorithm: a comprehensive survey and performance evaluation[J]. Electronics, 2020, 9(8): 1295.
[26] GUO Q, YIN Z, WANG P. An improved three-way k-means algorithm by optimizing cluster centers[J]. Symmetry, 2022, 14(9): 1821.
[27] LIM J Y, LIM K M, LEE C P, et al. SSL-ProtoNet: self-supervised learning prototypical networks for few-shot learning[J]. Expert Systems with Applications, 2024, 238: 122173.
[28] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Advances in Neural Information Processing Systems, 2016.
[29] REN M, TRIANTAFILLOU E, RAVI S, et al. Meta-learning for semi-supervised few-shot classification[J]. arXiv:1803.00676, 2018.
[30] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115: 211-252.
[31] LAKE B, SALAKHUTDINOV R, GROSS J, et al. One shot learning of simple visual concepts[C]//Proceedings of the Annual Meeting of the Cognitive Science Society, 2011.
[32] CHEN Y, LIU Z, XU H, et al. Meta-baseline: exploring simple meta-learning for few-shot learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 9062-9071.
[33] CHEN C, LI K, WEI W, et al. Hierarchical graph neural networks for few-shot learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 32(1): 240-252.
[34] ZHOU F, ZHANG L, WEI W. Meta-generating deep attentive metric for few-shot classification[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6863-6873.