[1] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521: 436-444.
[2] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, 2016: 770-778.
[3] LONG J, SHELHAMER E, DARRELL T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, 2015: 3431-3440.
[4] NAIK D, MAMMONE R. Meta-neural networks that learn by learning[C]//Proceedings of the International Joint Conference on Neural Networks, 1992: 437-442.
[5] THRUN S, PRATT L. Learning to learn: introduction and overview[M]. Cham: Springer, 1998.
[6] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//Proceedings of the 34th International Conference on Machine Learning, 2017: 1126-1135.
[7] ANTONIOU A, EDWARDS H, STORKEY A. How to train your MAML[C]//Proceedings of the 7th International Conference on Learning Representations, 2019: 28.
[8] BAIK S, CHOI J, KIM H, et al. Meta-learning with task adaptive loss function for few-shot learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 9465-9474.
[9] RAGHU A, RAGHU M, BENGIO S, et al. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML[C]//Proceedings of the International Conference on Learning Representations, 2020.
[10] PAN S, YANG Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345-1359.
[11] ZHUANG F Z, QI Z Y, DUAN K, et al. A comprehensive survey on transfer learning[J]. Proceedings of the IEEE, 2021, 109(1): 43-76.
[12] LIU Z T, WU B H, HAN M T, et al. Speech emotion recognition based on meta-transfer learning with domain adaption[J]. Applied Soft Computing, 2023, 147: 110766.
[13] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Advances in Neural Information Processing Systems, 2016: 3637-3645.
[14] ORESHKIN B, RODRIGUEZ P, LACOSTE A. TADAM: task dependent adaptive metric for improved few-shot learning[C]//Advances in Neural Information Processing Systems, 2018: 719-729.
[15] REN M, TRIANTAFILLOU E, RAVI S, et al. Meta-learning for semi-supervised few-shot classification[C]//Proceedings of the 6th International Conference on Learning Representations, 2018.
[16] LAKE B, SALAKHUTDINOV R, TENENBAUM J. Human-level concept learning through probabilistic program induction[J]. Science, 2015, 350: 1332-1338.
[17] ZHANG J, SONG J, GAO L, et al. Progressive meta-learning with curriculum[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(9): 5916-5930.
[18] RAVI S, LAROCHELLE H. Optimization as a model for few shot learning[C]//Proceedings of the International Conference on Learning Representations, 2017: 450-463.
[19] HUISMAN M, RIJN J, PLAAT A. A survey of deep meta-learning[J]. Artificial Intelligence Review, 2021, 54(6): 4483-4541.
[20] 李凡长, 刘洋, 吴鹏翔, 等. 元学习研究综述[J]. 计算机学报, 2021, 44(2): 422-446.
LI F Z, LIU Y, WU P X, et al. A survey on recent advances in meta-learning[J]. Chinese Journal of Computers, 2021, 44(2): 422-446.
[21] PAN S, TSANG I, KWOK J, et al. Domain adaptation via transfer component analysis[J]. IEEE Transactions on Neural Networks, 2011, 22(2): 199-210.
[22] ZAMIR A, SAX A, SHEN W, et al. Taskonomy: disentangling task transfer learning[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 3712-3722.
[23] ERHAN D, BENGIO Y, COURVILLE A, et al. Why does unsupervised pre-training help deep learning?[J]. Journal of Machine Learning Research, 2010, 11: 625-660.
[24] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017: 4080-4090.
[25] KINGMA D, BA J. ADAM: a method for stochastic optimization[J]. arXiv:1412.6980, 2014.
[26] YAO H X, ZHANG L, FINN C. Meta-learning with fewer tasks through task interpolation[C]//Proceedings of the International Conference on Learning Representations, 2022.
[27] SUNG F, YANG Y, ZHANG L, et al. Learning to compare: relation network for few-shot learning[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018: 1199-1208.
[28] FAN C, RAM P, LIU S J. Sign-MAML: efficient model-agnostic meta-learning by signSGD[J]. arXiv:2109.07497, 2021.
[29] LI H Y, EIGEN D, DODGE S, et al. Finding task-relevant features for few-shot learning by category traversal[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 1-10.
[30] OSWALD J, ZHAO D, KOBAYASHI S, et al. Learning where to learn: gradient sparsity in meta and continual learning[C]//Proceedings of the 35th Conference on Neural Information Processing Systems, 2021: 5250-5263.
[31] FINN C, XU K, LEVINE S. Probabilistic model-agnostic meta-learning[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems, 2018: 9537-9548.
[32] SANTORO A, BARTUNOV S, BOTVINICK M, et al. Meta-learning with memory-augmented neural networks[C]//Proceedings of the International Conference on Machine Learning, 2016: 2740-2751.
[33] HILLER M, HARANDI M, DRUMMOND T, et al. On enforcing better conditioned meta-learning for rapid few-shot adaptation[J]. arXiv:2206.07260, 2022.
[34] HOU R, CHANG H, MA B P, et al. Cross attention network for few-shot classification[C]//Proceedings of the International Conference on Neural Information Processing Systems, 2019: 4003-4014.
[35] XUE W, WANG W. One-shot image classification by learning to restore prototypes[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2020: 6558-6565.
[36] LEE K, MAJI S, RAVICHANDRAN A, et al. Meta-learning with differentiable convex optimization[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019: 1-10.
[37] BOUSMALIS K, TRIGEORGIS G, SILBERMAN N, et al. Domain separation networks[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016: 343-351.