[1] BAYER M, KAUFHOLD M A, REUTER C. A survey on data augmentation for text classification[J]. ACM Computing Surveys, 2022, 55(7): 1-39.
[2] MINAEE S, KALCHBRENNER N, CAMBRIA E, et al. Deep learning-based text classification: a comprehensive review[J]. ACM Computing Surveys, 2021, 54(3): 1-40.
[3] LI Q, PENG H, LI J, et al. A survey on text classification: from traditional to deep learning[J]. ACM Transactions on Intelligent Systems and Technology, 2022, 13(2): 1-41.
[4] WU H, LIU Y, WANG J. Review of text classification methods on deep learning[J]. Computers, Materials & Continua, 2020, 63(3): 1309-1321.
[5] GASPARETTO A, MARCUZZO M, ZANGARI A, et al. A survey on text classification algorithms: from text to predictions[J]. Information, 2022, 13(2): 83.
[6] WANG L, CHEN R, LI L. Knowledge-guided prompt learning for few-shot text classification[J]. Electronics, 2023, 12(6): 1486.
[7] ZHANG P, CHAI T, XU Y. Adaptive prompt learning-based few-shot sentiment analysis[J]. Neural Processing Letters, 2023, 55(6): 7259-7272.
[8] GU J, HAN Z, CHEN S, et al. A systematic survey of prompt engineering on vision-language foundation models[J]. arXiv:2307.12980, 2023.
[9] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[J]. arXiv:1802.05365, 2018.
[10] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. (2020-09-26)[2023-09-26]. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
[11] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[12] FU Z, YANG H, SO A M C, et al. On the effectiveness of parameter-efficient fine-tuning[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2023: 12799-12807.
[13] HAN X, ZHANG Z, DING N, et al. Pre-trained models: past, present and future[J]. AI Open, 2021, 2: 225-250.
[14] 陈德光, 马金林, 马自萍, 等. 自然语言处理预训练技术综述[J]. 计算机科学与探索, 2021, 15(8): 1359-1389.
CHEN D G, MA J L, MA Z P, et al. Review of pre-training techniques for natural language processing[J]. Journal of Frontiers of Computer Science and Technology, 2021, 15(8): 1359-1389.
[15] ZHANG Z, WANG B. Prompt learning for news recommendation[J]. arXiv:2304.05263, 2023.
[16] CHANG K W, TSENG W C, LI S W, et al. SpeechPrompt: an exploration of prompt tuning on generative spoken language model for speech processing tasks[J]. arXiv:2203.16773, 2022.
[17] LIU P, YUAN W, FU J, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing[J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[18] PETRONI F, ROCKTÄSCHEL T, LEWIS P, et al. Language models as knowledge bases?[J]. arXiv:1909.01066, 2019.
[19] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Advances in Neural Information Processing Systems, 2020: 1877-1901.
[20] SCHICK T, SCHÜTZE H. Exploiting cloze questions for few shot text classification and natural language inference[J]. arXiv:2001.07676, 2020.
[21] JIANG Z, XU F F, ARAKI J, et al. How can we know what language models know?[J]. Transactions of the Association for Computational Linguistics, 2020, 8: 423-438.
[22] YUAN W, NEUBIG G, LIU P. BARTScore: evaluating generated text as text generation[C]//Advances in Neural Information Processing Systems, 2021: 27263-27277.
[23] HAVIV A, BERANT J, GLOBERSON A. BERTese: learning to speak to BERT[J]. arXiv:2103.05327, 2021.
[24] WALLACE E, FENG S, KANDPAL N, et al. Universal adversarial triggers for attacking and analyzing NLP[J]. arXiv:1908.07125, 2019.
[25] SHIN T, RAZEGHI Y, LOGAN IV R L, et al. AutoPrompt: eliciting knowledge from language models with automatically generated prompts[J]. arXiv:2010.15980, 2020.
[26] GAO T, FISCH A, CHEN D. Making pre-trained language models better few-shot learners[J]. arXiv:2012.15723, 2020.
[27] DAVISON J, FELDMAN J, RUSH A M. Commonsense knowledge mining from pretrained models[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019: 1173-1178.
[28] LI X L, LIANG P. Prefix-tuning: optimizing continuous prompts for generation[J]. arXiv:2101.00190, 2021.
[29] LESTER B, AL-RFOU R, CONSTANT N. The power of scale for parameter-efficient prompt tuning[J]. arXiv:2104.08691, 2021.
[30] ZHONG Z, FRIEDMAN D, CHEN D. Factual probing is [mask]: learning vs. learning to recall[J]. arXiv:2104.05240, 2021.
[31] QIN G, EISNER J. Learning how to ask: querying LMs with mixtures of soft prompts[J]. arXiv:2104.06599, 2021.
[32] HAMBARDZUMYAN K, KHACHATRIAN H, MAY J. WARP: word-level adversarial reprogramming[J]. arXiv:2101.00121, 2021.
[33] LIU X, ZHENG Y, DU Z, et al. GPT understands, too[J]. arXiv:2103.10385, 2021.
[34] LIU X, JI K, FU Y, et al. P-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[J]. arXiv:2110.07602, 2021.
[35] HAN X, ZHAO W, DING N, et al. PTR: prompt tuning with rules for text classification[J]. AI Open, 2022, 3: 182-192.
[36] ZHANG R, SUN Y, YANG J, et al. Knowledge-augmented frame semantic parsing with hybrid prompt-tuning[C]//Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2023: 1-5.
[37] PEREZ E, KIELA D, CHO K. True few-shot learning with language models[C]//Advances in Neural Information Processing Systems, 2021: 11054-11070.
[38] AGHAJANYAN A, OKHONKO D, LEWIS M, et al. HTLM: hyper-text pre-training and prompting of language models[J]. arXiv:2107.06955, 2021.
[39] ZHAO M, SCHÜTZE H. Discrete and soft prompting for multilingual models[J]. arXiv:2109.03630, 2021.
[40] GU Y, HAN X, LIU Z, et al. PPT: pre-trained prompt tuning for few-shot learning[J]. arXiv:2109.04332, 2021.
[41] 鲍琛龙, 吕明阳, 唐晋韬, 等. 与知识相结合的提示学习研究综述[J]. 中文信息学报, 2023, 37(7): 1-12.
BAO C L, LYU M Y, TANG J T, et al. A survey of prompt learning combined with knowledge[J]. Journal of Chinese Information Processing, 2023, 37(7): 1-12.
[42] HU S, DING N, WANG H, et al. Knowledgeable prompt-tuning: incorporating knowledge into prompt verbalizer for text classification[J]. arXiv:2108.02035, 2021.
[43] WANG J, WANG C, LUO F, et al. Towards unified prompt tuning for few-shot text classification[J]. arXiv:2205.05313, 2022.
[44] ZHU Y, ZHOU X, QIANG J, et al. Prompt-learning for short text classification[J]. arXiv:2202.11345, 2022.
[45] LI J, TANG T, NIE J Y, et al. Learning to transfer prompts for text generation[J]. arXiv:2205.01543, 2022.
[46] ZHANG L, GAO X. Transfer adaptation learning: a decade survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(1): 23-44.
[47] VU T, LESTER B, CONSTANT N, et al. SPoT: better frozen model adaptation through soft prompt transfer[J]. arXiv:2110.07904, 2021.
[48] WANG C, WANG J, QIU M, et al. TransPrompt: towards an automatic transferable prompting framework for few-shot text classification[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021: 2792-2802.
[49] WANG S, FANG H, KHABSA M, et al. Entailment as few-shot learner[J]. arXiv:2104.14690, 2021.
[50] SUN Y, ZHENG Y, HAO C, et al. NSP-BERT: a prompt-based few-shot learner through an original pre-training task: next sentence prediction[J]. arXiv:2109.03564, 2021.
[51] HARRANDO I, REBOUD A, SCHLEIDER T, et al. ProZe: explainable and prompt-guided zero-shot text classification[J]. IEEE Internet Computing, 2022, 26(6): 69-77.
[52] DAN Y, ZHOU J, CHEN Q, et al. Enhancing class understanding via prompt-tuning for zero-shot text classification[C]//Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2022: 4303-4307.
[53] GUO Y, CODELLA N C, KARLINSKY L, et al. A broader study of cross-domain few-shot learning[C]//Proceedings of the 16th European Conference on Computer Vision, 2020: 124-141.
[54] MAYER C W F, LUDWIG S, BRANDT S. Prompt text classifications with transformer models! an exemplary introduction to prompt-based learning with large language models[J]. Journal of Research on Technology in Education, 2023, 55(1): 125-141.
[55] SONG C, SHAO T, LIN K, et al. Investigating prompt learning for Chinese few-shot text classification with pre-trained language models[J]. Applied Sciences, 2022, 12(21): 11117.
[56] WANG H, XU C, MCAULEY J. Automatic multi-label prompting: simple and interpretable few-shot classification[J]. arXiv:2204.06305, 2022.
[57] 于碧辉, 蔡兴业, 魏靖烜. 基于提示学习的小样本文本分类方法[J]. 计算机应用, 2023, 43(9): 2735-2740.
YU B H, CAI X Y, WEI J X. Few-shot text classification method based on prompt learning[J]. Journal of Computer Applications, 2023, 43(9): 2735-2740.
[58] HU T, CHEN Z, GE J, et al. A Chinese few-shot text classification method utilizing improved prompt learning and unlabeled data[J]. Applied Sciences, 2023, 13(5): 3334.
[59] WEI L, LI Y, ZHU Y, et al. Prompt tuning for multi-label text classification: how to link exercises to knowledge concepts?[J]. Applied Sciences, 2022, 12(20): 10363.
[60] SONG R, LIU Z, CHEN X, et al. Label prompt for multi-label text classification[J]. Applied Intelligence, 2023, 53(8): 8761-8775.