Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (23): 1-27. DOI: 10.3778/j.issn.1002-8331.2407-0436
• Research Hotspots and Reviews •
Survey on Prompt Learning
CUI Jinman, LI Dongmei, TIAN Xuan, MENG Xianghao, YANG Yu, CUI Xiaohui
Online: 2024-12-01
Published: 2024-11-29
CUI Jinman, LI Dongmei, TIAN Xuan, MENG Xianghao, YANG Yu, CUI Xiaohui. Survey on Prompt Learning[J]. Computer Engineering and Applications, 2024, 60(23): 1-27.
URL: http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2407-0436
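For citation managers, the following BibTeX entry is assembled from the metadata above; the citation key is an illustrative choice, not one issued by the journal.

% NOTE: the citation key below is illustrative; all fields come from the article metadata.
@article{cui2024prompt,
  author  = {Cui, Jinman and Li, Dongmei and Tian, Xuan and Meng, Xianghao and Yang, Yu and Cui, Xiaohui},
  title   = {Survey on Prompt Learning},
  journal = {Computer Engineering and Applications},
  year    = {2024},
  volume  = {60},
  number  = {23},
  pages   = {1--27},
  doi     = {10.3778/j.issn.1002-8331.2407-0436}
}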
Related Articles
[1] LIU Muyun, BIAN Chunjiang, CHEN Hongzhen. Few-Shot Remote Sensing Aircraft Image Generation Algorithm Based on Feature Disentangling[J]. Computer Engineering and Applications, 2024, 60(9): 244-253.
[2] ZHOU Bojun, CHEN Zhiyu. Survey of Few-Shot Image Classification Based on Deep Meta-Learning[J]. Computer Engineering and Applications, 2024, 60(8): 1-15.
[3] DING Zhengwei, BAI Hexiang, HU Shen. CME-Based Few-Shot Detection Model with Enhanced Multiscale Deep Features[J]. Computer Engineering and Applications, 2024, 60(6): 222-229.
[4] CAI Guoyong, LI Anqing. Prompt-Learning Inspired Approach to Unsupervised Sentiment Style Transfer[J]. Computer Engineering and Applications, 2024, 60(5): 146-155.
[5] FANG Hong, LI Desheng, JIANG Guangjie. Efficient Cross-Domain Transformer Few-Shot Semantic Segmentation Network[J]. Computer Engineering and Applications, 2024, 60(4): 142-152.
[6] ZHANG Duona, ZHAO Hongjia, LU Yuanyao, CUI Jian, ZHANG Baochang. Few-Shot Scene Classification with Attention Mechanism in Remote Sensing[J]. Computer Engineering and Applications, 2024, 60(4): 173-182.
[7] ZHANG Hengwei, XU Linsen, CHEN Gen, WANG Zhihuan, SUI Xiang. Upper Limb Action Recognition Based on Transfer Learning and sEMG[J]. Computer Engineering and Applications, 2024, 60(20): 124-132.
[8] DENG Gelong, HUANG Guoheng, CHEN Ziyan. Category Decoupled Few-Shot Classification for Graph Neural Network[J]. Computer Engineering and Applications, 2024, 60(2): 129-136.
[9] ZHANG Qintong, WANG Yuchao, WANG Hexi, WANG Junxin, CHEN Hai. Comprehensive Review of Large Language Model Fine-Tuning[J]. Computer Engineering and Applications, 2024, 60(17): 17-33.
[10] GU Xunxun, LIU Jianping, XING Jialu, REN Haiyu. Text Classification: Comprehensive Review of Prompt Learning Methods[J]. Computer Engineering and Applications, 2024, 60(11): 50-61.
[11] ZENG Huiling, LI Lin, LYU Siyang, HE Zheng. Risk Identification Method for News Public Opinion Driven by Prompt Learning[J]. Computer Engineering and Applications, 2024, 60(1): 182-188.
[12] HUANG Youwen, DOU Heng, XIAO Guiguang. Few-Shot Object Detection Based on Fusion of Classification Correction and Sample Amplification[J]. Computer Engineering and Applications, 2024, 60(1): 254-262.
[13] LIU Tao, KE Zunwang, Wushour·Silamu. Survey of Few-Shot Relation Classification[J]. Computer Engineering and Applications, 2023, 59(9): 1-12.
[14] LU Yan, WANG Yangping, WANG Wenrun. Transformer-Based Few-Shot and Fine-Grained Image Classification Method[J]. Computer Engineering and Applications, 2023, 59(23): 219-227.
[15] WEI Ting, LI Xinlei, LIU Hui. Survey on Image Semantic Segmentation in Dilemma of Few-Shot[J]. Computer Engineering and Applications, 2023, 59(2): 1-11.