[1] 李娜, 姜恩波, 朱一真, 等. 政策工具自动识别方法与实证研究[J]. 图书情报工作, 2021, 65(7): 115-122.
LI N, JIANG E B, ZHU Y Z, et al. Policy tool identification method and empirical research based on deep learning[J]. Library and Information Service, 2021, 65(7): 115-122.
[2] 霍朝光, 霍帆帆, 王婉如, 等. 基于WordBERT和BiLSTM的政策工具自动分类方法研究[J]. 图书情报知识, 2023, 40(3): 129-138.
HUO C G, HUO F F, WANG W R, et al. Automatic classification method of policy tools based on WordBERT and BiLSTM[J]. Documentation, Information & Knowledge, 2023, 40(3): 129-138.
[3] 马雨萌, 黄金霞, 王昉, 等. 融合BERT与多尺度CNN的科技政策内容多标签分类研究[J]. 情报杂志, 2022, 41(11): 157-163.
MA Y M, HUANG J X, WANG F, et al. Research on multi-label classification of S&T policy content combining BERT and multi-scale CNN[J]. Journal of Intelligence, 2022, 41(11): 157-163.
[4] 朱娜娜, 王航, 张家乐, 等. 基于预训练语言模型的政策识别研究[J]. 中文信息学报, 2022, 36(2): 104-110.
ZHU N N, WANG H, ZHANG J L, et al. Policy identification based on pretrained language model[J]. Journal of Chinese Information Processing, 2022, 36(2): 104-110.
[5] GUO C, PLEISS G, SUN Y, et al. On calibration of modern neural networks[C]//International Conference on Machine Learning, 2017: 1321-1330.
[6] WEI J, TAY Y, BOMMASANI R, et al. Emergent abilities of large language models[J]. arXiv:2206.07682, 2022.
[7] 张华平, 李林翰, 李春锦. ChatGPT中文性能测评与风险应对[J]. 数据分析与知识发现, 2023, 7(3): 16-25.
ZHANG H P, LI L H, LI C J. ChatGPT performance evaluation on Chinese language and risk measures[J]. Data Analysis and Knowledge Discovery, 2023, 7(3): 16-25.
[8] ZHOU J, KE P, QIU X, et al. ChatGPT: potential, prospects, and limitations[J]. Frontiers of Information Technology & Electronic Engineering, 2023: 1-6.
[9] LIANG D, YI B. Two-stage three-way enhanced technique for ensemble learning in inclusive policy text classification[J]. Information Sciences, 2021, 547: 271-288.
[10] 胡吉明, 付文麟, 钱玮, 等. 融合主题模型和注意力机制的政策文本分类模型[J]. 情报理论与实践, 2021, 44 (7): 159-165.
HU J M, FU W L, QIAN W, et al. Research on policy text classification method based on topic model and attention mechanism[J]. Information Studies: Theory & Application, 2021, 44(7): 159-165.
[11] 石金泽. 基于图神经网络的政策文本多标签分类[D]. 济南: 齐鲁工业大学, 2023.
SHI J Z. Multi-label classification of policy text based on graph neural network[D]. Jinan: Qilu University of Technology, 2023.
[12] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA: Association for Computational Linguistics, 2019: 4171-4186.
[13] YE J, CHEN X, XU N, et al. A comprehensive capability analysis of GPT-3 and GPT-3.5 series models[J]. arXiv:2303.10420, 2023.
[14] SUN X, LI X, LI J, et al. Text classification via large language models[J]. arXiv:2305.08377, 2023.
[15] 张重生, 陈杰, 李岐龙, 等. 深度对比学习综述[J]. 自动化学报, 2023, 49(1): 15-39.
ZHANG C S, CHEN J, LI Q L, et al. Deep contrastive learning: a survey[J]. Acta Automatica Sinica, 2023, 49(1): 15-39.
[16] KHOSLA P, TETERWAK P, WANG C, et al. Supervised contrastive learning[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems, 2020: 18661-18673.
[17] GUNEL B, DU J, CONNEAU A, et al. Supervised contrastive learning for pre-trained language model fine-tuning[C]//International Conference on Learning Representations, 2021.
[18] GAO T, YAO X, CHEN D. SimCSE: simple contrastive learning of sentence embeddings[C]//2021 Conference on Empirical Methods in Natural Language Processing, 2021: 6894-6910.
[19] SEDGHAMIZ H, RAVAL S, SANTUS E, et al. SupCL-Seq: supervised contrastive learning for downstream optimized sequence representations[C]//Findings of the Association for Computational Linguistics, 2021: 3398-3403.
[20] LI S, HU X, LIN L, et al. Pair-level supervised contrastive learning for natural language inference[C]//2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 8237-8241.
[21] 高怡, 纪焘, 吴苑斌, 等. 基于标签增强和对比学习的鲁棒小样本事件检测[J]. 中文信息学报, 2023, 37(4): 98-108.
GAO Y, JI T, WU Y B, et al. Robust few shot event detection based on label augmentation and contrastive learning[J]. Journal of Chinese Information Processing, 2023, 37(4): 98-108.
[22] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv:1907.11692, 2019.
[23] LI B, HOU Y, CHE W. Data augmentation approaches in natural language processing: a survey[J]. AI Open, 2022, 3: 71-90.
[24] DAI H, LIU Z, LIAO W, et al. AugGPT: leveraging ChatGPT for text data augmentation[J]. arXiv:2302.13007, 2023.
[25] 严豫, 杨笛, 尹德春. 融合大语言模型知识的对比提示情感分析方法[J]. 情报杂志, 2023, 42(11): 126-134.
YAN Y, YANG D, YIN D C. Contrastive-based prompt-tuning sentiment analysis method incorporating large language model knowledge[J]. Journal of Intelligence, 2023, 42(11): 126-134.
[26] SUN X, DONG L, LI X, et al. Pushing the limits of ChatGPT on NLP tasks[J]. arXiv:2306.09719, 2023.
[27] BANG Y, CAHYAWIJAYA S, LEE N, et al. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity[J]. arXiv:2302.04023, 2023.
[28] ZHANG Z, ZHANG A, LI M, et al. Automatic chain of thought prompting in large language models[J]. arXiv:2210.03493, 2022.
[29] ROTHWELL R. Reindustrialization and technology: towards a national policy framework[J]. Science and Public Policy, 1985, 12(3): 113-130.
[30] XU L, LU X, YUAN C, et al. FewCLUE: a Chinese few-shot learning evaluation benchmark[J]. arXiv:2107.07498, 2021.
[31] SUN Y, WANG S, FENG S, et al. ERNIE 3.0: large-scale knowledge enhanced pre-training for language understanding and generation[J]. arXiv:2107.02137, 2021.
[32] SUN Z, LI X, SUN X, et al. ChineseBERT: Chinese pretraining enhanced by glyph and pinyin information[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021: 2065-2075.
[33] LIU X, ZHENG Y, DU Z, et al. GPT understands, too[J]. arXiv:2103.10385, 2021.
[34] LIU X, JI K, FU Y, et al. P-tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[J]. arXiv:2110.07602, 2021. |