[1] 冀振燕, 孔德焱, 刘伟, 等. 基于深度学习的命名实体识别研究[J]. 计算机集成制造系统, 2022, 28(6): 1603-1615.
JI Z Y, KONG D Y, LIU W, et al. Research on named entity recognition based on deep learning[J]. Computer Integrated Manufacturing Systems, 2022, 28(6): 1603-1615.
[2] LAMPLE G, BALLESTEROS M, SUBRAMANIAN S, et al. Neural architectures for named entity recognition[C]//Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016: 260-270.
[3] ZHANG Y, YANG J. Chinese NER using lattice LSTM[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018: 1554-1564.
[4] DENG J, CHENG L, WANG Z. Self-attention-based BiGRU and capsule network for named entity recognition[J]. arXiv:2002.00735, 2020.
[5] LAI Q, ZHOU Z, LIU S. Joint entity-relation extraction via improved graph attention networks[J]. Symmetry, 2020, 12(10): 1746.
[6] YAN C, SU Q, WANG J. MoGCN: mixture of gated convolutional neural network for named entity recognition of Chinese historical texts[J]. IEEE Access, 2020, 8: 181629-181639.
[7] CHEN H, LIN Z, DING G, et al. GRN: gated relation network to enhance convolutional neural network for named entity recognition[C]//Proceedings of the AAAI Conference on Artificial Intelligence, 2019, 33(1): 6236-6243.
[8] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv:1907.11692, 2019.
[9] YANG Z, DAI Z, YANG Y, et al. XLNet: generalized autoregressive pretraining for language understanding[C]//Advances in Neural Information Processing Systems, 2019: 5754-5764.
[10] LI X, FENG J, MENG Y, et al. A unified MRC framework for named entity recognition[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 5849-5859.
[11] XUE M, YU B, ZHANG Z, et al. Coarse-to-fine pretraining for named entity recognition[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020: 6345-6354.
[12] 谢雪景, 谢忠, 马凯, 等. 结合BERT与BiGRU-Attention-CRF模型的地质命名实体识别[J]. 地质通报, 2023, 42(5): 846-855.
XIE X J, XIE Z, MA K, et al. Geological named entity recognition combining BERT with the BiGRU-Attention-CRF model[J]. Geological Bulletin of China, 2023, 42(5): 846-855.
[13] 贾猛, 王裴岩, 张桂平, 等. 面向工艺文本的命名实体识别方法研究[J]. 中文信息学报, 2022, 36(3): 54-63.
JIA M, WANG P Y, ZHANG G P, et al. Research on named entity recognition methods for process texts[J]. Journal of Chinese Information Processing, 2022, 36(3): 54-63.
[14] LU Y, YUE Z, JI D. Multi-prototype Chinese character embedding[C]//Proceedings of the 10th International Conference on Language Resources and Evaluation. European Language Resources Association, 2016: 855-859.
[15] 袁健, 章海波. 多粒度融合嵌入的中文实体识别模型[J]. 小型微型计算机系统, 2022, 43(4): 741-761.
YUAN J, ZHANG H B. A multi-granularity fusion embedding model for Chinese entity recognition[J]. Journal of Chinese Computer Systems, 2022, 43(4): 741-761.
[16] 李丹, 徐童, 郑毅, 等. 部首感知的中文医疗命名实体识别[J]. 中文信息学报, 2020, 34(12): 54-64.
LI D, XU T, ZHENG Y, et al. Radical-aware Chinese medical named entity recognition[J]. Journal of Chinese Information Processing, 2020, 34(12): 54-64.
[17] 罗凌, 杨志豪, 宋雅文, 等. 基于笔画ELMo和多任务学习的中文电子病历命名实体识别研究[J]. 计算机学报, 2020, 43(10): 1943-1957.
LUO L, YANG Z H, SONG Y W, et al. Named entity recognition in Chinese electronic medical records based on stroke ELMo and multi-task learning[J]. Chinese Journal of Computers, 2020, 43(10): 1943-1957.
[18] GOLDBERG Y, LEVY O. Word2vec explained: deriving Mikolov’s negative-sampling word-embedding method[J]. arXiv:1402.3722, 2014.
[19] 李娜. 基于条件随机场的方志古籍别名自动抽取模型构建[J]. 中文信息学报, 2018, 32(11): 41-48.
LI N. Construction of an automatic alias extraction model for ancient local chronicles based on conditional random fields[J]. Journal of Chinese Information Processing, 2018, 32(11): 41-48.
[20] MA R, PENG M, ZHANG Q, et al. Simplify the usage of lexicon in Chinese NER[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020: 5951-5960.
[21] SUN Z, LI X, SUN X, et al. ChineseBERT: Chinese pretraining enhanced by glyph and pinyin information[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021: 2065-2075.
[22] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019: 4171-4186.
[23] WEISCHEDEL R, PRADHAN S, RAMSHAW L, et al. OntoNotes release 4.0[EB/OL]. (2011-02-15)[2023-01-20]. https://catalog.ldc.upenn.edu/LDC2011T03.
[24] DING R X, XIE P J, ZHANG X Y, et al. A neural multi-digraph model for Chinese NER with gazetteers[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 1462-1467.
[25] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[26] QIN Q, ZHAO S, LIU C. A BERT-BiGRU-CRF model for entity recognition of Chinese electronic medical records[J]. Complexity, 2021(3): 1-11.
[27] CAO P, CHEN Y, LIU K, et al. Adversarial transfer learning for Chinese named entity recognition with self-attention mechanism[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018: 182-192.
[28] LI X Y, YIN F, SUN Z J, et al. Entity-relation extraction as multi-turn question answering[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 1340-1350.
[29] 范涛, 王昊, 张卫, 等. 基于机器阅读理解的非遗文本实体抽取研究[J]. 数据分析与知识发现, 2022, 6(12): 70-79.
FAN T, WANG H, ZHANG W, et al. Entity extraction from intangible cultural heritage texts based on machine reading comprehension[J]. Data Analysis and Knowledge Discovery, 2022, 6(12): 70-79.