[1] 马欢欢, 孔繁之, 高建强. 中文电子病历命名实体识别方法研究[J]. 医学信息学杂志, 2020, 41(4): 24-29.
MA H H, KONG F Z, GAO J Q. Study on named entity recognition method of Chinese electronic medical records[J]. Journal of Medical Informatics, 2020, 41(4): 24-29.
[2] 付秀, 陈麒麟, 李杰, 等. 基于智能预问诊的全景多学科会诊平台的设计与应用[J]. 中国数字医学, 2021, 16(10): 79-82.
FU X, CHEN Q L, LI J, et al. Design and application of the panoramic multi-disciplinary treatment platform based on intelligent pre-consultation[J]. China Digital Medicine, 2021, 16(10): 79-82.
[3] SHANG J B, LIU L Y, GU X T, et al. Learning named entity tagger using domain-specific dictionary[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018: 2054-2064.
[4] 龚乐君, 张知菲. 基于领域词典与CRF双层标注的中文电子病历实体识别[J]. 工程科学学报, 2020, 42(4): 469-475.
GONG L J, ZHANG Z F. Clinical named entity recognition from Chinese electronic medical records using a double-layer annotation model combining a domain dictionary with CRF[J]. Chinese Journal of Engineering, 2020, 42(4): 469-475.
[5] 高冰涛, 张阳, 刘斌. BioTrHMM: 基于迁移学习的生物医学命名实体识别算法[J]. 计算机应用研究, 2019, 36(1): 45-48.
GAO B T, ZHANG Y, LIU B. BioTrHMM: named entity recognition algorithm based on transfer learning in biomedical texts[J]. Application Research of Computers, 2019, 36(1): 45-48.
[6] RABINER L, JUANG B. An introduction to hidden Markov models[J]. IEEE ASSP Magazine, 1986, 3(1): 4-16.
[7] JAYNES E T. Information theory and statistical mechanics[J]. Physical Review, 1957, 106(4): 620-630.
[8] LAFFERTY J, MCCALLUM A, PEREIRA F C N. Conditional random fields: probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, USA, June 28-July 1, 2001: 282-289.
[9] KIM Y. Convolutional neural networks for sentence classification[J]. arXiv:1408.5882, 2014.
[10] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems, 2017: 5998-6008.
[12] YIN M W, MOU C J, XIONG K N, et al. Chinese clinical named entity recognition with radical-level feature and self-attention mechanism[J]. Journal of Biomedical Informatics, 2019, 98: 103289.
[13] 赵珍珍, 董彦如, 刘静, 等. 融合词信息和图注意力的医学命名实体识别[J]. 计算机工程与应用, 2024, 60(11): 147-155.
ZHAO Z Z, DONG Y R, LIU J, et al. Medical named entity recognition incorporating word information and graph attention[J]. Computer Engineering and Applications, 2024, 60(11): 147-155.
[14] WEN S, ZENG B, LIAO W. Named entity recognition for instructions of Chinese medicine based on pre-trained language model[C]//2021 3rd International Conference on Natural Language Processing (ICNLP), 2021: 139-144.
[15] 张云秋, 汪洋, 李博诚. 基于RoBERTa-wwm动态融合模型的中文电子病历命名实体识别[J]. 数据分析与知识发现, 2022, 6(2/3): 242-250.
ZHANG Y Q, WANG Y, LI B C. Identifying named entities of Chinese electronic medical records based on RoBERTa-wwm dynamic fusion model[J]. Data Analysis and Knowledge Discovery, 2022, 6(2/3): 242-250.
[16] LEE J, YOON W, KIM S, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining[J]. Bioinformatics, 2020, 36(4): 1234-1240.
[17] SYMEONIDOU A, SAZONAU V, GROTH P. Transfer learning for biomedical named entity recognition with BioBERT[C]//SEMANTICS Posters & Demos, 2019: 1-5.
[18] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[19] JAWAHAR G, SAGOT B, SEDDAH D. What does BERT learn about the structure of language?[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 3651-3657.
[20] SANG E F, BUCHHOLZ S. Introduction to the CoNLL-2000 shared task: chunking[J]. arXiv:cs/0009008, 2000.
[21] CONNEAU A, KIELA D. SentEval: an evaluation toolkit for universal sentence representations[J]. arXiv:1803.05449, 2018.
[22] ALBILALI E, ALTWAIRESH N, HOSNY M. What does BERT learn from Arabic machine reading comprehension datasets?[C]//Proceedings of the Sixth Arabic Natural Language Processing Workshop, 2021: 32-41.
[23] ANTOUN W, BALY F, HAJJ H. AraBERT: transformer-based model for Arabic language understanding[J]. arXiv:2003.00104, 2020.
[24] ZAN H Y, LI W X, ZHANG K L, et al. Building a pediatric medical corpus: word segmentation and named entity annotation[C]//21st Workshop on Chinese Lexical Semantics (CLSW 2020), Hong Kong, China, May 28-30, 2020. [S.l.]: Springer International Publishing, 2021: 652-664.
[25] ZHANG N, CHEN M, BI Z, et al. CBLUE: a Chinese biomedical language understanding evaluation benchmark[J]. arXiv:2106.08087, 2021.
[26] ZHANG Y, YANG J. Chinese NER using lattice LSTM[J]. arXiv:1805.02023, 2018.
[27] PENG N, DREDZE M. Named entity recognition for Chinese social media with jointly trained embeddings[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015: 548-554.
[28] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[29] CUI Y, CHE W, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing[J]. arXiv:2004.13922, 2020.
[30] SUN Y, WANG S, LI Y, et al. ERNIE: enhanced representation through knowledge integration[J]. arXiv:1904.09223, 2019.