[1] LIAO K, LIU Q, WEI Z, et al. Task-oriented dialogue system for automatic disease diagnosis via hierarchical reinforcement learning[J]. arXiv:2004.14254, 2020.
[2] LIU W, TANG J, QIN J, et al. MedDG: a large-scale medical consultation dataset for building medical dialogue system[J]. arXiv:2010.07497, 2020.
[3] DELISLE S, KIM B, DEEPAK J, et al. Using the electronic medical record to identify community-acquired pneumonia: toward a replicable automated strategy[J]. PLoS One, 2013, 8(8): e70944.
[4] WANG B, XIE Q, PEI J, et al. Pre-trained language models in biomedical domain: a systematic survey[J]. arXiv:2110.05006, 2021.
[5] WACHTER R, GOLDSMITH J. To combat physician burnout and improve care, fix the electronic health record[J]. Harvard Business Review, 2018.
[6] ZHANG N, CHEN M, BI Z, et al. CBLUE: a Chinese biomedical language understanding evaluation benchmark[J]. arXiv:2106.08087, 2021.
[7] ZOU Y, ZHAO L, KANG Y, et al. Topic-oriented spoken dialogue summarization for customer service with saliency-aware topic modeling[J]. arXiv:2012.07311, 2020.
[8] YUAN C M, LITTLE D J, MARKS E S, et al. The electronic medical record and nephrology fellowship education in the United States: an opinion survey[J]. Clinical Journal of the American Society of Nephrology, 2020, 15(7): 949-956.
[9] LIU W, TANG J, LIANG X, et al. Heterogeneous graph reasoning for knowledge-grounded medical dialogue system[J]. Neurocomputing, 2021, 442: 260-268.
[10] DU N, WANG M, TRAN L, et al. Learning to infer entities, properties and their relations from clinical conversations[J]. arXiv:1908.11536, 2019.
[11] WEI Z, LIU Q, PENG B, et al. Task-oriented dialogue system for automatic diagnosis[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2018: 201-207.
[12] LIN X, HE X, CHEN Q, et al. Enhancing dialogue symptom diagnosis with global attention and symptom graph[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, and the 9th International Joint Conference on Natural Language Processing, 2019: 5033-5042.
[13] VERGA P, STRUBELL E, MCCALLUM A. Simultaneously self-attending to all mentions for full-abstract biological relation extraction[J]. arXiv:1802.10569, 2018.
[14] NAN G, GUO Z, SEKULIĆ I, et al. Reasoning with latent structure refinement for document-level relation extraction[J]. arXiv:2005.06312, 2020.
[15] XU B, WANG Q, LYU Y, et al. Entity structure within and throughout: modeling mention dependencies for document-level relation extraction[J]. arXiv:2102.10249, 2021.
[16] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv:1810.04805, 2018.
[17] ZHANG N, CHEN X, XIE X, et al. Document-level relation extraction as semantic segmentation[J]. arXiv:2106.03618, 2021.
[18] LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 2980-2988.
[19] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[20] CUI Y, CHE W, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing[J]. arXiv:2004.13922, 2020.
[21] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv:1907.11692, 2019.