Computer Engineering and Applications ›› 2021, Vol. 57 ›› Issue (5): 131-138.DOI: 10.3778/j.issn.1002-8331.1911-0175


Early Warning of Critical Illness Based on Explicable Hierarchical Attention Mechanism

WANG Tiangang, ZHANG Xiaobin, MA Hongye, CAI Hongwei   

  1.College of Computer Science, Xi’an Polytechnic University, Xi’an 710048, China
    2.Department of Network Information, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
    3.Department of Critical Care Medicine, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
  Online: 2021-03-01    Published: 2021-03-02

Abstract:

Accuracy and interpretability are the two main factors that determine whether a prediction model can be applied successfully. Traditional statistical models such as Logistic regression are widely used because they are easy to interpret, despite their limited predictive accuracy. In contrast, deep learning “black box” models based on RNNs or CNNs achieve high accuracy but are often difficult to understand. Balancing these two factors is a major challenge for current research in the medical field. This paper establishes an Interpretable Hierarchical Attention Network(IHAN) based on output optimization to give early warning of severe and critical illnesses that may develop during the rescue process, through experimental analysis of inpatient physiological indicator data collected from the CIS(Clinical Information System) of a tertiary Grade-A hospital. IHAN outperforms other neural network models in experimental accuracy and can imitate human behavior: it focuses on abnormalities in patients’ physiological data along two dimensions, time and risk factors, achieving good interpretability while maintaining high accuracy.
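The two-level attention the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's IHAN implementation: the data, the two weight vectors, and all sizes are hypothetical stand-ins for learned parameters. Level 1 weights the physiological indicators within each time step; level 2 weights the time steps themselves. The two softmax weight vectors are what make the model's focus interpretable.

```python
import math
import random

def softmax(scores):
    # Numerically stable softmax over a list of floats.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
T, F = 6, 4  # 6 time steps x 4 physiological indicators (hypothetical sizes)
X = [[random.gauss(0, 1) for _ in range(F)] for _ in range(T)]

# Stand-ins for learned attention parameters (random here, for illustration only).
w_feat = [random.gauss(0, 1) for _ in range(F)]  # scores indicators within a step
w_time = [random.gauss(0, 1) for _ in range(F)]  # scores each step's summary

# Level 1: attention over indicators (risk factors) within each time step.
feat_weights = [softmax([x * w for x, w in zip(row, w_feat)]) for row in X]
step_vectors = [[a * x for a, x in zip(alpha, row)]
                for alpha, row in zip(feat_weights, X)]

# Level 2: attention over time steps.
time_scores = [sum(v * w for v, w in zip(vec, w_time)) for vec in step_vectors]
time_weights = softmax(time_scores)

# Final patient representation; time_weights and feat_weights are the
# interpretable output: which time steps and which indicators stood out.
representation = [sum(tw * vec[j] for tw, vec in zip(time_weights, step_vectors))
                  for j in range(F)]
```

In a real model the per-step summaries would come from an RNN encoder and the attention parameters would be trained end to end; the hierarchy of two softmax distributions, however, is exactly what yields the time- and risk-factor-level explanations described above.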

Key words: hierarchical attention mechanism, Recurrent Neural Network(RNN), early warning of disease, interpretability
