Computer Engineering and Applications ›› 2023, Vol. 59 ›› Issue (14): 1-14. DOI: 10.3778/j.issn.1002-8331.2208-0322

• Hotspots and Reviews •

Review of Explainable Artificial Intelligence

ZHAO Yanyu, ZHAO Xiaoyong, WANG Lei, WANG Ningning   

  1. Information Systems Institute, Beijing Information Science & Technology University, Beijing 100129, China
    2. Advanced Innovation Center for Materials Genome Engineering, Beijing Information Science & Technology University, Beijing 100129, China
  • Online: 2023-07-15  Published: 2023-07-15

Abstract: With the development of machine learning and deep learning, artificial intelligence technology has gradually been applied in a wide range of fields. However, one of the biggest drawbacks of adopting AI is its inability to explain the basis for its predictions. The black-box nature of these models prevents humans from truly trusting them in mission-critical application scenarios such as healthcare, finance, and autonomous driving, which limits the practical deployment of AI in these areas. Advancing explainable artificial intelligence (XAI) has therefore become a key issue for bringing AI into mission-critical applications. At present, related fields both in China and abroad still lack comprehensive reviews of XAI, as well as attention to causal explanation methods and research on the evaluation of explanation methods. Starting from the characteristics of explanation methods, this survey divides the main explainability methods into three categories: model-independent methods, model-dependent methods, and causal explanation methods, and summarizes and analyzes each category. It then reviews the evaluation of explanation methods, lists applications of explainable AI, and finally discusses open problems in explainability and offers an outlook.
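
To give a concrete sense of the "model-independent" category named in the abstract, the following is a brief illustrative sketch, not taken from the paper: it applies scikit-learn's permutation importance, a representative model-agnostic technique that explains a trained classifier purely through its input-output behavior. The dataset, model, and parameter settings are assumptions chosen only for illustration.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an opaque model on a toy dataset (illustrative choice, not from the paper).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Model-independent explanation: shuffle each feature on held-out data and
# measure how much the model's accuracy drops, treating the model as a black box.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean, result.importances_std),
                 key=lambda item: item[1], reverse=True)
for name, mean, std in ranking[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")

The same black-box treatment, explaining a model only through perturbed inputs and observed outputs, also underlies other widely used model-agnostic explainers such as LIME and SHAP.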

Key words: explainability, artificial intelligence, machine learning, deep learning, evaluation