Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (11): 115-128. DOI: 10.3778/j.issn.1002-8331.2301-0173

• Pattern Recognition and Artificial Intelligence •

Motor Imagery Signal Analysis Incorporating Spatio-Temporal Adaptive Graph Convolution

LIU Jing, KANG Xiaohui, DONG Zehao, LI Xuan, ZHAO Wei, WANG Yu   

  1. College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China
  2. College of Software, Hebei Normal University, Shijiazhuang 050024, China
  3. College of Mathematical Sciences, Hebei Normal University, Shijiazhuang 050024, China
  • Online: 2024-06-01  Published: 2024-05-31

Abstract: Brain-computer interface (BCI) technology based on motor imagery (MI) EEG signals has attracted wide attention and research in medical applications for motor function rehabilitation of stroke patients. However, MI signals have a low signal-to-noise ratio and large inter-subject variability, so excessive noise in the EEG signal degrades classification performance. Therefore, how to fully extract MI signal features to obtain higher within-subject classification accuracy, and how to train a general model with excellent cross-subject performance, are urgent problems to be solved before MI-BCI systems can be deployed in practical applications. To address this problem, this paper proposes a spatio-temporal adaptive graph convolutional network model for different subjects, which extracts MI features in both the temporal and spatial dimensions for classification. The model comprises four modules: a spatial adaptive graph convolution module, a temporal adaptive graph convolution module, a feature fusion module, and a feature classification module. The spatial adaptive graph convolution module dynamically constructs the spatial graph representation from the feature similarity between channels, removing the limitation of manually constructed graph representations. The temporal adaptive graph convolution module divides the EEG time series into multiple segments and computes the similarity between segments, so as to adaptively construct the temporal graph representation of the EEG signal and suppress the influence of noise. Finally, the features are fused and classified. The results show that, using 10-fold cross-validation, the proposed method achieves average classification accuracies of 90.45% on the BCIIV2a dataset and 91.64% on the HGD dataset. Compared with current state-of-the-art methods, it achieves higher accuracy, demonstrating the effectiveness of the model. Experiments on different individuals using transfer learning raise the average accuracy by 1.66 percentage points, demonstrating the robustness of the model.
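To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of the two adaptive graph convolution steps it outlines: a spatial graph built from pairwise similarity between EEG channels, a temporal graph built from similarity between time segments, then feature fusion and classification. The layer widths, the number of segments, cosine similarity as the similarity measure, and concatenation-based fusion are assumptions made for illustration, not the authors' exact architecture.

```python
# Illustrative sketch of spatio-temporal adaptive graph convolution for MI-EEG.
# Hyperparameters (hidden size, segment count, similarity/fusion choices) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def adaptive_adjacency(x):
    """Soft adjacency from pairwise cosine similarity of node features.

    x: (batch, nodes, features) -> (batch, nodes, nodes), each row softmax-normalised.
    """
    x_norm = F.normalize(x, dim=-1)
    sim = torch.bmm(x_norm, x_norm.transpose(1, 2))   # cosine similarity per node pair
    return F.softmax(sim, dim=-1)                     # normalise rows into edge weights


class AdaptiveGraphConv(nn.Module):
    """One graph-convolution step with a data-driven adjacency: H = ReLU(A X W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                             # x: (batch, nodes, in_dim)
        adj = adaptive_adjacency(x)
        return F.relu(self.proj(torch.bmm(adj, x)))   # aggregate neighbours, then project


class SpatioTemporalAdaptiveGCN(nn.Module):
    """Sketch of the four modules named in the abstract: spatial adaptive GCN
    (EEG channels as nodes), temporal adaptive GCN (time segments as nodes),
    feature fusion, and classification."""

    def __init__(self, n_channels=22, n_samples=1000, n_segments=10,
                 hidden=64, n_classes=4):
        super().__init__()
        assert n_samples % n_segments == 0
        self.n_segments = n_segments
        seg_len = n_samples // n_segments
        self.spatial_gcn = AdaptiveGraphConv(n_samples, hidden)              # nodes = channels
        self.temporal_gcn = AdaptiveGraphConv(n_channels * seg_len, hidden)  # nodes = segments
        self.classifier = nn.Linear(n_channels * hidden + n_segments * hidden, n_classes)

    def forward(self, x):                             # x: (batch, channels, samples)
        b, c, t = x.shape
        spatial_feat = self.spatial_gcn(x)            # (batch, channels, hidden)
        segments = x.reshape(b, c, self.n_segments, -1)          # split the time axis
        segments = segments.permute(0, 2, 1, 3).reshape(b, self.n_segments, -1)
        temporal_feat = self.temporal_gcn(segments)   # (batch, segments, hidden)
        fused = torch.cat([spatial_feat.flatten(1), temporal_feat.flatten(1)], dim=1)
        return self.classifier(fused)                 # class logits


# Example: a batch of 8 trials, 22 channels, 1000 samples (4 s at 250 Hz, as in BCIIV2a)
logits = SpatioTemporalAdaptiveGCN()(torch.randn(8, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```

The key point the sketch illustrates is that the adjacency matrices are recomputed from the input on every forward pass rather than being fixed from electrode geometry, which is what "adaptive" refers to in both the spatial and temporal modules.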

Key words: brain-computer interface, motor imagery, deep learning, graph convolution network
