Computer Engineering and Applications ›› 2025, Vol. 61 ›› Issue (14): 230-237. DOI: 10.3778/j.issn.1002-8331.2405-0111

• Pattern Recognition and Artificial Intelligence •

Continuous Learning Algorithm Combining Eigenface and Orthogonal Weight Modification

LIAO Dingding, LIU Junfeng, ZENG Jun, XU Shikang   

  1. School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
  2. School of Electric Power Engineering, South China University of Technology, Guangzhou 510641, China
  • Online: 2025-07-15   Published: 2025-07-15


Abstract: Conventional deep neural networks suffer catastrophic forgetting of previously learned knowledge in continuous learning. In recent years, orthogonal weight modification (OWM) has been regarded as an effective continuous learning algorithm, but it performs poorly on datasets split into a large number of batches and is highly sensitive to the selection of random samples. To address these issues, a continuous learning algorithm combining the eigenface method with orthogonal weight modification (BZL-OWM) is proposed. The eigenface method is used to improve the input-space representation of the neural network layers, so that the weights can be modified along a more accurate orthogonal direction and better continuous learning performance is achieved. Extensive class-incremental continuous learning experiments on multiple datasets show that the continuous learning ability of the BZL-OWM algorithm is significantly better than that of the original OWM algorithm; in particular, in scenarios with a large number of batches, the improvement rate of the average test accuracy reaches up to 50%.

Key words: continuous learning, deep learning, class-incremental learning, orthogonal weight modification (OWM)
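
The abstract only outlines the approach, so the following is a rough, hedged sketch of the general idea rather than the authors' BZL-OWM implementation: an eigenface-style PCA projection of the layer input, followed by the recursive OWM projector update known from the continual learning literature. The names EigenfaceProjector and owm_update_projector, the toy data, and the per-sample projector update schedule are all illustrative assumptions.

    # Minimal sketch (not the authors' released code): eigenface (PCA) re-representation
    # of the input, then OWM-style gradient projection so that weight updates stay
    # approximately orthogonal to the input subspace of previously learned tasks.
    import numpy as np

    class EigenfaceProjector:
        """PCA ("eigenface") basis fitted on sample data; re-represents layer inputs."""
        def __init__(self, n_components):
            self.n_components = n_components
            self.mean = None
            self.components = None

        def fit(self, X):                      # X: (n_samples, n_features)
            self.mean = X.mean(axis=0)
            Xc = X - self.mean
            # Right singular vectors give the principal directions ("eigenfaces").
            _, _, vt = np.linalg.svd(Xc, full_matrices=False)
            self.components = vt[: self.n_components]

        def transform(self, X):
            return (X - self.mean) @ self.components.T   # (n_samples, n_components)

    def owm_update_projector(P, x, alpha=1e-3):
        """Recursive OWM projector update for one input column vector x of shape (d, 1)."""
        Px = P @ x
        return P - (Px @ Px.T) / (alpha + x.T @ Px)

    # Toy usage: one linear layer trained sequentially on two synthetic "tasks".
    rng = np.random.default_rng(0)
    d_in, d_pca, d_out, lr = 64, 16, 10, 0.1
    eig = EigenfaceProjector(d_pca)
    eig.fit(rng.normal(size=(500, d_in)))      # eigenface basis from sample inputs

    W = rng.normal(scale=0.01, size=(d_out, d_pca))
    P = np.eye(d_pca)                          # OWM projector in the PCA-represented space

    for task in range(2):
        X = rng.normal(size=(200, d_in))
        Y = rng.integers(0, d_out, size=200)
        Z = eig.transform(X)                   # improved input-space representation
        for z, y in zip(Z, Y):
            z = z[:, None]                     # column vector (d_pca, 1)
            logits = W @ z
            probs = np.exp(logits - logits.max()); probs /= probs.sum()
            grad = probs; grad[y] -= 1.0       # softmax cross-entropy gradient w.r.t. logits
            # OWM step: project the weight gradient onto the subspace orthogonal
            # to previously seen (PCA-represented) inputs before applying it.
            W -= lr * (grad @ z.T) @ P
            P = owm_update_projector(P, z)

Projecting the gradient with P is what preserves earlier tasks; the eigenface transform changes the space in which P is estimated, which is the aspect the abstract credits for the more accurate orthogonal direction.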
