Computer Engineering and Applications ›› 2011, Vol. 47 ›› Issue (12): 200-202.

• Graphics, Image and Pattern Recognition •


Improving SVM’s learning efficiency by using matrix LDLT parallel decomposition

QIN Hua, XU Yanzi

  1. College of Computer and Information Engineering, Guangxi University, Nanning 530004, China
  • Online: 2011-04-21 Published: 2011-04-21

摘要: 支持向量机在大规模训练集上学习时,存在学习时间长、泛化能力下降的问题。研究使用路径跟踪内点法构建面向大规模训练集的SVM学习算法,找到影响算法学习效率的关键是求解大型线性修正方程,首先使用降维法降低修正方程的维数,再使用矩阵LDLT并行分解高效地求解子修正方程,达到优化大规模SVM学习效率的目的,实验结果说明SVM训练效率提升的同时不影响SVM模型的泛化能力。

关键词: 大规模支持向量机, 路径跟踪内点法, 矩阵LDLT并行分解

Abstract: When a support vector machine is trained on a large-scale dataset, training time grows long and generalization capability degrades. A path-following interior point method is used to design an SVM learning algorithm for large-scale datasets; the key factor limiting learning efficiency is solving the large linear correction (search-direction) equations at each iteration. To improve efficiency, the dimension of the correction equations is first reduced, and the resulting sub-equations are then solved efficiently by parallel matrix LDLT decomposition. Experimental results show that the new training algorithm is efficient on large-scale datasets and does not degrade the generalization capability of the SVM model.

Key words: large-scale support vector machine, path-following interior point method, matrix LDLT parallel decomposition
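The abstract does not give the algorithm itself, so the following is only a minimal serial sketch of the core LDLT step it relies on: factoring a symmetric positive-definite matrix A (such as the reduced correction-equation matrix) as A = L·D·Lᵀ with unit lower-triangular L and diagonal D, then solving A x = b by one forward solve, a diagonal scaling, and one back solve. The function names and the NumPy implementation are illustrative, not the authors' code, and the parallelization across sub-equations described in the paper is not shown.

```python
import numpy as np

def ldlt(A):
    """Factor symmetric A as L @ diag(d) @ L.T, L unit lower-triangular.

    No pivoting: assumes all leading principal minors are nonsingular,
    which holds for symmetric positive-definite A.
    """
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # d_j = a_jj - sum_{k<j} L_jk^2 d_k
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            # L_ij = (a_ij - sum_{k<j} L_ik L_jk d_k) / d_j
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

def solve_ldlt(L, d, b):
    """Solve L D L^T x = b: forward solve, diagonal scaling, back solve."""
    n = len(b)
    y = np.asarray(b, dtype=float).copy()
    for i in range(n):                 # L y = b (unit diagonal)
        y[i] -= L[i, :i] @ y[:i]
    z = y / d                          # D z = y
    x = z.copy()
    for i in range(n - 1, -1, -1):     # L^T x = z (unit diagonal)
        x[i] -= L[i + 1:, i] @ x[i + 1:]
    return x

# Usage on a small SPD system standing in for a reduced correction equation:
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)            # symmetric positive definite
b = rng.standard_normal(6)
L, d = ldlt(A)
x = solve_ldlt(L, d, b)
```

Because D is diagonal, the two triangular solves dominate the solve cost once the factorization is done, which is why factoring once and reusing L and d across right-hand sides (or distributing independent sub-equations across workers, as the paper proposes) pays off.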