Computer Engineering and Applications ›› 2008, Vol. 44 ›› Issue (9): 82-86.

• Theoretical Research •

Theoretical foundations of statistical learning theory based on complex quasi-random samples

ZHANG Zhi-ming1, TIAN Jing-feng2, HA Ming-hu1

  1. College of Mathematics and Computer Sciences, Hebei University, Baoding, Hebei 071002, China
  2. Science and Technology College, North China Electric Power University, Baoding, Hebei 071051, China
  • Received: 2007-09-22  Revised: 2007-12-02  Online: 2008-03-21  Published: 2008-03-21
  • Contact: ZHANG Zhi-ming

Abstract: First, the definitions of complex quasi-random variable and primary norm are introduced, and the concepts and some properties of the mathematical expectation and variance of complex quasi-random variables are presented. Next, Markov's inequality, Chebyshev's inequality, and a Khinchine-type law of large numbers for complex quasi-random variables are proved. Finally, the definitions of the complex empirical risk functional, the complex expected risk functional, and the complex empirical risk minimization principle on a quasi-probability measure space are proposed. The key theorem of learning theory based on complex quasi-random samples is then proved and discussed, and bounds on the rate of uniform convergence of the learning process are established. These results lay the theoretical foundations for the systematic development of statistical learning theory based on complex quasi-random samples.
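
For orientation, the LaTeX sketch below records the classical prototypes that the paper generalizes to the complex quasi-random setting. The notation (ξ for a complex quasi-random variable, μ for the quasi-probability, Q(z, α) for a complex-valued loss over parameters α ∈ Λ, l for the sample size) is assumed here purely for illustration and need not match the paper's own symbols; in the paper the expectation, variance, and measure are their quasi-probability analogues.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Classical prototypes, written with the quasi-probability \mu in place
% of an ordinary probability measure (illustrative notation only).

% Markov's inequality for the modulus of a complex quasi-random variable:
\[
  \mu\{\, |\xi| \ge \varepsilon \,\} \le \frac{E|\xi|}{\varepsilon},
  \qquad \varepsilon > 0.
\]

% Chebyshev's inequality, with variance D\xi = E|\xi - E\xi|^2:
\[
  \mu\{\, |\xi - E\xi| \ge \varepsilon \,\} \le \frac{D\xi}{\varepsilon^2},
  \qquad \varepsilon > 0.
\]

% Complex empirical and expected risk functionals over a sample
% z_1, ..., z_l drawn under \mu:
\[
  R_{\mathrm{emp}}(\alpha) = \frac{1}{l}\sum_{i=1}^{l} Q(z_i,\alpha),
  \qquad
  R(\alpha) = \int Q(z,\alpha)\,\mathrm{d}\mu(z).
\]

% Shape of the key theorem: consistency of the complex empirical risk
% minimization principle is tied to uniform convergence of empirical
% risks to expected risks over the whole parameter class \Lambda:
\[
  \lim_{l\to\infty}
  \mu\Bigl\{\, \sup_{\alpha\in\Lambda}
    \bigl| R(\alpha) - R_{\mathrm{emp}}(\alpha) \bigr| > \varepsilon \,\Bigr\}
  = 0,
  \qquad \forall\, \varepsilon > 0.
\]

\end{document}

In Vapnik's classical theory the key theorem is stated with one-sided uniform convergence; the two-sided form above is shown only because it is the more familiar shape of the condition.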

Key words: complex quasi-random variable, primary norm, complex empirical risk minimization principle, key theorem, bounds on the rate of convergence, neural network