Computer Engineering and Applications ›› 2018, Vol. 54 ›› Issue (22): 156-159.DOI: 10.3778/j.issn.1002-8331.1708-0052


Near-optimal active learning for Tibetan speech recognition

ZHAO Yue, LI Yaoqiang, XU Xiaona, WU Licheng   

  1. School of Information Engineering, Minzu University of China, Beijing 100081, China
  Online: 2018-11-15    Published: 2018-11-13


Abstract: Training speech recognition models requires a large corpus of annotated speech. Tibetan, one of China's ethnic minority languages, has very few annotation experts, so labeling Tibetan speech data is time-consuming and costly. Active learning, however, can select a small number of informative samples from a large pool of unlabeled data, according to the speech recognition objective, and present them to the user for annotation, so that a small amount of high-quality training data suffices to build a recognition model as accurate as one trained on a large dataset. This paper studies active-learning-based speech data selection for Lhasa-Tibetan speech recognition, proposes a near-optimal batch-mode objective function, and proves that this objective function is submodular. Experimental results show that the proposed method maintains the accuracy of the speech recognition model with less training data, thereby reducing the manual annotation workload.

Key words: near-optimal batch mode active learning, submodular function, speech corpus selection, Lhasa-Tibetan speech recognition
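The "near-optimal" guarantee rests on submodularity: for a monotone submodular objective, the simple greedy algorithm that repeatedly adds the sample with the largest marginal gain achieves a (1 − 1/e) approximation to the optimal batch. A minimal sketch of that greedy batch selection, using a hypothetical facility-location objective f(S) = Σ_i max_{j∈S} sim[i][j] over a sample-similarity matrix (an illustration of the technique, not the paper's actual objective function):

```python
# Greedy batch selection for a monotone submodular objective.
# Illustrative facility-location objective (assumed, not the paper's):
#   f(S) = sum_i max_{j in S} sim[i][j]
# where sim[i][j] is the similarity between unlabeled samples i and j.

def greedy_select(sim, batch_size):
    """Return `batch_size` sample indices chosen greedily by marginal gain."""
    n = len(sim)
    selected = []
    # best[i] = how well sample i is currently covered by the selected set
    best = [0.0] * n
    for _ in range(batch_size):
        best_gain, j_star = float("-inf"), -1
        for j in range(n):
            if j in selected:
                continue
            # Marginal gain of adding candidate j to the selected set
            gain = sum(max(best[i], sim[i][j]) - best[i] for i in range(n))
            if gain > best_gain:
                best_gain, j_star = gain, j
        selected.append(j_star)
        best = [max(best[i], sim[i][j_star]) for i in range(n)]
    return selected


# Toy usage: pick the 2 most informative of 3 samples.
sim = [[1.0, 0.2, 0.1],
       [0.2, 1.0, 0.3],
       [0.1, 0.3, 1.0]]
print(greedy_select(sim, 2))
```

Because each greedily chosen batch is sent to the annotator and the model is retrained before the next round, the selection objective, not the recognizer, drives which utterances get labeled; in practice the similarity would be computed over acoustic features of the unlabeled Tibetan utterances.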
