%0 Journal Article %A WANG Yurong %A LIN Min %A LI Yanling %T BERT Mongolian Word Embedding Learning %D 2023 %R 10.3778/j.issn.1002-8331.2107-0102 %J Computer Engineering and Applications %P 129-134 %V 59 %N 2 %X Static Mongolian word embedding methods represented by Word2Vec collapse a word's different senses across contexts into a single embedding, and such context-independent text representations offer only limited improvement on downstream tasks. Through further training, the multilingual BERT pre-trained model is combined with a CRF layer, and a fusion method based on two kinds of seed words is adopted, yielding a new dynamic Mongolian word embedding learning method that addresses the problem of multiple word senses being aggregated into one embedding. To verify the effectiveness of this method, comparative experiments are carried out on data sets drawn from the education and literature fields of master's and doctoral dissertations of Inner Mongolia Normal University. Cluster analysis of Mongolian words is performed with the K-means clustering algorithm, and the method is finally verified on a keyword mining task. The experimental results show that the quality of the word embeddings learned by BERT is higher than that of Word2Vec: embeddings of similar words lie close together in the vector space, while embeddings of dissimilar words lie far apart, and the subject words obtained in the keyword mining task are closely related. %U http://cea.ceaj.org/EN/10.3778/j.issn.1002-8331.2107-0102