Computer Engineering and Applications ›› 2024, Vol. 60 ›› Issue (4): 57-74. DOI: 10.3778/j.issn.1002-8331.2305-0102
Survey of Neural Machine Translation
ZHANG Junjin, TIAN Yonghong, SONG Zheyu, HAO Yufeng
Online: 2024-02-15
Published: 2024-02-15
Abstract: Machine translation studies how to translate a source language into a target language, and it plays an important role in promoting communication between peoples. Neural machine translation is now the mainstream approach, owing to its translation speed and output quality. To trace this development clearly, this survey first reviews the history and methods of machine translation, comparing and summarizing the three main approaches: rule-based, statistical, and deep-learning-based machine translation. It then introduces neural machine translation and explains its common model types. Next, it examines six major research areas in depth: multimodal machine translation, non-autoregressive machine translation, document-level machine translation, multilingual machine translation, data augmentation techniques, and pretrained models. Finally, it discusses the outlook for neural machine translation from four perspectives: low-resource languages, context-aware translation, out-of-vocabulary words, and large language models. This systematic overview is intended to give a clearer picture of the current state of neural machine translation.
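The attention mechanism, which the survey identifies as the core of mainstream Transformer-based NMT, reduces to a few lines of linear algebra. The following is a minimal NumPy sketch of scaled dot-product attention (illustrative only, not code from the paper; the toy shapes and random inputs are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (tgt_len, src_len) alignment scores
    weights = softmax(scores, axis=-1)  # each target position attends over the source
    return weights @ V, weights

# Toy example: 3 decoder (target) positions attending over 4 encoder (source) positions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))  # decoder queries
K = rng.normal(size=(4, 8))  # encoder keys
V = rng.normal(size=(4, 8))  # encoder values
context, weights = scaled_dot_product_attention(Q, K, V)
print(context.shape, weights.shape)  # (3, 8) (3, 4)
```

Each row of `weights` is a soft alignment from one target position to all source positions, which is what replaced the explicit alignment models of statistical machine translation.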
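Among the data augmentation techniques the survey highlights, back-translation is the most common: target-side monolingual text is translated into the source language by a reverse model, and the resulting synthetic pairs are mixed with real parallel data to train the forward model. A minimal sketch follows; `translate_tgt_to_src` is a hypothetical stand-in for any trained target-to-source model, and the word-reversing toy model exists only to make the example runnable:

```python
from typing import Callable, List, Tuple

def back_translate(
    mono_tgt: List[str],
    translate_tgt_to_src: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Pair each real target sentence with a synthetic source sentence."""
    return [(translate_tgt_to_src(t), t) for t in mono_tgt]

# Toy stand-in "model": reverses word order (illustration only).
def toy_reverse_model(sentence: str) -> str:
    return " ".join(reversed(sentence.split()))

mono = ["the cat sat on the mat", "neural machine translation works well"]
for src, tgt in back_translate(mono, toy_reverse_model):
    print(src, "=>", tgt)
```

The key property is that the target side of every synthetic pair is genuine text, so the forward model learns to produce fluent output even when the synthetic source side is noisy.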
ZHANG Junjin, TIAN Yonghong, SONG Zheyu, HAO Yufeng. Survey of Neural Machine Translation[J]. Computer Engineering and Applications, 2024, 60(4): 57-74.