Most Read articles

    Published in last 1 year
    Progress on Deep Reinforcement Learning in Path Planning
    ZHANG Rongxia, WU Changxu, SUN Tongchao, ZHAO Zengshun
    Computer Engineering and Applications    2021, 57 (19): 44-56.   DOI: 10.3778/j.issn.1002-8331.2104-0369
    Abstract (551) | PDF(pc) (1134KB) (388)

    The purpose of path planning is to enable a robot to avoid obstacles and quickly plan the shortest path during its movement. After analyzing the advantages and disadvantages of reinforcement-learning-based path planning algorithms, the paper focuses on a typical deep reinforcement learning method, the Deep Q-learning Network (DQN) algorithm, which can perform excellent path planning in complex dynamic environments. Firstly, the basic principles and limitations of the DQN algorithm are analyzed in depth, and the advantages and disadvantages of various DQN variants are compared from four aspects: the training algorithm, the neural network structure, the learning mechanism, and the Actor-Critic (AC) framework. The paper then puts forward the current challenges and open problems of path planning methods based on deep reinforcement learning, and proposes future development directions, which can provide a reference for the development of intelligent path planning and autonomous driving.

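    To make the surveyed method concrete, below is a minimal sketch of the core DQN update the paper analyzes: an experience replay buffer plus a periodically re-synced target network. The environment, layer sizes and hyperparameters are illustrative assumptions, not details from the paper.

        import random
        from collections import deque

        import torch
        import torch.nn as nn

        # Q-network maps a state vector (here 4-dim) to one Q-value per action.
        q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
        target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
        target_net.load_state_dict(q_net.state_dict())  # frozen copy, re-synced periodically

        optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
        replay = deque(maxlen=10000)  # stores (s, a, r, s2, done) tuples of tensors
        gamma = 0.99

        def dqn_update(batch_size=32):
            s, a, r, s2, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
            # TD target from the frozen network: r + gamma * max_a' Q_target(s', a')
            with torch.no_grad():
                target = r + gamma * (1 - done) * target_net(s2).max(dim=1).values
            q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
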
    Research Progress of Medical Image Registration Technology Based on Deep Learning
    GUO Yanfen, CUI Zhe, YANG Zhipeng, PENG Jing, HU Jinrong
    Computer Engineering and Applications    2021, 57 (15): 1-8.   DOI: 10.3778/j.issn.1002-8331.2101-0281
    Abstract (501) | PDF(pc) (681KB) (465)

    Medical image registration technology has a wide range of application value in lesion detection, clinical diagnosis, surgical planning, and efficacy evaluation. This paper systematically summarizes registration algorithms based on deep learning, and analyzes the advantages and limitations of the various methods, from deep iterative and fully supervised approaches through weakly supervised to unsupervised learning. In general, unsupervised learning has become the mainstream direction of medical image registration research, because it does not rely on gold standards and uses an end-to-end network that saves time. Meanwhile, compared with other methods, unsupervised learning can achieve higher accuracy in less time. However, medical image registration methods based on unsupervised learning still face research difficulties and challenges in terms of interpretability, cross-modal diversity, and repeatable scalability, which points out the research directions for achieving more accurate medical image registration in the future.

    Overview of Chinese Domain Named Entity Recognition
    JIAO Kainan, LI Xin, ZHU Rongchen
    Computer Engineering and Applications    2021, 57 (16): 1-15.   DOI: 10.3778/j.issn.1002-8331.2103-0127
    Abstract (457) | PDF(pc) (928KB) (417)

    Named Entity Recognition (NER), a classic research topic in natural language processing, is the basic technology behind tasks such as intelligent question answering and knowledge graphs. Domain Named Entity Recognition (DNER) is the domain-specific NER scheme. Driven by deep learning technology, Chinese DNER has made breakthrough progress. Firstly, this paper summarizes the research framework of Chinese DNER and reviews existing research results from four aspects: the determination of domain data sources, the establishment of domain entity types and specifications, the annotation of domain data sets, and the evaluation metrics of Chinese DNER. Then, it summarizes the common technical frameworks of Chinese DNER, introducing pattern matching methods based on dictionaries and rules, statistical machine learning methods, deep learning methods, and multi-party fusion deep learning methods, with a focus on Chinese DNER methods based on word vector representation and deep learning. Finally, the typical application scenarios of Chinese DNER are discussed and future development directions are considered.

    Review of Text Sentiment Analysis Methods
    WANG Ting, YANG Wenzhong
    Computer Engineering and Applications    2021, 57 (12): 11-24.   DOI: 10.3778/j.issn.1002-8331.2101-0022
    Abstract (445) | PDF(pc) (906KB) (511)

    Text sentiment analysis is an important branch of natural language processing that is widely used in public opinion analysis and content recommendation, and it has been a hot topic in recent years. According to the methods used, it can be divided into sentiment analysis based on sentiment dictionaries, on traditional machine learning, and on deep learning. By comparing these three kinds of methods, the paper analyzes their research results, summarizes the advantages and disadvantages of each, introduces the related data sets, evaluation indexes and application scenarios, and briefly summarizes the analysis of sentiment subtasks. Future research trends and application fields of sentiment analysis are identified, providing help and guidance for researchers in related areas.

    Review of Research on Generative Adversarial Networks and Its Application
    WEI Fuqiang, Gulanbaier Tuerhong, Mairidan Wushouer
    Computer Engineering and Applications    2021, 57 (19): 18-31.   DOI: 10.3778/j.issn.1002-8331.2104-0248
    Abstract (395) | PDF(pc) (1078KB) (998)

    The theoretical research and applications of generative adversarial networks have been continuously successful, making them one of the current hot spots of research in deep learning. This paper provides a systematic review of the theory of generative adversarial networks and their applications in terms of model types, evaluation criteria, and theoretical research progress. It analyzes the strengths and weaknesses of generative models based on explicit and implicit density, respectively; summarizes the evaluation criteria of generative adversarial networks and interprets the relationships between the criteria; introduces the research progress of generative adversarial networks in image generation at the application level, covering image translation, image generation, image restoration, video generation, text generation and image super-resolution; and analyzes theoretical research progress from the perspectives of interpretability, controllability, stability and model evaluation methods. Finally, the paper discusses the challenges of studying generative adversarial networks and looks forward to possible future directions of development.

    Research Progress of Natural Language Processing Based on Deep Learning
    JIANG Yangyang, JIN Bo, ZHANG Baochang
    Computer Engineering and Applications    2021, 57 (22): 1-14.   DOI: 10.3778/j.issn.1002-8331.2106-0166
    Abstract (393) | PDF(pc) (1781KB) (104)

    This paper comprehensively analyzes research on deep learning in the field of natural language processing through a combination of quantitative and qualitative methods. It uses CiteSpace and VOSviewer to draw knowledge graphs of countries, institutions, journal distribution, keyword co-occurrence, co-citation network clustering, and timeline views of deep learning in natural language processing, in order to clarify the state of research. By mining important work in the field, the paper summarizes research trends, the main problems and development bottlenecks, and gives corresponding solutions and ideas. Finally, suggestions are given on how to track research on deep learning in natural language processing, providing a reference for subsequent research and development in the field.

    Review of Neural Style Transfer Models
    TANG Renwei, LIU Qihe, TAN Hao
    Computer Engineering and Applications    2021, 57 (19): 32-43.   DOI: 10.3778/j.issn.1002-8331.2105-0296
    Abstract (362) | PDF(pc) (1078KB) (380)

    Neural Style Transfer (NST) is used to simulate different artistic styles in images and videos, and is a popular topic in computer vision. This paper aims to provide a comprehensive overview of the current progress of NST. Firstly, it reviews Non-Photorealistic Rendering (NPR) techniques and traditional texture transfer. Then, it categorizes the current major NST methods and gives a detailed description of these methods along with their subsequent improvements. After that, it discusses various applications of NST and presents several evaluation methods that compare different style transfer models both qualitatively and quantitatively. In the end, it summarizes the existing problems and provides some future research directions for NST.

    Research Progress of Transformer Based on Computer Vision
    LIU Wenting, LU Xinming
    Computer Engineering and Applications    2022, 58 (6): 1-16.   DOI: 10.3778/j.issn.1002-8331.2106-0442
    Abstract (360) | PDF(pc) (1089KB) (342)

    Transformer is a deep neural network based on the self-attention mechanism that processes data in parallel. In recent years, Transformer-based models have emerged as an important area of research for computer vision tasks. Aiming at the current gap in domestic review articles on Transformer, this paper covers its application in computer vision. It reviews the basic principles of the Transformer model, focuses on its application to seven visual tasks such as image classification, object detection and segmentation, and analyzes Transformer-based models with significant effects. Finally, the paper summarizes the challenges and future development trends of the Transformer model in computer vision.

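    As background for the models surveyed here, the scaled dot-product self-attention at the heart of Transformer can be sketched in a few lines of PyTorch; tensor shapes and projection sizes below are illustrative assumptions.

        import torch

        def self_attention(x, w_q, w_k, w_v):
            """x: [batch, tokens, dim]; w_q/w_k/w_v: [dim, dim] projections."""
            q, k, v = x @ w_q, x @ w_k, x @ w_v
            # Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
            scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
            return torch.softmax(scores, dim=-1) @ v

        x = torch.randn(2, 16, 64)              # e.g. 16 image patches of dim 64
        w_q, w_k, w_v = (torch.randn(64, 64) for _ in range(3))
        out = self_attention(x, w_q, w_k, w_v)  # [2, 16, 64]
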
    Review of Attention Mechanism in Convolutional Neural Networks
    ZHANG Chenjia, ZHU Lei, YU Lu
    Computer Engineering and Applications    2021, 57 (20): 64-72.   DOI: 10.3778/j.issn.1002-8331.2105-0135
    Abstract (357) | PDF(pc) (973KB) (420)

    The attention mechanism is widely used in deep learning tasks because of its excellent effect and plug-and-play convenience. Focusing on convolutional neural networks, this paper introduces the mainstream methods that have appeared during the development of attention mechanisms for convolutional networks, extracts and summarizes their core ideas and implementation processes, implements each attention mechanism method, and carries out comparative experiments and result analysis on measured data from the same type of emitter equipment. Based on the main ideas and experimental results, the research status and future development directions of attention mechanisms in convolutional networks are summarized.

    Improved U-Net Network for COVID-19 Image Segmentation
    SONG Yao, LIU Jun
    Computer Engineering and Applications    2021, 57 (19): 243-251.   DOI: 10.3778/j.issn.1002-8331.2010-0207
    Abstract (344) | PDF(pc) (915KB) (193)

    The novel coronavirus pneumonia (COVID-19) pandemic is spreading globally, and Computed Tomography (CT) imaging plays a vital role in the fight against it. When diagnosing COVID-19, it is helpful if the lesion area can be segmented automatically and accurately from the CT image, so that the doctor can make a quicker and more accurate diagnosis. Aiming at this segmentation problem, an automatic segmentation method based on an improved U-Net model is proposed. An EfficientNet-B0 network pre-trained on ImageNet is used in the encoder to extract effective features. In the decoder, the traditional up-sampling operation is replaced with a DUpsampling structure to fully capture the detailed feature information of lesion edges, and the segmentation accuracy is further improved through the ensembling of model snapshots. Experimental results on a public data set show that the accuracy, recall and Dice coefficient of the proposed algorithm are 84.24%, 80.43% and 85.12%, respectively. Compared with other segmentation networks, this method effectively segments COVID-19 lesion areas and has good segmentation performance.

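    A minimal sketch of the architectural idea described above, assuming the timm library for the pretrained EfficientNet-B0 encoder; skip connections and the model-snapshot ensemble from the paper are omitted, and the DUpsampling head is a simplified reading (learned 1x1 projection followed by pixel shuffle).

        import torch
        import torch.nn as nn
        import timm  # assumed available for the pretrained encoder

        class DUpsampling(nn.Module):
            """Data-dependent upsampling: a learned 1x1 projection followed
            by pixel shuffle, instead of bilinear interpolation."""
            def __init__(self, in_ch, out_ch, scale):
                super().__init__()
                self.proj = nn.Conv2d(in_ch, out_ch * scale * scale, 1)
                self.shuffle = nn.PixelShuffle(scale)

            def forward(self, x):
                return self.shuffle(self.proj(x))

        class CovidSegNet(nn.Module):
            def __init__(self):
                super().__init__()
                # ImageNet-pretrained encoder returning multi-scale feature maps
                self.encoder = timm.create_model(
                    "efficientnet_b0", pretrained=True, features_only=True)
                ch = self.encoder.feature_info.channels()[-1]  # deepest feature channels
                self.head = DUpsampling(ch, 1, scale=32)       # back to input resolution

            def forward(self, x):
                feats = self.encoder(x)
                return torch.sigmoid(self.head(feats[-1]))     # lesion probability map
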
    Overview of Image Super-Resolution Algorithms
    SUN Jingyang, CHEN Fengdong, HAN Yueyue, WU Yuwen, GAN Yu, LIU Guodong
    Computer Engineering and Applications    2021, 57 (17): 1-9.   DOI: 10.3778/j.issn.1002-8331.2103-0556
    Abstract (300) | PDF(pc) (1343KB) (289)

    Image super-resolution reconstruction aims to recover high-resolution, clear images from low-resolution images. This article first explains the ideas behind typical image super-resolution reconstruction methods, and then reviews typical and recent deep-learning-based image super-resolution algorithms along the dimensions of up-sampling position, up-sampling method, learning strategy, and loss function. It analyzes the latest developments and looks forward to future trends.

    Overview of Visual Multi-object Tracking Algorithms with Deep Learning
    ZHANG Yao, LU Huanzhang, ZHANG Luping, HU Moufa
    Computer Engineering and Applications    2021, 57 (13): 55-66.   DOI: 10.3778/j.issn.1002-8331.2102-0260
    Abstract (296) | PDF(pc) (931KB) (396)

    Visual multi-object tracking is a hot issue in computer vision. However, the uncertain number of targets in a scene, mutual occlusion between targets, and the difficulty of discriminating between target features have led to slow progress in real-world applications of visual multi-object tracking. In recent years, with continued in-depth research on intelligent visual processing, a variety of deep learning multi-object tracking algorithms have emerged. After analyzing the challenges and difficulties faced by visual multi-object tracking, these algorithms are divided into two categories, Detection-Based Tracking (DBT) and Joint Detection Tracking (JDT), and six sub-categories, and their advantages and disadvantages are studied. The analysis shows that DBT algorithms have a simple structure, but the correlation between the algorithm's sub-steps is low; JDT algorithms integrate multi-module joint learning and dominate on multiple tracking evaluation metrics. The feature extraction module is the key to handling target occlusion in DBT algorithms, at the expense of speed, while JDT algorithms depend more heavily on the detection module. At present, multi-object tracking is generally developing from DBT-type algorithms toward JDT, achieving a balance between accuracy and speed in stages. Future development directions of multi-object tracking algorithms in terms of datasets, sub-modules, and specific scenarios are proposed.

    Review of Application of Transfer Learning in Medical Image Field
    GAO Shuang, XU Qiaozhi
    Computer Engineering and Applications    2021, 57 (24): 39-50.   DOI: 10.3778/j.issn.1002-8331.2107-0300
    Abstract (292) | PDF(pc) (896KB) (435)

    Deep learning technology has developed rapidly and achieved significant results in medical image processing. However, because medical image samples are few and annotation is difficult, the effect of deep learning falls far short of expectations. In recent years, using transfer learning to alleviate the problem of insufficient medical image samples and to improve the effect of deep learning in medical imaging has become one of the research hotspots. This paper first introduces the basic concepts, types, common strategies and models of transfer learning, then reviews and summarizes representative related research in the medical image field according to the type of transfer learning method, and finally summarizes the field and looks ahead to its future development.

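    The most common strategy such surveys cover, fine-tuning a model pretrained on a large source domain, can be sketched as follows; the backbone, class count and frozen-layer choice are illustrative assumptions, not the paper's recommendations.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Source domain: an ImageNet-pretrained backbone.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

        # With few medical samples, freeze the pretrained feature extractor...
        for p in model.parameters():
            p.requires_grad = False

        # ...and retrain only a new task head (e.g. 3 diagnostic classes).
        model.fc = nn.Linear(model.fc.in_features, 3)
        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
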
    Research on Edge Computing Security of Railway 5G Mobile Communication System
    LIU Jiajia, WU Hao, LI Panpan
    Computer Engineering and Applications    2021, 57 (12): 1-10.   DOI: 10.3778/j.issn.1002-8331.2102-0052
    Abstract (276) | PDF(pc) (688KB) (271)

    Edge computing, as a key technology of the intelligent railway 5G network, sinks data caching, traffic forwarding and application service capabilities to the edge of the network, effectively meeting the low-latency, high-bandwidth and massive-connection requirements of intelligent railways and supporting intelligent rail transit applications. However, because edge nodes differ from the core network in physical location, business types and other aspects, and because the external environment of railway scenarios is complex, highly dynamic and of low trustworthiness, the edge nodes serving intelligent railway business face new security challenges. Combined with the current state of research on 5G edge computing security, the security threats faced by railway 5G edge computing are analyzed from four aspects: terminals, edge networks, edge nodes and edge applications. On the basis of detailed security requirements, challenges and standardization progress, the research methods and evaluation indicators applicable to railway MEC security are summarized. Combining the characteristics of railway 5G edge computing, this paper proposes end-to-end railway MEC security service solutions and directions for future research on intelligent railway MEC security.

    Survey on Zero-Shot Learning
    WANG Zeshen, YANG Yun, XIANG Hongxin, LIU Qing
    Computer Engineering and Applications    2021, 57 (19): 1-17.   DOI: 10.3778/j.issn.1002-8331.2106-0133
    Abstract (270) | PDF(pc) (1267KB) (270)

    Although zero-shot learning has developed well alongside deep learning, on the application side it has lacked a good system to organize it. This paper reviews the theoretical systems of zero-shot learning, typical models, application systems, present challenges and future research directions. Firstly, it introduces the theoretical systems in terms of the definition of zero-shot learning, its essential problems, and commonly used data sets. Secondly, some typical models of zero-shot learning are described in chronological order. Thirdly, it presents the application systems of zero-shot learning along three dimensions: words, images and videos. Finally, the paper analyzes the challenges and future research directions of zero-shot learning.

    Intelligent Analysis of Text Information Disclosure of Listed Companies
    LYU Pin, WU Qinjuan, XU Jia
    Computer Engineering and Applications    2021, 57 (24): 1-13.   DOI: 10.3778/j.issn.1002-8331.2106-0270
    Abstract (265) | PDF(pc) (724KB) (217)

    Analyzing the text disclosures issued by listed companies is an important way for investors to understand the companies' operating conditions and make investment decisions. However, manual reading and analysis has low efficiency and high cost. The development of artificial intelligence technology provides an opportunity for intelligent analysis of company text information, which can mine valuable information from massive enterprise text data, realize the advantages of being data-driven, and greatly improve analysis efficiency. Hence, it has become a research hotspot in recent years. The research of the past decade on listed-company announcements is summarized from three aspects: the event types of text information disclosure, intelligent analysis methods, and application scenarios. The current challenges in this field are also discussed, and possible future research directions are pointed out based on existing shortcomings.

    Research Progress of Object Detection Based on Weakly Supervised Learning
    YANG Hui, QUAN Jichuan, LIANG Xinyu, WANG Zhongwei
    Computer Engineering and Applications    2021, 57 (16): 40-49.   DOI: 10.3778/j.issn.1002-8331.2103-0306
    Abstract (250) | PDF(pc) (633KB) (306)

    With the continuous development of Convolutional Neural Networks (CNN), object detection, as one of the most basic technologies in computer vision, has made remarkable progress. Firstly, the paper describes how strongly supervised object detection algorithms demand precisely labeled datasets. Secondly, object detection algorithms based on weakly supervised learning are studied; these algorithms are classified into four categories according to how features are processed, and the advantages and disadvantages of each are analyzed and compared. Thirdly, the detection accuracy of weakly supervised object detection algorithms is compared through experiments, including against mainstream strongly supervised object detection algorithms. Finally, future research hotspots for object detection based on weakly supervised learning are considered.

    Robot Dynamic Path Planning Based on Improved A* and DWA Algorithm
    LIU Jianjuan, XUE Liqi, ZHANG Huijuan, LIU Zhongpu
    Computer Engineering and Applications    2021, 57 (15): 73-81.   DOI: 10.3778/j.issn.1002-8331.2103-0525
    Abstract (244) | PDF(pc) (1452KB) (401)

    The traditional A* algorithm is one of the commonly used algorithms for global path planning of mobile robots, but it has low search efficiency, produces many turning points in the planned path, and cannot achieve dynamic path planning in the face of random dynamic obstacles in complex environments. To solve these problems, an improved A* algorithm and the DWA algorithm are fused on the basis of global optimality. Obstacle information in the environment is quantified, and the weight of the A* algorithm's heuristic function is adjusted according to this information to improve the efficiency and flexibility of the algorithm. Based on the Floyd algorithm, a path-node optimization algorithm is designed, which deletes redundant nodes, reduces turning points and improves path smoothness. The dynamic window evaluation function of the DWA algorithm is designed on a globally optimal basis and is used to distinguish known obstacles from unknown dynamic and static obstacles, and the key points of the path planned by the improved A* algorithm are extracted as temporary target points for the DWA algorithm. On this basis, the fusion of the improved A* algorithm and the DWA algorithm is realized. The experimental results show that, in complex environments, the fused algorithm not only ensures globally optimal path planning, but also effectively avoids the dynamic and static obstacles in the environment, realizing dynamic path planning in complex environments.

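    The heuristic-weighting idea can be illustrated with a generic weighted A* on a grid, where f = g + w*h; the paper adapts the weight from quantified obstacle information, which is not reproduced here, so the fixed w below is an illustrative assumption.

        import heapq

        def weighted_astar(grid, start, goal, w=1.5):
            """Grid A* with a weighted heuristic f = g + w*h.
            grid: 2D list, 0 = free, 1 = obstacle. w > 1 speeds up the
            search at the cost of strict optimality."""
            def h(p):  # Manhattan distance to the goal
                return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

            open_set = [(w * h(start), 0, start, None)]
            parents, g_cost = {}, {start: 0}
            while open_set:
                _, g, node, parent = heapq.heappop(open_set)
                if node in parents:
                    continue
                parents[node] = parent
                if node == goal:  # reconstruct the path back to start
                    path = [node]
                    while parents[path[-1]] is not None:
                        path.append(parents[path[-1]])
                    return path[::-1]
                x, y = node
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and not grid[nx][ny]:
                        ng = g + 1
                        if ng < g_cost.get((nx, ny), float("inf")):
                            g_cost[(nx, ny)] = ng
                            heapq.heappush(open_set, (ng + w * h((nx, ny)), ng, (nx, ny), node))
            return None  # no path exists
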
    Survey of Remote Sensing Image Super-Resolution Based on Machine Learning
    LI Zheng, LIU Wei, ZHANG Kaibing
    Computer Engineering and Applications    2021, 57 (13): 8-17.   DOI: 10.3778/j.issn.1002-8331.2102-0180
    Abstract (243) | PDF(pc) (961KB) (223)

    This paper surveys the research and development of machine-learning-based Super-Resolution (SR) reconstruction of remote sensing images. Machine-learning-based remote sensing image SR can improve the spatial resolution of remote sensing images by learning the mapping relationship between low-resolution and high-resolution images, thus aiding the visual analysis of remote sensing imagery. Firstly, according to differences in data representation, machine-learning-based SR methods for remote sensing images are divided into two categories: dictionary-learning-based methods and deep-learning-based methods. Then, the concrete problems addressed by the various methods are briefly described, and their design ideas and principles are analyzed and summarized; next, the advantages and disadvantages of the various methods and their reconstruction indicators are compared and analyzed. Finally, the problems and difficulties of remote sensing image SR are summarized and future development trends are projected.

    Multi-channel Attention Mechanism Text Classification Model Based on CNN and LSTM
    TENG Jinbao, KONG Weiwei, TIAN Qiaoxin, WANG Zhaoqian, LI Long
    Computer Engineering and Applications    2021, 57 (23): 154-162.   DOI: 10.3778/j.issn.1002-8331.2104-0212
    Abstract (232) | PDF(pc) (844KB) (181)

    Aiming at the problem that traditional Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) cannot reflect the importance of each word in the text when extracting features, this paper proposes a multi-channel text classification model based on CNN and LSTM. Firstly, CNN and LSTM are used to extract the local information and context features of the text; secondly, a multi-channel attention mechanism is used to compute attention scores over the output of the CNN and the LSTM; finally, the outputs of the multi-channel attention mechanism are fused to achieve effective extraction of text features and to focus attention on important words. Experimental results on three public datasets show that the proposed model outperforms CNN, LSTM and their improved variants, and can effectively improve text classification.

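    A minimal sketch of the dual-channel idea under assumed dimensions: a CNN channel for local n-gram features, an LSTM channel whose states are re-weighted by attention scores, and a fused classifier. It illustrates the architecture class, not the paper's exact configuration.

        import torch
        import torch.nn as nn

        class CnnLstmAttn(nn.Module):
            """CNN channel for local n-gram features, LSTM channel for context;
            attention re-weights the LSTM states before the two channels are
            fused for classification."""
            def __init__(self, vocab=20000, emb=128, classes=2):
                super().__init__()
                self.emb = nn.Embedding(vocab, emb)
                self.conv = nn.Conv1d(emb, 100, kernel_size=3, padding=1)
                self.lstm = nn.LSTM(emb, 100, batch_first=True, bidirectional=True)
                self.attn = nn.Linear(200, 1)           # scores each LSTM state
                self.fc = nn.Linear(100 + 200, classes)

            def forward(self, tokens):                  # tokens: [batch, seq]
                e = self.emb(tokens)
                c = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values  # [B, 100]
                h, _ = self.lstm(e)                     # [B, seq, 200]
                a = torch.softmax(self.attn(h), dim=1)  # attention over positions
                ctx = (a * h).sum(dim=1)                # [B, 200]
                return self.fc(torch.cat([c, ctx], dim=1))

        logits = CnnLstmAttn()(torch.randint(0, 20000, (4, 50)))
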
    Application Research of Improved YOLOv4 in Remote Sensing Aircraft Target Detection
    HOU Tao, JIANG Yu
    Computer Engineering and Applications    2021, 57 (12): 224-230.   DOI: 10.3778/j.issn.1002-8331.2011-0248
    Abstract (230) | PDF(pc) (2986KB) (255)

    Aiming at the low accuracy and slow detection speed for aircraft targets in remote sensing images with complex backgrounds, an improved YOLOv4 target detection algorithm based on deep learning is proposed. The backbone feature extraction network of YOLOv4 is modified to retain the high-resolution feature layer and remove the feature layer used to detect large targets, reducing semantic loss. A Densely connected Network (DenseNet) is adopted to enhance feature extraction and mitigate the vanishing gradient problem. The K-means algorithm is applied to the data set to obtain the best number and sizes of prior boxes. Experimental results on the RSOD (Remote Sensing Object Detection) and DIOR (Detection in Optical Remote sensing images) data sets show that the accuracy of the proposed algorithm reaches 95.4%, 0.3 percentage points higher than the original algorithm; the recall rate reaches 86.04%, an increase of 4.68 percentage points; and the mAP reaches 85.52%, an increase of 5.27 percentage points.

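    The prior-box step follows standard YOLO practice: cluster ground-truth box sizes under a 1 - IoU distance. The sketch below is that generic procedure, not necessarily the paper's exact variant.

        import numpy as np

        def kmeans_anchors(wh, k=9, iters=100):
            """Cluster ground-truth (width, height) pairs with the 1 - IoU
            distance used in YOLO to choose anchor (prior) box sizes."""
            def iou(wh, centers):  # IoU of boxes sharing one corner
                inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
                        np.minimum(wh[:, None, 1], centers[None, :, 1])
                union = wh[:, 0] * wh[:, 1]
                union = union[:, None] + centers[:, 0] * centers[:, 1] - inter
                return inter / union

            centers = wh[np.random.choice(len(wh), k, replace=False)]
            for _ in range(iters):
                assign = np.argmax(iou(wh, centers), axis=1)   # nearest = highest IoU
                new = np.array([wh[assign == i].mean(axis=0) if (assign == i).any()
                                else centers[i] for i in range(k)])
                if np.allclose(new, centers):
                    break
                centers = new
            return centers[np.argsort(centers.prod(axis=1))]   # sorted by box area

        boxes = np.abs(np.random.randn(500, 2)) * 50 + 10      # stand-in box sizes
        print(kmeans_anchors(boxes))
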
    Overview of Pedestrian Re-identification Research Based on Multi-source Information
    DU Zhuoqun, HU Xiaoguang, YANG Shixin, LI Xiaoxiao, WANG Ziqiang, CAI Nengbin
    Computer Engineering and Applications    2021, 57 (14): 1-14.   DOI: 10.3778/j.issn.1002-8331.2103-0197
    Abstract (230) | PDF(pc) (1396KB) (216)

    With the continuous development of computer vision technology, pedestrian re-identification has played a huge role in security, investigation and intelligent surveillance, and has become a current research hotspot. Traditional pedestrian re-identification focuses on the visual information in RGB images collected by cameras, and has achieved good results under laboratory conditions; however, under adverse conditions such as poor lighting, object occlusion and blurred image quality, the recognition rate of such algorithms drops off a cliff. Nowadays, visual information is not limited to RGB images: infrared images, depth images and sketch portraits are introduced to improve recognition rates, and text information and spatiotemporal information likewise improve the performance of pedestrian re-identification algorithms. However, due to the natural differences between the various modalities, how to connect multiple kinds of information has become the main problem in multi-source pedestrian re-identification research. This article surveys the papers on multi-source pedestrian re-identification published in recent years, and expounds the current situation, technical difficulties and future development trends of pedestrian re-identification.

    COVID-19 Medical Imaging Dataset and Research Progress
    LIU Rui, DING Hui, SHANG Yuanyuan, SHAO Zhuhong, LIU Tie
    Computer Engineering and Applications    2021, 57 (22): 15-27.   DOI: 10.3778/j.issn.1002-8331.2106-0118
    Abstract (229) | PDF(pc) (1013KB) (209)

    As imaging technology plays an important role in the diagnosis and evaluation of the new coronavirus (COVID-19), COVID-19-related datasets have been published in succession, but few review articles discuss COVID-19 image processing, especially the datasets. To this end, COVID-19 datasets and deep learning models are collated and analyzed through COVID-19-related journal papers, reports, and open-source dataset websites, covering Computed Tomography (CT) and chest X-ray (CXR) image datasets, and the characteristics of the medical images in these datasets are analyzed. This paper focuses on collating and describing open-source datasets related to COVID-19 medical imaging. In addition, some important segmentation and classification models that perform well on the related datasets are analyzed and compared. Finally, the paper discusses future development trends in lung imaging technology.

    Review of Cognitive and Joint Anti-Interference Communication in Unmanned System
    WANG Guisheng, DONG Shufu, HUANG Guoce
    Computer Engineering and Applications    2022, 58 (8): 1-11.   DOI: 10.3778/j.issn.1002-8331.2109-0334
    Abstract (227) | PDF(pc) (913KB) (232)

    As the electromagnetic environment becomes more complex and confrontation becomes more intense, higher requirements are placed on the reliability of information transmission in unmanned systems, while traditional cognitive communication modes struggle to adapt to the independent and distributed development trend of broadband joint anti-interference. Addressing the need for low-interception, anti-interference communications in unmanned systems, this paper analyzes cognitive anti-interference technologies for interference detection and identification, transform-domain analysis, and suppression across multiple domains. The research status of common detection and estimation as well as classification and recognition is summarized; typical interference types are modeled, and transformation methods and processing problems are reviewed; traditional and new interference suppression methods are then systematically summarized. Finally, the key problems restricting broadband joint anti-interference are addressed, such as the classification and recognition of unknown interference, the temporal elimination of multiple interference, the joint separation of distributed interference and the optimal control of collaborative interference, highlighting the important role of cognitive interference suppression technology in unmanned system communication.

    Research Hotspots and Cutting-Edge Mining of Artificial Intelligence
    WANG Youfa, CHEN Hui, LUO Jianqiang
    Computer Engineering and Applications    2021, 57 (12): 46-53.   DOI: 10.3778/j.issn.1002-8331.2102-0315
    Abstract (225) | PDF(pc) (1089KB) (215)

    To understand the development status and research frontiers of artificial intelligence intuitively, to analyze the similarities and differences between domestic and foreign research, and to support domestic artificial intelligence research, journal papers from 2008 to 2019 in the Web of Science and CNKI databases are mapped and visually analyzed with the CiteSpace software. The objective data and the resulting science maps show that after 2016 the field of artificial intelligence entered a new upsurge, led by China and the United States. In terms of the quality of published papers, North America is currently the region with the highest level of artificial intelligence research. The main force in artificial intelligence research is colleges and universities, and a system combining industry, teaching and research has not yet formed. Research topics show distinct characteristics of their times; artificial neural networks, algorithms, big data, robots, computer vision and legal ethics have become current research hotspots. Finally, based on the evolution of the research context and high-frequency terms, three research frontiers of the field, namely "deep reinforcement learning", "artificial intelligence +" and "intelligent social science", are put forward to suggest directions for follow-up research.

    SDN Routing Optimization Algorithm Based on Reinforcement Learning
    CHE Xiangbei, KANG Wenqian, OUYANG Yuhong, YANG Kehan, LI Jian
    Computer Engineering and Applications    2021, 57 (12): 93-98.   DOI: 10.3778/j.issn.1002-8331.2003-0423
    Abstract (224) | PDF(pc) (869KB) (257)

    Aiming at network routing optimization in SDN controllers, a routing optimization algorithm based on the PPO model in reinforcement learning is designed. The algorithm can adjust the reward function for different optimization goals to dynamically update the routing strategy; it does not depend on any specific network state and generalizes very well. Because it adopts a policy-based reinforcement learning method, its control of the routing strategy is finer-grained than that of various Q-learning-based algorithms. The performance of the algorithm is evaluated experimentally with the Omnet++ simulation software. Compared with the traditional shortest-path routing algorithm, the routing optimization algorithm reduces the average delay and the end-to-end maximum delay on the Sprint network topology by 29.3% and 17.4%, respectively, and increases throughput by 31.77%. The experimental results show that this PPO-based SDN routing algorithm not only converges well, but also delivers better performance and stability than the shortest-path routing algorithm and the Q-learning-based QAR routing algorithm.

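    The heart of the PPO model the algorithm builds on is the clipped surrogate objective, sketched below; the policy network, state encoding and reward weights are left out, with an illustrative comment showing where routing objectives such as delay and throughput would enter.

        import torch

        def ppo_loss(new_logp, old_logp, advantage, eps=0.2):
            """PPO clipped surrogate objective: limit how far the updated
            policy can move from the one that collected the trajectories."""
            ratio = torch.exp(new_logp - old_logp)        # pi_new(a|s) / pi_old(a|s)
            unclipped = ratio * advantage
            clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantage
            return -torch.min(unclipped, clipped).mean()  # maximize -> negate

        # The reward signal is where routing goals plug in, e.g. (illustrative):
        # reward = -(alpha * mean_delay + beta * max_delay) + gamma_ * throughput
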
    Overview on Reinforcement Learning of Multi-agent Game
    WANG Jun, CAO Lei, CHEN Xiliang, LAI Jun, ZHANG Legui
    Computer Engineering and Applications    2021, 57 (21): 1-13.   DOI: 10.3778/j.issn.1002-8331.2104-0432
    Abstract (224) | PDF(pc) (779KB) (290)

    The use of deep reinforcement learning to solve single-agent tasks has made breakthrough progress, but owing to the complexity of multi-agent systems, common algorithms cannot overcome the main difficulties. As the number of agents increases, taking the maximization of a single agent's expected cumulative return as the learning goal often fails to converge, and some special convergence points do not yield rational strategies. For practical problems with no optimal solution, standard reinforcement learning algorithms are even more helpless. Introducing game theory into reinforcement learning can capture the interrelationships between agents, explain the rationality of the strategies at convergence points and, more importantly, replace the optimal solution with an equilibrium solution in order to obtain a relatively effective strategy. Therefore, this article surveys the game-theoretic reinforcement learning algorithms that have emerged in recent years, summarizes the important and difficult points of current game reinforcement learning, and gives several directions in which the above difficulties might be overcome.

    Review of Extractive Machine Reading Comprehension
    BAO Yue, LI Yanling, LIN Min
    Computer Engineering and Applications    2021, 57 (12): 25-36.   DOI: 10.3778/j.issn.1002-8331.2102-0038
    Abstract (221) | PDF(pc) (845KB) (196)

    Machine reading comprehension requires machines to understand natural language texts and answer related questions; it is a core technology in natural language processing and one of the field's most challenging tasks. Extractive machine reading comprehension is an important branch of the task. Because it fits real-world situations well and reflects a machine's comprehension ability, it has become a research hotspot in both academia and industry. This paper reviews extractive machine reading comprehension from four aspects. First, it introduces the machine reading comprehension task and its development. Secondly, it describes the extractive machine reading comprehension task and its current difficulties. Then, the main data sets and methods for the task are summarized. Finally, future development directions of extractive machine reading comprehension are discussed.

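    The defining computation of the extractive setting is scoring every passage token as an answer start or end over contextual encodings; a minimal sketch with a stand-in encoder output follows, where the hidden size and sequence length are assumptions.

        import torch
        import torch.nn as nn

        # Extractive MRC head: given contextual token encodings of the
        # question+passage (from any encoder, e.g. a BERT-style model),
        # score every token as a potential answer start and end.
        hidden = 256
        span_head = nn.Linear(hidden, 2)

        encodings = torch.randn(1, 384, hidden)         # stand-in encoder output
        start_logits, end_logits = span_head(encodings).split(1, dim=-1)
        start = start_logits.squeeze(-1).argmax(dim=1)  # most likely start token
        end = end_logits.squeeze(-1).argmax(dim=1)      # most likely end token
        # The answer is the passage span [start, end]; in practice start <= end
        # is enforced and spans are ranked by start_logit + end_logit.
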
    Survey of Multimodal Data Fusion
    REN Zeyu, WANG Zhenchao, KE Zunwang, LI Zhe, Wushour·Silamu
    Computer Engineering and Applications    2021, 57 (18): 49-64.   DOI: 10.3778/j.issn.1002-8331.2104-0237
    Abstract (221) | PDF(pc) (1214KB) (257)

    With the rapid development of information technology, information exists in various forms and sources. Each form of existence or information source can be referred to as one modality, and data composed of two or more modalities is called multimodal data. Multimodal data fusion is responsible for effectively integrating the information of multiple modalities, drawing on the advantages of each modality, and completing the integration of information. Natural phenomena have very rich characteristics, and it is difficult for a single modality to provide complete information about a given phenomenon. Faced with the requirements of maintaining the diversity and completeness of modal information after fusion, maximizing the advantages of each modality, and reducing the information loss caused by the fusion process, how to integrate the information of each modality has become a new challenge across many fields. This paper briefly describes common multimodal fusion methods and fusion architectures, summarizes three common fusion models, and briefly analyzes the advantages and disadvantages of the collaborative, joint and encoder-decoder architectures, as well as specific fusion methods such as multiple kernel learning and graphical models. On the application side, it analyzes and summarizes multimodal video clip retrieval, content summarization from multimodal information, multimodal sentiment analysis and multimodal human-machine dialogue systems. The paper also sets out the current problems of multimodal fusion and future research directions.

    Review of Sign Language Recognition Methods and Techniques
    Minawaer·Abula, Alifu·Kuerban, XIE Qina, GENG Liting
    Computer Engineering and Applications    2021, 57 (18): 1-12.   DOI: 10.3778/j.issn.1002-8331.2104-0220
    Abstract (215) | PDF(pc) (719KB) (201)

    Sign language, as the main communication channel between deaf and hearing people, plays a crucial role in daily life. With the rapid development of computer vision and deep learning, the field of sign language recognition has also ushered in new opportunities. The advanced methods and technologies used in recent computer-vision-based sign language recognition research are reviewed. Starting from the three branches of static sign language, isolated word, and continuous sentence sign language recognition, the common methods and technical difficulties of sign language recognition are systematically explained. The steps of sign language recognition, such as image preprocessing, detection and segmentation, tracking, feature extraction and classification, are introduced in detail. The paper summarizes and analyzes the algorithms and neural network models commonly used for sign language recognition, organizes the commonly used sign language datasets, analyzes the state of recognition for different sign languages, and finally discusses the challenges and limitations of sign language recognition.

    Application of Generative Adversarial Networks in Medical Image Processing
    LI Xiangxia, XIE Xian, LI Bin, YIN Hua, XU Bo, ZHENG Xinwei
    Computer Engineering and Applications    2021, 57 (18): 24-37.   DOI: 10.3778/j.issn.1002-8331.2104-0176
    Abstract (211) | PDF(pc) (726KB) (229)

    Generative Adversarial Network (GAN) models can learn rich data information in an unsupervised setting. A GAN consists of a generator and a discriminator, which are alternately optimized through a mutual game during adversarial training to improve performance. In view of the problems of traditional GANs, such as vanishing gradients, mode collapse and the inability to generate discrete data distributions, researchers have proposed numerous GAN variants. The paper describes the theory and structure of the GAN model, then introduces several typical variants, and elaborates the current research progress and status of GANs in image generation, image segmentation, image classification, target detection and super-resolution image reconstruction. Based on this research status and the existing problems, an in-depth analysis is carried out, and the future development trends and challenges of deep learning in medical image processing are further summarized and discussed.

    Improved Lightweight Attention Model Based on CBAM
    FU Guodong, HUANG Jin, YANG Tao, ZHENG Siyu
    Computer Engineering and Applications    2021, 57 (20): 150-156.   DOI: 10.3778/j.issn.1002-8331.2101-0369
    Abstract (206) | PDF(pc) (808KB) (157)

    In recent years, attention models have been widely used in computer vision: adding an attention module to a convolutional neural network can significantly improve its performance. However, most existing methods focus on developing more complex attention modules to give the convolutional neural network stronger feature expression capabilities, which inevitably increases model complexity. To strike a balance between performance and complexity, a lightweight EAM (Efficient Attention Module) model is proposed to optimize the CBAM model. For CBAM's channel attention module, a one-dimensional convolution is introduced to replace the fully connected layers when aggregating the channels; for CBAM's spatial attention module, the large convolution kernel is replaced with a dilated convolution to enlarge the receptive field and aggregate broader spatial context information. After integrating the module into YOLOv4 and testing on the VOC2012 data set, mAP increases by 3.48 percentage points. Experimental results show that the attention model introduces only a small number of parameters while substantially improving network performance.

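    A sketch of the two modifications as described above, with assumed kernel size and dilation rate: channel attention via a 1D convolution over pooled channel descriptors (in the style of ECA-type modules), and spatial attention via a dilated 3x3 convolution.

        import torch
        import torch.nn as nn

        class EAM(nn.Module):
            """Channel attention with a 1D conv (no fully connected layers),
            then spatial attention with a dilated conv for a larger
            receptive field."""
            def __init__(self, k=3, dilation=2):
                super().__init__()
                self.conv1d = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)
                self.spatial = nn.Conv2d(2, 1, 3, padding=dilation, dilation=dilation)

            def forward(self, x):                    # x: [B, C, H, W]
                # Channel attention: GAP -> 1D conv across channels -> sigmoid gate
                w = x.mean(dim=(2, 3)).unsqueeze(1)  # [B, 1, C]
                x = x * torch.sigmoid(self.conv1d(w)).transpose(1, 2).unsqueeze(-1)
                # Spatial attention: pooled maps -> dilated conv -> sigmoid gate
                s = torch.cat([x.mean(1, keepdim=True), x.max(1, keepdim=True).values], 1)
                return x * torch.sigmoid(self.spatial(s))

        out = EAM()(torch.randn(2, 64, 32, 32))
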
    Survey of Task Assignment for Crowd-Based Cooperative Computing
    CHEN Baotong, WANG Liqing, JIANG Xiaomin, YAO Hanbing
    Computer Engineering and Applications    2021, 57 (20): 1-12.   DOI: 10.3778/j.issn.1002-8331.2105-0396
    Abstract (205) | PDF(pc) (689KB) (190)

    Task allocation is one of the core issues in crowd-based cooperative computing and crowdsourcing: by designing a reasonable task allocation strategy, tasks are assigned to appropriate workers under the task constraints, so as to improve the result quality and completion efficiency of the tasks. The problems of current task allocation methods are analyzed first; then a general task allocation framework is proposed, and relevant research at home and abroad is analyzed along three aspects: the worker model, the task model and the task allocation algorithm. Finally, the key issues and future research trends in task allocation for crowd-based cooperative computing are put forward.

    Text Classification Method Based on LSTM-Attention and CNN Hybrid Model
    TENG Jinbao, KONG Weiwei, TIAN Qiaoxin, WANG Zhaoqian
    Computer Engineering and Applications    2021, 57 (14): 126-133.   DOI: 10.3778/j.issn.1002-8331.2011-0037
    Abstract (205) | PDF(pc) (780KB) (262)

    For the problem that traditional Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN) cannot reflect the importance of each word in the text when extracting features, a text classification method based on a hybrid LSTM-Attention and CNN model is proposed. Firstly, CNN is used to extract the local information of the text and integrate it into the semantics of the whole text. Secondly, LSTM is used to extract text context features, and an attention mechanism is added after the LSTM to compute attention scores over its output. Finally, the output of LSTM-Attention is fused with the output of the CNN, realizing effective extraction of text features and focusing attention on important words. Experimental results on three open data sets show that the proposed model is more effective than LSTM, CNN and their improved variants, and can effectively improve text classification.

    Relation Network Based on Attention Mechanism and Graph Convolution for Few-Shot Learning
    WANG Xiaoru, ZHANG Heng
    Computer Engineering and Applications    2021, 57 (19): 164-170.   DOI: 10.3778/j.issn.1002-8331.2104-0275
    Abstract (196) | PDF(pc) (949KB) (165)

    Deep neural networks dominate image recognition tasks when large amounts of labeled data are available, but training a well-performing network on a smaller dataset is still very challenging. How to learn from limited labeled data is therefore a key research question with excellent application potential. There are many ways to approach few-shot recognition, but recognition accuracy remains low. The fundamental reason is that in few-shot learning a traditional neural network receives only a small amount of labeled data, so it cannot obtain enough information for identification. Therefore, the paper proposes a few-shot classification model based on an attention mechanism and a graph convolutional neural network, which can not only extract features better but also make full use of those features to classify the target image. The attention mechanism guides the neural network to pay attention to more useful information, while graph convolution enables the network to make more accurate judgments by using information from the other classes of the support set. Extensive experiments show that the classification accuracy of the model on the Omniglot and miniImageNet datasets surpasses the original relation network based on a traditional neural network.

    Survey of Single Image Super-Resolution Based on Deep Learning
    HUANG Jian, ZHAO Yuanyuan, GUO Ping, WANG Jing
    Computer Engineering and Applications    2021, 57 (18): 13-23.   DOI: 10.3778/j.issn.1002-8331.2102-0257
    Abstract (196) | PDF(pc) (996KB) (223)

    Image super-resolution reconstruction refers to using a specific algorithm to restore a low-resolution blurry image of a scene to a high-resolution image. In recent years, with the active development of deep learning, this technology has been widely used in many fields, and deep-learning-based methods are increasingly studied for image super-resolution reconstruction. To convey the current status and research trends of deep-learning-based image super-resolution algorithms, popular algorithms are summarized: the network model structures, upscaling methods and loss functions of existing single-image super-resolution algorithms are explained in detail, and the drawbacks and advantages of the various methods are analyzed. The reconstruction effects of the various network models and loss functions are compared and analyzed experimentally. Finally, the future development of deep-learning-based single-image super-resolution reconstruction is forecast.

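    One recurrent design point in such surveys, post-upsampling with a sub-pixel (pixel shuffle) layer, can be sketched in the ESPCN style as follows; layer widths and the scale factor are illustrative assumptions.

        import torch
        import torch.nn as nn

        # Minimal post-upsampling SR network: features are extracted at low
        # resolution, then a sub-pixel (pixel shuffle) layer rearranges
        # channels into a higher-resolution image at the very end.
        scale = 4
        net = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),          # channels -> spatial resolution
        )

        lr = torch.randn(1, 3, 48, 48)       # low-resolution input
        sr = net(lr)                         # [1, 3, 192, 192]
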
    Random Forest Model Stock Price Prediction Based on Pearson Feature Selection
    YAN Zhengxu, QIN Chao, SONG Gang
    Computer Engineering and Applications    2021, 57 (15): 286-296.   DOI: 10.3778/j.issn.1002-8331.2011-0419
    Abstract (194) | PDF(pc) (2026KB) (186)

    To better predict stock trends and address the low prediction accuracy that arises with large numbers of features and big data, this study proposes a new combined random forest model based on the Pearson coefficient. The Pearson coefficient is used for correlation testing to remove irrelevant features; an improved grid search method is used to optimize the decision tree parameters; and a random forest then performs regression prediction on the remaining features. The experimental results show that the MAE and MSE of the improved random forest are greatly improved: its MSE and MAE are 56% and 37.3% lower than those of the traditional random forest, and the prediction effect on two other stocks also improves. The new combined model can realize short-term regression forecasting of stock prices and reduce the influence of noise on the forecast. This study provides effective evidence for better stock price forecasting and offers investors a way to select the factors influencing a stock.

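    A sketch of the described pipeline with scikit-learn on synthetic data; the correlation threshold and the parameter grid are assumptions, not values from the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import GridSearchCV

        def pearson_select(X, y, threshold=0.1):
            """Keep only features whose |Pearson correlation| with the
            target exceeds the threshold (threshold is an assumption)."""
            r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
            return np.abs(r) > threshold

        X = np.random.randn(500, 20)                           # stand-in features
        y = 2 * X[:, 0] - X[:, 1] + 0.5 * np.random.randn(500) # synthetic target
        mask = pearson_select(X, y)

        # Grid search over the forest's main parameters, as the abstract describes.
        search = GridSearchCV(
            RandomForestRegressor(random_state=0),
            {"n_estimators": [100, 300], "max_depth": [5, 10, None]},
            cv=5, scoring="neg_mean_squared_error")
        search.fit(X[:, mask], y)
        print(search.best_params_, -search.best_score_)
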