Most Downloaded Articles

    ZHANG Hui1,2, DONG Yu-ning1, YANG Long-xiang1, ZHU Hong-bo1
    2010, 46 (19): 14-17.   DOI: 10.3778/j.issn.1002-8331.2010.19.004
    Various stability-based QoS routing protocols are briefly reviewed, and an implementation method for a path-stability computation model is given. A novel stability-based segmented-backup routing protocol is proposed by combining the path-stability computation method with a segmented-backup routing strategy, and the proposed protocol is analyzed theoretically.
    The two-dimensional diffusionless heat conduction phenomenon is described by a partial differential equation. Based on the finite volume method, the discretized algebraic form of this equation is derived. The coefficients and source terms are discussed under different boundary conditions, including prescribed heat flux, prescribed temperature, convection and insulation. Transient heat conduction analysis of an infinite plate of uniform thickness over a two-dimensional rectangular region is implemented in MATLAB. Presenting the solution graphically makes the heat conduction equation easier to understand, and the feasibility and stability of the numerical method are demonstrated by the running results.
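The discretized transient update the abstract refers to can be sketched as an explicit time-stepping scheme. This is a minimal illustration, not the paper's MATLAB program: it assumes a uniform square grid, unit cell size, and prescribed-temperature boundaries only (one of the four boundary types the paper covers).

```python
import numpy as np

def step_heat_2d(T, alpha, dx, dt):
    """One explicit time step of 2D heat conduction on interior cells;
    boundary cells are left untouched (prescribed-temperature condition)."""
    Tn = T.copy()
    lam = alpha * dt / dx ** 2          # explicit stability requires lam <= 0.25
    Tn[1:-1, 1:-1] = T[1:-1, 1:-1] + lam * (
        T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
        - 4.0 * T[1:-1, 1:-1])
    return Tn

# Square plate: hot left edge held at 100, all other boundaries held at 0
T = np.zeros((20, 20))
T[:, 0] = 100.0
for _ in range(500):
    T = step_heat_2d(T, alpha=1.0, dx=1.0, dt=0.2)
```

With lam = 0.2 the update is a convex combination of neighboring temperatures, which is what keeps the scheme stable and bounded by the boundary values.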
    The Boolean matrix representation of rough set theory and the concept of the permutation matrix are introduced, and the relationship between attribute reduction and permutation matrices is derived. The theory of solving the corresponding logical equations is discussed, and a novel permutation-matrix-based algorithm for rough set attribute reduction is proposed. The validity of the algorithm is demonstrated with an example, which shows that the algorithm is a useful reference for rough set attribute reduction and has practical significance for its application.
    WANG Guoxia, LIU Heping
    2012, 48 (7): 66-76.  
    Information overload is one of the most critical problems, and personalized recommendation systems are a powerful tool for solving it. This article introduces the definition of a recommendation system and expounds key technologies, including user modeling, recommendation item modeling and recommendation algorithms. The recommendation framework and evaluation methods are also presented, and the difficulties and future directions of recommendation systems are discussed.
    Many studies demonstrate that existing machine learning algorithms achieve good results in recognizing CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) provided the individual characters can be separated. A method is presented to segment merged characters in CAPTCHAs with touching characters. It seeks division points by combining statistics of character width with the minima of the vertical histogram projection, and then uses these points as starting points for the drop-fall algorithm to segment the merged characters. Experiments show that it is a general method and improves the recognition rate.
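The split-point search described above can be sketched as follows. This is a simplification of the paper's combined strategy: it assumes a binarized image (1 = ink), estimates character width as image width divided by character count, and picks the projection minimum in a window around each expected boundary; the drop-fall refinement is omitted.

```python
import numpy as np

def split_points(binary_img, n_chars):
    """Candidate split columns for touching characters: local minima of the
    vertical ink projection, searched near multiples of the mean width."""
    proj = binary_img.sum(axis=0)             # ink pixels per column
    width = binary_img.shape[1] / n_chars     # estimated character width
    cuts = []
    for k in range(1, n_chars):
        center = int(round(k * width))
        lo = max(center - int(width // 4), 1)
        hi = min(center + int(width // 4), binary_img.shape[1] - 1)
        cuts.append(lo + int(np.argmin(proj[lo:hi])))
    return cuts

# Two solid 10-column "characters" touching through a single pixel at column 10
img = np.ones((10, 20), dtype=int)
img[:, 10] = 0
img[5, 10] = 1
cuts = split_points(img, 2)
```

In the full method these cut columns would seed the drop-fall algorithm rather than be used as straight vertical cuts.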
    YOLOv3 is a real-time object detection algorithm whose speed and accuracy reach a good trade-off, but its bounding-box localization is inaccurate and it has difficulty distinguishing overlapping objects. To address these problems, this paper proposes the Attention-YOLO algorithm based on the item-wise attention mechanism, which embeds channel and spatial attention mechanisms in the feature extraction network, uses the filtered weighted feature vectors to replace the original residual fusion, and adds a second-order term to reduce information loss during fusion and accelerate model convergence. Experiments on the COCO and PASCAL VOC datasets show that Attention-YOLO effectively reduces bounding-box localization error and improves detection accuracy. Compared with YOLOv3, Attention-YOLO improves mAP@IoU[0.5:0.95] by up to 2.5 on the COCO dataset and reaches 81.9 mAP on the PASCAL VOC 2007 test set.
    Support Vector Machines (SVM) perform well for classification, but performance is restricted by the kernel function and its parameters. This paper discusses this problem and uses cross validation with grid search to optimize the kernel function parameters.
    LIU Kang, QIAN Xu, WANG Ziqiang
    2012, 48 (34): 1-4.  
    As a method of constructing an effective training set, the goal of active learning is to find informative samples that enhance the model's classification results during iteration, thereby reducing the size of the training set and improving model efficiency within limited time and resources. At present, active learning is a hot topic in pattern recognition, machine learning and data mining. The fundamental ideas, some of the latest research results and algorithm analyses of active learning are introduced, and open problems for further research are presented and analyzed.
    Most existing research on shortest path algorithms focuses on paths that simply start at a source node and end at a destination node. If the shortest path must additionally pass through a given set of nodes whose number is not fixed, most classic algorithms no longer apply. A general method based on the classical Dijkstra algorithm and a greedy strategy is presented to solve this kind of problem. The main idea is to split the relevant node set into three subsets, find the local shortest paths connecting the three subsets separately to form candidate global shortest paths, and obtain the target path by screening. The time complexity of the algorithm is given by theoretical analysis, and its effectiveness is verified computationally.
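The problem above can be illustrated with a brute-force baseline: run Dijkstra from the start and from every required node, then try each visiting order of the required nodes. This is only practical for a handful of required nodes; the paper's subset-splitting plus greedy screening is precisely a way to prune this search.

```python
import heapq
from itertools import permutations

def dijkstra(graph, src):
    """graph: {node: {neighbor: weight}}; shortest distances from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def shortest_via(graph, start, end, required):
    """Length of the shortest start->end walk visiting all required nodes."""
    dist = {n: dijkstra(graph, n) for n in [start] + list(required)}
    best = float('inf')
    for order in permutations(required):
        stops = [start] + list(order) + [end]
        total = sum(dist[stops[i]][stops[i + 1]] for i in range(len(stops) - 1))
        best = min(best, total)
    return best

g = {'a': {'b': 1, 'c': 4}, 'b': {'a': 1, 'c': 1, 'd': 5},
     'c': {'a': 4, 'b': 1, 'd': 1}, 'd': {'b': 5, 'c': 1}}
best = shortest_via(g, 'a', 'd', ['b', 'c'])
```

Here the optimal walk is a-b-c-d with total weight 3, found by the order (b, c).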
    The small-signal model of the closed-loop control system for a DC/DC switching power supply is built in the s-domain. The digital compensator design follows a digital redesign approach: an analog compensator is first designed in the s-domain using Bode plot and root-locus techniques, then discretized to a z-domain compensator. The delay effects of the AD converter and the DPWM circuits are included in the system block diagram, so the impact of the sampling rate is taken into account to improve the design. The digital compensator achieves accurate, programmable control of the PWM regulation to ensure high dynamic performance in closed-loop operation of the converter. Simulation results verify the performance of the compensator.
    JIANG Wei, HE Fei, TONG Yifei, LI Dongbo
    2016, 52 (22): 242-247.  
    In order to find the shortest time to traverse all blocks and thus solve the circular-orbit RGV (rail guided vehicle) scheduling problem in an automated warehouse, this paper analyzes the main influencing factors and formulates the goal of finding the shortest path with the least congestion. A mathematical model is established and a rule-based genetic algorithm is designed to solve it. Adaptive crossover and mutation probabilities replace the traditional fixed parameters to counter the genetic algorithm's tendency to fall into local optima, and an improved dynamic exploration process is proposed for the multi-objective optimization. Finally, the genetic operators are analyzed by experimental comparison and the algorithm is validated by experiments.
    Code cloning is a common phenomenon in software systems. In this paper, program code is converted by static analysis into execution-path sequences of program nodes with attribute definitions, and similarity between programs is computed by discrete sequence similarity detection using distance, a line model and sequence correlation coefficients. Experiments and data analysis verify the feasibility of this approach.
    Vehicle detection in images or video is an important but challenging task for urban traffic surveillance. The difficulty lies in accurately locating and classifying relatively small vehicles in complex scenes. In response, this paper presents a single deep neural network (DF-YOLOv3) for fast detection of vehicles of different types in urban traffic surveillance. DF-YOLOv3 improves conventional YOLOv3 by first enhancing the residual network used to extract vehicle features, then designing six convolution feature maps at different scales and merging them with the corresponding feature maps of the preceding residual network to form the final feature pyramid for vehicle prediction. Experimental results on the KITTI dataset demonstrate that the proposed DF-YOLOv3 achieves efficient detection in terms of both accuracy and speed: for the 512×512 input model on an NVIDIA GTX 1080Ti GPU, it achieves 93.61% mAP (mean average precision) at 45.48 f/s (frames per second). In terms of accuracy, DF-YOLOv3 performs better than Fast R-CNN, Faster R-CNN, DAVE, YOLO, SSD, YOLOv2, YOLOv3 and SINet.
    A new, efficient time series outlier detection algorithm is proposed on the foundation of the k-nearest local outlier detection algorithm, based on segmentation. First, using series important points as segmentation points compresses the time series data at a high ratio; second, outlier patterns in the time series are detected by the local outlier detection technique. Experimental results on electrocardiogram (ECG) data show that the algorithm is effective and reasonable.
    Three matrix multiplication implementations on CPU and four CUDA-based implementations on GPU are described, and the causes of their high performance are analyzed. The common characteristic of the efficient algorithms is that data are properly organized and rationally reused, so that memory access cost is effectively reduced and speed is greatly improved. The best optimized CPU implementation is more than 200 times faster than the naive one, and the best GPU implementation is about 6 times faster than the best CPU one.
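The "data properly organized and reused" idea can be shown as a blocked (tiled) multiply: working on tile x tile sub-blocks keeps each block in fast memory while it is reused. This Python sketch only demonstrates the loop structure and its correctness; Python itself will not show the cache speedup the abstract measures in C/CUDA.

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Blocked matrix multiply: C is accumulated tile by tile so that each
    sub-block of A and B is reused across a whole tile of C."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                C[i0:i0 + tile, j0:j0 + tile] += (
                    A[i0:i0 + tile, k0:k0 + tile] @ B[k0:k0 + tile, j0:j0 + tile])
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 40))
B = rng.standard_normal((40, 30))
C = matmul_tiled(A, B, tile=16)
```

In a CPU or CUDA implementation the tile size is chosen so three tiles fit in cache or shared memory, which is where the measured speedups come from.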
    FANG Rui, LIU Jiahe, XUE Zhihui, YANG Guangwen
    2015, 51 (8): 32-36.  
    According to the characteristics of the Convolutional Neural Network (CNN), an FPGA-based accelerator using a deeply pipelined architecture is proposed for the MNIST data set. In this design, the whole computation can theoretically finish and produce the CNN's output in 28×28 clock cycles. For the propagation stage of the training process, with the same network structure and data set, the FPGA design running at 50 MHz achieves nearly five times speedup over a GPU version (Caffe) and eight times speedup over 12 CPU cores, while consuming only 26.7% of the power of the GPU version.
    YU Jian-ping1, ZHOU Xin-min2, CHEN Ming1
    2010, 46 (25): 1-4.   DOI: 10.3778/j.issn.1002-8331.2010.25.001
    Swarm intelligence exhibits collective intelligence emerging from the cooperation of individuals with little intelligence of their own, which provides a basic approach to complicated distributed problems without central control or a global model. Its inherent parallelism and distribution make swarm intelligence an important direction in computing. After introducing the basic swarm intelligence models, two representative swarm intelligence algorithms, particle swarm optimization and ant colony optimization, are detailed and their characteristics compared. Finally, future research directions for swarm intelligence are suggested, especially the broadly applied ant algorithms.
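Of the two algorithms the survey details, particle swarm optimization is the more compact to sketch: each particle is pulled toward its own best-seen position and the swarm's best. This is the standard textbook form with inertia w and acceleration coefficients c1, c2, not any specific variant from the survey.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Minimize f over R^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros((n, dim))                    # particle velocities
    pbest = x.copy()                          # each particle's best position
    pval = np.apply_along_axis(f, 1, x)       # ... and its value
    g = pbest[pval.argmin()].copy()           # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

best_pos, best_val = pso(lambda p: float((p ** 2).sum()), dim=2)
```

On the 2D sphere function the swarm collapses onto the origin within a couple of hundred iterations.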
    Using Ethernet technology and architecture as the direction of next-generation in-vehicle networks has received widespread attention from the automotive industry and communications engineers. The transmission bandwidth demands of ADAS and entertainment systems are driving the adoption of Ethernet in automotive networks. This paper analyzes the problems in-vehicle networks face under high bandwidth requirements, describes the evolution of Ethernet-based in-vehicle networks, and discusses Ethernet technology for the automotive industry.
    LI Yang, XU Feng, XIE Guangqiang, HUANG Xianglong
    2018, 54 (9): 13-21.   DOI: 10.3778/j.issn.1002-8331.1712-0139
    First, the definition and characteristics of multi-agent technology are introduced. By analyzing the literature on multi-agent applications at home and abroad, the basic research on multi-agent systems is reviewed and the development of multi-agent consensus and control is surveyed. Then, two fields, robot control and wireless sensor networks, are chosen to examine the application changes and latest achievements of multi-agent technology in practical engineering in recent years. Finally, the main problems to be solved in engineering applications are summarized, and research directions for future multi-agent system applications are pointed out.
    LIANG Hong, XU Nanshan, LU Gang
    2015, 51 (7): 141-148.  
    Based on the relationship network of Weibo users, the number of fans, UserPR values and user activity are considered as measures of users' influence on Weibo, and the distributions of the three factors are examined. Results show that the distributions of both fan counts and UserPR values follow a power law. There are many more verified users in the top UserPR ranking list than in the fans ranking list, and analysis of the top users and their posts across the fans, UserPR and activity rankings suggests that highly active users are more valuable in advertising campaigns. It is also found that Sina Weibo users prefer to repost and comment on other users' posts; there are a large number of images, videos and links on Sina Weibo, and most of them are reposted from other users.
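A "UserPR" score over a follower graph is plausibly a PageRank-style power iteration; the paper's exact formulation may differ, so this is only a generic sketch. It assumes every link target also appears as a key of the graph dictionary.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank. links: {user: [users they point to]};
    every target must also be a key. d is the damping factor."""
    nodes = list(links)
    pr = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / len(nodes) for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * pr[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:                      # dangling user: spread rank evenly
                for v in nodes:
                    new[v] += d * pr[u] / len(nodes)
        pr = new
    return pr

pr = pagerank({'a': ['b'], 'b': ['c'], 'c': ['a'], 'd': ['a']})
```

User 'd' points at 'a' but nobody points at 'd', so 'd' keeps only the teleport share while 'a' accumulates rank, which is the behavior that separates influential accounts from mere followers.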
    LI Gai1,2,3, LI Lei2,3
    2011, 47 (30): 4-7.  
    Collaborative filtering recommendation is one of the most successful technologies in e-commerce recommendation systems. Aiming at the sparsity and scalability problems of traditional collaborative filtering algorithms, this paper describes a CF algorithm, Alternating Least Squares with Weighted-λ-Regularization (ALS-WR): a regularization constraint is applied to the traditional matrix decomposition model to prevent overfitting of the training data, and the decomposition model is trained by the alternating least squares method. Experimental evaluation on two real-world datasets shows that ALS-WR achieves better results than several classical collaborative filtering algorithms in both scalability and robustness to sparsity.
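The ALS-WR update can be sketched directly: user and item factor matrices are solved in turn by ridge regression, with each row's penalty weighted by the number of ratings it participates in (the "weighted-λ" part). A minimal numpy sketch on a dense ratings matrix with an observation mask:

```python
import numpy as np

def als_wr(R, mask, k=2, lam=0.05, iters=30, seed=0):
    """Factor R ~ U @ V.T using only entries where mask is True.
    Each least-squares solve is regularized by lam times the number of
    ratings for that user/item (weighted-lambda regularization)."""
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    U = rng.normal(0, 0.1, (n_u, k))
    V = rng.normal(0, 0.1, (n_i, k))
    for _ in range(iters):
        for u in range(n_u):                       # fix V, solve each user row
            idx = mask[u].nonzero()[0]
            if len(idx):
                A = V[idx].T @ V[idx] + lam * len(idx) * np.eye(k)
                U[u] = np.linalg.solve(A, V[idx].T @ R[u, idx])
        for i in range(n_i):                       # fix U, solve each item row
            idx = mask[:, i].nonzero()[0]
            if len(idx):
                A = U[idx].T @ U[idx] + lam * len(idx) * np.eye(k)
                V[i] = np.linalg.solve(A, U[idx].T @ R[idx, i])
    return U, V

# Rank-1 toy ratings with one held-out entry
R = np.outer([1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0])
mask = np.ones_like(R, dtype=bool)
mask[0, 3] = False
U, V = als_wr(R, mask, k=2)
```

On this noiseless rank-1 matrix the observed entries are reconstructed almost exactly (up to the small shrinkage the regularizer introduces).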
    LI Shuquan1, SUN Xue1, SUN Dehui1, BIAN Weipeng2
    2012, 48 (1): 36-39.  
    Crossover is an important operator in genetic algorithms. This paper gives a brief introduction to several mature crossover operators and discusses improved crossover operators from different aspects, such as theoretical foundation and mechanism. The analysis shows that the improved crossover operators can overcome shortcomings of the traditional genetic algorithm, improve search efficiency and accuracy, and avoid premature convergence. The paper points out research directions for crossover operators, laying a foundation for the future development and application of genetic algorithms.
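The baseline all of the surveyed improvements start from is classic single-point crossover: cut both parents at the same position and swap the tails.

```python
import random

def single_point_crossover(p1, p2, rng):
    """Classic single-point crossover on equal-length string chromosomes:
    pick an interior cut point and exchange the tails."""
    assert len(p1) == len(p2) and len(p1) >= 2
    cut = rng.randrange(1, len(p1))      # cut somewhere strictly inside
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

c1, c2 = single_point_crossover("AAAA", "BBBB", random.Random(0))
```

Both children keep the parents' combined gene pool, which is why crossover recombines rather than invents material; the surveyed improvements mostly vary where and how many cut points are chosen.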
    A comprehensive survey of demographic predictive models based on data mining techniques, covering both domestic and international research, is presented in this paper. The models are classified and compared with regard to specific prediction purposes. After analyzing the merits and drawbacks of these models, prospects for further study are proposed.
    FANG Luping1, HE Hangjiang1, ZHOU Guomin2
    2018, 54 (13): 11-18.   DOI: 10.3778/j.issn.1002-8331.1804-0167
    Object detection is an important problem in computer vision, with critical research value in pedestrian tracking, license plate recognition and unmanned driving. In recent years, deep learning has greatly improved image classification accuracy, so object detection methods based on deep learning have gradually become mainstream. The development and present state of object detection methods are reviewed and future prospects are discussed. First, the development, improvements and deficiencies of traditional algorithms and deep learning-based algorithms are summarized and compared. Finally, the difficulties and challenges of deep learning-based object detection are discussed, and possible development directions are suggested.
    LI Xian-shan,ZHAO Feng-da,KONG Ling-fu
    2009, 45 (25): 246-248.   DOI: 10.3778/j.issn.1002-8331.2009.25.076
    The Normal Distributions Transform (NDT) is used in scan-matching-based SLAM, and map building is implemented in large indoor environments for home service robots. Matching between geometric features is replaced by matching the normal probability distributions of the scan points, which effectively resolves the slow speed of existing scan-matching methods.
    ZHANG Shufang, ZHANG Cong, ZHANG Tao, LEI Zhichun
    2015, 51 (19): 13-23.  
    Image quality assessment can effectively evaluate distortion or degradation introduced during image acquisition and transmission, and has broad application prospects in digital multimedia. Because it requires no knowledge of a pristine reference image, no-reference image quality assessment has become a research hotspot in the field. On the basis of an extensive survey of the literature at home and abroad, covering both algorithm principles and performance comparison, this paper systematically introduces several state-of-the-art no-reference IQA algorithms, such as BIQI, DIIVINE, BLIINDS, BLIINDS-II, BRISQUE, NIQE and GRNN. First, the feature extraction methods and quality assessment principles of each algorithm are introduced. Second, the algorithms are simulated and evaluated on the LIVE image database, and their performance and execution speed are analyzed and compared. Finally, further research trends in no-reference image quality assessment are proposed. Although the no-reference methods reviewed here perform satisfactorily, their quality evaluation depends heavily on opinion data in the image database, and deficiencies remain in evaluation performance and algorithm complexity, so further study in this field is necessary.
    GU Nannan, FENG Jun, SUN Xia, ZHAO Yan, ZHANG Lei
    2017, 53 (18): 141-148.   DOI: 10.3778/j.issn.1002-8331.1612-0406
    In order to solve the laborious and time-consuming problem of manually screening masses of electronic resumes, a solution for automatic resume extraction and recommendation is proposed. First, sentences in Chinese resumes are represented as vectors through word segmentation, part-of-speech tagging and other preprocessing steps, and an SVM classifier assigns them to six predefined general classes, such as personal basic information, job intention and working experience. Second, according to the lexical and grammatical features of the personal-basic-information block, hand-built rules extract key information such as name, gender and contact information, while an HMM model extracts detailed information from complex information blocks, yielding a resume information extraction method based on both rules and statistics. Finally, a Content-Based Reciprocal Recommender algorithm (CBRR) is proposed, which takes into account the preferences of both enterprises and job seekers. Experiments show that the proposed solution can assist enterprises in recruitment, improve screening efficiency and reduce recruitment costs.
    CHANG Peng1, MA Hui2
    2011, 47 (20): 126-128.  
    To overcome shortcomings of traditional keyword extraction methods, such as theme drift and theme misjudgment, a new keyword extraction algorithm based on co-occurrence analysis is proposed in this paper. A word's weight is adjusted by its ability to associate with other words: a word that co-occurs with more words has greater impact and is extracted first. Experimental results show that summaries generated by the improved algorithm outperform other methods in both recall and precision.
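The co-occurrence reweighting described above can be sketched as follows. This is a simplified reading of the idea, using sentences as co-occurrence windows and counting how many distinct partner words each candidate has; the paper's exact weighting formula is not given in the abstract.

```python
from collections import Counter

def keyword_scores(sentences):
    """Score candidates by frequency, boosted by the number of distinct
    words each candidate co-occurs with (sentence-level windows)."""
    freq = Counter()
    partners = {}
    for sent in sentences:
        words = set(sent)
        freq.update(words)
        for w in words:
            partners.setdefault(w, set()).update(words - {w})
    return {w: freq[w] * (1 + len(partners[w])) for w in freq}

scores = keyword_scores([["data", "mining", "rocks"],
                         ["data", "science"],
                         ["data", "mining"]])
```

"data" is both frequent and widely connected, so it outranks words that are merely frequent or merely well-connected, which is the anti-theme-drift effect the abstract aims for.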
    LIN Gaoquan1, PAN Wubin2, 3, CHENG Guang2, 3, XU Jian2, 3
    2017, 53 (15): 18-24.   DOI: 10.3778/j.issn.1002-8331.1704-0394
    As YouTube is the world's largest video provider, the proportion of video traffic in network traffic keeps increasing, and this volume of video traffic poses great challenges to Internet service providers. With YouTube traffic now encrypted, it is important to obtain video QoE evaluation information from HTTPS-encrypted traffic. Based on an analysis of the streaming transmission modes used by the YouTube app on the Android and iOS platforms, this paper proposes a combination of a C4.5 decision tree and [k]-means clustering to identify the resolution of each video chunk as users watch videos. Experimental results show that the method can accurately identify the bitrate and resolution of each video chunk during playback.
    Spoken Language Understanding (SLU) is a vital part of the human-machine dialogue system and includes the important sub-task of intent detection. The accuracy of intent detection directly affects the performance of semantic slot filling and is helpful to subsequent dialogue system research. Considering the difficulty of intent detection in human-machine dialogue systems and the inability of traditional machine learning methods to capture the deep semantic information of user utterances, this paper analyzes, compares and summarizes the deep learning methods applied to intent detection in recent years, and further considers how to apply deep learning models to multi-intent detection, so as to promote research on multi-intent detection based on deep neural networks.
    ZHANG Zhongwei1,2, CAO Lei1, CHEN Xiliang1, KOU Dalei1,3, SONG Tianting2
    2019, 55 (12): 8-19.   DOI: 10.3778/j.issn.1002-8331.1901-0358
    Knowledge reasoning is an important means of knowledge graph completion and has long been one of the research hotspots in the knowledge graph field. With the development of neural networks, their application to knowledge reasoning has attracted increasing attention in recent years. Neural-network-based knowledge reasoning methods have stronger reasoning and generalization abilities and make better use of the entities, attributes, relations and text information in the knowledge base, making them more effective. This paper introduces the relevant concepts of knowledge graphs and knowledge graph completion, explains the concepts and basic principles of knowledge reasoning, reviews the latest research progress in neural-network-based knowledge reasoning, and summarizes the existing problems and development directions of knowledge reasoning in theory, algorithms and applications.
    QIAN Yucun, PENG Guojun, WANG Ying, LIANG Yu
    2015, 51 (18): 76-81.  
    Given the explosive growth of malicious code and the fact that many malicious samples are variants of previously encountered ones, this paper presents a novel approach to investigating the homology of malicious code based on behavior characteristics. To distinguish variants, it studies the malicious behavior of malware, then computes the similarity of characteristics and of call graphs extracted by disassembly tools, and employs the DBSCAN clustering algorithm to discover malicious code families. Experiments show that the approach effectively identifies homologous malicious code and clusters variants into their families.
    ZHAI Zhengli, LIANG Zhenming, ZHOU Wei, SUN Xia
    2019, 55 (3): 1-9.   DOI: 10.3778/j.issn.1002-8331.1810-0284
    Variational Auto-Encoders (VAE), one of the deep latent-variable generative models, have been immensely successful in recent years, especially in image generation. VAEs are important tools for unsupervised feature learning: they learn a mapping from a latent encoding space to a data generation space and reconstruct inputs as outputs. This paper first reviews the development and current research on the traditional variational auto-encoder and its variants, and summarizes and compares their performance. Finally, the existing difficulties and challenges of VAEs are analyzed, and possible development directions are suggested.
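For reference, the objective all of the surveyed VAE variants build on is the evidence lower bound (ELBO): a reconstruction term under the decoder plus a KL penalty pulling the encoder toward the prior,

```latex
\log p_\theta(x) \;\ge\; \mathcal{L}(\theta,\phi;x)
  \;=\; \mathbb{E}_{q_\phi(z\mid x)}\bigl[\log p_\theta(x\mid z)\bigr]
  \;-\; D_{\mathrm{KL}}\bigl(q_\phi(z\mid x)\,\|\,p(z)\bigr),
```

where q_φ(z|x) is the encoder, p_θ(x|z) the decoder, and p(z) the latent prior (typically a standard Gaussian).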
    YU Chunmei
    2014, 50 (11): 210-217.  
    Compressed Sensing (CS) is a new theoretical framework for information acquisition and processing developed in recent years. This paper introduces the basic theory of CS, focusing on sparse optimization algorithms, which are divided into three classes: active set methods, projection operator methods and classical convex programming methods. The basic ideas, main research progress and adaptive optimization problems of each class are discussed. Finally, some open problems and research directions in sparse optimization for CS are pointed out.
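A representative of the projection/proximal-operator class is iterative shrinkage-thresholding (ISTA): a gradient step on the data-fit term followed by soft thresholding. This sketch recovers a 2-sparse signal from underdetermined Gaussian measurements; it illustrates the class, not any one algorithm surveyed in the paper.

```python
import numpy as np

def ista(A, b, lam, iters=1000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# 30 random measurements of a 2-sparse signal in 60 dimensions
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[3], x_true[17] = 1.5, -2.0
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

The soft-threshold step is exactly the proximal operator of the l1 norm, which is what makes iterates sparse and the support of x_true recoverable from far fewer measurements than dimensions.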
    Feature extraction is the foundation of audio classification, and good features effectively enhance classification accuracy. In this paper, Mel-frequency cepstral coefficients are extracted from the frequency domain of the audio; at the same time, features are extracted from the wavelet domain after a discrete wavelet transform is applied to each frame. The frequency-domain and wavelet-domain features are then combined to calculate statistical features. Finally, an audio template is established with a Support Vector Machine (SVM), and the audio is classified into speech, music, and speech with music. Tests show that the method achieves comparatively high identification accuracy.
    XIA Wenling1, GU Zhaopeng2, YANG Tangsheng2
    2014, 50 (24): 199-203.  
    As an important branch of computer vision, 3-D reconstruction based on monocular vision receives more and more attention because it is simple, low-cost and easy to implement. This paper studies the SLAM (Simultaneous Localization and Mapping) algorithm, introducing the RGB-D camera Kinect to obtain depth information of the 3-D scene, and implements a 3-D reconstruction algorithm based on Kinect and monocular-vision SLAM.
    ZHANG Yu-dong,WU Le-nan,WANG Shui-hua
    2010, 46 (19): 43-47.   DOI: 10.3778/j.issn.1002-8331.2010.19.012
    To survey the development of expert systems, this paper partitions them into five stages by development sequence: rule-based, frame-based, case-based, ontology-based, and web-based. For each stage, the concept of the corresponding expert system is analyzed, typical algorithms are presented, and representative examples are given. A development law is then proposed, consisting of a principle development law and a technique development law: principle development obeys the law of the negation of the negation, while technique development can be seen as interdisciplinary. Finally, further research directions are predicted.
    TIN (Triangulated Irregular Network) performs well in shaping terrain, and its generation algorithms have received great attention. This paper discusses the data structure design for triangulation, and designs and implements an incremental insertion algorithm based on the Bowyer-Watson idea. It analyzes why such algorithms may produce crossing phenomena during experiments and gives an improved idea. The improved algorithm has been used to visualize terrain models with good results, so it has some value for triangulation research.
    CHEN Hui1,2, GUO Tao3, CUI Baojiang2, WANG Jianxin4
    2012, 48 (33): 79-84.  
    Traditional file similarity detection techniques are generally based on source code. When source code is unavailable, binary comparison techniques are used for clone detection. Four binary file similarity detection techniques and the main detection tools are summarized and analyzed, and experiments are carried out based on an evaluation method for binary file clone comparison. This paper reviews binary file clone types, detection approaches and similarity calculation standards. Experiments show that for continuous clones, division clones that do not affect call relations, and equivalent-replacement clones that affect neither the number of basic blocks nor the call relations, similarity detection on binary files gives more accurate results than token-based similarity detection on source code files.
    With the development of microblogging, commenting on the Web has become more convenient. To date there have been few studies on sentiment classification for Chinese microblogs, so this paper studies it using three machine learning algorithms, three feature selection methods and three feature weighting methods. The experimental results indicate that SVM performs best among the three machine learning algorithms, IG is the better feature selection method compared to the others, and TF-IDF is best suited for sentiment classification of Chinese microblogs; combining the three factors, the combination of SVM, IG and TF-IDF performs best. For the movie domain, it is found that sentiment classification depends on the review style.
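The TF-IDF weighting the study found best suited can be sketched in a few lines: a term's weight in a document is its term frequency times the log inverse document frequency. This is the plain textbook form; the paper may use a smoothed or normalized variant.

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights. docs: list of token lists."""
    n = len(docs)
    df = Counter()                       # in how many documents each word occurs
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return out

weights = tfidf([["good", "movie", "good"],
                 ["bad", "movie"],
                 ["good", "plot"]])
```

Words concentrated in few documents ("bad") get boosted over words spread across the corpus ("movie"), which is why TF-IDF outperforms raw frequency as a feature weight for sentiment classifiers.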