Computer Engineering and Applications ›› 2019, Vol. 55 ›› Issue (12): 37-43.DOI: 10.3778/j.issn.1002-8331.1902-0116

New Neural Network for Solving Nonsmooth Pseudoconvex Optimization Problems

YU Xin, WU Lingzhen, WANG Yanlin   

  1. College of Computer and Electronic Information, Guangxi University, Nanning 530004, China
  • Online: 2019-06-15  Published: 2019-06-13

Abstract: For nonsmooth pseudoconvex optimization problems with inequality constraints, a new recurrent neural network based on differential inclusion theory is proposed. From the objective function and the constraints, a penalty function is designed that varies with the state vector, so that the state vector of the neural network always moves toward the feasible region; this guarantees that the state vector enters the feasible region in finite time and converges to an optimal solution of the original optimization problem. Finally, two simulation experiments verify the effectiveness and accuracy of the neural network. Compared with existing neural networks, the proposed model is a new type of neural network model with a simple structure: it does not require computing an exact penalty factor and, most importantly, does not require the feasible region to be bounded.
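The penalty-driven dynamics described in the abstract (a state-dependent penalty that first steers an infeasible state toward the feasible region, then lets it converge to an optimum) can be sketched as a discretized subgradient flow. The sketch below is illustrative only and is not the authors' exact model: the toy objective f(x) = |x1-2| + |x2-2| (convex, hence pseudoconvex, and nonsmooth), the single constraint x1 + x2 ≤ 2, the Euler step size, and the iteration count are all assumptions chosen to make the behavior visible.

```python
import numpy as np

def subgrad_f(x):
    # A subgradient of f(x) = |x[0]-2| + |x[1]-2| (convex, hence pseudoconvex).
    return np.sign(x - 2.0)

def subgrad_penalty(x):
    # A subgradient of the constraint penalty p(x) = max(0, x[0] + x[1] - 2).
    g = x[0] + x[1] - 2.0
    return np.ones(2) if g > 0 else np.zeros(2)

def penalty_flow(x0, step=1e-3, iters=20000):
    """Euler discretization of a penalty-based subgradient flow (illustrative)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        if x[0] + x[1] - 2.0 > 1e-8:
            # Infeasible: descend only the penalty, driving the state
            # toward the feasible region (finite-time entry in the paper).
            dx = -subgrad_penalty(x)
        else:
            # Feasible: descend a subgradient of the objective.
            dx = -(subgrad_f(x) + subgrad_penalty(x))
        x += step * dx
    return x

x_star = penalty_flow([5.0, -1.0])  # starts well outside the feasible region
```

Starting from the infeasible point (5, -1), the state first moves along the negative penalty subgradient until it reaches the constraint boundary, then slides along a subgradient of the objective; with these assumed data it settles near a minimizer on the boundary with objective value 2.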

Key words: nonsmooth pseudoconvex functions, neural networks, convergence, optimization problems