Application of the Zhang neural network to time-varying nonlinear function optimization
Subject Areas: Multimedia Processing, Communications Systems, Intelligent Systems
Elaahe Karami 1, Zeinab Mousavi 2, Kobra Gholami 3
1 - MS Student, Department of Mathematics, Bushehr Branch, Islamic Azad University, Bushehr, Iran
2 - Assistant Professor, Department of Mathematics, Abhar Branch, Islamic Azad University, Abhar, Iran
3 - Assistant Professor, Department of Mathematics, Bushehr Branch, Islamic Azad University, Bushehr, Iran
Keywords: Neural network, Zhang neural network, Nonlinear optimization, Optimization, Time-varying nonlinear optimization
Abstract:
Introduction: Optimization of time-varying nonlinear functions, a subset of nonlinear programming, arises widely in economic and engineering models. In energy management, for example, optimizing nonlinear functions with time-varying components supports the efficient allocation of energy resources and the management of fluctuations in demand and supply, increasing efficiency and reducing energy waste. In this article, we apply Zhang neural networks to the optimization of nonlinear functions with time-varying components. By exploiting the parallel processing power of neural networks, Zhang networks search the solution space faster than traditional methods, significantly reducing the required computation time.
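To illustrate the general idea, the following is a minimal sketch (in Python, and with a hypothetical scalar example not taken from the paper) of Zhang (zeroing) neural dynamics for a time-varying minimization problem: the gradient of the cost is treated as an error signal e(t) and driven to zero by the design formula de/dt = -gamma * e, so the network state tracks the moving minimizer. The function f(x, t) = (x - sin t)^2, the gain gamma, and the step size dt are all illustrative assumptions.

```python
import math

# Hypothetical example (not from the paper): track the minimizer of
# f(x, t) = (x - sin t)^2, whose time-varying optimum is x*(t) = sin t.
# Zhang (zeroing) neural dynamics drive the gradient e = df/dx to zero via
#   de/dt = -gamma * e,
# which, after applying the chain rule to e(x(t), t), yields the state update
#   dx/dt = -(gamma * e + de/dt_partial) / hessian.
def znn_track(gamma=10.0, dt=1e-3, t_end=5.0):
    x = 0.0   # arbitrary initial state
    t = 0.0
    while t < t_end:
        e = 2.0 * (x - math.sin(t))   # gradient of f with respect to x
        de_dt = -2.0 * math.cos(t)    # explicit time derivative of the gradient
        hess = 2.0                    # second derivative of f with respect to x
        x_dot = -(gamma * e + de_dt) / hess
        x += dt * x_dot               # forward-Euler integration step
        t += dt
    return x, math.sin(t_end)

x_final, target = znn_track()
print(abs(x_final - target))  # residual tracking error; small for this example
```

Because the time derivative of the gradient is compensated explicitly, the tracking error decays exponentially rather than lagging behind the moving optimum, which is the advantage of Zhang dynamics over conventional gradient dynamics in time-varying settings.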
Method: In this research, the proposed neural network receives its data through MATLAB software. The data are first standardized using standard normalization methods and then divided into training, test, and validation sets, which are evaluated over five phases. Training uses the Levenberg-Marquardt algorithm for the first layer and a linear function for the second layer; the Levenberg-Marquardt algorithm was chosen for its convergence speed and higher efficiency, owing to its small error level and its resistance to becoming trapped in local minima. The best network structure, together with its transfer function, was then selected, and the proposed neural network model was tested in five stages. A Taylor series is used for data normalization, and the zero-stability model of the n-step discrete-time method is used to calculate the error, which reduces it.
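The preprocessing pipeline described above can be sketched as follows. This is a minimal illustration in Python rather than the MATLAB used in the paper; the split ratios (70/15/15) and the z-score standardization are illustrative assumptions, since the paper does not state exact proportions.

```python
import random

# Sketch of the described preprocessing: z-score standardization followed by
# a train / test / validation split. Ratios are assumptions for illustration.
def standardize(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0           # guard against zero variance
    return [(v - mean) / std for v in values]

def split(data, train=0.7, test=0.15, seed=0):
    data = data[:]                    # copy so the caller's list is untouched
    random.Random(seed).shuffle(data) # reproducible shuffle
    n = len(data)
    n_train = int(train * n)
    n_test = int(test * n)
    return (data[:n_train],
            data[n_train:n_train + n_test],
            data[n_train + n_test:])  # remainder is the validation set

data = standardize([float(i) for i in range(100)])
train_set, test_set, val_set = split(data)
print(len(train_set), len(test_set), len(val_set))  # 70 15 15
```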
Results: The best network structure with its transfer function was selected and tested in five stages based on the proposed neural network model. The mean squared error in the third and fourth experiments increased gradually compared with the first two stages. This difference in performance error, as well as in the coefficient of determination, varies from iteration to iteration and is caused by the algorithm becoming stuck in local minima.
Discussion: Based on the results obtained over the five test stages, the algorithm based on the proposed neural network improves network performance as the learning rate increases. However, the algorithm is highly sensitive to local minima; this problem persists even when the learning rate, and therefore the step size of the algorithm, is small. To mitigate this sensitivity, the algorithm used in the proposed network was tested with momentum at different learning rates in five stages, and the best result was selected. At each stage, the training, test, and validation processes were also evaluated separately.
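The momentum modification mentioned above can be sketched as follows. This is an illustrative Python example, not the paper's training code: the velocity term accumulates a decaying sum of past gradients, which helps the iterate coast through shallow local minima and flat regions. The test function, learning rate, and momentum coefficient are assumptions.

```python
# Gradient descent with momentum: the velocity v accumulates past gradients,
# damped by the momentum coefficient beta, before updating the state x.
def momentum_descent(grad, x0, lr=0.1, beta=0.9, steps=200):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)   # momentum (velocity) update
        x = x + v                     # state update
    return x

# Minimize the toy function f(x) = (x - 3)^2, gradient 2(x - 3); minimum at x = 3.
x_min = momentum_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(round(x_min, 3))
```

Setting beta to 0 recovers plain gradient descent, so the same routine exposes the comparison between the two variants discussed above.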