Using Discrete-Time Zhang Neural Networks for Time-Varying Nonlinear Optimization
Subject areas: multimedia processing, communication systems, intelligent systems
Elahe Karami 1, Zeinab Mousavi 2, Kobra Gholami 3
1 - M.Sc., Department of Mathematics, Bushehr Branch, Islamic Azad University, Bushehr, Iran
2 - Assistant Professor, Department of Mathematics, Abhar Branch, Islamic Azad University, Abhar, Iran
3 - Assistant Professor, Department of Mathematics, Bushehr Branch, Islamic Azad University, Bushehr, Iran
Keywords: neural network, Zhang neural network, nonlinear optimization, optimization, time-varying nonlinear optimization
Abstract:
In this paper, we apply Zhang neural networks to the optimization of time-varying nonlinear functions. To this end, a general discrete-time Zhang model with truncation error O(τ^5) is employed, and the study of two general five-step discrete-time Zhang neural network models is extended, exploring the relationship between the parameter a_1 and the optimal step size h. Using MATLAB to feed the data into the proposed neural network, the data were first standardized with standard normalization. The data were then examined and evaluated in four stages (training, test, experiment, and validation) across five phases. Training was based on the Levenberg-Marquardt algorithm for the first layer and a linear transfer function for the second layer. Finally, the best network structure with the chosen transfer function was selected and tested in five stages on the proposed neural network model.
Introduction: The optimization of nonlinear time-varying functions, a subset of nonlinear programming, appears widely in economic and engineering models. In energy management, for example, optimizing nonlinear time-varying functions enables efficient allocation of energy resources and management of changes in demand and supply, increasing efficiency and reducing energy waste. In this article, we use Zhang neural networks to optimize nonlinear time-varying functions. By harnessing the parallel processing power of neural networks, Zhang networks search the solution space faster than traditional methods, significantly reducing the required computation time.
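The core idea of a Zhang (zeroing) neural network is to drive the time-varying gradient ∇f(x, t) to zero exponentially, then discretize the resulting dynamics. The sketch below illustrates this with a simple one-step Euler discretization on a toy scalar problem f(x, t) = (x − sin t)², whose minimizer is x*(t) = sin t; the problem, γ, and τ are illustrative choices, and the paper's actual five-step model with O(τ^5) truncation error is more accurate than this first-order version.

```python
import math

def dtznn_euler(gamma=10.0, tau=0.01, steps=2000):
    """Discrete-time zeroing/Zhang dynamics (Euler version) for minimizing
    f(x, t) = (x - sin t)^2, whose time-varying minimizer is x*(t) = sin t.

    Continuous ZNN: d/dt grad f = -gamma * grad f, which gives
    x_dot = -(grad + d(grad)/dt * 1/gamma scaled) / hessian; here all
    derivatives are known in closed form for the toy problem."""
    x = 0.0  # arbitrary initial state, far from the minimizer
    for k in range(steps):
        t = k * tau
        grad = 2.0 * (x - math.sin(t))   # e(x, t) = df/dx, to be zeroed
        dgrad_dt = -2.0 * math.cos(t)    # partial derivative of grad w.r.t. t
        hess = 2.0                       # d2f/dx2 (constant here)
        xdot = -(gamma * grad + dgrad_dt) / hess
        x = x + tau * xdot               # one-step (Euler) discretization
    t_end = steps * tau
    return x, math.sin(t_end)            # final iterate vs. true minimizer

x, x_star = dtznn_euler()
print(abs(x - x_star))  # residual tracking error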
Method: In this research, the proposed neural network receives its data through MATLAB. The data are first standardized using standard normalization, then divided into four stages (training, testing, experimenting, and validation) and evaluated in five phases. Training is based on the Levenberg-Marquardt algorithm for the first layer and a linear transfer function for the second layer. Subsequently, the best network structure with its transfer function is selected, and the proposed neural network model is tested in five stages. A Taylor series is used in deriving the discrete model, and the zero-stability of the n-step discrete-time method is used in the error analysis, which reduces the truncation error. The Levenberg-Marquardt algorithm was chosen for this analysis because of its convergence speed and higher efficiency, owing to its ability to avoid local minima and its small error level.
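The standard normalization step described above can be sketched generically as a z-score transform (this is a common reading of "standard normalization"; the authors' exact MATLAB preprocessing is not given):

```python
def standard_normalize(data):
    """Z-score normalization: subtract the mean and divide by the standard
    deviation, so the feature has zero mean and unit variance before it is
    fed to the network."""
    n = len(data)
    mean = sum(data) / n
    var = sum((v - mean) ** 2 for v in data) / n
    std = var ** 0.5 or 1.0  # guard against a constant (zero-variance) feature
    return [(v - mean) / std for v in data]

z = standard_normalize([2.0, 4.0, 6.0, 8.0])
```

In MATLAB this role is typically played by built-in preprocessing such as `mapstd`; the Python version above is only for illustration.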
Results: The best network structure with the chosen transfer function was tested in five stages based on the proposed neural network model. The mean squared error in the third and fourth experiments gradually increased compared with the first two stages. This difference in the performance error, as well as in the coefficient of determination, varies across iterations and is caused by the network getting stuck in local minima.
Discussion: Based on the results obtained in the five test stages, the algorithm underlying the proposed neural network improves the network's performance as the learning rate increases. However, the algorithm is highly sensitive to local minima; this problem persists even when the learning rate, and hence the algorithm's step size, is small. To mitigate this sensitivity, the algorithm in the proposed network was run with momentum at different learning rates in five stages, and the best result was selected. At each stage, the training, testing, and validation processes were also evaluated separately.
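The momentum mechanism used above to reduce sensitivity to local minima can be sketched in its generic form (the learning rate and momentum coefficient below are illustrative values, not the paper's tuned settings):

```python
def momentum_descent(grad, x0, lr=0.1, beta=0.9, steps=300):
    """Gradient descent with momentum: the velocity term accumulates past
    gradients, which damps oscillation and helps the iterate roll through
    shallow local minima instead of stalling in them."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # update velocity from current gradient
        x = x + v                    # move along the accumulated direction
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = momentum_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```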