Developing a Novel Neural Network Method for Solving Variable-Order Fractional Partial Differential Equations with Time-Varying Delay
Subject Areas : Numerical Analysis
Farahnaz Golpour Lasaki 1, Hamideh Ebrahimi 2,*, Mousa Ilie 3
1 - Department of Mathematics, Rasht Branch, Islamic Azad University, Rasht, Iran
2 - Department of Mathematics, Rasht Branch, Islamic Azad University, Rasht, Iran
3 - Department of Mathematics, Rasht Branch, Islamic Azad University, Rasht, Iran
Keywords: Functional link neural network, Lagrange polynomials, Variable-order fractional operators, Time-varying delay
Abstract :
This paper presents a novel functional link neural network for solving a class of variable-order fractional partial differential equations with time-varying delay. Due to the proficiency of Lagrange polynomials in numerical approximations for fractional calculus, these polynomials serve as the foundational neuro-solutions within the neural network. Finding the right activation functions is essential for effective learning in artificial neural networks, particularly when solving variable-order fractional derivatives and time-varying delays. To reduce the computational complexity of the proposed neural network, a linear activation function is used. Numerical simulations are carried out to demonstrate the capability of the proposed method. The neural network undergoes training using a modified Newton-Raphson method instead of the traditional learning techniques. The study’s findings indicate that the suggested functional link neural network achieves greater accuracy in comparison to some traditional methods for solving fractional partial differential equations.
Akgül, A., Inc, M., & Baleanu, D. (2017). On solutions of variable-order fractional differential equations. International Journal of Optimization and Control Theories & Applications, 7(1), 12-26. https://doi.org/10.11121/ijocta.01.2017.00368
Al-Janabi, S., & Alkaim, A. (2020). A nifty collaborative analysis to predicting a novel tool (drflls) for missing values estimation. Soft Computing, 24 (1), 38-46. https://doi.org/10.1007/s00500-019-03972-x
Al-Janabi, S., Alkaim, A., & Adil, Z. (2020). An innovative synthesis of deep learning techniques (DCapsNet & DCOM) for generation electrical renewable energy from wind energy. Methodologies and Application, 24 (1), 16-24. https://doi.org/10.1007/s00500-020-04905-9
Al-Janabi, S., Mohammad, M., & Al-Sultan, A. (2020). A new method for prediction of air pollution based on intelligent computation. Soft Computing, 24 (1), 62-76. https://doi.org/10.1007/s00500-019-04495-1
Bijiga, L. K., & Ibrahim, W. (2021). Neural network method for solving time-fractional telegraph equation. Mathematical Problems in Engineering, 12(2), 78-87. https://doi.org/10.1155/2021/7167801
Canuto, C., Hussaini, M. Y., Quarteroni, A., & Thomas, A. Jr. (2012). Spectral methods in fluid dynamics. Springer Science & Business Media. ISBN: 978-3-642-84108-8.
Chávez-Vázquez, S., Gómez-Aguilar, J. F., Lavín-Delgado, J. E., Escobar-Jiménez, R. F., & Olivares-Peregrino, V. H. (2022). Applications of Fractional Operators in Robotics: A Review. Journal of Intelligent & Robotic Systems, 104(63). https://doi.org/10.1007/s10846-022-01597-1
Choudhary, R., Singh, S., & Kumar, D. (2022). A second-order numerical scheme for the time-fractional partial differential equations with a time delay. Computational and Applied Mathematics, 88(1), 114-224. https://doi.org/10.1007/s40314-022-01810-9
Dabiri, A., Moghaddam, B. P., & Machado, J. A. T. (2018). Optimal variable-order fractional PID controllers for dynamical systems. Journal of Computational and Applied Mathematics, 339, 40-48. https://doi.org/10.1016/j.cam.2018.02.029
Dehestani, H., Ordokhani, Y., & Razzaghi, M. (2019). On the applicability of Genocchi wavelet method for different kinds of fractional-order differential equations with delay. Numerical Linear Algebra with Applications, 17 (1), 18-29. https://doi.org/10.1002/nla.2259
Diethelm, K., Kiryakova, V., Luchko, Y., Machado, J. A. T., & Tarasov, V. E. (2022). Trends, directions for further research, and some open problems of fractional calculus. Nonlinear Dynamics, 107(1), 3245-3270.
Ghazali, R., Hussain, A. J., Al-Jumeily, D., & Lisboa, P. (2009). Time series prediction using dynamic ridge polynomial neural networks. In 2009 Second International Conference on Developments in E-systems Engineering (pp. 354–363). IEEE. https://doi.org/10.1109/DeSE.2009.35
Golpour Lasaki, F., Ebrahimi, H., & Ilie, M. (2023). A novel Lagrange functional link neural network for solving variable-order fractional time-varying delay differential equations: a comparison with multilayer perceptron neural networks. Soft Computing, 27 (1), 12595-12608. https://doi.org/10.1007/s00500-023-08494-1
Hattaf, K. (2022). On the stability and numerical scheme of fractional differential equations with application to biology. Computations, 10(6). 108-126. https://doi.org/10.3390/computation10060097
Haykin, S. (1998). Neural Networks: A Comprehensive Foundation (2nd ed.). Pearson. ISBN-13: 978-0132733502.
Chen, T., Chen, H., & Liu, R. W. (1995). Approximation capability in multilayer feedforward networks and related problems. IEEE Transactions on Neural Networks, 6(1), 25-30. https://doi.org/10.1109/72.363453
Heydari, M. H., & Avazzadeh, Z. (2018). Legendre wavelets optimization method for variable-order fractional Poisson equation. Chaos, Solitons & Fractals, 112 (1), 180-190. https://doi.org/10.1016/j.chaos.2018.04.028
Hosseinpour, S., Nazemi, A., & Tohidi, E. (2018). A new approach for solving a class of delay fractional partial differential equations. Mediterranean Journal of Mathematics, 15(1), 1-20. https://doi.org/10.1007/s00009-018-1264-z
Kadhuim, Z. A., & Al-Janabi, S. (2023). Codon-mrna prediction using deep optimal neurocomputing technique (dlstm-dsn-woa) and multivariate analysis. Results in Engineering, 17(1), 118-132. https://doi.org/10.1016/j.rineng.2022.100847
Kheyrinataj, F., & Nazemi, A. (2019). Fractional power series neural network for solving delay fractional optimal control problems. Connection Science, 32(1), 1-28. https://doi.org/10.1080/09540091.2019.1605498
Kheyrinataj, F., & Nazemi, A. (2020). Fractional Chebyshev functional link neural network‐optimization method for solving delay fractional optimal control problems with Atangana‐Baleanu derivative. Optimal Control Applications and Methods, 27(1), 246-261. https://doi.org/10.1002/oca.2572
Kumar, P., & Erturk, V. S. (2022). The analysis of a time delay fractional COVID-19 model via Caputo type fractional derivative. Mathematical Methods in the Applied Sciences, 46(7), 23-213. https://doi.org/10.1002/mma.6935
Moghaddam, B. P. and Machado, J.T. (2017). A stable three-level explicit spline finite difference scheme for a class of nonlinear time variable order fractional partial differential equations. Computer & Mathematics with Applications, 73(6), 1262-1269. https://doi.org/10.1016/j.camwa.2016.07.010
Mohammed, G. S., & Al-Janabi, S. (2020). An innovative synthesis of optimization techniques (fdire-gsk) for generation electrical renewable energy from natural resources. Results in Engineering, 16(1), 118-126. https://doi.org/10.1016/j.rineng.2022.100637
Pao, Y.-H. (1989). Adaptive Pattern Recognition and Neural Networks. Addison-Wesley Longman Publishing Co., Inc. ISBN-13: 978-0201125849.
Patnaik, S., Hollkamp, J. P., & Semperlotti, F. (2020). Applications of variable-order fractional operators: a review. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 476(1), 226-238. https://doi.org/10.1098/rspa.2019.0498
Patra, J. C., Chin, W. C., Meher, P. K., & Chakraborty, G. (2008). Legendre-flann-based nonlinear channel equalization in wireless communication system. In 2008 IEEE International Conference on Systems, Man and Cybernetics (pp. 1826–1831). https://doi.org/10.1109/icsmc.2008.4811554
Peterson, L. E., & Larin, K. V. (2008). Hermite/laguerre neural networks for classification of artificial fingerprints from optical coherence tomography. In 2008 Seventh International Conference on Machine Learning and Applications (pp. 637–643). https://doi.org/10.1109/ICMLA.2008.36
Sheng, H., Sun, H., Chen, Y. Q., & Qiu, T. Sh. (2011). Synthesis of multifractional Gaussian noises based on variable-order fractional operators. Signal Processing, 91(7), 1645–1650. https://doi.org/10.1016/j.sigpro.2011.01.010
Singh, A. K., Mehra, M., & Gulyani, S. (2021). A modified variable-order fractional SIR model to predict the spread of COVID-19 in India. Mathematical Methods in Applied Science, 46(7), 240-248. https://doi.org/10.1002/mma.7655
Susanto, H., & Karjanto, N. (2009). Newton’s method basin of attraction revisited. Applied Mathematics and Computation, 215(1), 1084-1090. https://doi.org/10.1016/j.amc.2009.06.041
Syah, R., Guerrero, J. W. G., Poltarykhin, A. L., Suksatan, W., Aravindhan, S., Bokov, D. O., Abdelbasset, W. K., Al-Janabi, S., Alkaim, A. F., & Tumanov, D. Y. (2022). Developed teamwork optimizer for model parameter estimation of the proton exchange membrane fuel cell. Energy Reports, 8(1), 10776–10785. https://doi.org/10.1016/j.egyr.2022.08.177
Tayebi, A., Shekari, Y., & Heydari, M. H. (2017). A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation. Journal of Computational Physics, 340(1), 655-669. https://doi.org/10.1016/j.jcp.2017.03.061
Zou, A.-M., Kumar, K. D., & Hou, Z.-G. (2010). Quaternion-based adaptive output feedback attitude control of spacecraft using chebyshev neural networks. IEEE Transactions on Neural Networks, 21(7), 1457-1471. https://doi.org/10.1109/TNN.2010.2050333
Zúniga-Aguilar, C., Gómez-Aguilar, J., Escobar Jiménez, R., & Romero Ugalde, H. (2019). A novel method to solve variable-order fractional delay differential equations based in Lagrange interpolations. Chaos, 29(6), 146-168. https://doi.org/10.1016/j.chaos.2019.06.009
Iranian Journal of Optimization, Volume 16, Issue 1, 2024, 73-87. Research Paper. ISSN: 2588-5723, E-ISSN: 2008-5427.
Online version is available on: https://sanad.iau.ir/journal/ijo
Revise Date: 16 March 2025
*Correspondence E-mail: ebrahimi60@iau.ac.ir
INTRODUCTION
Fractional calculus, a branch of applied mathematics, focuses on non-integer order differentiation and integration. This discipline presents numerous advantages and has found applications in a wide range of scientific and engineering fields (Chávez-Vázquez et al., 2022; Diethelm et al., 2022). It provides a more accurate representation of complex systems, particularly those characterized by memory and hereditary effects, than traditional integer calculus. Additionally, fractional calculus enables the modeling and analysis of anomalous phenomena, allowing for a deeper understanding of intricate systems and their behavior.
Time-delay systems, often referred to as delay differential equations (DDEs), represent a distinct category of differential equations that incorporate time delays. These equations have been extensively utilized across diverse scientific domains, including economics, physics, ecology, and engineering control. While only a limited number of DDEs possess explicit analytical solutions, a surge in the development of numerical techniques has emerged recently to compensate for the existing gap in theoretical research. The study of delay fractional differential equations (DFDEs) has taken on increased significance in light of the growing relevance of fractional calculus. Prior research has introduced a range of both analytical and numerical methodologies to address DFDEs (Kumar & Erturk, 2022; Hattaf, 2022).
The dynamics described by variable-order fractional partial differential equations are finding increasing relevance in a variety of scientific and engineering domains. These include modeling the dynamics of anomalous diffusion, synthesis of multifractional Gaussian noises, developing statistical mechanics models, and predictive modeling of the spread of COVID-19 (Sheng et al., 2011; Singh et al., 2021). Deriving analytical solutions for problems that involve variable-order fractional derivatives poses significant challenges. Consequently, many researchers have turned to the development of numerical methods as viable alternatives. For example, a robust three-level explicit spline finite difference scheme has been proposed for a specific class of nonlinear time variable-order fractional partial differential equations (Moghaddam & Machado, 2017). Furthermore, a study in (Tayebi et al., 2017) explored a meshless approach to solve two-dimensional variable-order time-fractional advection-diffusion equations. Additionally, (Heydari & Avazzadeh, 2018) introduced the Legendre wavelets optimization technique for solving variable-order fractional Poisson equations.
While time-varying delay partial differential equations have attracted the attention of numerous researchers, time-varying delay fractional partial differential equations with variable order appear, to the best of our current understanding, to have received little attention. In this article, we introduce a neural-network-based model to solve the following class of variable-order fractional time-varying delay partial differential equations (VOFTVDPDEs):
$$\frac{\partial^{\alpha(x,t)} u(x,t)}{\partial t^{\alpha(x,t)}} = \lambda\,\frac{\partial^{2} u(x,t)}{\partial x^{2}} + f\big(x,\,t,\,u(x,t),\,u(x,t-\tau(t))\big), \qquad (x,t)\in[0,L]\times(0,T], \tag{1}$$
$$u(x,t) = \phi(x,t), \qquad x\in[0,L],\ t\le 0, \tag{2}$$
$$u(0,t) = \psi_{0}(t), \qquad u(L,t) = \psi_{1}(t), \qquad t\in(0,T], \tag{3}$$
where $\partial^{\alpha(x,t)}/\partial t^{\alpha(x,t)}$ is the fractional variable-order derivative, $\tau(t)$ is the time-varying delay, $f$ is a known function, $\phi$ is a continuous function, $\lambda$ is the diffusion coefficient, and $\psi_{0}$, $\psi_{1}$ are given boundary functions.
In the VOFTVDPDEs (1)-(3), both the order of the derivative and the time delay are functions of time, which introduces additional complexity. Moreover, problem (1)-(3) can be viewed as a broader framework encompassing several previously studied problems; for example, when the variable order $\alpha(x,t)$ and the time-varying delay $\tau(t)$ are taken as constants, the VOFTVDPDEs (1)-(3) reduce to fractional-order partial differential equations with a constant time delay. To the best of our knowledge, this is the first time that this problem has been considered in the literature. Given the absence of an analytical solution, we aim to introduce an artificial neural network (ANN) for solving problem (1)-(3). ANNs have been widely adopted in fractional calculus studies (Bijiga & Ibrahim, 2021). One of the primary advantages of employing ANNs is that solutions are obtained in an analytical form, which is highly desirable in practical situations where continuous and differentiable functions are needed for a comprehensive study of solution properties. ANNs offer several further benefits: they provide accurate results throughout the entire domain even when only a few data points are available, they allow the numerical technique to be adapted to the initial conditions, and their simple architecture eases the solution of complex problems. Furthermore, ANNs are valuable when the obtained solutions must be integrated or differentiated analytically, since these operations can be performed conveniently on the network representation. ANNs are widely recognized techniques in machine learning, primarily because of their global approximation capability, which allows them to approximate a wide variety of functions accurately; as a result, they find extensive applications in many fields.
For more recent related works, see (Al-Janabi et al., 2020; Al-Janabi et al., 2020; Al-Janabi & Alkaim, 2020). Based on the previous discussion, we investigate the solutions of VOFTVDPDEs (1)-(3) through a class of artificial neural networks called functional link-based neural networks (FLNNs), which offer advantages over multilayer neural networks (Pao, 1989). Unlike traditional neural networks, FLNNs do not include a hidden layer, resulting in fewer unknown parameters than classical multilayer perceptrons (MLPs); consequently, the computational burden of a single-layer FLNN structure is lighter. Recent studies have applied classical polynomials within the FLNN framework (Zou et al., 2010; Peterson & Larin, 2008; Patra et al., 2008). Lagrange polynomial basis functions are known for their rapid convergence and high accuracy; hence, we propose a single-layer FLNN employing Lagrange polynomials to expand the input pattern for solving problem (1)-(3). By collocating the time domain, the VOFTVDPDEs problem can be transformed into an optimization problem, an essential aspect of neural network design (Mohammed & Al-Janabi, 2020; Kadhuim & Al-Janabi, 2023; Syah et al., 2022). A backpropagation-type algorithm is then applied to determine the unknown parameters of the Lagrange functional link-based neural network (LFLNN). Numerous iterative techniques, including gradient descent, Newton's method, and the conjugate gradient method, are available for this purpose; however, for training our neural network we use a modified version of the Newton-Raphson method and examine its convergence properties.
The structure of the paper is as follows: Section 2 reviews the fundamental concepts of variable-order fractional operators and basis functions. The architecture of the LFLNN and the corresponding learning algorithm are detailed in Section 3. Several numerical examples assessing the effectiveness of different activation functions and the proposed framework are discussed in Section 4. The discussion of results and the conclusion are given in Sections 5 and 6, respectively.
MATHEMATICAL PRELIMINARIES
In this section, we present definitions related to variable-order calculus and basis functions, which will be utilized in the subsequent sections of the paper. For more on the subject, we refer the reader to (Patnaik et al., 2020).
Fractional variable-order calculus
Definition 2.1 The Riemann-Liouville fractional variable-order integral is defined as follows
$$ {}_{0}I_{t}^{\alpha(t)}f(t)=\frac{1}{\Gamma(\alpha(t))}\int_{0}^{t}(t-s)^{\alpha(t)-1}f(s)\,\mathrm{d}s, $$
where $\alpha(t)>0$.
Definition 2.2 The Caputo fractional variable-order derivative is characterized by the following definition
$$ {}_{0}^{C}D_{t}^{\alpha(t)}f(t)=\frac{1}{\Gamma(1-\alpha(t))}\int_{0}^{t}(t-s)^{-\alpha(t)}f'(s)\,\mathrm{d}s, $$
where $0<\alpha(t)\le 1$.
Definition 2.3 The Caputo fractional variable-order partial differential operator of order $\alpha(x,t)$ with the lower limit $0$ is defined as (Zúniga-Aguilar et al., 2019)
$$ \frac{\partial^{\alpha(x,t)}u(x,t)}{\partial t^{\alpha(x,t)}}=\frac{1}{\Gamma(1-\alpha(x,t))}\int_{0}^{t}(t-s)^{-\alpha(x,t)}\,\frac{\partial u(x,s)}{\partial s}\,\mathrm{d}s, \qquad 0<\alpha(x,t)\le 1. \tag{6} $$
When the value of $\alpha(x,t)$ is a fixed constant $\alpha$, the operator reduces to the $\alpha$th-order Caputo fractional derivative. It is feasible to compute the variable-order fractional derivative of polynomial expressions directly; in particular, for $t^{k}$ with $k\ge 1$, the following formula holds (Akgül et al., 2017)
$$ \frac{\partial^{\alpha(x,t)}t^{k}}{\partial t^{\alpha(x,t)}}=\frac{\Gamma(k+1)}{\Gamma(k+1-\alpha(x,t))}\,t^{\,k-\alpha(x,t)}. \tag{7} $$
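As a quick illustration of formula (7), the following minimal Python sketch evaluates the variable-order fractional derivative of a monomial pointwise. The particular order function used in the example, as well as all names, are illustrative assumptions rather than choices taken from the paper.

```python
# Minimal sketch of formula (7): the variable-order Caputo derivative of t**k,
# evaluated pointwise for a given local order alpha = alpha(x, t).
from math import gamma

def caputo_vo_monomial(k: int, t: float, alpha: float) -> float:
    """Return Gamma(k+1)/Gamma(k+1-alpha) * t**(k-alpha), i.e. formula (7)."""
    if t == 0.0:
        return 0.0
    return gamma(k + 1) / gamma(k + 1 - alpha) * t ** (k - alpha)

# Example with an assumed order function alpha(t) = 0.5 + 0.3*t and k = 2.
t = 0.4
alpha = 0.5 + 0.3 * t
print(caputo_vo_monomial(2, t, alpha))   # derivative of t**2 at t = 0.4
```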
Here, the foundation of our discussion includes an overview of Legendre polynomials. The Legendre basis functions, applicable within the range $[-1,1]$, can be derived from Legendre's differential equation (Patra et al., 2008)
$$ \frac{\mathrm{d}}{\mathrm{d}x}\Big[(1-x^{2})\frac{\mathrm{d}L_{n}(x)}{\mathrm{d}x}\Big]+n(n+1)L_{n}(x)=0 . $$
They satisfy the following recurrence relation
$$ (n+1)L_{n+1}(x)=(2n+1)\,x\,L_{n}(x)-n\,L_{n-1}(x), \qquad L_{0}(x)=1,\ L_{1}(x)=x . $$
By applying the linear transformation $x=\tfrac{2t}{T}-1$, we can create the translated (shifted) Legendre polynomials $L_{n}^{*}(t)$ of degree $n$ spanning the range $[0,T]$; they can be derived using the subsequent recurrence relation
$$ (n+1)L_{n+1}^{*}(t)=(2n+1)\Big(\frac{2t}{T}-1\Big)L_{n}^{*}(t)-n\,L_{n-1}^{*}(t), $$
where $L_{0}^{*}(t)=1$ and $L_{1}^{*}(t)=\tfrac{2t}{T}-1$. The shifted Legendre polynomials are orthogonal on $[0,T]$ with respect to the weight function $w(t)=1$:
$$ \int_{0}^{T}L_{m}^{*}(t)\,L_{n}^{*}(t)\,\mathrm{d}t=\frac{T}{2n+1}\,\delta_{mn}. \tag{8} $$
The shifted Legendre polynomial of degree $n$, denoted as $L_{n}^{*}(t)$, over the interval $[0,T]$ can also be characterized by its analytical representation
$$ L_{n}^{*}(t)=\sum_{k=0}^{n}(-1)^{n+k}\,\frac{(n+k)!}{(n-k)!\,(k!)^{2}}\,\frac{t^{k}}{T^{k}}. \tag{9} $$
Next, we move forward to introduce the concept of shifted Legendre-Gauss nodes and the associated quadrature weights.
Let $x_{i}$, $i=1,\dots,N$, represent the conventional Legendre-Gauss (LG) nodes over the interval $[-1,1]$, identified as the roots of $L_{N}(x)$. One pivotal characteristic of the LG points can be described as
$$ \int_{-1}^{1}p(x)\,\mathrm{d}x=\sum_{i=1}^{N}w_{i}\,p(x_{i}). \tag{10} $$
This definite integral is exact for polynomials $p$ of degree at most $2N-1$, and the weights for numerical integration can be computed as (Canuto & Hussaini, 2012)
$$ w_{i}=\frac{2}{\big(1-x_{i}^{2}\big)\big[L_{N}'(x_{i})\big]^{2}}, \qquad i=1,\dots,N . $$
Let the Legendre-Gauss nodes, transformed to the interval $[0,T]$, be represented by $t_{i}$ for $i=1,\dots,N$. One can derive that $t_{i}=\tfrac{T}{2}(x_{i}+1)$, and the associated quadrature weights can be expressed as $\hat{w}_{i}=\tfrac{T}{2}\,w_{i}$. Introducing two additional distinct points $t_{0}=0$ and $t_{N+1}=T$, the Lagrange polynomials can be subsequently defined as
$$ \ell_{j}(t)=\prod_{\substack{k=0\\ k\neq j}}^{N+1}\frac{t-t_{k}}{t_{j}-t_{k}}, \qquad j=0,1,\dots,N+1 . \tag{11} $$
Similarly, consider the shifted Legendre-Gauss nodes over the spatial interval $[0,L]$, represented by $x_{i}^{*}=\tfrac{L}{2}(x_{i}+1)$ for $i=1,\dots,N$. Accordingly, the corresponding Lagrange polynomials are defined over the interval $[0,L]$ as
$$ \ell_{j}^{*}(x)=\prod_{\substack{k=0\\ k\neq j}}^{N+1}\frac{x-x_{k}^{*}}{x_{j}^{*}-x_{k}^{*}}, \tag{12} $$
where $x_{0}^{*}=0$ and $x_{N+1}^{*}=L$.
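A short Python sketch of the ingredients above may help: it computes shifted Legendre-Gauss nodes and quadrature weights on an assumed interval $[0,T]$ with NumPy's leggauss routine and evaluates the Lagrange basis built on those nodes (the two extra endpoint nodes are omitted for brevity). All helper names are hypothetical.

```python
# Sketch: shifted Legendre-Gauss nodes/weights on [0, T] and the Lagrange basis
# polynomials associated with those nodes.
import numpy as np

def shifted_lg_nodes(N: int, T: float):
    """Legendre-Gauss nodes and weights on [-1, 1], mapped linearly to [0, T]."""
    x, w = np.polynomial.legendre.leggauss(N)   # roots of L_N and LG weights
    return 0.5 * T * (x + 1.0), 0.5 * T * w

def lagrange_basis(nodes: np.ndarray, j: int, t: float) -> float:
    """Evaluate the j-th Lagrange polynomial built on `nodes` at the point t."""
    others = np.delete(nodes, j)
    return float(np.prod((t - others) / (nodes[j] - others)))

T = 1.0
nodes, weights = shifted_lg_nodes(5, T)
# Quadrature check: the integral of t**3 over [0, 1] is 0.25.
print(np.sum(weights * nodes**3))
# Cardinal property: ell_j(t_k) equals 1 for j = k and 0 otherwise.
print(lagrange_basis(nodes, 2, nodes[2]), lagrange_basis(nodes, 2, nodes[0]))
```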
DESIGN OF NEURAL NETWORK MODEL
The exploration of mathematical models of biological nervous systems has paved the way for the development of artificial neural networks (ANNs) (Haykin, 1998). Figure 1 shows the simplified model of the neuron that inspires the mathematical model. Modeled on the workings of the human brain, ANNs aim to create machines capable of learning and adapting, and research in this direction has driven advances in machine learning, where algorithms strive to replicate the cognitive processes of human intelligence. The computational units that mimic biological neurons are termed nodes or artificial neurons. Every artificial neuron has multiple inputs but only one output, which links it to numerous other artificial neurons. ANNs are widely utilized in approximation theory: drawing on the Kolmogorov existence theorem, a suitable neural network can approximate any continuous function of several variables (Chen et al., 1995). They have therefore emerged as a powerful instrument in both mathematics and engineering, offering the potential to approximate solutions to a wide range of problems. Neurons are commonly arranged in several layers. Each neuron produces its output by forming the weighted sum of its input values together with a bias; this sum, often termed the net input, is transformed by an activation function to yield the final output (as shown in Fig. 1). There may be zero or more hidden layers sandwiched between the input and output layers. If the hidden layers are excluded, we obtain a distinct category of neural networks known as FLNNs. Subsequently, we focus on using an FLNN to solve (1)-(3).
Fig. 1. Model of the biological neuron.
Architecture of Lagrange functional link-based neural network
The FLNNs, often referred to as single-layered neural networks, function without hidden layers. In such configurations, the input data undergoes nonlinear functional expansion for enhancement. This strategy leads to a decrease in computational complexity and offers better approximation capabilities relative to back-propagation techniques (Ghazali et al., 2009). It is important to note that the FLNN models can be trained faster than MLP (Multi-Layer Perceptron) while maintaining computational efficiency.
Moreover, the efficiency of the LFLNN adds to its appeal. With its single-layer design, the model streamlines the learning process, making it faster and more computationally efficient than more complex neural network architectures. This makes the LFLNN an attractive choice for applications where computational time is a crucial factor.
Fig. 2. Schematic representation of the traditional ANN (a) and the LFLNN design (b).
Formulation of LFLNN for VOFTVDPDEs
The architecture of the LFLNN features a block dedicated to the Lagrange polynomials, connecting a single input node directly to an output node (see Fig. 2). Denoting the functionally expanded input vector by $\boldsymbol{\phi}=(\phi_{1},\dots,\phi_{m})$ and the weights by $\mathbf{w}=(w_{1},\dots,w_{m})$, the net input of the output node can be written as
$$ \mathrm{net}=\sum_{j=1}^{m}w_{j}\,\phi_{j}. $$
The output function is formulated by introducing this net input to an activation function. In (Kheyrinataj & Nazemi, 2020), the authors employed an exponential-type activation function in their FLNN model; this choice has also garnered attention in numerous applications (Kheyrinataj & Nazemi, 2019). The sigmoid function, defined as $\sigma(z)=1/(1+e^{-z})$, is another frequently utilized activation function. However, for our current problem these functions might not be optimal due to the computational burden introduced by the exponential terms. Through numerical demonstrations, we will illustrate that the linear activation function $f(z)=z$ serves as an effective choice for the problem defined by (1)-(3).
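The single-layer structure just described can be sketched in a few lines of Python: the scalar input is expanded through the Lagrange basis, and the weighted sum (the net input) is passed through the linear activation $f(z)=z$. The expansion nodes, weights, and names below are illustrative assumptions, not the paper's settings.

```python
# Sketch of the LFLNN forward pass: Lagrange functional expansion of the input,
# weighted sum, and a linear activation.
import numpy as np

def lagrange_expansion(t: float, nodes: np.ndarray) -> np.ndarray:
    """Expand a scalar input through all Lagrange polynomials on `nodes`."""
    phi = np.empty(len(nodes))
    for j in range(len(nodes)):
        others = np.delete(nodes, j)
        phi[j] = np.prod((t - others) / (nodes[j] - others))
    return phi

def flnn_output(t: float, weights: np.ndarray, nodes: np.ndarray) -> float:
    net = weights @ lagrange_expansion(t, nodes)   # net input of the output node
    return net                                     # linear activation: f(net) = net

nodes = np.linspace(0.1, 0.9, 5)   # assumed expansion nodes
weights = np.ones(5)               # tunable parameters (all ones here)
# With unit weights the network reproduces the constant 1, since the Lagrange
# basis functions sum to one at every point.
print(flnn_output(0.37, weights, nodes))
```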
Let $u_{N}(x,t)$ represent the neural-based solution to equations (1)-(3). Hence, it can be expressed as
(13)
The initial conditions are met by incorporating the first term, while the second term represents a single-layer LFLNN with tunable parameters. Requiring the neural solution to satisfy Eq. (1) allows us to write
(14)
Based on the definition of the variable-order fractional derivative, we obtain
(15)
We also notice that
(16)
To solve Eq. (14), we initially define the subsequent squared error function
(17)
To minimize Eq. (17), we present an unconstrained optimization approach using a uniform discretization as described below
(19)
where $x_{i}$ and $t_{j}$ are shifted Legendre-Gauss nodes. For approximating the variable-order fractional partial derivative at the collocation points, we utilize the Gaussian quadrature technique as described in
(20)
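As a hedged stand-in for the quadrature rule referred to in (20), the sketch below approximates the variable-order Caputo derivative of a known function by applying Gauss-Legendre quadrature to the integral in Definitions 2.2-2.3 and checks the result against the closed form (7) for a monomial. The order function and the test function are assumed examples, and plain Gauss-Legendre quadrature is only moderately accurate here because of the weak endpoint singularity.

```python
# Sketch: approximate 1/Gamma(1-alpha) * int_0^t (t-s)**(-alpha) * u'(s) ds
# with Gauss-Legendre quadrature mapped to [0, t].
import numpy as np
from math import gamma

def caputo_vo_quadrature(u_prime, t: float, alpha: float, n_nodes: int = 30) -> float:
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    s = 0.5 * t * (x + 1.0)                    # quadrature nodes on [0, t]
    ws = 0.5 * t * w                           # scaled weights
    integrand = (t - s) ** (-alpha) * u_prime(s)
    return float(np.sum(ws * integrand)) / gamma(1.0 - alpha)

# Check against the closed form (7) for u(t) = t**2 with alpha(t) = 0.5 + 0.3*t.
t = 0.6
alpha = 0.5 + 0.3 * t
approx = caputo_vo_quadrature(lambda s: 2.0 * s, t, alpha)
exact = gamma(3) / gamma(3 - alpha) * t ** (2 - alpha)
print(approx, exact)   # close but not equal: the (t-s)**(-alpha) singularity limits accuracy
```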
Through this approach, the initial problem (1)-(3) is transformed into an unconstrained optimization problem. This can be solved using established mathematical optimization methods or heuristic strategies such as particle swarm optimization and genetic algorithms (Dabiri et al., 2018). In our study, we employ the modified Newton method to adjust the weights of the LFLNN during the learning phase.
Remark 3.1 When utilizing the linear activation function in conjunction with a polynomial initial condition, we can determine the variable-order fractional derivative directly from Eq. (7), eliminating the need to approximate the variable-order derivative.
Training process and convergence analysis
The initial step in neural network training involves establishing a starting weight vector, after which a sequence of weight vectors is produced. The training process is concluded upon the fulfillment of a predetermined condition, often referred to as the termination criterion. For our learning and weight-updating processes, we employ the backpropagation algorithm. Though various iterative approaches such as gradient descent, Newton's method, and the conjugate gradient method are available, we have opted for the modified Newton-Raphson technique to train the neural network (Susanto & Karjanto, 2009). It is pertinent to note that the applicability of the modified Newton-Raphson method is due to the continuous differentiability of the error function. In order to solve the optimization problem presented in (18), we let $\mathbf{g}(\mathbf{w})=\nabla E(\mathbf{w})$ denote the gradient of the cost function $E$ and $\mathbf{H}(\mathbf{w})$ the Hessian matrix associated with $E$. The weights can then be obtained as follows
$$ \mathbf{w}^{(k+1)}=\mathbf{w}^{(k)}-\big[\mathbf{H}(\mathbf{w}^{(k)})\big]^{-1}\,\mathbf{g}(\mathbf{w}^{(k)}), \tag{21} $$
where $k$ represents the iteration step employed to update the weights. Subsequently, we explain the learning process in the following steps.
• Start by setting the initial values of the weight vector $\mathbf{w}^{(0)}$ and the input vector, and choose $\varepsilon$ as the threshold for the error tolerance.
• Calculate the output of the LFLNN model and determine the weights through the backpropagation step (21).
• If $E(\mathbf{w}^{(k)})<\varepsilon$, proceed to the subsequent step; if not, evaluate the error function and initiate a new training iteration.
• After completing the desired learning process and achieving the intended results, save the final network parameters (see the sketch following these steps).
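The loop below is a minimal sketch of this training procedure: a toy quadratic collocation cost, standing in for the error function of (17)-(19) rather than reproducing it, is minimized with the Newton-Raphson weight update (21) until it falls below the tolerance. All data, dimensions, and names are illustrative.

```python
# Sketch of the learning loop: modified Newton-Raphson updates of the LFLNN
# weights driven by a (toy) sum-of-squared-residuals cost.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 4))      # Lagrange-expanded inputs at 20 collocation points
w_true = np.array([0.5, -1.0, 2.0, 0.3])
y = B @ w_true                        # synthetic right-hand-side values to fit

def cost(w):                          # E(w): sum of squared collocation residuals
    r = B @ w - y
    return float(r @ r)

def grad(w):                          # gradient of E
    return 2.0 * B.T @ (B @ w - y)

def hess(w):                          # Hessian of E (constant for this quadratic cost)
    return 2.0 * B.T @ B

w = np.zeros(4)                       # initial weight vector
eps = 1e-12                           # error tolerance of the stopping criterion
for step in range(100):
    if cost(w) < eps:                 # termination criterion
        break
    w = w - np.linalg.solve(hess(w), grad(w))   # Newton-Raphson update, cf. (21)

print(step, cost(w))                  # converges in one step for this quadratic cost
```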
The flowchart for the learning process is depicted in Fig. 3. We show that this learning method reaches the best solution for the given unconstrained optimization problem as expressed by (18).
Theorem 1. Given a sequence $\{\mathbf{w}^{(k)}\}$ as described by Eq. (21), and a cost function $E$ that is continuously differentiable in the vicinity of the optimal solution $\mathbf{w}^{*}$, where $\nabla E(\mathbf{w}^{*})=0$, it follows that $\lim_{k\to\infty}\mathbf{w}^{(k)}=\mathbf{w}^{*}$.
Proof. See (Golpour Lasaki, Ebrahimi, & Ilie, 2023).
Fig. 3. Diagram illustrating the LFLNN structure’s learning algorithm.
Theorem 2. We assume that $u_{N}(x,t)$ is an approximate solution to equations (1)-(3), and that for the derivative operators we have
(22)
(23)
where and
are positive real numbers. For
, we have
, where
, and as a result, we obtain
(24)
where
(25)
(26)
Proof. Considering the exact solution $u(x,t)$ and the approximate solution $u_{N}(x,t)$, and substituting each into the differential equation (1), we have
(27)
(28)
By subtracting equation (28) from (27), we get
(29)
Now, by considering the Lipschitz condition for the function $f$, we have
(30)
where are positive functions over the domain of the equation. Assuming
and
(31)
(32)
Equation (30) can be written as
(33)
Also, considering the assumption of the theorem, we have
(34)
where
Now, considering equations (27) to (34), we have
(35)
Then
,
(36)
Considering that , we have
where . Now, if
, we have
, and consequently
NUMERICAL EXAMPLES
In this section, we assess the effectiveness of the introduced neural network model using several numerical examples. The simulations were executed on an x64-based PC equipped with an Intel(R) Core i7 CPU clocked at 3.10 GHz and 4.0 GB of RAM, using MAPLE 18 with precision up to 25 decimal digits. For the learning phase, the error tolerance $\varepsilon$ was used as the stopping criterion. The performance of the numerical method is evaluated by computing the absolute error
$$ e(x,t)=\big|u(x,t)-u_{N}(x,t)\big| . $$
Example 1. For the first example, we examine the following VOFTVDPDEs
(37)
(38)
(39)
where the data are chosen so that the exact solution of the problem is known in closed form; the solution is then approximated by the proposed LFLNN. This problem has also been solved, with a constant fractional order, in (Dehestani et al., 2019). In Fig. 4, the numerical solution obtained with the suggested method is plotted, and Fig. 5 shows the corresponding absolute error function. Table 1 compares the results of the proposed method with those of the method described in (Dehestani et al., 2019); the numerical results indicate that the proposed method exhibits superior performance.
Table 1: A comparative analysis of the absolute error for the suggested method and the method of (Dehestani et al., 2019) in Example 1.

| Points (x, t) | Suggested method | Method of (Dehestani et al., 2019) |
|---|---|---|
| (0, 0) | 0 | 4.31 |
| (0.2, 0.1) | 0 | 6.12 |
| (0.4, 0.2) | 0 | 1.36 |
| (0.6, 0.3) | 0 | 2.35 |
| (0.8, 0.4) | 0 | 3.62 |
| (1, 0.5) | 0 | 5.21 |
| (1.2, 0.6) | 5.551115 | 7.18 |
| (1.4, 0.7) | 0 | 9.57 |
| (1.6, 0.8) | 1.110223 | 1.24 |
| (1.8, 0.9) | 5.551115 | 1.58 |
| (2, 1) | 7.731641 | 1.97 |
| Points (x, t) |  |  |  |
|---|---|---|---|
| (0, 0) | 0 | 0 | 0 |
| (0.2, 0.1) | 4.446443 × 10^-14 | 0 | 0 |
| (0.4, 0.2) | 1.845191 × 10^-13 | 0 | 0 |
| (0.6, 0.3) | 4.543033 × 10^-13 | 1.110223 × 10^-16 | 1.110223 × 10^-16 |
| (0.8, 0.4) | 8.772982 × 10^-13 | 0 | 0 |
| (1, 0.5) | 1.433076 × 10^-12 | 0 | 0 |
| (1.2, 0.6) | 2.023715 × 10^-12 | 4.440892 × 10^-16 | 4.440892 × 10^-16 |
| (1.4, 0.7) | 2.467360 × 10^-12 | 8.881784 × 10^-16 | 8.881784 × 10^-16 |
| (1.6, 0.8) | 2.498446 × 10^-12 | 1.776357 × 10^-15 | 0 |
| (1.8, 0.9) | 1.788791 × 10^-12 | 8.881784 × 10^-16 | 0 |
| (2, 1) | 1.776357 × 10^-15 | 0 | 0 |