Research Article
A Numerical Method for Solving Fractional Differential
Equations by Using Neural Network
Correspondence should be addressed to Haidong Qu; [email protected] and Xuan Liu; [email protected]
Copyright © 2015 H. Qu and X. Liu. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We present a new method for solving initial value problems for fractional differential equations by using neural networks constructed from cosine basis functions with adjustable parameters. By training the neural networks repeatedly, we obtain the numerical solutions of the fractional differential equations. Moreover, the technique is also applicable to coupled differential equations of fractional order. The computer graphics and numerical solutions show that the proposed method is very effective.
The Riemann-Liouville fractional derivative is defined by

$$
{}^{\mathrm{RL}}D_{0+}^{\alpha}f(x) := \left(\frac{d}{dx}\right)^{n} I_{0+}^{\,n-\alpha}f(x)
= \frac{1}{\Gamma(n-\alpha)}\left(\frac{d}{dx}\right)^{n}\int_{0}^{x}\frac{f(t)\,dt}{(x-t)^{\alpha-n+1}},
\qquad(6)
$$

and the Caputo fractional derivative by

$$
D_{0+}^{\alpha}f(x) := I_{0+}^{\,n-\alpha}\left(\frac{d}{dx}\right)^{n}f(x)
= \frac{1}{\Gamma(n-\alpha)}\int_{0}^{x}\frac{(d/dt)^{n}f(t)\,dt}{(x-t)^{\alpha-n+1}},
$$

where $n=[\alpha]+1$, $x>0$.

Definition 3 (see [22]). The classical Mittag-Leffler function is defined by

$$
\mathrm{E}_{\alpha}(x) := \sum_{k=0}^{\infty}\frac{x^{k}}{\Gamma(\alpha k+1)},\quad (x\in\mathbb{C},\ \alpha>0).
\qquad(7)
$$

For the generalized trigonometric functions $\operatorname{Sin}_{\mu,\beta}$ and $\operatorname{Cos}_{\mu,\beta}$ we have

$$
D_{a+}^{\alpha}(x-a)^{\beta-1}\operatorname{Sin}_{\mu,\beta}\left[\lambda(x-a)^{\mu}\right]
= (x-a)^{\beta-\alpha-1}\operatorname{Sin}_{\mu,\beta-\alpha}\left[\lambda(x-a)^{\mu}\right],
\qquad(11)
$$

$$
D_{a+}^{\alpha}(x-a)^{\beta-1}\operatorname{Cos}_{\mu,\beta}\left[\lambda(x-a)^{\mu}\right]
= (x-a)^{\beta-\alpha-1}\operatorname{Cos}_{\mu,\beta-\alpha}\left[\lambda(x-a)^{\mu}\right].
\qquad(12)
$$

Proof. The beta function is defined by $\tilde{\beta}(p,q) := \int_{0}^{1}x^{p-1}(1-x)^{q-1}\,dx$, and we have the following equation:

$$
\tilde{\beta}(p,q)=\frac{\Gamma(p)\,\Gamma(q)}{\Gamma(p+q)}.
\qquad(13)
$$

Then, according to the definition of the Caputo fractional derivative, we have
$$
\begin{aligned}
&D_{a+}^{\alpha}(x-a)^{\beta-1}\operatorname{Sin}_{\mu,\beta}\left[\lambda(x-a)^{\mu}\right]\\
&\quad=\frac{(x-a)^{\beta-\alpha-1}}{\Gamma(n-\alpha)}\int_{0+}^{1}\frac{(d/dt)^{n}\,t^{\beta-1}\sum_{k=1}^{\infty}\left((-1)^{k+1}\left[\lambda t^{\mu}(x-a)^{\mu}\right]^{2k-1}/\Gamma(\mu(2k-1)+\beta)\right)}{(1-t)^{\alpha-n+1}}\,dt\\
&\quad=\frac{(x-a)^{\beta-\alpha-1}}{\Gamma(n-\alpha)}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\left[\lambda(x-a)^{\mu}\right]^{2k-1}}{\Gamma(\mu(2k-1)+\beta)}\int_{0+}^{1}\frac{(d/dt)^{n}\,t^{\beta-1+\mu(2k-1)}}{(1-t)^{\alpha-n+1}}\,dt\\
&\quad=\frac{(x-a)^{\beta-\alpha-1}}{\Gamma(n-\alpha)}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\left[\lambda(x-a)^{\mu}\right]^{2k-1}}{\Gamma(\mu(2k-1)+\beta)}\prod_{i=1}^{n}\left[\beta-i+\mu(2k-1)\right]\int_{0+}^{1}\frac{t^{\beta-1+\mu(2k-1)-n}}{(1-t)^{\alpha-n+1}}\,dt\\
&\quad=\frac{(x-a)^{\beta-\alpha-1}}{\Gamma(n-\alpha)}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\left[\lambda(x-a)^{\mu}\right]^{2k-1}}{\Gamma(\mu(2k-1)+\beta-n)}\,\tilde{\beta}\left(\beta+\mu(2k-1)-n,\;n-\alpha\right)\\
&\quad=\frac{(x-a)^{\beta-\alpha-1}}{\Gamma(n-\alpha)}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\left[\lambda(x-a)^{\mu}\right]^{2k-1}}{\Gamma(\mu(2k-1)+\beta-n)}\cdot\frac{\Gamma(\beta+\mu(2k-1)-n)\,\Gamma(n-\alpha)}{\Gamma(\beta+\mu(2k-1)-\alpha)}\\
&\quad=(x-a)^{\beta-\alpha-1}\sum_{k=1}^{\infty}\frac{(-1)^{k+1}\left[\lambda(x-a)^{\mu}\right]^{2k-1}}{\Gamma(\mu(2k-1)+\beta-\alpha)}
=(x-a)^{\beta-\alpha-1}\operatorname{Sin}_{\mu,\beta-\alpha}\left[\lambda(x-a)^{\mu}\right].
\end{aligned}
\qquad(14)
$$
Then (11) holds. Similarly, we obtain (12). In particular, when $\beta=1$, $\mu=1$, we have

$$
\begin{aligned}
D_{a+}^{\alpha}\operatorname{Sin}_{1,1}\left[\lambda(x-a)\right] &= D_{a+}^{\alpha}\sin\left[\lambda(x-a)\right] = (x-a)^{-\alpha}\operatorname{Sin}_{1,1-\alpha}\left[\lambda(x-a)\right],\\
D_{a+}^{\alpha}\operatorname{Cos}_{1,1}\left[\lambda(x-a)\right] &= D_{a+}^{\alpha}\cos\left[\lambda(x-a)\right] = (x-a)^{-\alpha}\operatorname{Cos}_{1,1-\alpha}\left[\lambda(x-a)\right].
\end{aligned}
\qquad(15)
$$

where $N$ represents the number of sample points, $\|\cdot\|_{2}$ is the Euclidean norm, and

$$
\begin{aligned}
e_{j}(k) &= f\left(x_{k},y_{j}(x_{k})\right)-D_{0+}^{\alpha}y_{j}(x_{k})\\
&= f\left(x_{k},y_{j}(x_{k})\right)-x_{k}^{-\alpha}\left(\sum_{i=1}^{M}w_{i,j}\operatorname{Cos}_{1,1-\alpha}(ix_{k})+\left(C-\sum_{i=1}^{M}w_{i,j}\right)\operatorname{Cos}_{1,1-\alpha}\left((M+1)x_{k}\right)\right),
\end{aligned}
\qquad(18)
$$

where $k=1,2,\ldots,N$; then we can adjust the weights $w_{i,j}$ by the following equation:
$$
\begin{aligned}
E_{j} &= f\left(X,y_{j}(X)\right)-D_{0+}^{\alpha}y_{j}(X)\\
&= f\left(X,y_{j}(X)\right)-X^{-\alpha}\left(W_{j}H+\left(C-W_{j}I_{1}\right)\operatorname{Cos}_{1,1-\alpha}\left((M+1)X\right)\right),
\end{aligned}
\qquad(23)
$$

where $H=\left(\operatorname{Cos}_{1,1-\alpha}(X),\operatorname{Cos}_{1,1-\alpha}(2X),\ldots,\operatorname{Cos}_{1,1-\alpha}(MX)\right)^{T}$. Then we have

$$
\frac{\partial E_{j}}{\partial W_{j}}=f_{y}\left(G-I_{1}\cos\left((M+1)X\right)\right)-X^{-\alpha}\left(H-I_{1}\operatorname{Cos}_{1,1-\alpha}\left((M+1)X\right)\right).
\qquad(24)
$$

Noting $J=(1/2)\|E_{j}\|_{2}^{2}$, then we get

$$
\Delta W_{j}=-\mu\frac{\partial J}{\partial W_{j}}=-\mu\frac{\partial J}{\partial E_{j}}\frac{\partial E_{j}}{\partial W_{j}}=-\mu E_{j}\frac{\partial E_{j}}{\partial W_{j}}.
\qquad(25)
$$

Define the Lyapunov function $V_{j}=(1/2)\|E_{j}\|_{2}^{2}$; we have

$$
\Delta V_{j}=\frac{1}{2}\left\|E_{j+1}\right\|_{2}^{2}-\frac{1}{2}\left\|E_{j}\right\|_{2}^{2}.
\qquad(26)
$$

Suppose

$$
E_{j+1}=E_{j}+\Delta E_{j}=E_{j}+\left(\frac{\partial E_{j}}{\partial W_{j}}\right)^{T}\Delta W_{j},
\qquad(27)
$$

and then in accordance with (25) that yields

$$
E_{j+1}=\left(I-\mu\left(\frac{\partial E_{j}}{\partial W_{j}}\right)^{T}\frac{\partial E_{j}}{\partial W_{j}}\right)E_{j},
\qquad(28)
$$

where

$$
I=\begin{pmatrix}1&&\\&\ddots&\\&&1\end{pmatrix}_{N\times N},
\qquad(29)
$$

which yields

$$
\left\|I-\mu\left(\frac{\partial E_{j}}{\partial W_{j}}\right)^{T}\frac{\partial E_{j}}{\partial W_{j}}\right\|_{F}<1.
\qquad(32)
$$

Hence,

$$
1>\left\|I-\mu\left(\frac{\partial E_{j}}{\partial W_{j}}\right)^{T}\frac{\partial E_{j}}{\partial W_{j}}\right\|_{F}
\geq\mu\left\|\frac{\partial E_{j}}{\partial W_{j}}\right\|_{F}^{2}-\left\|I\right\|_{F}
=\mu\left\|\frac{\partial E_{j}}{\partial W_{j}}\right\|_{F}^{2}-N.
\qquad(33)
$$

In accordance with $0<\mu<1$, we obtain

$$
0<\mu<\frac{N+1}{\left\|\partial E_{j}/\partial W_{j}\right\|_{F}^{2}}.
\qquad(34)
$$

By calculating (24), we get

$$
\left\|\frac{\partial E_{j}}{\partial W_{j}}\right\|_{F}^{2}
\leq\left\|\begin{pmatrix}2\left(L_{1}+\delta^{-\alpha}L_{2}\right)&\cdots&2\left(L_{1}+\delta^{-\alpha}L_{2}\right)\\\vdots&&\vdots\\2\left(L_{1}+\delta^{-\alpha}L_{2}\right)&\cdots&2\left(L_{1}+\delta^{-\alpha}L_{2}\right)\end{pmatrix}_{M\times N}\right\|_{F}^{2}
=4MN\left(L_{1}+\delta^{-\alpha}L_{2}\right)^{2}.
\qquad(35)
$$

Finally, we have

$$
0<\mu<\frac{N+1}{4MN\left(L_{1}+\delta^{-\alpha}L_{2}\right)^{2}}.
\qquad(36)
$$
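To get a feel for the size of the admissible learning rate, the following short Python sketch evaluates the bound (36) numerically. The values of $L_1$, $L_2$, and $\delta$ are illustrative assumptions made here (for a linear right-hand side such as the one in Example 1 below, $|f_y|=1$, so $L_1=1$; $L_2$ stands for a bound on $|\operatorname{Cos}_{1,1-\alpha}|$ over the sample interval); only $M$, $N$, and $\alpha$ match the settings used later in the examples.

```python
# Evaluate the learning-rate bound (36): 0 < mu < (N + 1) / (4*M*N*(L1 + delta**(-alpha)*L2)**2)
M, N, alpha, delta = 7, 10, 0.5, 0.1   # network size, sample points, order, sample-point cutoff
L1, L2 = 1.0, 1.2                      # hypothetical bounds on |f_y| and |Cos_{1,1-alpha}(x)|
mu_max = (N + 1) / (4 * M * N * (L1 + delta ** (-alpha) * L2) ** 2)
print(mu_max)                          # about 0.0017 with these choices, same order as mu = 0.001
```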
Table 1: Weights (×10⁻⁴) obtained along with the solution of Examples 1, 2, and 3.
Figure 1: The learning curve for Example 1.
Figure 2: The inspection curve for Example 1.
3.3. Examples

3.3.1. Example 1. We first consider the following linear fractional differential equation:

$$
D_{0+}^{\alpha}y(x)=x^{2}+\frac{2}{\Gamma(3-\alpha)}x^{2-\alpha}-y(x),
\qquad(37)
$$

with condition $y(0)=0$. The exact solution is $y(x)=x^{2}$. This equation can also be solved by the following methods: the Genetic Algorithm (GA) [21], the classical Grünwald-Letnikov numerical technique (GL) [23], and the Particle Swarm Optimization (PSO) algorithm [23]. We set the parameters $\mu=0.001$, $M=7$, and $N=10$ and train the neural network 4500 times; the weights of the network for Example 1 are given in Table 1. Figure 1 shows that the sample points lie on the exact solution curve after training is completed. We then check whether the other points also match the exact solution well (see Figure 2). From Figure 3 we see that the error values decrease rapidly. Tables 2(a) and 2(b) show the numerical solutions and accuracy for Example 1 obtained by the different methods. In this paper, all numerical experiments are carried out on a Lenovo T400 (Intel Core 2 Duo CPU P8700, 2.53 GHz) with Matlab version R2010b. The neural network with cosine basis functions takes about 850 s, whereas the other algorithms mentioned above need about 2,240 s.

3.3.2. Example 2. We next consider the following linear fractional differential equation:

$$
D_{0+}^{\alpha}y(x)=\cos(x)+x^{-\alpha}\operatorname{Cos}_{1,1-\alpha}(x)-y(x),
\qquad(38)
$$

with condition $y(0)=1$. The exact solution is $y(x)=\cos(x)$. We set the parameters $\mu=0.001$, $M=7$, and $N=10$ and train the neural network 1000 times; the weights of the network for Example 2 are given in Table 1. Figures 4, 5, and 6 show that the neural network is still applicable when $C\neq 0$. Table 3 shows the exact solution, approximate solution, and accuracy for Example 2.
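As an illustration of how the training described above can be reproduced, here is a minimal Python sketch for Example 1 (it is not the authors' Matlab code). The series used for $\operatorname{Cos}_{1,1-\alpha}$ is inferred by analogy with the $\operatorname{Sin}_{\mu,\beta}$ series appearing in the proof of (14), the trial solution and residual follow (18), and the gradient step follows (25); all identifiers are our own. Example 2 is handled by the same loop after replacing the forcing term, setting $C=1$, and adjusting the number of training passes.

```python
import numpy as np
from scipy.special import gamma

def cos_mu_beta(z, mu, beta, terms=60):
    # Truncated series for Cos_{mu,beta}(z), inferred by analogy with the Sin_{mu,beta}
    # series in the proof of (14): sum_{k>=0} (-1)^k z^(2k) / Gamma(2*mu*k + beta).
    k = np.arange(terms)
    return np.sum((-1.0) ** k * np.power.outer(z, 2 * k) / gamma(2 * mu * k + beta), axis=-1)

alpha, lr, M, N, C = 0.5, 0.001, 7, 10, 0.0              # Example 1 settings, y(0) = C = 0
x = np.linspace(0.1, 1.0, N)                              # sample points
g = x ** 2 + 2.0 / gamma(3 - alpha) * x ** (2 - alpha)    # so that f(x, y) = g(x) - y, cf. (37)

freqs = np.arange(1, M + 1)
cosB, cosE = np.cos(np.outer(x, freqs)), np.cos((M + 1) * x)       # cos(i x_k), cos((M+1) x_k)
CosB = cos_mu_beta(np.outer(x, freqs), 1.0, 1 - alpha)             # Cos_{1,1-a}(i x_k)
CosE = cos_mu_beta((M + 1) * x, 1.0, 1 - alpha)                    # Cos_{1,1-a}((M+1) x_k)

w = np.zeros(M)                                           # adjustable weights
for _ in range(4500):                                     # train the network repeatedly
    y = cosB @ w + (C - w.sum()) * cosE                              # trial solution, y(0) = C
    Dy = x ** (-alpha) * (CosB @ w + (C - w.sum()) * CosE)           # fractional derivative, cf. (18)
    e = (g - y) - Dy                                                 # residual e_j(k)
    de_dw = -(cosB - cosE[:, None]) - x[:, None] ** (-alpha) * (CosB - CosE[:, None])
    w -= lr * de_dw.T @ e                                            # gradient step, cf. (25)

y = cosB @ w + (C - w.sum()) * cosE
print(np.max(np.abs(y - x ** 2)))                         # deviation from the exact solution y = x^2
```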
Figure 3: The error curve for Example 1 (error value versus training times).
Figure 4: The learning curve for Example 2 (exact solution and sample points).
Figure 6: The error curve for Example 2.
Table 2: (a) Comparison of results for the solution of Example 1 for α = 0.5. (b) Comparison of results for the solution of Example 1 for α = 0.75.
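For comparison, the Grünwald-Letnikov technique mentioned for Example 1 can be sketched in a few lines. This is a standard GL scheme written here for illustration, not the implementation of [23]; the step size h is an arbitrary choice.

```python
import numpy as np
from scipy.special import gamma

def gl_solve_example1(alpha=0.5, h=1e-3, T=1.0):
    # Gruenwald-Letnikov scheme for D^alpha y = x^2 + 2/Gamma(3-alpha) x^(2-alpha) - y, y(0) = 0.
    n = int(round(T / h))
    x = h * np.arange(n + 1)
    g = x ** 2 + 2.0 / gamma(3 - alpha) * x ** (2 - alpha)   # forcing term
    c = np.empty(n + 1)                                      # GL coefficients (-1)^j * binom(alpha, j)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = (1.0 - (alpha + 1.0) / j) * c[j - 1]
    y = np.zeros(n + 1)
    for k in range(1, n + 1):
        hist = np.dot(c[1:k + 1], y[k - 1::-1])              # memory term sum_{j=1}^{k} c_j y_{k-j}
        y[k] = (g[k] - h ** (-alpha) * hist) / (1.0 + h ** (-alpha))   # implicit in the -y term
    return x, y

x, y = gl_solve_example1()
print(np.max(np.abs(y - x ** 2)))                            # deviation from the exact solution y = x^2
```

The same routine applies for α = 0.75 by changing the alpha argument.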
Table 3: Exact solution, approximate solution, and accuracy for Example 2.
Table 4: Exact solution, approximate solution, and accuracy for Example 3.
$$
\begin{aligned}
e_{j}^{y}(k) &= f\left(x_{k},y_{j}(x_{k}),z_{j}(x_{k})\right)-D_{0+}^{\alpha}y_{j}(x_{k})\\
&= f\left(x_{k},y_{j}(x_{k}),z_{j}(x_{k})\right)-x_{k}^{-\alpha}\left(\sum_{i=1}^{M}w_{i,j}\operatorname{Cos}_{1,1-\alpha}(ix_{k})+\left(C_{1}-\sum_{i=1}^{M}w_{i,j}\right)\operatorname{Cos}_{1,1-\alpha}\left((M+1)x_{k}\right)\right),\\
e_{j}^{z}(k) &= g\left(x_{k},y_{j}(x_{k}),z_{j}(x_{k})\right)-D_{0+}^{\alpha}z_{j}(x_{k})\\
&= g\left(x_{k},y_{j}(x_{k}),z_{j}(x_{k})\right)-x_{k}^{-\alpha}\left(\sum_{i=1}^{M}q_{i,j}\operatorname{Cos}_{1,1-\alpha}(ix_{k})+\left(C_{2}-\sum_{i=1}^{M}q_{i,j}\right)\operatorname{Cos}_{1,1-\alpha}\left((M+1)x_{k}\right)\right).
\end{aligned}
\qquad(42)
$$

Then we adjust the weights $w_{i,j}$ and $q_{i,j}$ by the following two equations:

$$
\begin{aligned}
w_{i,j+1} &= w_{i,j}+\Delta w_{i,j},\\
q_{i,j+1} &= q_{i,j}+\Delta q_{i,j},
\end{aligned}
\qquad(43)
$$

where

$$
\begin{aligned}
\Delta w_{i,j} &= -\mu\frac{\partial J^{y+z}}{\partial w_{i,j}}
= -\mu\left(\sum_{k=1}^{N}\frac{\partial J^{y+z}}{\partial e_{j}^{y}(k)}\frac{\partial e_{j}^{y}(k)}{\partial w_{i,j}}
+\sum_{k=1}^{N}\frac{\partial J^{y+z}}{\partial e_{j}^{z}(k)}\frac{\partial e_{j}^{z}(k)}{\partial w_{i,j}}\right)\\
&= -\mu\sum_{k=1}^{N}e_{j}^{y}(k)\left(f_{y}\left(x_{k},y_{j}(x_{k}),z_{j}(x_{k})\right)\cdot\left(\cos(x_{k})-\cos\left((M+1)x_{k}\right)\right)
-x_{k}^{-\alpha}\left(\operatorname{Cos}_{1,1-\alpha}(x_{k})-\operatorname{Cos}_{1,1-\alpha}\left((M+1)x_{k}\right)\right)\right)\\
&\quad-\mu\sum_{k=1}^{N}e_{j}^{y}(k)\,f_{z}\left(x_{k},y_{j}(x_{k}),z_{j}(x_{k})\right)\left(\cos(x_{k})-\cos\left((M+1)x_{k}\right)\right).
\end{aligned}
\qquad(44)
$$

3.5. Convergence of the Algorithm

Theorem B. Let $\mu$ represent the learning rate, let $N$ represent the number of sample points, and let $M$ represent the number of neurons: $X=(x_{1},x_{2},\ldots,x_{N})$, $\delta<x_{i}<1$, $1\leq i\leq N$. Suppose $|f_{y}|\leq L_{1}^{y}$, $|f_{z}|\leq L_{1}^{z}$, $|g_{y}|\leq L_{3}^{y}$, $|g_{z}|\leq L_{3}^{z}$, and $|\operatorname{Cos}_{1,1-\alpha}(x)|\leq L_{2}$ on the interval $(\delta,1)$ for $0<\delta<1$. Then the neural network is convergent on the interval $(\delta,1)$ when

$$
0<\mu<\frac{2N+1}{4MN\left(\left(L_{1}^{y}+L_{3}^{y}+\delta^{-\alpha}L_{2}\right)^{2}+\left(L_{1}^{z}+L_{3}^{z}+\delta^{-\alpha}L_{2}\right)^{2}\right)}.
\qquad(45)
$$

Proof. Let $W_{j}^{y+z}=(w_{1j},w_{2j},\ldots,w_{Mj},q_{1j},q_{2j},\ldots,q_{Mj})$, and then we denote $y_{j}(X)$ and $z_{j}(X)$ by

$$
y_{j}(X)=W_{j}^{y+z}G^{1,0}+\left(C_{1}-W_{j}^{y+z}I_{1}^{1,0}\right)\cos\left((M+1)X\right),
\qquad(46)
$$

$$
z_{j}(X)=W_{j}^{y+z}G^{0,1}+\left(C_{2}-W_{j}^{y+z}I_{1}^{0,1}\right)\cos\left((M+1)X\right).
\qquad(47)
$$

Then

$$
\frac{\partial E_{j}}{\partial W_{j}^{y+z}}
= f_{y}\left(G^{1,0}-I_{1}^{1,0}\cos\left((M+1)X\right)\right)
+ f_{z}\left(G^{0,1}-I_{1}^{0,1}\cos\left((M+1)X\right)\right)
- X^{-\alpha}\left(H^{1,0}-I_{1}^{1,0}\operatorname{Cos}_{1,1-\alpha}\left((M+1)X\right)\right)
+ g_{z}\left(G^{0,1}-I_{1}^{0,1}\cos\left((M+1)X\right)\right)
+ \cdots
$$

This completes the proof.

3.6.1. Example 4. The exact solution is $y(x)=x^{2}$ and $z(x)=x^{3}$. We set the parameters $\alpha=0.9$, $\mu=0.001$, $M=7$, and $N=10$ and train the neural network 2000 times; the weights of the network for Example 4 are given in Table 5. Figures 7 and 8 show that the sample points and checkpoints are in good agreement with the exact solutions for the problem. Figure 9 shows that the error of the numerical solutions decreases rapidly within the first 50 training iterations. Table 6 shows the exact solution, approximate solution, and accuracy for Example 4.
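The single-equation sketch given after Example 2 extends to coupled systems along the lines of (42) and (43): keep two weight vectors, form both residuals, and descend on the joint squared error. The following self-contained Python sketch instantiates this for the nonlinear system of Example 5 below (see (55) and (56)); the joint gradient is our own chain-rule derivation in the spirit of (44), the partial-derivative helpers and all identifiers are ours, and the learning rate (quoted in the text) and the number of passes (a guess) may need tuning.

```python
import numpy as np
from scipy.special import gamma

def cos_mu_beta(z, mu, beta, terms=60):
    # Same truncated Cos_{mu,beta} series as in the Example 1 sketch above.
    k = np.arange(terms)
    return np.sum((-1.0) ** k * np.power.outer(z, 2 * k) / gamma(2 * mu * k + beta), axis=-1)

# Example 5 below, cf. (55)-(56): exact y = x^3, z = cos(x); y(0) = C1 = 0, z(0) = C2 = 1.
alpha, lr, M, N, C1, C2 = 0.9, 0.001, 7, 10, 0.0, 1.0
x = np.linspace(0.1, 1.0, N)
xCos = x ** (-alpha) * cos_mu_beta(x, 1.0, 1 - alpha)      # x^(-a) Cos_{1,1-a}(x) term in (55)

f  = lambda y, z: x ** 6 + np.cos(x) + 6.0 / gamma(4 - alpha) * x ** (3 - alpha) - y ** 2 - z
g  = lambda y, z: x ** 3 + np.cos(x) + xCos - y - z
fy = lambda y, z: -2.0 * y                                 # partial derivatives of f and g
fz = lambda y, z: -1.0 + 0.0 * y
gy = lambda y, z: -1.0 + 0.0 * y
gz = lambda y, z: -1.0 + 0.0 * y

freqs = np.arange(1, M + 1)
cosB, cosE = np.cos(np.outer(x, freqs)), np.cos((M + 1) * x)
CosB = cos_mu_beta(np.outer(x, freqs), 1.0, 1 - alpha)
CosE = cos_mu_beta((M + 1) * x, 1.0, 1 - alpha)

trial = lambda c, wts: cosB @ wts + (c - wts.sum()) * cosE                    # y_j or z_j
deriv = lambda c, wts: x ** (-alpha) * (CosB @ wts + (c - wts.sum()) * CosE)  # its D^alpha

dB = cosB - cosE[:, None]                                  # d(trial)/d(weight_i)
dD = x[:, None] ** (-alpha) * (CosB - CosE[:, None])       # d(D^alpha trial)/d(weight_i)

w, q = np.zeros(M), np.zeros(M)
for _ in range(5000):                                      # Example 5 "needs more time to train"
    y, z = trial(C1, w), trial(C2, q)
    ey = f(y, z) - deriv(C1, w)                            # e_j^y(k), cf. (42)
    ez = g(y, z) - deriv(C2, q)                            # e_j^z(k)
    grad_w = dB.T @ (ey * fy(y, z)) - dD.T @ ey + dB.T @ (ez * gy(y, z))   # chain rule, cf. (44)
    grad_q = dB.T @ (ey * fz(y, z)) - dD.T @ ez + dB.T @ (ez * gz(y, z))
    w, q = w - lr * grad_w, q - lr * grad_q                # weight updates (43)

print(np.max(np.abs(trial(C1, w) - x ** 3)), np.max(np.abs(trial(C2, q) - np.cos(x))))
```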
Table 5: Weights obtained along with the solution of Examples 4 and 5.

w3/q3   −0.6414/−0.7451   −0.1512/−0.2292
w4/q4   0.0052/−0.0392    −0.1342/−0.1116
w5/q5   0.0559/0.3705     0.1657/0.3582
w6/q6   0.3998/0.2282     −0.0815/0.0781
w7/q7   −0.4141/−0.4014   0.0533/−0.2259

Figure 8: The inspection curve for Example 4 (y = x² and z = x³).
Figure 7: The learning curve for Example 4 (exact solutions and sample points for y(x) and z(x)).
Figure 9: The error curve for Example 4 (error of y(x) and z(x) versus training times).
3.6.2. Example 5. We next consider the following nonlinear fractional coupled differential equations:

$$
\begin{aligned}
D_{0+}^{\alpha}y(x) &= x^{6}+\cos(x)+\frac{6}{\Gamma(4-\alpha)}x^{3-\alpha}-\left(y(x)\right)^{2}-z(x),\quad 0<x\leq1,\ 0<\alpha\leq1,\\
D_{0+}^{\alpha}z(x) &= x^{3}+\cos(x)+x^{-\alpha}\operatorname{Cos}_{1,1-\alpha}(x)-y(x)-z(x),
\end{aligned}
\qquad(55)
$$

with initial conditions as follows:

$$
y(0)=0,\qquad z(0)=1.
\qquad(56)
$$

The exact solution is $y(x)=x^{3}$ and $z(x)=\cos(x)$. We set the parameters $\alpha=0.9$, $\mu=0.001$, $M=7$, and $N=10$. The numerical solutions in Table 7 show that this network can also be applied to nonlinear fractional coupled differential equations, but we need more time to train the network.

4. Conclusion

In this paper, by using the neural network, we obtained the numerical solutions for single fractional differential equations and for systems of coupled differential equations of fractional order. The computer graphics demonstrate that the numerical results are in good agreement with the exact solutions. In (1), suppose that $f(x,y(x))=A(x)+B(x)y+C(x)y^{2}$; then the problem is transformed into the fractional Riccati equation (Example 3 in this paper). In (3), suppose that $f(x,y(x))=y(x)(r-ay(x)-bz(x))$ and $g(x,z(x))=z(x)(-d+cy(x))$; then the problem is transformed into a fractional predator-prey model.
0.3   0.9553   0.027   0.9625   0.0351   10⁻³   10⁻³
0.4   0.9210   0.064   0.9267   0.0689   10⁻³   10⁻³
0.5   0.8775   0.125   0.8773   0.1220   10⁻⁴   10⁻³
0.6   0.8253   0.216   0.8198   0.2040   10⁻³   10⁻³
0.7   0.7648   0.343   0.7604   0.3276   10⁻³   10⁻²