
AIAA SCITECH 2022 Forum, January 3-7, 2022, San Diego, CA & Virtual. DOI: 10.2514/6.2022-2577

CRJ 700 Aerodynamic Coefficients Identification in Dynamic Stall Conditions using Neural Networks

Yvan Tondji 1, Georges Ghazi 2, Ruxandra Mihaela Botez 3
École de technologie supérieure (ÉTS), Laboratory of Applied Research in Active Controls, Avionics and AeroServoElasticity LARCASE, Montreal, Quebec, Canada, H3C-1K3

This paper presents a methodology to predict aircraft aerodynamic coefficients in both linear and non-linear stall conditions along the hysteresis curve, using Neural Networks. The variations of the lift and drag aerodynamic coefficients were estimated during an aircraft stall maneuver. A Level-D Bombardier CRJ-700 Virtual Research Simulator (VRESIM), designed and manufactured by CAE Inc. and Bombardier, was used to gather flight test data in both linear and non-linear stall phases. According to the Federal Aviation Administration (FAA), Level-D is the highest certification level for the flight dynamics model of an aircraft, which means that its flight dynamics data are very close to real aircraft flight dynamics data. These data were then used to create a database of aerodynamic coefficients for the complete flight envelope of the aircraft. Multilayer Perceptron (MLP) and Recurrent Neural Network (RNN) models were trained to learn the aerodynamic coefficients and their correlation with flight parameters. The choice of the neural network hyperparameters is also explained. Finally, the obtained models are validated by comparing the predicted aerodynamic coefficients with their corresponding experimental data from the Level-D Bombardier CRJ 700 flight simulator. The results showed that both the MLP and RNN models were able to predict the lift and drag aerodynamic coefficients with an average relative error of 2%.

Nomenclature
a_x, a_z = acceleration components along x and z body axes
c_w = mean aerodynamic chord length
C_Ls, C_Ds = lift and drag coefficients
I_yy = moment of inertia about y body axis
m = mass of the aircraft
M = Mach number
o = neural network output vector
q = aircraft pitch rate
S_w = reference wing area
T_x, T_z = thrust force components along x and z body axes
V_t = true airspeed
w = weight matrix in neural network
z_cg, x_cg = z and x axis coordinates of the center of gravity

Greek Notation
α = angle of attack
δ = control surface deflections
ρ = air density

1 Ph.D. Student, LARCASE, 1100 Notre Dame West, Montreal, QC, H3C-1K3, Canada.
2 Assistant Professor, LARCASE, 1100 Notre Dame West, Montreal, QC, H3C-1K3, Canada.
3 Full Professor, LARCASE, 1100 Notre Dame West, Montreal, QC, H3C-1K3, Canada, AIAA Fellow.

Copyright © 2022 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
I. Introduction

For safety reasons and passengers’ comfort, aircraft are designed to operate at flight conditions within their flight
envelope to avoid stalling. A flight condition is defined by a combination of altitude and Mach number. Aerodynamic
forces and moments change as flight conditions change and vary with respect to the angle of attack. Following the
industry's desire to continuously improve aircraft flight performance and safety, aerodynamic phenomena such as
"dynamic stall" are subject to continuous interest. The stall is a significant reduction of the lift coefficient of
the wing, which occurs when the aircraft reaches a critical stall angle of attack, and which can result in loss of
control of the aircraft. Although this represents a certain risk, flying at an angle of attack close to stall conditions has several
advantages, such as increased lift at low speeds or reduced landing distances.
When an aircraft reaches stall conditions, it is subjected to multiple non-linearities, such as boundary layer
instabilities, vortices instabilities, early laminar to turbulent transition and massive flow separation. In some flight
cases, when the aircraft angle of attack remains below the stall angle, the pilot can still control the aircraft, and thus
he can return it to a more stable configuration. However, when the aircraft angle of attack exceeds the stall angle, the
pilot temporarily loses control of the aircraft, which then exhibits a very complex and non-linear behavior. In this case,
the lift and drag coefficients describe hysteresis loop curves which are of great modelling interest, given their
importance for the aerodynamic recovery of the aircraft stall.
Over the past years, Computational Fluid Dynamics methods [1,2] as well as several semi-empirical and empirical
models [3–5] have significantly improved the modeling of unsteady stalled flow. Many experimental methods, such
as Time–Resolved Particle Image Velocimetry (TR-PIV) [6], or smoke visualization technique [7] were validated on
wind-tunnel data from unsteady airfoil tests [8]. Although these methods have given satisfactory results, they present
some drawbacks, as they are time consuming, depend on the scale of the model or do not properly account for the
effects of aeroservoelasticity or Reynolds numbers. Indeed, the Reynolds numbers that can be reproduced in a wind
tunnel environment are generally limited to values ranging from 0.5 × 10^6 to 1 × 10^6, while in real flight conditions
it is between 20 × 10^6 and 50 × 10^6 [6]. This difference in Reynolds number induces errors and uncertainties in the
estimation of the lift coefficient in the stall region.
Today, numerical models, such as those encoded in highly qualified flight simulators, are able to represent with
very good precision the flight dynamics of an aircraft, to the point of being a reference for researchers and being used
for system identification [9–11]. Aircraft flight simulators present the advantage of enabling the fast gathering of data
that can be used to build the large database needed to identify a model. Technologies based on Artificial Intelligence [12]
are currently being developed for flight simulation, and they could solve a wide range of complex problems in the
aeronautical field [13–19]. They have demonstrated that they could use "past data" to build a generalized mathematical
model of a system presently under test [20]. Recently, Basappa et al. [21] demonstrated that Feed Forward Neural
Networks (FFNN) could be a potential solution to model the aerodynamic coefficients of an aircraft from flight test
data, and thus predict its flight dynamics.
The main objective of this paper is to develop a methodology to predict the flight dynamics of a Bombardier CRJ
700 regional jet aircraft in stall conditions using neural networks, including in the hysteresis region. The aerodynamic
coefficients will be estimated from data obtained from flight tests performed on the Bombardier CRJ 700 Level-D Virtual
Research Equipment Simulator (VRESIM), designed and manufactured by CAE Inc. and Bombardier.

Fig. 1: Bombardier CRJ 700 Level-D Virtual Research Equipment Simulator (VRESIM)
The rest of the paper is structured as follows: Section II presents the methodology, which includes the data
acquisition procedure from the flight simulator, the data preprocessing, and the computation of the aerodynamic
coefficients from measurable parameters, such as angle of attack, airspeed, and angular rates. The procedure used to
select the optimal hyperparameters for the neural networks, including the training algorithm, the activation function,
the number of hidden layers, and the number of neurons per hidden layer is also presented. Finally, numerical results
and their comparisons with experimental data obtained from the CRJ 700 VRESIM are presented in Section III.

II. Methodology

Modeling a physical system consists of designing a mathematical model that approximates its behavior, which
could then be used for simulation and testing purposes. Models are generally used when it is impossible or very
expensive to create the experimental conditions under which the system is to be tested. The objective of this section is
therefore to present the methodology developed at LARCASE for modeling the lift and drag aerodynamic coefficients
of an aircraft in stall conditions. The aircraft stall model should represent the aerodynamic coefficients in terms of
relevant and measurable parameters, such as aircraft airspeed, angle of attack, Mach number, or control surface
deflections [9].

A. Flight test procedure


To estimate the aerodynamic coefficients in the stall region, flight tests should be performed. For this purpose,
several stall flight tests were conducted with the Bombardier CRJ-700 VRESIM, following the procedure described
in Fig. 2.

Fig. 2: Stall Flight Test Procedure Illustration

As shown in Fig. 2, the complete flight test procedure included several maneuvers. The first maneuver was to trim
the aircraft to stable flight conditions at a given altitude and airspeed (or Mach number). This maneuver was performed
with the assistance of the autopilot's altitude hold mode to maintain altitude, while the airspeed was stabilized manually
by adjusting the throttle position. Once the aircraft was trimmed, the next maneuver was to stall it. For this purpose,
the engine thrust was reduced by moving back the throttles to the idle position. This action resulted in a reduction of
the aircraft airspeed, and an increase in the angle of attack to maintain altitude. When the aircraft airspeed was
relatively low, close to the stall speed, the autopilot was disengaged, and the yoke was pulled back manually to deflect
the elevators. This second action caused the angle of attack to suddenly increase until reaching the stall angle α_stall.
During this part of the flight test, the aircraft was maintained in stall conditions as much as possible by controlling the
elevators in order to observe the stall phenomenon, and at least one hysteresis cycle.
During the flight test, various parameters, such as the Mach number, true airspeed, angular rates, accelerations,
engine thrust, control surface deflections, angles of attack, and altitude were recorded at a sample rate of 30 Hz.
Fig. 3 shows a typical example of data recorded from the VRESIM for a flight test conducted at an altitude of 7500
ft, for a Mach number of 0.20, and with the slats fully retracted (i.e., 0°).

Fig. 3 Example of Data Recorded for a Flight Test at h = 7500 ft, M = 0.20, and Slats Retracted
In Fig. 3, a_x and a_z are respectively the longitudinal and vertical accelerations of the aircraft measured at its center
of gravity, q is the aircraft pitch rate, V_t the true airspeed, T the total engine thrust force, and δ_e and δ_s are respectively
the elevator and slat deflections. In this example, the pilot suddenly deflects the elevators at around 120 seconds,
which induces an immediate increase in the angle of attack beyond the stall angle. The lift force then drops
significantly, resulting in a change in the vertical acceleration a_z. Similarly, the drastic variation of the longitudinal
acceleration a_x reflects the increase in the drag force occurring during the stall. These two phenomena lead to a drop
in altitude.

Table 1: Flight Test Conditions with Slats Retracted

Flight Case Number    Altitude [ft]    Mach Number [at stall]    Angle of Attack [at stall, in °]
1 5000 0.21 17.02
2 7500 0.20 17.01
3 10,000 0.26 17.00
4 12,500 0.24 17.01
5 15,000 0.31 17.03
6 17,500 0.31 17.02
7 20,000 0.30 16.84
8 22,500 0.36 17.10
9 25,000 0.37 17.00
10 27,500 0.38 17.00
11 30,000 0.40 17.01
12 32,500 0.34 17.00
13 35,000 0.45 17.12

Table 2: Flight Test Conditions with Slats at 20°

Flight Case Number    Altitude [ft]    Mach Number [at stall]    Angle of Attack [at stall, in °]
14 5000 0.18 17.62
15 7500 0.19 17.55
16 10,000 0.19 17.58
17 12,500 0.20 17.60
18 15,000 0.21 17.50
19 17,500 0.23 17.60
20 20,000 0.24 17.56
21 22,500 0.30 17.60
22 25,000 0.30 17.54
23 27,500 0.31 17.70


24 30,000 0.36 17.70
25 32,500 0.36 17.60
26 35,000 0.37 17.70

Table 3: Flight Test Conditions with Slats at 45°

Flight Case Number    Altitude [ft]    Mach Number [at stall]    Angle of Attack [at stall, in °]
27 5000 0.16 17.72
28 7500 0.17 17.66
29 10,000 0.19 17.71
30 12,500 0.19 17.51
31 15,000 0.21 17.64
32 17,500 0.22 17.55
33 20,000 0.23 17.88
34 22,500 0.23 17.77
35 25,000 0.28 17.66
36 27,500 0.28 17.63
37 30,000 0.29 17.69
38 32,500 0.29 17.61
39 35,000 0.31 17.56

Following the procedure described in Fig. 2, 39 flight cases were conducted with the Bombardier CRJ-700
VRESIM. The flight conditions (i.e., altitude, Mach number and angle of attack) and aircraft slats configurations
considered for all flight cases are detailed in Table 1 to Table 3.
The altitudes considered for the flight tests varied from 5000 to 35,000 ft. In addition, slats affect the wing airflow
by modifying the airfoil shape and by locally increasing the wing camber, which has the effect of delaying the stall
phenomenon. Thus, to account for this aspect, three slats configurations were considered: slats = 0° (Table 1),
slats = 20° (Table 2) and slats = 45° (Table 3).

B. Data processing
The lift and drag aerodynamic coefficients (i.e., C_Ls and C_Ds) expressed in the stability axes are estimated from the
recorded accelerations and flight parameters based on the following equations [9]:

C_Ls = C_Lb cos(α) − C_Db sin(α)    (1)

C_Ds = C_Db cos(α) + C_Lb sin(α)    (2)

where C_Lb and C_Db are respectively the vertical and longitudinal force coefficients expressed in the aircraft body axes.
In addition, the aerodynamic coefficients in the body axes are calculated with Eq. (3) and Eq. (4):

C_Lb = (m a_z − T_z) / (1/2 ρ V_T² S_w)    (3)

C_Db = (m a_x − T_x) / (1/2 ρ V_T² S_w)    (4)

where ρ is the air density, T_x and T_z are the longitudinal and vertical components of the engines thrust force,
respectively, I_yy is the aircraft mass moment of inertia around the lateral axis, S_w is the reference area of the wing, c_w
is the mean aerodynamic chord, and a_x and a_z are the longitudinal and vertical accelerations of the aircraft, respectively.
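A minimal sketch of this data-processing step is given below, assuming consistent SI units and NumPy arrays for the recorded time histories; the function and variable names are illustrative and are not part of the VRESIM data pipeline.

```python
import numpy as np

def stability_axis_coefficients(m, rho, V_t, S_w, a_x, a_z, T_x, T_z, alpha):
    """Estimate C_Ls and C_Ds from recorded flight parameters, following Eqs. (1)-(4).

    Inputs may be scalars or NumPy arrays (time histories sampled at 30 Hz);
    alpha is the angle of attack in radians.
    """
    q_bar_S = 0.5 * rho * V_t ** 2 * S_w                 # dynamic pressure times wing area
    C_Lb = (m * a_z - T_z) / q_bar_S                     # body-axis vertical force coefficient, Eq. (3)
    C_Db = (m * a_x - T_x) / q_bar_S                     # body-axis longitudinal force coefficient, Eq. (4)
    C_Ls = C_Lb * np.cos(alpha) - C_Db * np.sin(alpha)   # lift coefficient in stability axes, Eq. (1)
    C_Ds = C_Db * np.cos(alpha) + C_Lb * np.sin(alpha)   # drag coefficient in stability axes, Eq. (2)
    return C_Ls, C_Ds
```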
Figures 4 to 6 show the aerodynamic coefficients estimated from the flight test data of the Bombardier CRJ-700
VRESIM, for the three slats configurations. Note that for confidentiality reasons, the data presented in these
figures are normalized to the mean value and standard deviation.


It may be interesting to notice the effect of slat deflection on the lift and drag variations during the stall phase.
Indeed, when the slats are fully retracted (Fig. 4), a very large reduction in lift and an increase in drag are observed,
and the angle of attack can vary up to 80°. However, when the slats are extended (Fig. 5 and Fig. 6), the reduction in
lift is smaller, and the angle of attack does not exceed 45°.

Fig. 4 Aerodynamic Coefficients Estimation from Flight Tests Data obtained from the Bombardier CRJ-700 VRESIM (slats at 0°)

C. Neural Networks Modeling


Three categories of methods are commonly used to model nonlinear systems: block-oriented methods [22],
functional time series methods [8,22–25], and black-box methods [9,26], which include artificial neural networks.
Given their ability to approximate continuous or discontinuous functions, neural networks are often preferred
for identifying complex nonlinear systems [12].

Fig. 5 Aerodynamic Coefficients Estimation from Flight Tests Data obtained from the Bombardier CRJ-700 VRESIM (slats at 20°)

Fig. 6 Aerodynamic Coefficients Estimation from Flight Tests Data obtained from the Bombardier CRJ-700 VRESIM (slats at 45°)

1. Selection of the type of neural networks
The first step in applying neural networks to system identification problems is to determine the type of network
best suited to solve the problem. There are many types of networks, and new structures are still being developed.
Among all the possible types of networks, two have been recognized as being of general application and
have been shown to be effective: Multi-Layer Perceptrons (MLPs) and Recurrent Neural Networks (RNNs). Moreover,
it emerges from the literature that MLPs are particularly good for regression predictions, where real-valued parameters
such as aerodynamic coefficients are predicted for a given set of inputs. However, RNNs are also investigated
in this paper because they have demonstrated their ability to build models from time-series data by using their previous
state information to feed the neurons of the current state during model training [27]. In the following sections, a
brief description of the architecture of both MLPs and RNNs is given.

Multilayer Perceptron (MLP)


The fundamental element of a neural network, whatever its type, is the artificial neuron. Fig. 7 gives a schematic
representation of an artificial neuron, also called a "node" or "perceptron", which receives multiple inputs either
directly from the input data or from the neurons of a previous hidden layer.
Fig. 7 Graphical Representation of an Artificial Neuron


As shown in Fig. 7, a perceptron is a very simple processing unit that computes an output from a given set of
inputs. To compute the value of the neuron's output ô, the input signal of the neuron X = [x_1, x_2, …, x_n] is multiplied
by its corresponding weights W = [w_1, w_2, …, w_n], summed up, and then fed to a "transfer" or "activation"
function. An activation function is attached to each neuron in the network and determines whether the neuron should
be activated or not. Several models exist in the literature, such as the linear function, the sigmoid function, or the
rectified linear unit activation function. Mathematically, the output of a perceptron is given by Eq. (5):

ô = φ( Σ_{i=1}^{n} x_i w_i , b )    (5)

where φ is the transfer (activation) function and b is a parameter that defines the activation threshold of the neuron.
MLPs are composed of a set of neurons, connected to each other and organized in layers, as shown in Fig. 8. The
first layer, also called the "input layer", receives the signal from the data, while the last layer, also called the "output
layer", is defined according to the number of outputs of the model. Between those two layers, there is an arbitrary
number of hidden layers. The number of hidden layers, as well as the number of neurons per layer, are essential
parameters for MLPs and, to a large extent, determine their performance.
The predicted output ô of the MLP is computed according to Eq. (6) [28]:

ô = φ_m( Σ_{k=1}^{n_m} W_{m,k} × … × φ_2( Σ_{i=1}^{n_2} W_{2,i} × φ_1( Σ_{j=1}^{n_1} X W_{1,j} + b_{1,j} ) + b_{2,i} ) + b_{m,k} )    (6)

where X is the input vector, m is the number of layers of the neural network, φ_i is the activation function of layer i,
n_i is the number of neurons of layer i, and W_{i,j} and b_{i,j} are respectively the weight and bias of the j-th neuron of
layer i.
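A minimal sketch of the forward pass of Eqs. (5) and (6) is given below, assuming a log-sigmoid activation on the hidden layers and a linear output layer; the layer sizes and random weights are purely illustrative.

```python
import numpy as np

def logsig(a):
    """Log-sigmoid activation, y(a) = 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

def mlp_forward(x, weights, biases):
    """Forward pass of an MLP (Eq. 6): each hidden layer computes phi(W @ h + b).

    weights and biases are lists with one weight matrix / bias vector per layer;
    the output layer is kept linear, as is common for regression.
    """
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = logsig(W @ h + b)                  # Eq. (5) applied neuron-wise
    return weights[-1] @ h + biases[-1]        # linear output layer

# Illustrative example: 13 regressors (an assumed count for Eq. (7)), hidden layers
# of 10 and 9 neurons, one output, and random weights.
rng = np.random.default_rng(0)
sizes = [13, 10, 9, 1]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
print(mlp_forward(rng.normal(size=13), weights, biases))
```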

Fig. 8 Graphical Representation of a MLP Neural Network

Recurrent Neural Networks (RNN)


RNNs are neural networks that are well suited for modeling sequential data. For example, in reference [29], the
authors demonstrated that RNNs were more effective than MLPs for learning the behavior of complex dynamic systems,
such as the behavior of an aircraft in the stall region. In fact, for dynamic systems, the output is a function of the inputs
and of past outputs. Therefore, we improved our previous MLP model, as proposed in [27], by adding an output
feedback delayed by one time step, so that the prediction of the aerodynamic coefficients at a given time t depends on
the system behavior at the previous time (t − Δt).
A graphical architecture of the RNN is given in Fig. 9,

Fig. 9 Graphical Representation of a RNN Neural Network

where X(t) is the input vector at time t, and ô(t) is the output vector at time t. The input and output vectors used for
training are the same as the ones used for the MLPs, for comparison purposes.
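The delayed output feedback of Fig. 9 can be sketched as follows, reusing the mlp_forward helper of the previous sketch: at each time step the previous prediction ô(t − Δt) is appended to the current input vector before the forward pass. This is a simplified illustration, not the exact recurrent implementation used for the VRESIM models.

```python
import numpy as np

def rnn_predict(X_sequence, weights, biases, o_initial=0.0):
    """Predict a coefficient time series with a one-step output feedback (Fig. 9).

    X_sequence has shape (T, n_inputs); the previous prediction is appended to each
    input vector, so the first weight matrix must expect n_inputs + 1 values.
    Reuses the mlp_forward helper from the previous sketch.
    """
    outputs = []
    o_prev = o_initial                          # assumed initial feedback value
    for x_t in X_sequence:
        x_aug = np.append(x_t, o_prev)          # feed back o(t - dt) as an extra input
        o_prev = mlp_forward(x_aug, weights, biases).item()
        outputs.append(o_prev)
    return np.array(outputs)
```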

2. Definition of Neural Networks Inputs and Outputs


In this study, two separate MLPs and two separate RNNs were developed for the prediction of the aerodynamic
coefficients. In addition, it was decided to train multiple neural networks with a single output each, rather than
training a single neural network with joint outputs. This strategy was chosen because it was found that single-output
neural networks were more accurate and did not need a complex architecture to learn the correlation between the
input data and the target values.
Basically, the aerodynamic coefficients of an aircraft in stall conditions depend on the following variables: angle
of attack α, Mach number M, pitch rate q, rate of change of the angle of attack α̇, true airspeed V_t [24], and the control
surface deflections, such as the elevator angle δ_e, the horizontal stabilizer angle δ_H, and the slats angle δ_s. In addition,
depending on the aircraft configuration, the wing airflow may also be affected by the air coming from the engines.

Thus, the aerodynamic coefficients may also depend on the engine thrust T (or thrust coefficient). Finally, it is worth
noting that since the variation of the aerodynamic coefficients with respect to the input variables is nonlinear, a better
correlation can usually be found by adding the square or cube of those variables as inputs to the neural networks.
Based on all these observations and assumptions, the input vector X was defined as follows:

X = [ α, α², α³, α̇, M, M², M³, q/V_T, (q/V_T)², T, δ ]^T    (7)

where δ represents all control surface deflections (i.e., δ_e, δ_H, and δ_s).
The output ô is one of the two aerodynamic coefficients that needs to be estimated:

ô = { Ĉ_Ls , Ĉ_Ds }    (8)

where Ĉ_Ls and Ĉ_Ds are the predicted lift and drag coefficients expressed in the stability axes, respectively.
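A sketch of the construction of the regressor vector of Eq. (7) for one time sample is given below; the argument names are assumptions, and the rate of change of the angle of attack would in practice be obtained by differentiating the recorded signal.

```python
import numpy as np

def build_input_vector(alpha, alpha_dot, mach, q, V_T, thrust, delta_e, delta_H, delta_s):
    """Assemble the regressor vector X of Eq. (7) for one time sample.

    alpha and q are assumed to be in consistent angular units; alpha_dot would in
    practice be obtained by numerically differentiating the recorded angle of attack.
    """
    return np.array([
        alpha, alpha ** 2, alpha ** 3, alpha_dot,
        mach, mach ** 2, mach ** 3,
        q / V_T, (q / V_T) ** 2,
        thrust,
        delta_e, delta_H, delta_s,       # the grouped control surface deflections delta
    ])
```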

3. Data Organization and Model Performance Evaluation

Of the 39 flight cases conducted with the Bombardier CRJ-700 VRESIM, 27 were used as training and test sets,
while the remaining 12 cases were used for validation purposes. Note that the training set was used to optimize the
neural network weights, while the test set was used to determine network performance.
In order to evaluate how well the network is able to model the training data, a training error was needed. In this
study, the training error, also called training performance, was calculated based on the Mean Square Error (MSE). For
a given set of n training data points and a given set of values of weights w_{i,j}, the MSE was computed as follows:

MSE_TR(w) = (1/n) Σ_{k=1}^{n} ( ô_k(w_{i,j}) − o_k )²    (9)

where the subscript "TR" refers to the training set data, i is the position of the neuron on layer j, o_k is the k-th
training data point, and ô_k is the k-th value predicted by the network.
As mentioned above, the test set was used to evaluate the performance of the networks based on data that were
not considered in the training. Therefore, the performance calculated from the test data is important, and allows the
model parameters, such as the training function, the activation function, or the number of hidden layers, to be adjusted.
For a given set of n data points and a given set of values of weights w_{i,j}, the test error (or test performance) is
calculated in the same way as the MSE_TR, according to the following equation:

MSE_TE(w) = (1/n) Σ_{k=1}^{n} ( ô_k(w_{i,j}) − o_k )²    (10)

where the subscript "TE" refers to the test set data.


Finally, the validation set, consisting of 12 flight cases, was used to validate the final model. The validation set is
used to demonstrate the accuracy of the trained network in predicting the aerodynamic coefficients based on new data
that have not been used to train and optimize the network. For a given set of n validation data points, and a given set of
n predicted values, the validation error (or validation performance) is calculated based on the Mean Absolute Relative
Error (MARE), as follows:

MARE = (1/n) Σ_{k=1}^{n} | (ô_k − o_k) / o_k | × 100    (11)

where o_k is the k-th experimental data point used for validation, and ô_k is the k-th value predicted by the network.
It should be noted that since the aerodynamic coefficients are relatively small magnitude parameters (on the order
of 10^-1), it may be interesting to consider the Mean Absolute Residual to compare the predicted values with the
experimental data, since the MARE can take very large values when the reference value o_k is close to zero. The Mean
Absolute Residual is calculated as follows:

Residual = (1/n) Σ_{k=1}^{n} | ô_k − o_k |    (12)
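The three performance measures of Eqs. (9) to (12) reduce to a few lines; a sketch is given below, with o denoting the experimental values and o_hat the corresponding network predictions.

```python
import numpy as np

def mse(o_hat, o):
    """Mean square error of Eqs. (9)-(10)."""
    return np.mean((o_hat - o) ** 2)

def mare(o_hat, o):
    """Mean absolute relative error in percent, Eq. (11)."""
    return np.mean(np.abs((o_hat - o) / o)) * 100.0

def mean_absolute_residual(o_hat, o):
    """Mean absolute residual, Eq. (12)."""
    return np.mean(np.abs(o_hat - o))
```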

4. Choice of Training Algorithm and Activation Function
The activation function type, combined with the choice of training algorithm, influences the overall performance
of the network and the training time. Thus, in order to determine the best activation function/learning algorithm
combination, two analyses were performed.
The first analysis consisted in evaluating the performance of the network for different training functions. For this
purpose, nine gradient-based local optimization methods were used to train the network. These algorithms [30] are
given in Table 4.

Table 4: Training Algorithms Considered to Train the Network

Algorithm    Description
OSS    One-step secant backpropagation
CGP    Conjugate gradient backpropagation with Polak-Ribiére updates
CGB    Conjugate gradient backpropagation with Powell-Beale restarts
CGF    Conjugate gradient backpropagation with Fletcher-Reeves updates
BFG    BFGS 4 quasi-Newton backpropagation
RP    Resilient backpropagation
SCG    Scaled conjugate gradient method
LM    Levenberg-Marquardt optimization
BR    Bayesian regularization backpropagation

Both the MLP and RNN networks were trained with the nine training algorithms. For this first analysis, the activation
function and the structure of the neural network were assumed to be the same for all tests. Each training algorithm
was then used to determine the weights w_{i,j} and biases b_i that minimized the training error (MSE_TR). Fig. 10 shows
the test error MSE_TE obtained for each training algorithm. Note that, for the sake of clarity, the results presented in
this figure are for the MLP networks, and for the prediction of the lift coefficient C_Ls of the Bombardier CRJ-700.
Similar results were obtained for the RNN and the other coefficients.
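The comparison itself was carried out with the backpropagation variants of Table 4, as implemented in the toolbox of [28]. As a hedged, open-source illustration of the same selection logic, the sketch below trains one scikit-learn MLPRegressor per available solver and returns the test MSE of each; the solver names (lbfgs, adam, sgd) are not those of Table 4, and the data arrays are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def compare_solvers(X_train, y_train, X_test, y_test, solvers=("lbfgs", "adam", "sgd")):
    """Train one MLP per solver and return its test MSE, for selection by MSE_TE."""
    results = {}
    for solver in solvers:
        net = MLPRegressor(hidden_layer_sizes=(10, 9), activation="logistic",
                           solver=solver, max_iter=1000, random_state=0)
        net.fit(X_train, y_train)
        results[solver] = float(np.mean((net.predict(X_test) - y_test) ** 2))
    return results
```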

Fig. 10 Network Performance Variation for Different Training Algorithms (y-axis: test performance; x-axis: training algorithm)

By analysing the results in Fig. 10, it can be seen that the Levenberg-Marquardt (LM) and Bayesian Regularization
(BR) algorithms provided the lowest MSE_TE. This result was expected, as both the LM and BR algorithms are well
known for their performance in solving nonlinear regression problems. Both algorithms operate using the same
Levenberg-Marquardt procedure, except that the BR algorithm additionally applies a Bayesian regularization to the
weight and bias variables [31]. The results presented in Fig. 10 are in agreement with several studies that have shown
that the LM and BR algorithms are expected to outperform other commonly used backpropagation algorithms for
solving nonlinear curve fitting problems [30,32,33].

4 Broyden-Fletcher-Goldfarb-Shanno.
Once the two best training algorithms were identified, the second analysis consisted in testing several activation
functions and evaluating their impact on the network performance. Based on the previous results, it was decided
to test the activation functions with both the BR and LM algorithms, as they gave the best and almost equal performances
(MSE_TE = 1.55 × 10^-4 for the BR, and MSE_TE = 1.63 × 10^-4 for the LM) when the activation function and the
neural network structure were held fixed. In addition, although BR performed slightly better than LM, some
activation functions may perform better with one training algorithm than with the other.
Following the work of Maca et al. [30], the tested activation functions and their respective formulas are presented
in Table 5.

Table 5: Implemented Activation Functions; a is the Neuron's Activation, y is the Neuron's Output

Activation Function    Mathematical Equation
Log Sigmoid (Logsig)    y(a) = 1 / (1 + exp(−a))
Hyperbolic Tangent Sigmoid (Tansig)    y(a) = 2 / (1 + exp(−2a)) − 1
Elliot Symmetric Sigmoid (Elliotsig)    y(a) = a / (1 + |a|)
Radial Basis (Radbas)    y(a) = exp(−a²)
Normalized Radial Basis (Radbasn)    y(a)_i = exp(−a_i²) / Σ_{j=1}^{n} exp(−a_j²), where a is an input vector of n elements and a_i is its i-th element
Soft Max (Softmax)    y(a)_i = exp(a_i) / Σ_{j=1}^{n} exp(a_j), where a is the input vector of n elements for n classes and a_i is its i-th element
Saturating Linear (Satlin)    y(a) = 0 if a ≤ 0; a if 0 ≤ a ≤ 1; 1 if a ≥ 1
Symmetric Saturating Linear (Satlins)    y(a) = −1 if a ≤ −1; a if −1 ≤ a ≤ 1; 1 if a ≥ 1
Triangular Basis (Tribas)    y(a) = 1 − |a| if −1 ≤ a ≤ 1; 0 otherwise
Positive Linear (Poslin)    y(a) = a if a ≥ 0; 0 if a < 0
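For reference, a few of the activation functions of Table 5 are written below as vectorized NumPy functions; this is only an illustrative transcription of the formulas, with names mirroring the toolbox labels.

```python
import numpy as np

def logsig(a):      # Log sigmoid
    return 1.0 / (1.0 + np.exp(-a))

def tansig(a):      # Hyperbolic tangent sigmoid
    return 2.0 / (1.0 + np.exp(-2.0 * a)) - 1.0

def elliotsig(a):   # Elliot symmetric sigmoid
    return a / (1.0 + np.abs(a))

def radbas(a):      # Radial basis
    return np.exp(-a ** 2)

def satlin(a):      # Saturating linear
    return np.clip(a, 0.0, 1.0)

def satlins(a):     # Symmetric saturating linear
    return np.clip(a, -1.0, 1.0)

def tribas(a):      # Triangular basis
    return np.maximum(1.0 - np.abs(a), 0.0)

def poslin(a):      # Positive linear (ReLU)
    return np.maximum(a, 0.0)
```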

Fig. 11 shows the test error MSE_TE obtained for each activation function when training the MLP for predicting the
lift coefficient C_Ls of the Bombardier CRJ-700 with the LM and BR training algorithms. We can see that both the BR
and LM algorithms, associated with either the Logsig or Tansig transfer function, gave the best and almost similar results.
The MSE_TE obtained for the four combinations (LM, Logsig), (LM, Tansig), (BR, Logsig), (BR, Tansig) are
respectively 1.56 × 10^-4, 1.59 × 10^-4, 1.53 × 10^-4, and 1.64 × 10^-4. We consider that each of these "training
algorithm – activation function" combinations can be used to learn and predict the lift coefficient optimally. However,
to strictly respect the optimization process, we decided to use the BR training algorithm associated with the Logsig
activation function for the determination of C_Ls with the MLP, as they give the minimum MSE_TE, even though any
of the other combinations mentioned above could have been used.

Fig. 11 Network Performance Variation for Various Activation Functions (test performance for the LM and BR training algorithms)

The same procedure presented in this section for determining the ideal training algorithm and activation function
for the determination of C_Ls was repeated for the estimation of the other aerodynamic coefficients, and for the RNN.
The resulting training algorithms and activation functions for all the trained models are presented in Section III.

5. Neural Network Structure Optimization

Finally, the last step of the neural network design process is to determine the ideal configuration that provides the
best performance while ensuring a relatively acceptable learning time. The "ideal" configuration can be defined as the
combination (m, n) – where n is the number of layers of the neural network, and m is the number of neurons on each
layer – that results in the minimum test error MSE_TE. For this purpose, a procedure similar to the one presented in the
previous section was considered. This procedure aimed to test different neural network configurations, and to
determine which one gave the best performance in an appropriate learning time. The procedure was performed to find
the ideal MLP and RNN structures for the estimation of the lift and drag aerodynamic coefficients.
In references [9] and [29], similar studies were conducted, in which the number of layers and the number of
neurons per layer were bounded, respectively, to 3 and 10. We expected our network structure to be more complex than
the ones used in [9] and [29] because of the highly non-linear nature of the learning data. Indeed, the flight test procedure
includes flying at a high angle of attack until the pilot reaches the stall angle. The pilot then also performs the
recovery from the stall, which causes the lift coefficient versus angle of attack curve to take the form of a hysteresis
loop, which is difficult to approximate using aerodynamic codes [24]. Previous studies [9,29] did not consider the
stall recovery maneuver. To limit the number of solutions while maximizing the search domain,
we selected a minimum number of hidden layers n = 1, and a maximum number of n = 5. The minimum number
of neurons per layer was set to m = 2, and the maximum number was set to m = 15. For each combination of
(m, n), the test error MSE_TE is calculated. The ideal neural network structure corresponds to the combination of
(m, n) resulting in the minimum test error. Fig. 12 shows the test error MSE_TE obtained for each (m, n) combination
for the estimation of the lift coefficient using MLPs.
In Fig. 12.a, Fig. 12.b and Fig. 12.c, the x-axis shows the MLP structure, represented by the number m of neurons
on each hidden layer. For example, (10, 5) represents a MLP with n = 2 hidden layers: the first hidden layer has
m = 10 neurons and the second hidden layer has m = 5 neurons. (10, 9, 7) represents a MLP with n = 3 hidden layers:
the first hidden layer has m = 10 neurons, the second hidden layer has m = 9 neurons and the third hidden layer
has m = 7 neurons. The y-axis shows the corresponding test error MSE_TE. First, the number of hidden layers is n = 1,
and the number of nodes varies from m = 2 to m = 15. Fig. 12.a shows that increasing the number of nodes up to 10
improves the model performance, as the MSE_TE decreases when m is increased. Beyond m = 10, however, the
accuracy of the model remains constant. Therefore, the number of nodes on the first hidden layer was fixed to m = 10.
The same procedure was repeated for the successive hidden layers n = 2 and n = 3. As shown in Fig. 12.b, the
best performances are obtained with m = 9 nodes on the 2nd hidden layer. When m is increased beyond m = 9, the
MSE_TE values stop decreasing and start to oscillate. Fig. 12.c shows that a third layer does not significantly improve
the model performance (MSE_TE), while adding complexity to the model and, consequently, increasing the learning
time. Therefore, the ideal MLP structure for predicting the lift coefficient has n = 2 hidden layers, with m = 10 neurons
on the first hidden layer and m = 9 neurons on the second hidden layer.

Fig 12: Performances for various MLP Structures for the Estimation of C_Ls of the Bombardier CRJ-700
(a) MLP structure with one hidden layer and a varying number of neurons; (b) MLP structure with two hidden layers (the 1st layer has 10 neurons, while the number of neurons on the 2nd layer changes); (c) MLP structure with three hidden layers (the 1st layer has 10 neurons, the 2nd layer has 9 neurons, and the number of neurons on the 3rd layer changes)
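The layer-by-layer search described above can be sketched as a simple greedy loop: fix the sizes already selected for the earlier layers, sweep the size of the new layer from m = 2 to m = 15, and stop when an additional layer no longer reduces the test error. The train_and_test helper is an assumed placeholder that trains one network of a given structure and returns its MSE_TE.

```python
def incremental_structure_search(train_and_test, max_layers=5, min_nodes=2, max_nodes=15):
    """Greedy layer-by-layer search for the MLP structure minimizing the test error.

    train_and_test(hidden_sizes) is an assumed helper that trains one network with
    the given tuple of hidden-layer sizes and returns its test MSE (MSE_TE).
    """
    best_sizes, best_err = (), float("inf")
    for _ in range(max_layers):
        layer_best, layer_err = None, float("inf")
        for m in range(min_nodes, max_nodes + 1):          # sweep the size of the new layer
            err = train_and_test(best_sizes + (m,))
            if err < layer_err:
                layer_best, layer_err = m, err
        if layer_err >= best_err:                          # an extra layer no longer helps: stop
            break
        best_sizes, best_err = best_sizes + (layer_best,), layer_err
    return best_sizes, best_err
```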

III. Results
This section presents the results obtained for the prediction of the aerodynamic coefficients C_Ls and C_Ds of the
Bombardier CRJ-700 aircraft. The data collected from the VRESIM presented in Section II were processed and used to
feed the neural network models. Of the 39 flight cases conducted on the VRESIM, data from 27 of them were
used to train the models, while data from the remaining 12 cases were used for validation purposes. The procedure to
select the neural network parameters was explained in Section II and applied to the MLP and RNN models for the
determination of the lift and drag coefficients. The resulting parameters are given in Table 6. The test errors MSE_TE
obtained after training the models during 1000 epochs are also shown.

Table 6: Simulation Parameters of the MLPs and RNNs

Parameter                           MLP C_Ls    MLP C_Ds    RNN C_Ls    RNN C_Ds
Training algorithm                  BR          LM          BR          LM
Activation function                 Logsig      Logsig      Logsig      Logsig
Number of hidden layers             2           3           2           2
Number of nodes per hidden layer    (10,9)      (9,9,9)     (9,9)       (9,8)
Learning rate                       0.05        0.05        0.05        0.05
Number of training epochs           1000        1000        1000        1000
Test error MSE_TE                   8.65×10^-5  4.49×10^-6  4.32×10^-5  3.26×10^-6

Tables 7 and 8 show the validation error MARE and the mean absolute residual obtained for the determination of the
lift and drag aerodynamic coefficients for each validation flight case (defined by altitude and slats angle), using
the MLP and RNN models, respectively.
Table 7: Mean Absolute Relative Error (MARE) and Residual Obtained between Experimental Data and Predicted Values with MLP

Altitude [ft]    Slat angle [°]    C_Ls MARE    C_Ls Residual    C_Ds MARE    C_Ds Residual
7500      0     0.30%    2.57×10^-3    0.74%    5.96×10^-4
15,000    0     0.35%    3.00×10^-3    0.65%    5.23×10^-4
25,000    0     0.44%    3.77×10^-3    1.36%    10.9×10^-4
27,500    0     1.37%    11.7×10^-3    0.66%    5.31×10^-4
5000      20    0.63%    5.40×10^-3    0.85%    6.84×10^-4
20,000    20    0.23%    1.97×10^-3    0.28%    2.25×10^-4
27,500    20    0.22%    1.89×10^-3    0.26%    2.09×10^-4
35,000    20    0.57%    4.88×10^-3    0.43%    3.46×10^-4
12,500    45    0.31%    2.66×10^-3    0.22%    1.77×10^-4
20,000    45    0.37%    3.17×10^-3    0.26%    2.09×10^-4
25,000    45    0.50%    4.29×10^-3    0.33%    2.65×10^-4
32,500    45    0.42%    3.60×10^-3    0.30%    2.41×10^-4

Table 8: Mean Absolute Relative Error (MARE) and Residual Obtained between Experimental Data and Predicted Values with RNN

Altitude [ft]    Slat angle [°]    C_Ls MARE    C_Ls Residual    C_Ds MARE    C_Ds Residual
7500      0     0.29%    2.48×10^-3    0.90%    7.25×10^-4
15,000    0     0.36%    3.10×10^-3    0.59%    4.75×10^-4
25,000    0     0.37%    3.17×10^-3    0.80%    6.44×10^-4
27,500    0     0.99%    8.48×10^-3    0.65%    5.23×10^-4
5000      20    0.52%    4.45×10^-3    0.72%    5.79×10^-4
20,000    20    0.25%    2.14×10^-3    0.30%    2.41×10^-4
27,500    20    0.23%    1.97×10^-3    0.21%    1.69×10^-4
35,000    20    0.53%    4.54×10^-3    0.41%    3.30×10^-4
12,500    45    0.28%    2.40×10^-3    0.21%    1.69×10^-4
20,000    45    0.35%    3.00×10^-3    0.20%    1.61×10^-4
25,000    45    0.50%    4.29×10^-3    0.41%    3.30×10^-4
32,500    45    0.37%    4.17×10^-3    0.35%    2.81×10^-4

As seen in Tables 7 and 8, both the MLP and RNN methodologies can globally estimate the lift and drag
coefficients with less than 2% relative error. The order of the residual values (10^-3 for C_Ls and 10^-4 for C_Ds) is
relatively low compared to the experimental data (10^-1), which means that the errors are negligible. However, for
most predicted flight cases, the RNN gives relatively better performance than the MLP. These results can be explained
by the ability of the RNN to model time-series data by considering the dynamics of the system.
Figures 13 to 15 show examples of the estimated aerodynamic coefficients compared with their experimental data
and their corresponding residuals for different slat angles.
As shown in Fig. 13 to 15, the experimental data (in blue) are very close to the data predicted by both the MLP
(in red) and the RNN (in black). From a general point of view, it can therefore be concluded that very good
performances of the models, in terms of the validation error MARE, were obtained for the flight cases with and without slats.
The 12 flight cases were successfully validated, and the average MARE obtained for the prediction of C_Ls and C_Ds on
the 12 validation cases is about 0.5%. Table 9 shows the average MARE and the corresponding standard deviation
obtained for each prediction.

Table 9: Average MARE and Standard Deviation of the MARE Obtained for the Prediction of C_Ls and C_Ds using MLP and RNN

                                 Number of validation cases    Average MARE    Standard deviation of the MARE
Estimation of C_Ls with MLP      12                            0.48%           ±0.30%
Estimation of C_Ls with RNN      12                            0.42%           ±0.20%
Estimation of C_Ds with MLP      12                            0.53%           ±0.33%
Estimation of C_Ds with RNN      12                            0.48%           ±0.23%

Fig. 13: Example of Results for a Flight Test at 15,000 ft and with Slats Retracted

Fig. 14: Example of Results for a Flight Test at 25,000 ft and with Slats at 45°

Fig. 15: Example of Results for a Flight Test at 27,500 ft and with Slats at 20°

Conclusion
This paper presents a methodology to predict the aerodynamic coefficients of an aircraft during a stall recovery
maneuver in dynamic stall conditions. The linear and nonlinear variations of the lift and drag aerodynamic coefficients
are estimated along the stall hysteresis curve. The methodology was successfully applied to the CRJ 700 regional jet
aircraft. Flight test data obtained from the CRJ-700 VRESIM, designed and manufactured by CAE Inc. and Bombardier,
were used to train Multilayer Perceptrons and Recurrent Neural Networks. The procedure to select the neural network
parameters (training algorithm, activation function) was detailed, and the process to optimize the model structure was
also developed. Both the MLP and RNN were able to predict the lift and drag coefficients with a mean absolute relative
error of less than 2% for the 12 cases used for validation.

Acknowledgments
This work was accomplished at the Laboratory of Applied Research in Active Controls, Avionics, and
AeroServoElasticity (LARCASE). The CRJ 700 Aircraft Research Flight Simulator VRESIM was obtained
by Dr. Ruxandra Botez, Full Professor at ÉTS, with a grant from the Canada Foundation for Innovation (CFI), the
Ministère du Développement Économique, de l'Innovation et de l'Exportation (MDEIE) and CAE Inc. Thanks are
due to the CREATE-UTILI program funded by the NSERC and led by Dr. Jeremy Laliberte, Carleton University,
Ontario, Canada, as well as to the NSERC for the Canada Research Chair in Aircraft Modeling and Simulation
Technologies. Thanks are also due to Mr. Oscar Carranza Moyao and Mrs. Odette Lacasse for their support in the
development of the aircraft research flight simulators at the LARCASE.

References
[1] Tinoco, E., “The Changing Role of Computational Fluid Dynamics in Aircraft Development,” 16th AIAA
Applied Aerodynamics Conference, Albuquerque,NM,U.S.A., 1998. DOI:10.2514/6.1998-2512.
[2] Spentzos, A., Barakos, G., Badcock, K., Richards, B., Wernert, P., Schreck, S., and Raffel, M., “Investigation
of Three-Dimensional Dynamic Stall Using Computational Fluid Dynamics,” AIAA Journal, Vol. 43, No. 5,
2005, pp. 1023–1033. DOI:10.2514/1.8830.
[3] Fischenberg, D., “Identification of an Unsteady Aerodynamic Stall Model from Flight Test Data,” 20th
Atmospheric Flight Mechanics Conference, Baltimore,MD,U.S.A., 1995. DOI:10.2514/6.1995-3438.
[4] Bierbooms, W. A. A. M., “A Comparison between Unsteady Aerodynamic Models,” Journal of Wind
Engineering and Industrial Aerodynamics, Vol. 39, No. 1–3, 1992, pp. 23–33. DOI:10.1016/0167-
6105(92)90529-J.
[5] Botez, R. M., “Une Étude Comparative Des Modèles Semi-Empiriques Pour La Prédiction Du Décrochage
Dynamique,” Montreal, QC, Canada, 1989.
[6] Mulleners, K., Pape, A., Heine, B., and Raffel, M., “The Dynamics of Static Stall,” 2012.
[7] Moir, S., and Coton, F. N., “An Examination of the Dynamic Stalling of Two Wing Planforms. G.U. Aero
Report 9526." [Online]. Available at: http://eprints.gla.ac.uk/183243/.
[8] Botez, R., “Morphing Wing, UAV and Aircraft Multidisciplinary Studies at the Laboratory of Applied
Research in Active Controls, Avionics and AeroServoElasticity LARCASE,” AerospaceLab Journal, Vol.
Issue 14, 2018, p. September 2018; ISSN: 21076596. DOI:10.12762/2018.AL14-02.
[9] Ghazi, G., Bosne, M., Sammartano, Q., and Botez, R. M., “Cessna Citation X Stall Characteristics
Identification from Flight Data Using Neural Networks,” AIAA Atmospheric Flight Mechanics Conference,
Grapevine, Texas, 2017. DOI:10.2514/6.2017-0937.
[10] Hamel, C., Sassi, A., Botez, R., and Dartigues, C., “Cessna Citation X Aircraft Global Model Identification
from Flight Tests,” SAE International Journal of Aerospace, Vol. 6, No. 1, 2013, pp. 106–114.
DOI:10.4271/2013-01-2094.
[11] Zaag, M., and Botez, R. M., “Cessna Citation X Engine Model Identification and Validation in the Cruise
Regime from Flight Tests Based on Neural Networks Combined with Extended Great Deluge Algorithm,”
AIAA Modeling and Simulation Technologies Conference, Grapevine, Texas, 2017. DOI:10.2514/6.2017-
1941.
[12] Haykin, S. O., Neural Networks: A Comprehensive Foundation, 2nd ed., Pearson, Upper Saddle River, NJ,
1998.
[13] Boely, N., Botez, R. M., and Kouba, G., “Identification of a Non-Linear F/A-18 Model by the Use of Fuzzy
Logic and Neural Network Methods,” Proceedings of the Institution of Mechanical Engineers, Part G: Journal
of Aerospace Engineering, Vol. 225, No. 5, 2011, pp. 559–574. DOI:10.1177/2041302510392871.

[14] De Jesus Mota, S., and Botez, R. M., “New Helicopter Model Identification Method Based on Flight Test
Data,” The Aeronautical Journal, Vol. 115, No. 1167, 2011, pp. 295–314. DOI:10.1017/S0001924000005789.
[15] Mosbah, B., Botez, R. M., and Dao, T. M., “New Methodology for Calculating Flight Parameters with Neural
Network - Extended Great Deluge Method Applied on a Reduced Scale Wind Tunnel Model of an ATR-42
Wing,” AIAA Modeling and Simulation Technologies (MST) Conference, Boston, MA, 2013.
DOI:10.2514/6.2013-5074.
[16] Boely, N., and Botez, R. M., “New Approach for the Identification and Validation of a Nonlinear F/A-18
Model by Use of Neural Networks,” IEEE Transactions on Neural Networks, Vol. 21, No. 11, 2010, pp. 1759–
1765. DOI:10.1109/TNN.2010.2071398.
[17] Ben Mosbah, A., Botez, R. M., Medini, S. M., and Dao, T.-M., “Artificial Neural Networks-Extended Great
Deluge Model to Predict Actuators Displacements for a Morphing Wing Tip System,” INCAS BULLETIN,
Vol. 12, No. 4, 2020, pp. 13–24. DOI:10.13111/2066-8201.2020.12.4.2.
[18] Ben Mosbah, A., Botez, R. M., and Dao, T.-M., “New Methodology Combining Neural Network and Extended
Great Deluge Algorithms for the ATR-42 Wing Aerodynamics Analysis,” The Aeronautical Journal, Vol.
120, No. 1229, 2016, pp. 1049–1080. DOI:10.1017/aer.2016.46.
[19] Ben Mosbah, A., Flores Salinas, M., Botez, R., and Dao, T., “New Methodology for Wind Tunnel Calibration
Using Neural Networks - EGD Approach,” SAE International Journal of Aerospace, Vol. 6, No. 2, 2013, pp.
761–766. DOI:10.4271/2013-01-2285.
[20] Al-Shareef, A., Mohamed, E., and Al-Judaibi, E., “Next 24-Hours Load Forecasting Using Artificial Neural
Network (ANN) for the Western Area of Saudi Arabia,” Journal of King Abdulaziz University-Engineering
Sciences, Vol. 19, No. 2, 2008, pp. 25–40. DOI:10.4197/Eng.19-2.2.
[21] Basappa, and Jategaonkar, R. V., “Aspects of Feed Forward Neural Network Modeling and Its Application to
Lateral-Directional Flight Data.”
[22] Baldelli, D. H., Lind, R., and Brenner, M., “Nonlinear Aeroelastic/Aeroservoelastic Modeling by Block-
Oriented Identification,” Journal of Guidance, Control, and Dynamics, Vol. 28, No. 5, 2005, pp. 1056–1064.
DOI:10.2514/1.11792.
[23] Levenberg, K., “A Method for the Solution of Certain Non-Linear Problems in Least Squares,” Quarterly of
Applied Mathematics, Vol. 2, No. 2, 1944, pp. 164–168. DOI:10.1090/qam/10666.
[24] McCroskey, W. J., "The Phenomenon of Dynamic Stall," NASA Ames Research Center, Moffett Field, CA,
1981.
[25] Yao, F., Müller, H.-G., and Wang, J.-L., “Functional Linear Regression Analysis for Longitudinal Data,” The
Annals of Statistics, Vol. 33, No. 6, 2005. DOI:10.1214/009053605000000660.
[26] Yeom, S., Giacomelli, I., Fredrikson, M., and Jha, S., "Privacy Risk in Machine Learning: Analyzing the
Connection to Overfitting," arXiv:1709.01604 [cs, stat], 2018.
[27] Williams, R. J., and Zipser, D., “A Learning Algorithm for Continually Running Fully Recurrent Neural
Networks,” Neural Computation, Vol. 1, No. 2, 1989, pp. 270–280. DOI:10.1162/neco.1989.1.2.270.
[28] "Deep Learning Toolbox." [Online]. Available at: https://www.mathworks.com/products/deep-learning.html.
[29] Suresh, S., Omkar, S. N., Mani, V., and Guru Prakash, T. N., “Lift Coefficient Prediction at High Angle of
Attack Using Recurrent Neural Network,” Aerospace Science and Technology, Vol. 7, No. 8, 2003, pp. 595–
602. DOI:10.1016/S1270-9638(03)00053-1.
[30] Maca, P., Pech, P., and Pavlasek, J., “Comparing the Selected Transfer Functions and Local Optimization
Methods for Neural Network Flood Runoff Forecast,” Mathematical Problems in Engineering, Vol. 2014,
2014, pp. 1–10. DOI:10.1155/2014/782351.
[31] MacKay, D. J. C., “Bayesian Interpolation,” Neural Computation, Vol. 4, No. 3, 1992, pp. 415–447.
DOI:10.1162/neco.1992.4.3.415.
[32] Bataineh, A. A., and Kaur, D., “A Comparative Study of Different Curve Fitting Algorithms in Artificial
Neural Network Using Housing Dataset,” NAECON 2018 - IEEE National Aerospace and Electronics
Conference, Dayton, OH, 2018. DOI:10.1109/NAECON.2018.8556738.
[33] Khan, T. A., Alam, M., Shahid, Z., and Mazliham, M. S., “Comparative Performance Analysis of Levenberg-
Marquardt, Bayesian Regularization and Scaled Conjugate Gradient for the Prediction of Flash Floods,”
Journal of Information Communication Technologies and Robotic Applications, 2019, pp. 52–58.

