
A turbulence model based on deep neural network considering the near-wall effect

Muyuan Liu · Yiren Yang · Hao Chen∗

arXiv:2103.16963v1 [physics.flu-dyn] 31 Mar 2021

Corresponding Author: Hao Chen
School of Mechanics and Engineering, Southwest Jiaotong University, Chengdu 610031, China
Tel.: +0086-28-87600797
E-mail: [email protected]

Abstract  There is a continuing demand for improved turbulence models for the closure of Reynolds-Averaged Navier-Stokes (RANS) simulations. Machine learning (ML) offers effective tools for establishing advanced empirical Reynolds stress closures on the basis of high-fidelity simulation data. This paper presents a turbulence model based on a deep neural network (DNN) which takes into account the nonlinear relationship between the Reynolds stress anisotropy tensor and the local mean velocity gradient, as well as the near-wall effect. The construction and the tuning of the DNN-turbulence model are detailed. We show that the DNN-turbulence model trained on data from direct numerical simulations yields an accurate prediction of the Reynolds stresses for plane channel flow. In particular, we propose including the local turbulence Reynolds number in the model input.

Keywords  turbulence modeling · machine learning · near-wall effect · plane channel flow

1 Introduction

Turbulent flows involve a range of spatial and temporal scales, and resolving all of them is computationally expensive. Reynolds-averaged Navier-Stokes (RANS) simulation, which solves equations for mean quantities, is a feasible concept widely used in industrial turbulent flow problems. The Reynolds stresses are unknowns in the RANS equations and must be determined by a turbulence model. Linear eddy viscosity models (LEVM) assume a linear relationship between the Reynolds stress anisotropy tensor and the local mean strain rate:

    aij = −2 νT S̄ij,    (1)

where aij = <ui uj> − (2/3) k δij is the Reynolds anisotropy tensor and S̄ij the mean strain rate. The velocity covariance <ui uj> is the Reynolds stress tensor and k the turbulent kinetic energy. Classical one-equation models (e.g. the turbulent-kinetic-energy model [1] and the Spalart-Allmaras model [2]) and two-equation models (e.g. the k-ε model [3] and the k-ω model [4]) based on the turbulent viscosity hypothesis differ mainly in the modeling of the turbulent viscosity νT. Models with a linear stress-strain relationship do not capture the correct anisotropy of the Reynolds stresses in many flows, including e.g. pipe flow with a contraction [5].

Nonlinear turbulent viscosity models have also been developed for the closure problem of RANS simulations; their general nondimensional form is given as [6]:

    bij = Bij(Ŝ, Ω̂),    (2)

where bij is the Reynolds anisotropy tensor nondimensionalized by k. The mean strain rate and mean rotation rate tensors nondimensionalized by a turbulent time scale are denoted by Ŝ and Ω̂, respectively. The turbulent time scale can be constructed by means of the local turbulent dissipation rate ε and the turbulent kinetic energy k, as suggested by Pope [6]. Explicit expressions of equation (2) have been proposed in a variety of forms, with examples found in [6, 7, 8, 9]. Generally, the classical nonlinear viscosity models yield more accurate predictions of the Reynolds stress anisotropy and allow the calculation of secondary flows,
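As a concrete illustration of the quantities in equations (1) and (2), the following Python sketch computes the anisotropy tensor aij from a Reynolds stress tensor and nondimensionalizes it. The numerical values are made up purely for illustration, and the normalization b = a/(2k) is Pope's convention [6], which we assume is the one intended by "nondimensionalized by k":

```python
import numpy as np

def anisotropy(uu, k):
    """Reynolds anisotropy tensor a_ij = <u_i u_j> - (2/3) k delta_ij."""
    return uu - (2.0 / 3.0) * k * np.eye(3)

# Hypothetical Reynolds stress tensor <u_i u_j> at a single point
uu = np.array([[0.8, 0.1, 0.0],
               [0.1, 0.4, 0.0],
               [0.0, 0.0, 0.3]])

k = 0.5 * np.trace(uu)   # turbulent kinetic energy k = <u_i u_i> / 2
a = anisotropy(uu, k)    # dimensional anisotropy, left-hand side of Eq. (1)
b = a / (2.0 * k)        # nondimensional b_ij entering Eq. (2)

# Anisotropy tensors are trace-free by construction
assert abs(np.trace(b)) < 1e-12
```

Both a and b are symmetric and trace-free; a nonlinear closure of the form of equation (2) maps the nondimensional mean-flow tensors to b.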
yet are not widely used due to inconsistent performance improvement [5, 10].

It is well known that turbulence modeling in the near-wall region should take the effect of the fluid viscosity into account, because the local turbulence Reynolds number ReL = k²/(εν) tends to zero approaching the wall, where ν denotes the kinematic viscosity of the fluid. Classically, the near-wall effect is accommodated by means of damping functions applied to the modeled isotropic turbulent viscosity νT. As an example given by Jones and Launder [11], in association with the k-ε model the turbulent viscosity is given as

    νT = fμ Cμ k²/ε,    (3)

with a calibrated constant Cμ = 0.09 and a damping function

    fμ = exp(−2.5 / (1 + ReL/50)).    (4)

Equation (3) reduces to the standard k-ε formulation away from the wall.

Alongside the growing popularity of machine learning methods in turbulence simulations (a recent review is given by Brunton et al. [12]), deep neural networks (DNN) have been introduced for developing RANS turbulence models in recent years. A deep neural network establishes a transformation of input features through multiple nonlinear interactions to an output, which enables the learning of nonlinear turbulence models from high-fidelity simulation data, i.e. data from direct numerical simulations (DNS) or large eddy simulations (LES). Deep neural networks have gained attention in turbulence modeling partially due to their strong performance in other research fields, including e.g. image classification [13] and speech recognition [14]. Zhang and Duraisamy [15] predicted a correction factor for the turbulent production term using neural networks. Ling et al. [10] designed a DNN architecture to model the turbulence closure which enables the reproduction of secondary flows in duct flow. Weatheritt et al. [16] applied DNN in turbulence modelling; their model applied to jets in crossflow yields an improvement in the prediction of the Reynolds stress anisotropy over the model based on the linear relationship. Zhang et al. [17] predicted the Reynolds stress anisotropy in channel flows using DNN.

Given an appropriate input, a nonlinear turbulence model based on DNN predicts the nondimensionalized Reynolds-stress anisotropy tensor bij. One major concern in using DNN for turbulence modeling is the selection of the input features. Consistent with the classical nonlinear turbulence models, almost all DNN-turbulence models so far select the local strain rate tensor Ŝ and rotation rate tensor Ω̂ as input features [10, 16, 17]. In addition to Ŝ and Ω̂, Zhang et al. [17] selected the wall unit y⁺ as an input feature, which takes into account the near-wall effect, and showed that a DNN with this additional input feature yields an improvement in the prediction of the Reynolds anisotropy tensor in plane channel flows. Alternatively, we propose to select the turbulence Reynolds number ReL = k²/(εν) as an additional input quantity, in order to take into account the viscosity effect near the wall. The turbulence Reynolds number is a local quantity which can be easily constructed from the available turbulent kinetic energy k and turbulent dissipation rate ε. Instead of Ŝ and Ω̂, equivalently, we select the nondimensionalized local velocity gradient for the input, because the strain rate and rotation rate tensors are a decomposition of the velocity gradient and using the velocity gradient reduces the number of items in the input.

The objective of this paper is to present a turbulence model based on DNN, which is distinguished from previous works mainly by the introduction of the local turbulence Reynolds number as an additional input feature. The DNN used in this paper is trained and tested on data obtained from direct numerical simulations of plane channel flows. The organization of this paper is as follows. We first detail the structure of the DNN for the turbulence modeling and the corresponding tuning process. Then we evaluate the predicted Reynolds stress anisotropy given by the DNN-turbulence model, followed by a conclusion.

2 Deep Neural Network

Deep neural networks are composed of multiple layers of nodes (or neurons), with each node connected to all nodes in neighboring layers, as shown schematically in Fig. 1. The input layer at the far left is provided with an input x, which is then linearly transformed by means of a weight matrix W, represented by the connecting lines in Fig. 1, and a bias vector b. The outcome Wx + b is passed to the nodes in the first hidden layer, where each component is then treated by a so-called activation function and serves as the input for the next layer. This procedure applies to the subsequent layers and terminates after giving an output vector y in the output layer. A common activation function, f(x) = max(0, x), called the Rectified Linear Unit (ReLU) [18], is applied in this work.
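The layer-by-layer transformation described above can be sketched in a few lines of NumPy. The layer sizes and random weights below are placeholders for illustration, not the tuned architecture discussed later:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified Linear Unit activation, f(x) = max(0, x)
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate x through the hidden layers with ReLU; the output layer is linear."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

# Placeholder architecture: 10 input features, two hidden layers of 30 nodes, 4 outputs
sizes = [10, 30, 30, 4]
weights = [rng.normal(size=(n, m)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

y = forward(rng.normal(size=10), weights, biases)
assert y.shape == (4,)
```

Each `W @ x + b` is the linear transformation of one layer; stacking them with the nonlinear ReLU in between is what allows the network to represent a nonlinear stress-strain relationship.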
The objective of the DNN is to learn a mapping f : X → Y on a training dataset constructed with values sampled from the input space X and correspondingly from the output space Y. In order to propose a mapping, the DNN minimizes a loss function in terms of W and b on the training dataset by means of gradient-based methods. In this work, we use the mean squared error (MSE) between the predicted and sampled values as the loss function. The back-propagation algorithm [19] calculates the gradients of the loss function with respect to the weights and the biases. The gradient descent algorithm then updates the weights and the biases by means of the obtained gradients multiplied by a learning rate η. We use the Adam method [20], which calculates an adaptive learning rate for the computation of the gradients.

Fig. 1: Structure of a deep neural network.

2.1 Data set

The dataset for training and testing the model is composed of statistical quantities obtained from direct numerical simulations of channel flows conducted by Lee and Moser [21] and Moser et al. [22], which are available under http://turbulence.ices.utexas.edu. Six flows characterized by Reynolds numbers computed on the basis of friction velocities are considered for assembling the dataset. The corresponding Reynolds numbers are Reτ = 390, 550, 590, 1000, 2000 and 5200, respectively. The case with Reτ = 590 is retrieved for composing the validation dataset, which serves for the determination of the hyperparameters of the DNN-turbulence model, and the case with Reτ = 1000 for testing. The remaining data constitute the training dataset.

Based on the hypothesis of local dependency, one entry in the input for the DNN is composed of the local turbulence Reynolds number ReL and the local nondimensionalized velocity gradient, as argued in the introduction. The output is the Reynolds anisotropy tensor bij, where b13 and b23 are zero and excluded due to the symmetry of the geometrical configuration of plane channel flows.

2.2 Training of the DNN-turbulence model

The training of a neural network is essentially an optimization process consisting of updating the weights and biases. The initial values of the biases are unproblematic and simply set to zero, whereas an inappropriate initialization of the weights might lead to vanishing gradients that prevent the update from proceeding [23]. Following the suggestion of He et al. [24], we initialize the weights from Gaussian distributions with zero mean and standard deviations given by the He value, √(2/n), where n is the number of nodes in the preceding layer. This initialization method is generally applied in association with the ReLU activation function.

Since training a neural network is analogous to fitting a regression model to the data, it carries the risk of over-fitting, i.e. obtaining high accuracy on the training data yet being inaccurate on data not observed by the DNN during training. In order to suppress over-fitting, we apply weight decay [25], which adds a regularization term (1/2)λWᵀW to the loss function.

Four hyperparameters remain to be determined: the coefficient λ in the regularization term, the initial learning rate η, the number of hidden layers nl, and the number of nodes in each layer nn. In order to determine these hyperparameters, we sample them randomly in a space around the parameters used by Zhang et al. [17], who trained a DNN for predicting the Reynolds anisotropy tensor in channel flows. Based on these samplings, we first select η = 2.5 × 10⁻⁷, which guarantees convergence and is not so small that the computational time for a training becomes unacceptable. Zhang et al. [17] observed that the Reynolds anisotropies predicted by a DNN-turbulence model exhibit unexpected oscillations due to over-fitting. We select λ = 0.001 for the weight decay, which effectively reduces the over-fitting and thereby the oscillations.

With λ and η selected, we generate a two-dimensional grid with nl = (3, 4, 5, 6, 7) and nn = (10, 20, 30). We evaluate the root mean squared error (RMSE) loss on the validation dataset for each item on the grid. The RMSEs are calculated by summing over all non-zero and non-identical components of the symmetric tensor bij in all entries of a dataset. Due to the random nature of training a neural network, we conduct the evaluation 10 times for each combination of nl and nn. The averaged values are given in Table 1. We select the trained DNN with nl = 5 and nn = 30 as the final DNN-turbulence model, which yields the smallest error on the validation dataset.
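The initialization and regularized loss described in this subsection can be sketched as follows. The shapes are our reading of the text (nine velocity-gradient components plus ReL as inputs, four output components b11, b22, b33, b12) and should be treated as assumptions; the He standard deviation is √(2/n) with n the fan-in, and the penalty is (1/2)λ·Σ‖W‖²:

```python
import numpy as np

rng = np.random.default_rng(1)
LAMBDA = 0.001   # weight-decay coefficient selected above

def he_init(n_in, n_out):
    # He et al. [24]: zero-mean Gaussian with standard deviation sqrt(2 / fan_in)
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_out, n_in))

def regularized_mse(pred, target, weights):
    """MSE loss plus the weight-decay term (1/2) * lambda * sum ||W||^2."""
    mse = np.mean((pred - target) ** 2)
    penalty = 0.5 * LAMBDA * sum(np.sum(W * W) for W in weights)
    return mse + penalty

# nl = 5 hidden layers of nn = 30 nodes, as selected from the validation grid;
# input size 10 and output size 4 are assumptions based on Secs. 1 and 2.1
sizes = [10] + [30] * 5 + [4]
weights = [he_init(m, n) for m, n in zip(sizes[:-1], sizes[1:])]

loss = regularized_mse(np.zeros(4), np.full(4, 0.1), weights)
assert loss > 0.01   # the MSE part alone contributes 0.1**2 = 0.01
```

In practice the Adam update [20] would then lower this loss iteratively; the weight-decay penalty discourages large weights and thereby the over-fitting oscillations reported by Zhang et al. [17].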
    nl    nn = 10    nn = 20    nn = 30
    3     0.0263     0.0234     0.0206
    4     0.0252     0.0222     0.0202
    5     0.0266     0.0204     0.0195
    6     0.0235     0.0215     0.0198
    7     0.0255     0.0209     0.0198

Table 1: Averaged RMSE evaluated on the validation dataset.

3 Prediction of the Reynolds anisotropy tensor

The trained DNN represents a DNN-turbulence model which predicts the Reynolds anisotropy tensor for inputs not seen during training. In order to evaluate the predictive capability of this model, the predicted Reynolds anisotropy tensor is compared with the true values obtained from DNS for the test case Reτ = 1000 in Fig. 2. Due to the symmetry of the geometric configuration of the plane channel flow mentioned above, b13 and b23 are zero and not considered. The DNN-turbulence model reproduces the Reynolds anisotropy tensor in very good agreement with the DNS values. The classical linear models cannot predict the full anisotropy tensor and are restricted to predicting b12, because only the component Ŝ12 of the strain rate tensor is non-zero in plane channel flows. The component b12 given by linear models with and without the damping effect, computed on the basis of equation (1), is plotted in Fig. 3 along with the DNN prediction and the DNS values. While the prediction given by the linear model without damping is acceptable in the inner region of the channel, the error in the near-wall region is obviously large. The linear model with damping yields better agreement across the whole channel, yet is clearly not as good as the prediction of the DNN model, especially in the near-wall region.

A quantification of the prediction error in terms of the RMSE is given in Table 2, along with RMSEs obtained by turbulence models based on deep neural networks from previous works [10, 17], though not of exactly the same geometrical and computational configuration. The DNN applied in the present work achieves a significant improvement in prediction accuracy, demonstrating the merit of using the turbulence Reynolds number as an additional input feature for data-driven turbulence modelling.

Fig. 2: The Reynolds anisotropy tensor predicted by DNN and obtained from DNS.
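The linear-model curves compared in this section follow directly from equations (1), (3), and (4). A minimal sketch, assuming Pope's normalization b12 = a12/(2k) and illustrative input values:

```python
import numpy as np

def f_mu(re_l):
    # Jones-Launder damping function, Eq. (4)
    return np.exp(-2.5 / (1.0 + re_l / 50.0))

def b12_linear(k, eps, dudy, nu, damped=True, c_mu=0.09):
    """b12 = a12 / (2k) = -nu_T * S12 / k, with S12 = dudy / 2, from Eqs. (1) and (3)."""
    nu_t = c_mu * k ** 2 / eps          # Eq. (3) without damping
    if damped:
        nu_t *= f_mu(k ** 2 / (eps * nu))   # ReL = k^2 / (eps * nu)
    return -nu_t * (0.5 * dudy) / k

# Illustrative (made-up) local values of k, eps, du/dy, and nu
undamped = b12_linear(1.0, 1.0, 1.0, 1e-3, damped=False)
damped = b12_linear(1.0, 1.0, 1.0, 1e-3)

# Damping always reduces the magnitude of the modeled b12, since 0 < f_mu < 1
assert abs(damped) < abs(undamped)
```

Near the wall ReL → 0 and fμ → exp(−2.5) ≈ 0.08, strongly suppressing the modeled stress; far from the wall fμ → 1 and the undamped formulation is recovered, consistent with the behavior seen in Fig. 3.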
Fig. 3: The non-zero component b12 predicted by DNN and by the linear model with and without damping, compared to DNS values.

    Flow                                     RMSE
    Duct flow [10]                           0.14
    Flow over wavy wall [10]                 0.08
    Channel flow [17] (using y⁺)             0.05
    Channel flow (this work) (using ReL)     0.02

Table 2: Comparison of RMSEs of the Reynolds anisotropy tensor.

4 Conclusion

In conclusion, we trained a nonlinear DNN-turbulence model which takes the near-wall effect into account by adding the local turbulence Reynolds number to the input features. The model was shown to predict the Reynolds stress anisotropy tensor accurately in a test channel flow. The proposed model has the potential to be deployed to similar and even more complicated flow problems, though the selection of the hyperparameters of the DNN and the model accuracy remain open questions, which we will address in future work.

Acknowledgement

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11902275 and 11772273) and the Fundamental Research Funds for the Central Universities (Grant No. 2682020CX46).

Conflict of interest

The authors declare that they have no conflict of interest.

References

1. Kolmogorov A. N. The equations of turbulent motion in an incompressible fluid [J]. Doklady Akademii Nauk SSSR, 1941, 30(4): 341-343.
2. Spalart P., Allmaras S. A one-equation turbulence model for aerodynamic flows [J]. Recherche Aerospatiale, 1994, 1: 5-21.
3. Launder B. E., Sharma B. I. Application of the energy-dissipation model of turbulence to the calculation of flow near a spinning disc [J]. Letters in Heat and Mass Transfer, 1974, 1(2): 131-137.
4. Wilcox D. C. Multiscale model for turbulent flows [J]. AIAA Journal, 1988, 26(11): 1311-1320.
5. Pope S. B. Turbulent Flows [M]. Cambridge, Britain: Cambridge University Press, 2000.
6. Pope S. B. A more general effective-viscosity hypothesis [J]. Journal of Fluid Mechanics, 1975, 72(2): 331-340.
7. Gatski T. B., Speziale C. G. On explicit algebraic stress models for complex turbulent flows [J]. Journal of Fluid Mechanics, 1993, 254: 59-78.
8. Rubinstein R., Barton J. M. Nonlinear Reynolds stress models and the renormalization group [J]. Physics of Fluids A, 1990, 2(8): 1472-1476.
9. Craft T. J., Launder B. E. A Reynolds stress closure designed for complex geometries [J]. International Journal of Heat and Fluid Flow, 1996, 17(3): 245-254.
10. Ling J., Kurzawski A., Templeton J. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance [J]. Journal of Fluid Mechanics, 2016, 807: 155-166.
11. Jones W. P., Launder B. E. The prediction of laminarization with a two-equation model of turbulence [J]. International Journal of Heat and Mass Transfer, 1972, 15(2): 301-314.
12. Brunton S. L., Noack B. R., Koumoutsakos P. Machine learning for fluid mechanics [J]. Annual Review of Fluid Mechanics, 2020, 52(1): 477-508.
13. LeCun Y., Bengio Y., Hinton G. Deep learning [J]. Nature, 2015, 521: 436-444.
14. Hinton G., Deng L., Yu D., Dahl G., Mohamed A.-r., Jaitly N., Senior A., Vanhoucke V., Nguyen P., Sainath T., Kingsbury B. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups [J]. IEEE Signal Processing Magazine, 2012, 29: 82-97.
15. Zhang Z. J., Duraisamy K. Machine learning methods for data-driven turbulence modeling [C]. AIAA Computational Fluid Dynamics Conference, 2015, AIAA Paper 2015-2460.
16. Weatheritt J., Sandberg R., Ling J., Saez G., Bodart J. A comparative study of contrasting machine learning frameworks applied to RANS modeling of jets in crossflow [C]. In: Turbomachinery Technical Conference and Exposition, Charlotte, US, 2017.
17. Zhang Z., Song X.-D., Ye S.-R., Wang Y.-W., Huang C.-G., An Y.-R., Chen Y.-S. Application of deep learning method to Reynolds stress models of channel flow based on reduced-order modeling of DNS data [J]. Journal of Hydrodynamics, 2019, 31: 58-65.
18. Maas A. L., Hannun A. Y., Ng A. Y. Rectifier nonlinearities improve neural network acoustic models [C]. In: International Conference on Machine Learning, Atlanta, United States, 2013.
19. Rumelhart D. E., Hinton G. E., Williams R. J. Learning representations by back-propagating errors [J]. Nature, 1986, 323(6088): 533-536.
20. Kingma D., Ba J. Adam: A method for stochastic optimization [C]. Proceedings of the 3rd International Conference on Learning Representations, 2015.
21. Lee M., Moser R. D. Direct numerical simulation of turbulent channel flow up to Reτ ≈ 5200 [J]. Journal of Fluid Mechanics, 2015, 774: 395-415.
22. Moser R., Kim J., Mansour N. Direct numerical simulation of turbulent channel flow up to Reτ ≈ 590 [J]. Physics of Fluids, 1999, 11: 943-945.
23. Goodfellow I., Bengio Y., Courville A. Deep Learning [M]. MIT Press, 2016, http://www.deeplearningbook.org.
24. He K.-M., Zhang X.-Y., Ren S.-Q., Sun J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification [C]. IEEE International Conference on Computer Vision, 2015, pages 1026-1034.
25. Krogh A., Hertz J. A simple weight decay can improve generalization [C]. International Conference on Neural Information Processing Systems, 1992, pages 950-957.
