
UNIVERSITY OF MICHIGAN-FLINT

MASTER'S THESIS

Applying Physics-Informed Neural Networks to
the Solution of Parametric, Nonlinear Heat
Conduction Problems

Author: Collin Russell
Supervisor: Dr. Matthew Spradling

A thesis submitted in partial fulfillment of the requirements
for the degree of Master of Science
in
Computer Science & Information Systems
of the
College of Innovation & Technology

December 18, 2023



1st Reader: Dr. Matthew Spradling

2nd Reader: Dr. James Alsup

Declaration of Authorship
I, Collin Russell, declare that this thesis titled, “Applying Physics-Informed Neural Networks
to the Solution of Parametric, Nonlinear Heat Conduction Problems” and the work presented
in it are my own. I confirm that:

• This work was done wholly or mainly while in candidature for a research degree at this
University.

• Where any part of this thesis has previously been submitted for a degree or any other
qualification at this University or any other institution, this has been clearly stated.

• Where I have consulted the published work of others, this is always clearly attributed.

• Where I have quoted from the work of others, the source is always given. With the excep-
tion of such quotations, this thesis is entirely my own work.

• I have acknowledged all main sources of help.

• Where the thesis is based on work done by myself jointly with others, I have made clear
exactly what was done by others and what I have contributed myself.

Signed:

Date:

UNIVERSITY OF MICHIGAN-FLINT

Abstract

College of Innovation & Technology
Master of Science

Applying Physics-Informed Neural Networks to the Solution of Parametric, Nonlinear Heat
Conduction Problems

by Collin Russell

Physics-informed neural networks (PINNs), and the many variants they have inspired, have
the potential to reshape how scientists and engineers solve computational physics problems.
Despite their relatively recent introduction to the scientific community, PINNs have been suc-
cessfully used to approximate the solution to a variety of computational physics problems, in-
cluding heat transfer problems. In this work, a PINN is trained to solve a 2-D, steady-state heat
conduction problem inspired by the practical problem of finding the cross-sectional, steady-
state temperature field in the insulation module of a box furnace. The problem is formulated
such that (i) the thermal conductivity of the heat-conducting medium is modeled as a quadratic
function of temperature and (ii) the coefficients of the quadratic function are parameterized
and passed to the PINN as additional inputs. The PINN is trained exclusively by minimizing
physics residuals (i.e., no data is used in the training process), and its accuracy is evaluated
using a testing dataset composed of eight numerical solutions, each of which is associated with
a unique, commercially available high-temperature insulation material. The results obtained in
this work demonstrate the feasibility of training a PINN (without any training data) to accu-
rately solve heat conduction problems involving parameterized, temperature-dependent mate-
rial property models. To the author’s knowledge, this is the first time this capability has been
demonstrated.

Acknowledgements
I would like to thank Dr. Spradling for his guidance over the course of completing this thesis.
His propensity to ask the right questions and his ability to identify weaknesses in logical argu-
ments were particularly appreciated, in addition to his positive, encouraging attitude. I would
also like to thank my wife and my son for their unwavering love, support, and patience over
the past year; I am immensely grateful to play this game of life on such a wonderful team.

Contents

Declaration of Authorship iii

Abstract v

Acknowledgements vii

1 Introduction 1

2 Literature Review 3

3 Theoretical Background 7
3.1 Feedforward Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Physics-Informed Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 9

4 Problem Description 13
4.1 Introduction to the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.2 Dimensional Form of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3 Dimensionless Form of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 19

5 Methodology & Implementation 21


5.1 Generation of Testing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.2 Architecture of the PINN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.3 Training the PINN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

6 Results 31
6.1 Presentation of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2 Discussion of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

7 Conclusion 37
7.1 Contributions of this Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
7.2 Opportunities for Future Research . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

8 Appendix I 39

9 Appendix II 43

10 Appendix III 45

List of Figures

3.1 A three-layer FNN composed of a single-node input layer (gray), a three-node
hidden layer (blue), a two-node hidden layer (blue), and a single-node output
layer (green). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.2 A prototypical 2-D, steady-state, linear heat conduction problem. . . . . . . . . . 11
3.3 Schematic of the PINN used to approximate the solution to the example problem
shown in Fig. 3.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

4.1 A resistively heated, benchtop muffle furnace [27]. . . . . . . . . . . . . . . . . . . 13
4.2 The general, dimensional form of the heat conduction problem considered in this
work. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3 The nonlinear, dimensional form of the problem. . . . . . . . . . . . . . . . . . . . 16
4.4 Tabulated temperature-dependent thermal conductivity data and corresponding
best-fit quadratic equations for the eight reference insulation materials. . . . . . . 18
4.5 Computed quadratic equation coefficients and corresponding coefficients of de-
termination for the eight reference insulation materials. . . . . . . . . . . . . . . . 18
4.6 The nonlinear, dimensionless form of the problem. . . . . . . . . . . . . . . . . . . 20

5.1 Numerical solution u∗ ( x ∗ , y∗ ) for each problem instance, i.e., each reference ma-
terial, included in the testing dataset. . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Domain collocation points projected onto 2-D plane defined by x ∗ , y∗ . . . . . . . . 24
5.3 Schematic of the PINN used to approximate the solution to the problem of interest. 26
5.4 Domain collocation points projected onto the 3-D spaces defined by x ∗ , y∗ , α (top),
x ∗ , y∗ , β (middle), and x ∗ , y∗ , η (bottom). . . . . . . . . . . . . . . . . . . . . . . . . 27
5.5 Boundary collocation points projected onto the 3-D spaces defined by x ∗ , y∗ , α
(top), x ∗ , y∗ , β (middle), and x ∗ , y∗ , η (bottom). . . . . . . . . . . . . . . . . . . . . 28
5.6 Thermal conductivity vs. temperature curves associated with collocation points
generated by uniformly sampling the dimensionless parameter space specified
in Table 4.3. The number of training curves (black) included in the top plot is 800,
the minimum number of collocation points assigned to any subdomain/boundary,
and the number included in the bottom plot is 12000, the maximum number of
collocation points assigned to any subdomain/boundary. . . . . . . . . . . . . . . 29

6.1 Solution prediction û∗ ( x ∗ , y∗ ) for each problem instance, i.e., each reference ma-
terial, included in the testing dataset. . . . . . . . . . . . . . . . . . . . . . . . . . . 32

6.2 Error û∗ ( x ∗ , y∗ ) − u∗ ( x ∗ , y∗ ) for each problem instance, i.e., each reference mate-
rial, included in the testing dataset. . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.3 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 1. A key for the boundary numbering scheme used is
provided in Fig. 6.4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.4 Key for the boundary numbering scheme used in Fig. 6.3. . . . . . . . . . . . . . . 36

8.1 Estimated natural convection HTCs along each physically unique surface of the
furnace insulation module, assuming us = 440 K and thus u∗s = 0.346. . . . . . . 40
8.2 Estimated natural convection HTCs along each physically unique surface of the
furnace insulation module, assuming us = 610 K and thus u∗s = 0.479. . . . . . . 41

10.1 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
10.2 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
10.3 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
10.4 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
10.5 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
10.6 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10.7 Comparison of û∗ and u∗ along the domain boundaries in the problem instance
associated with Material 8. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

List of Tables

4.1 The dimensional parameter space. . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2 Product name and manufacturer for each of the eight reference insulation mate-
rials [28]–[32]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 The dimensionless parameter space. . . . . . . . . . . . . . . . . . . . . . . . . . . 20

5.1 The weights used to compute the PINN’s composite loss. . . . . . . . . . . . . . . 30

6.1 RRMSE of each solution prediction. . . . . . . . . . . . . . . . . . . . . . . . . . . 34



1 Introduction

Computational physics problems generally involve solving differential equations. Numerical
methods such as the finite difference method (FDM), finite volume method (FVM), and finite
element method (FEM) are used extensively in this discipline because most real-world problems
involve complex geometries, mixed boundary conditions, and/or nonlinearities that preclude
the use of analytical solution methods [1], [2]. Numerical methods, however, are not without
disadvantages; applying numerical methods to the solution of computational physics problems
typically requires:

1. Discretizing the domain of interest;

2. Transforming the governing differential equation(s) and imposed boundary and/or initial
condition(s) into approximate, algebraic equations valid at discrete points in space and/or
time; and

3. Solving the resultant system(s) of equations to obtain an approximate solution, which can
be a computationally expensive, time-intensive process.
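The three-step workflow above can be illustrated with a minimal finite-difference sketch for a 1-D analogue: steady-state heat conduction with constant conductivity and illustrative Dirichlet boundary values. All values here are hypothetical and are not drawn from the thesis problem.

```python
import numpy as np

# Toy problem: 1-D steady-state heat conduction u''(x) = 0 on [0, 1]
# with u(0) = 0 and u(1) = 1 (illustrative boundary values).

# Step 1: discretize the domain into N interior points.
N = 9
x = np.linspace(0.0, 1.0, N + 2)  # includes the two boundary points
h = x[1] - x[0]

# Step 2: replace u'' with the central-difference approximation,
# yielding one algebraic equation per interior point:
# u_{i-1} - 2 u_i + u_{i+1} = 0.
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < N - 1:
        A[i, i + 1] = 1.0
# The known boundary values enter through the right-hand side.
b[0] -= 0.0   # u(0) = 0
b[-1] -= 1.0  # u(1) = 1

# Step 3: solve the resultant system of equations.
u_interior = np.linalg.solve(A, b)
u = np.concatenate(([0.0], u_interior, [1.0]))

# The exact solution is u(x) = x, which the FDM recovers here.
print(np.allclose(u, x))
```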

Data-driven models leveraging the function approximation capabilities of neural networks
[3], [4] are an attractive alternative to models applying numerical methods because they are not
subject to the requirements above and can, once trained, return accurate results quickly with
minimal computational effort. The availability of graphics processing units (GPUs) capable
of massively parallel, general-purpose computation (e.g., NVIDIA’s CUDA-compatible GPUs
[5]) and the accessibility of powerful, open-source machine learning software (e.g., Python’s Py-
Torch [6] and TensorFlow [7] libraries) also contribute to the increasing interest in applying neu-
ral networks to computational physics problems. Conventional (i.e., physics-naive) neural net-
works, however, typically require large training datasets to accurately approximate a function
over the domain of interest. When applied to problems for which data acquisition, including
the collection of empirically derived ground-truth data and/or the generation of simulation-
derived data, is prohibitively expensive or time intensive, this reliance on training data can be a
major limitation. Many computational physics problems, including heat transfer problems, fall
into this so-called “small data” regime [8], which diminishes the utility of conventional neural
networks in this application space.
Physics-informed neural networks (PINNs) cleverly overcome the data dependence of con-
ventional neural networks by enabling the incorporation of prior knowledge, often laws of
physics expressed in the form of differential equations, into their training. More specifically,

PINNs replace (or augment) the computation of data-based loss with the computation of physics-
based loss that indicates how closely the network’s predictions abide by the governing differ-
ential equation(s) and imposed boundary and/or initial condition(s) [8]. By minimizing this
loss, PINNs are trained to respect the laws of physics. This incorporation of prior knowledge –
information that conventional neural networks cannot utilize – enables PINNs to solve complex
physics problems with a relatively high degree of accuracy, even when applied to problems for
which little (or zero) training data is available [8].
The literature contains numerous examples of applying PINNs to the solution of computa-
tional physics problems, including computational heat transfer problems. Similarly, the litera-
ture contains multiple examples of training PINNs without training data. This work is unique
in that it applies a PINN to the solution of a nonlinear heat transfer problem, more specifically
a nonlinear heat conduction problem, in which material properties are assumed to be tempera-
ture dependent and are modeled using parametric functions of temperature. Training a PINN
to accurately solve this type of problem over a reasonably large parameter space would demon-
strate that a single PINN can be trained to solve a general heat conduction problem for many
different nonlinear materials.1 To the author’s knowledge, this has never been shown previ-
ously.
The remainder of this paper is organized as follows:

• Chapter 2 reviews related works in the literature;

• Chapter 3 presents the fundamental theoretical aspects of PINNs in the context of feedfor-
ward neural networks, which provide the foundation upon which most PINNs are built;

• Chapter 4 specifies the heat conduction problem considered in this work;

• Chapter 5 describes the PINN configuration applied in this work and the methods em-
ployed to generate the testing data used to evaluate the PINN’s accuracy;

• Chapter 6 presents the PINN’s post-training solution predictions and discusses the mag-
nitude and likely causes of the observed prediction error; and

• Chapter 7 summarizes the value of this work and identifies opportunities for future re-
search.

1 Here, the term "nonlinear materials" refers to materials with temperature-dependent properties. This temperature dependency makes the governing differential equation (the heat equation) nonlinear.

2 Literature Review

The theoretical capability of neural networks to approximate the solutions to differential equa-
tions has been established for decades. In proving that multilayer feedforward neural networks
(FNNs) are universal function approximators, Hornik et al. [3] and Cybenko [4] implicitly
proved that FNNs of sufficient size and with sufficient training can accurately approximate the
solutions to differential equations, including the differential equations that govern heat transfer
phenomena.
The application of neural networks to the solution of heat transfer problems is also far from
new. In 1989, the same year that Hornik et al. and Cybenko’s works were published, Watanabe
et al. used a multilayer FNN to detect incipient thermal-, chemical-, and flow-related faults in
a temperature-controlled chemical reactor [9]. Shortly thereafter, Thibault et al. applied FNNs
to a set of data correlation tasks including mapping the voltage generated by a type-K ther-
mocouple to the thermocouple’s temperature and mapping the Rayleigh number to the Nusselt
number in problems involving natural convection heat transfer along horizontal cylinders [10].1
Numerous examples of using neural networks to solve heat conduction problems emerged
in the following years:

• Dissanayake et al. used FNNs to predict the steady-state temperature field in a 2-D region
subject to temperature-dependent internal heat generation and mixed boundary condi-
tions [11].

• Jambunathan et al., in one of the earliest examples of applying neural networks to the
solution of inverse heat conduction problems, used FNNs to predict the time-dependent,
spatially averaged convection heat transfer coefficient along the external surface of a semi-
infinite wall heated to an elevated initial temperature [12].

• Diaz et al. used neural networks to solve a set of heat transfer problems of increasing
complexity; the authors first demonstrated how a FNN with as few as one hidden node
can accurately predict the steady-state conduction heat transfer rate through a wall, and
ultimately used a multilayer FNN to predict the steady-state rate of heat transfer in a
finned-tube heat exchanger [13].

The preceding application examples illustrate the merit of using neural networks to solve
computational heat transfer problems. It is important to note, however, that in all of these
1 The Rayleigh number is a dimensionless parameter indicating the relative magnitude of buoyancy forces vs. viscous forces in a flow, and the Nusselt number is a dimensionless parameter indicating the magnitude of the temperature gradient at a fluid-surface boundary [1], [2].

examples (i) the accuracy of the neural networks was reliant upon the availability of data de-
rived from known analytical solutions, approximate numerical solutions, or laboratory experi-
ments; and (ii) the predictions output by the neural networks, while generally accurate (due to
the availability of sufficient training data), were in no way constrained by underlying laws of
physics. It is these limitations, which are inherent to conventional neural networks, that PINNs
address.
Raissi et al. introduced PINNs in 2019;2 however, the works of Psichogios et al. [16] and
Lagaris et al. [17] inspired and provided much of the theoretical foundation for their develop-
ment of the PINN framework [8], [18]. Psichogios et al. demonstrated that “hybrid” process
models composed of a neural network and a first-principles physics model incorporating prior
knowledge can significantly increase the accuracy and reduce the training data requirements of
process models composed of a neural network alone [16]. A few years later, Lagaris et al. presented a novel method of training neural networks to approximate the solution to differential equations that eliminates the need for data-based training. Given a general differential equation of the form g(x, y(x), ∇y(x), ∇²y(x), ...) = 0, x ∈ Ω, with an unknown particular solution y(x) associated with a given set of boundary and/or initial conditions, their method consists of [17]:

1. Formulating a trial solution of the form ŷ(x) = a(x) + b(x, z(x; θ)), where a and b are
functions dependent upon the form of the differential equation g, and z(x; θ) is the output
of a FNN f_NN(x; θ) with parameters θ;

2. Selecting a set of N collocation points {x_1, ..., x_i, ..., x_N} in the domain Ω;

3. Training the neural network f_NN(x; θ) with the objective of minimizing
L = (1/N) ∑_{i=1}^{N} [g(x_i, ŷ(x_i), ∇ŷ(x_i), ∇²ŷ(x_i), ...)]², the mean-squared error (MSE) of the trial
solution evaluated over the collocation points {x_1, ..., x_i, ..., x_N}; and

4. Returning the trial solution ŷ(x) as the resultant approximation of y(x).
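The four steps above might be sketched in PyTorch for a toy first-order ODE, y′(x) = −y(x) with y(0) = 1 (exact solution y = e^(−x)). The network architecture, optimizer, and iteration count are illustrative choices, not those used by Lagaris et al. or in this thesis.

```python
import torch

# Lagaris-style trial-solution method for y'(x) = -y(x), y(0) = 1.
torch.manual_seed(0)

# FNN z(x; theta) used inside the trial solution.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 16), torch.nn.Tanh(),
    torch.nn.Linear(16, 1),
)

def y_trial(x):
    # Step 1: trial solution y_hat(x) = a(x) + b(x, z(x; theta)).
    # Here a(x) = 1 and b = x * z, so y_hat(0) = 1 holds by construction.
    return 1.0 + x * net(x)

# Step 2: collocation points in the domain [0, 1].
x = torch.linspace(0.0, 1.0, 32).reshape(-1, 1).requires_grad_(True)

# Step 3: train by minimizing the squared ODE residual g = y_hat' + y_hat,
# with the derivative computed by automatic differentiation rather than
# a manually derived expression.
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    y_hat = y_trial(x)
    dydx = torch.autograd.grad(y_hat, x, torch.ones_like(y_hat),
                               create_graph=True)[0]
    loss = ((dydx + y_hat) ** 2).mean()
    loss.backward()
    opt.step()

# Step 4: the trained trial solution approximates y = exp(-x).
with torch.no_grad():
    err = (y_trial(x.detach()) - torch.exp(-x.detach())).abs().max()
print(float(err))
```

Note that the trial solution satisfies the initial condition exactly (a "hard" constraint), so only the differential-equation residual appears in the loss.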

The PINN framework presented by Raissi et al. is essentially a modern variant of this
method that uses automatic differentiation (instead of manually derived derivative expressions)
to compute the derivatives ∂ŷ/∂x_i needed to compute the loss L.3
Despite their relative youth, PINNs have been applied to the solution of many computa-
tional heat transfer problems. Some of the most prominent examples are presented in Cai et
al.’s and Hennigh et al.’s works [18], [19]. Cai et al. applied PINNs to the prediction of 2-D fluid
pressure, velocity, and temperature fields in steady-state and transient convection heat transfer
problems. The authors also applied PINNs to an inverse, transient heat conduction problem
involving a dynamic phase-transformation boundary in a prototypical 2-D domain [18]. Hen-
nigh et al. presented NVIDIA SimNet, a TensorFlow-based framework for solving parametric,
2 Raissi et al. posted preprints [14], [15] of their seminal paper in 2017.
3 PINNs also use automatic differentiation to compute derivatives of the form ∂L/∂θ, which are needed to optimize the network parameters θ [8].

complex-geometry, multiphysics problems using PINNs and demonstrated its capabilities us-
ing a number of test cases including (i) predicting the 3-D, steady-state pressure, velocity, and
temperature fields in/around a finned FPGA heat sink contained in a rectangular channel and
(ii) optimizing the parameterized geometry of a heat sink using a 3-D conjugate heat transfer
model [19].
The literature also contains numerous works that apply PINNs to problems primarily or
exclusively involving conduction heat transfer (i.e., problems primarily or exclusively involving
the solution of the heat equation). For example:

• Zobeiry et al. applied PINNs to the solution of a prototypical 2-D, transient heat conduc-
tion problem involving parameterized convection heat transfer boundary conditions and
trained the PINN using an exclusively physics-based, i.e., data-free, training process [20].

• Liao et al. used a PINN to predict the 3-D, transient temperature field in a rectangular slab
subject to localized heat generation from a moving heat source, simulating heat transfer
during a direct-energy-deposition metal additive manufacturing process. The authors
investigated the use of both data-reliant and data-free training methods [21].

• Wurth et al. applied PINNs to predict the 2-D, transient temperature field in a rectangular
domain undergoing an exothermic thermochemical curing process. The authors parame-
terized the boundary conditions and material properties to capture the effects of varying
process parameters (e.g., the setpoint autoclave temperature) and material compositions.
They also used an exclusively physics-based training process [22].

This thesis is differentiated from these earlier works in its application of PINNs to the so-
lution of conduction heat transfer problems in which material properties are assumed to be
temperature-dependent and modeled as parametric functions of temperature. While PINNs
have previously been applied to the solution of parametric and/or nonlinear heat conduc-
tion problems (e.g., Zobeiry et al. considered a problem involving parameterized boundary
conditions [20], Liao et al. considered a nonlinear problem involving temperature dependent
properties [21], and Wurth et al. considered a problem involving parameterized, temperature-
independent material properties [22]), the application of PINNs to heat conduction problems
involving parameterized, temperature-dependent material property models has, to the author’s
knowledge, never been investigated.

3 Theoretical Background

This chapter presents the theoretical basis of PINNs and demonstrates how they are applied
using a simple example problem. Because PINNs are prototypically built upon FNNs [8], the
chapter begins with a brief review of the function, structure, and mathematical representation
of FNNs.

3.1 Feedforward Neural Networks


Functionally, FNNs are function approximators. A FNN approximating y = f (x) outputs
ŷ = f NN (x; θ), where the network’s parameters θ (weights and biases) are optimized during
training to minimize the error in each prediction ŷ[k] = f NN (x[k] ; θ) relative to the corresponding
true value y[k] = f (x[k] ) [23].
Structurally, a FNN is a collection of nodes, a.k.a. artificial neurons, arranged in sequentially
interconnected layers. This structure is illustrated in Fig. 3.1 for a small three-layer network.1
Each node in a FNN computes a biased, weighted sum of its inputs, passes the sum to an
activation function, and outputs the value returned. Due to how nodes are interconnected in
FNNs, the output of each computational layer is:

a(i) = σ(i)(W(i) x(i) + b(i)), (3.1)
where a(i) contains the output of each node in layer i; σ(i) is the (typically nonlinear) activation
function applied by the nodes in layer i; W(i) contains the weights associated with each node in
layer i; x(i) contains the inputs to layer i and is equal to a(i−1) , the output of the preceding layer;
and b(i) contains the bias of each node in layer i [23].2 Using the FNN shown in Fig. 3.1 as an
example, the output of each computational layer in the network is:

1 The FNN shown in Fig. 3.1 appears to have three layers; however, the input (first) layer is typically excluded from the layer count because it is not a computational layer.
2 This mathematical representation implicitly assumes the FNN is fully connected. In this work, all of the FNNs presented are fully connected FNNs.



a(1) = σ(1)(w(1) a(0) + b(1)) = σ(1)( [w(1)_{0→0}; w(1)_{0→1}; w(1)_{0→2}] a(0)_0 + [b(1)_0; b(1)_1; b(1)_2] )

a(2) = σ(2)(W(2) a(1) + b(2)) = σ(2)( [w(2)_{0→0}, w(2)_{1→0}, w(2)_{2→0}; w(2)_{0→1}, w(2)_{1→1}, w(2)_{2→1}] [a(1)_0; a(1)_1; a(1)_2] + [b(2)_0; b(2)_1] ) (3.2)

ŷ = a(3) = σ(3)(w(3) a(2) + b(3)) = σ(3)( [w(3)_{0→0}, w(3)_{1→0}] [a(2)_0; a(2)_1] + b(3)_0 ),

where the bracketed quantities denote the vectors and matrices of Eq. 3.1, with rows separated by semicolons.

FIGURE 3.1: A three-layer FNN composed of a single-node input layer (gray), a
three-node hidden layer (blue), a two-node hidden layer (blue), and a single-node
output layer (green).

A FNN f_NN(x; θ) can therefore be represented as a sequence of function compositions in
which the number of composed functions is equal to L, the number of computational layers in
the network. Accordingly:

f_NN(x; θ) = f_NN^(L)( f_NN^(L−1)( ... f_NN^(2)( f_NN^(1)(x; θ(1)); θ(2) ) ... ; θ(L−1) ); θ(L) ), (3.3)

where, from Eq. 3.1, each composed function is of the form f_NN^(i) = σ(i)(W(i) x(i) + b(i)) [23]. Note
again that x(i) = a(i−1) = f_NN^(i−1)(x(i−1); θ(i−1)) [23].
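The layer-by-layer composition of Eqs. 3.1 and 3.3 can be sketched in NumPy for the 1-3-2-1 architecture of Fig. 3.1; the weights, biases, and choice of tanh for the activation functions σ(i) are arbitrary illustrative values.

```python
import numpy as np

# Forward pass through the 1-3-2-1 FNN of Fig. 3.1, written as the
# composition of one layer function per computational layer.
rng = np.random.default_rng(0)

# One (W, b) pair per computational layer: 1 -> 3 -> 2 -> 1.
params = [
    (rng.standard_normal((3, 1)), rng.standard_normal(3)),  # hidden layer 1
    (rng.standard_normal((2, 3)), rng.standard_normal(2)),  # hidden layer 2
    (rng.standard_normal((1, 2)), rng.standard_normal(1)),  # output layer
]

def f_nn(x, params):
    # Eq. 3.1 applied once per layer: a(i) = sigma(W(i) x(i) + b(i)),
    # with x(i) = a(i-1), i.e., each layer's output feeds the next.
    a = x
    for W, b in params:
        a = np.tanh(W @ a + b)
    return a

x0 = np.array([0.5])       # single-node input layer
y_hat = f_nn(x0, params)   # single-node output layer
print(y_hat.shape)
```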

FNNs are trained (and tested) prior to their deployment. Training a FNN is an iterative
process consisting of forward propagation, error computation, backward propagation, and parameter
optimization.

• Forward propagation is the propagation of each input x[k] through the network to gener-
ate a prediction ŷ[k] = f NN (x[k] ; θ), where x[k] contains the inputs from a single training
example k and ŷ[k] contains the corresponding predictions.

• Error computation consists of passing each prediction ŷ[k] and its corresponding true
value y[k] to a loss function, a.k.a. cost function, to compute a scalar loss L representa-
tive of the aggregate error in the network’s predictions [23].

• Backward propagation, a.k.a. backpropagation, is the propagation of loss L back through
the network to determine the impact of each network parameter θ on the loss, where
the impact is expressed as a partial derivative of the form ∂L/∂θ [23]. In modern FNNs,
these partial derivatives are typically computed by reverse-mode automatic differentiation
(hereafter referred to simply as automatic differentiation), a differentiation technique that
applies the chain rule of calculus and dynamic programming to compute derivatives at
machine precision (i.e., practically exactly) with high computational efficiency [24].

• Parameter optimization refers to the computation of new network parameters with the
objective of minimizing the network’s loss L. New parameter values are often computed
using a gradient-based optimization algorithm, which takes the partial derivatives com-
puted during backward propagation as inputs [23].
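The four stages above can be sketched as a PyTorch training loop; the dataset (y = sin x), network size, optimizer, and learning rate are illustrative choices.

```python
import torch

# One FNN training loop, broken into the four stages named above.
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 8), torch.nn.Tanh(),
                          torch.nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = torch.sin(x)

losses = []
for _ in range(100):
    # Forward propagation: predictions y_hat[k] = f_NN(x[k]; theta).
    y_hat = net(x)
    # Error computation: scalar loss via the MSE loss function.
    loss = torch.nn.functional.mse_loss(y_hat, y)
    # Backward propagation: dL/dtheta for every parameter, computed
    # by reverse-mode automatic differentiation.
    opt.zero_grad()
    loss.backward()
    # Parameter optimization: gradient-based update of theta.
    opt.step()
    losses.append(loss.item())

print(losses[-1] < losses[0])
```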

3.2 Physics-Informed Neural Networks


PINNs are differentiated from FNNs in how they are trained. In the context of solving forward,
a.k.a. direct, physics problems governed by differential equations, a FNN is trained with the
objective of minimizing the error in its predictions relative to a corresponding set of true values
(where the predictions collectively comprise the predicted solution of the problem and the true
values collectively comprise the true solution of the problem). In contrast, a PINN is trained
with the objective of minimizing the residuals of the differential equation(s) and associated
boundary and/or initial conditions applicable to the problem [8]. Accordingly, while FNNs can
only be trained to respect the laws of physics relevant to a given problem indirectly (i.e., without
any constraints imposed by the differential equation(s) and associated boundary and/or initial
conditions that express these laws of physics), PINNs are trained to respect the relevant laws of
physics directly.
Applied to a general heat transfer problem governed by the source-free heat equation, a
PINN approximates the solution to a differential equation of the form [8]:

ρc ∂u/∂t = ∂/∂x(k ∂u/∂x) + ∂/∂y(k ∂u/∂y) + ∂/∂z(k ∂u/∂z),   x, y, z ∈ Ω, t ∈ [0, T]
u(x, y, z, t) = g(x, y, z, t),   x, y, z ∈ ∂Ω, t ∈ [0, T] (3.4)
u(x, y, z, 0) = h(x, y, z),   x, y, z ∈ Ω,

where x, y, and z are the spatial coordinates; t is the temporal coordinate; ρ, c, and k are the
density, specific heat capacity, and thermal conductivity of the material(s) in the domain Ω;
and u( x, y, z, t) is the unknown solution of the differential equation with boundary conditions
g( x, y, z, t) and initial condition h( x, y, z). This is accomplished by creating a FNN to approxi-
mate u( x, y, z, t) and optimizing its parameters to minimize a composite, weighted loss function
of the form:

L = λ_D L_D + λ_B L_B + λ_I L_I, (3.5)
where L is the composite loss; L_D, L_B, and L_I are physics-based loss components associated
with the differential equation, boundary conditions, and initial condition respectively; and λ_D,
λ_B, and λ_I are weights corresponding to loss components L_D, L_B, and L_I respectively [8], [18].3
If the MSE loss function is applied, as it is in this work, L_D, L_B, and L_I are:

L_D = (1/N_D) ∑_{k=1}^{N_D} (R_D^[k])²
L_B = (1/N_B) ∑_{k=1}^{N_B} (R_B^[k])² (3.6)
L_I = (1/N_I) ∑_{k=1}^{N_I} (R_I^[k])²,

where the differential equation, boundary condition, and initial condition residuals R_D^[k], R_B^[k],
and R_I^[k] associated with a given collocation point k, are [8]:

R_D^[k] = ρc (∂û/∂t)^[k] − (∂/∂x(k ∂û/∂x))^[k] − (∂/∂y(k ∂û/∂y))^[k] − (∂/∂z(k ∂û/∂z))^[k]
R_B^[k] = û(x^[k], y^[k], z^[k], t^[k]) − g(x^[k], y^[k], z^[k], t^[k]) (3.7)
R_I^[k] = û(x^[k], y^[k], z^[k], 0) − h(x^[k], y^[k], z^[k]);

and N_D, N_B, and N_I are the number of collocation points used to enforce the differential equation, boundary conditions, and initial condition respectively during the training process.
It should be noted that PINNs can also be trained with data, in which case an additional
data-based loss component Ldata and a corresponding weight λdata are added to Eq. 3.5 [8].
In such cases, the physics-based, unsupervised learning process is combined with a data-based,
supervised learning process.
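As a concrete illustration of Eqs. 3.5–3.6, the composite loss is simply a weighted sum of mean-squared residuals evaluated at the collocation points. A minimal NumPy sketch (the residual arrays and weight values below are made up for illustration):

```python
import numpy as np

def mse(residuals):
    # Mean squared error of a residual array (Eq. 3.6).
    return np.mean(residuals ** 2)

def composite_loss(r_D, r_B, r_I, lam_D=1.0, lam_B=1.0, lam_I=1.0):
    # Weighted sum of the physics-based loss components (Eq. 3.5).
    return lam_D * mse(r_D) + lam_B * mse(r_B) + lam_I * mse(r_I)

# Hypothetical residual values at N_D, N_B, and N_I collocation points.
rng = np.random.default_rng(0)
r_D = rng.normal(size=1000)   # differential-equation residuals
r_B = rng.normal(size=200)    # boundary-condition residuals
r_I = rng.normal(size=200)    # initial-condition residuals

L = composite_loss(r_D, r_B, r_I, lam_D=0.3, lam_B=1.0, lam_I=1.0)
```

During training, an optimizer drives L toward zero by adjusting the network parameters on which the residuals depend.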
As a simple example, consider the linear, nonparametric heat conduction problem shown
schematically in Fig. 3.2. The objective of the problem is to find the steady-state temperature
field u( x, y) in the domain given the imposed temperature (Dirichlet) boundary conditions ap-
plied to its left and bottom boundaries and the convection (Robin) boundary conditions applied
to its right and top boundaries. This problem is of the form:
3 The composite loss function shown in Eq. 3.5 implicitly assumes that the boundary conditions are enforced as "soft" constraints (i.e., via loss penalization). Alternative approaches enable enforcing the boundary conditions as "hard" constraints, which force the predictions to satisfy the boundary conditions automatically (i.e., without loss penalization) [17], [25], [26].
3.2. Physics-Informed Neural Networks 11

∂²u/∂x² + ∂²u/∂y² = 0,   x ∈ (0, L), y ∈ (0, L)
u = u0,   x ∈ [0, L], y = 0
u = u0,   x = 0, y ∈ [0, L]          (3.8)
−k ∂u/∂y = h(u − u∞),   x ∈ (0, L], y = L
−k ∂u/∂x = h(u − u∞),   x = L, y ∈ (0, L],
where x and y are the spatial coordinates, both of which are bounded by [0, L]; u0 is the im-
posed temperature along the left and bottom boundaries; u∞ is the ambient temperature of the
surrounding fluid; k is the thermal conductivity of the domain; h is the convection heat transfer
coefficient (HTC) along the right and top boundaries; and u( x, y) is the unknown solution of
the differential equation with the specified boundary conditions.

F IGURE 3.2: A prototypical 2-D, steady-state, linear heat conduction problem.

The solution to this problem, the steady-state temperature field u(x, y), can be approximated by the PINN shown in Fig. 3.3. This PINN is built upon a two-input, one-output FNN composed of L computational layers and M nodes per hidden layer. Given a pair of inputs x, y, the FNN outputs the corresponding prediction û(x, y), i.e., the predicted temperature at position (x, y).
During training, the physics module applies automatic differentiation to differentiate each prediction û^[k] with respect to x^[k] and y^[k] and uses these derivatives to compute the physics-based loss components L_D and L_B. The composite loss L is then differentiated with respect to the network parameters θ, and the resultant derivatives, also computed using automatic differentiation, are used to optimize the network parameters such that L is minimized.
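Automatic differentiation evaluates these derivatives exactly (to floating-point precision) by applying the chain rule through the network. The idea can be illustrated in NumPy for a one-hidden-layer tanh network by writing out the chain rule by hand, which is what reverse-mode automatic differentiation evaluates, and checking it against a central finite difference. The weights here are arbitrary random values, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer
w2, b2 = rng.normal(size=8), rng.normal()              # output layer

def u_hat(x, y):
    # Forward pass: tanh hidden layer, identity output.
    a = np.tanh(W1 @ np.array([x, y]) + b1)
    return w2 @ a + b2

def du_dx(x, y):
    # Chain rule: d tanh(z)/dz = 1 - tanh(z)^2, and dz/dx = W1[:, 0].
    z = W1 @ np.array([x, y]) + b1
    return w2 @ ((1 - np.tanh(z) ** 2) * W1[:, 0])

# The exact derivative agrees with a central finite difference.
x, y, eps = 0.3, 0.7, 1e-6
fd = (u_hat(x + eps, y) - u_hat(x - eps, y)) / (2 * eps)
assert abs(du_dx(x, y) - fd) < 1e-6
```

In practice a framework such as PyTorch performs this bookkeeping automatically for networks of any depth, including the second derivatives that appear in the differential-equation residuals.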

F IGURE 3.3: Schematic of the PINN used to approximate the solution to the ex-
ample problem shown in Fig. 3.2.

4 Problem Description

This chapter specifies the problem considered in this work, hereafter referred to as “the heat
conduction problem” or simply “the problem”. From the problem statement, a dimensional
form of the problem is formulated, from which an equivalent dimensionless form is derived.

4.1 Introduction to the Problem


In this work, a PINN is used to approximate the solution to a 2-D, steady-state heat conduction problem inspired by, but simplified from, the task of finding the cross-sectional, steady-state
temperature field in the insulation module of a box furnace such as the muffle furnace shown
in Fig. 4.1. Knowing the approximate steady-state temperature field in the insulation module
of such a furnace is of practical value to furnace designers because it can be used to estimate the
furnace’s steady-state rate of heat loss and can also inform the insulation selection process.

F IGURE 4.1: A resistively heated, benchtop muffle furnace [27].

4.2 Dimensional Form of the Problem


This heat conduction problem, shown schematically in Fig. 4.2, is of the general form:

∂/∂x(k ∂u/∂x) + ∂/∂y(k ∂u/∂y) = 0,   x ∈ (Lint, Lext), y ∈ (0, Lint];
                                     x ∈ (Lint, Lext), y ∈ (Lint, Lext);
                                     x ∈ (0, Lint], y ∈ (Lint, Lext)
∂u/∂y = 0,   x ∈ (Lint, Lext), y = 0
u = u0,   x = Lint, y ∈ [0, Lint]          (4.1)
u = u0,   x ∈ [0, Lint], y = Lint
∂u/∂x = 0,   x = 0, y ∈ (Lint, Lext)
−k ∂u/∂y = h(u − u∞),   x ∈ [0, Lext], y = Lext
−k ∂u/∂x = h(u − u∞),   x = Lext, y ∈ [0, Lext],
where x and y are the spatial coordinates; Lint and Lext are the lengths that define the geometry
of the domain; u0 is the imposed internal temperature; u∞ is the ambient air temperature; k is
the thermal conductivity of the domain; h is the convection HTC along the boundaries subject to
natural convection heat transfer; and u( x, y) is the unknown solution of the differential equation
with the specified boundary conditions.

F IGURE 4.2: The general, dimensional form of the heat conduction problem con-
sidered in this work.

This problem involves mixed boundary conditions, as detailed below:

• Imposed temperature (Dirichlet) boundary conditions are applied to the internal bound-
aries of the domain, the boundaries representative of the internal surfaces of the furnace
insulation module. These boundary conditions impose a uniform temperature to these
surfaces that is assumed to be equal to the setpoint temperature of the furnace chamber.

• Imposed heat flux (Neumann) boundary conditions are applied to the domain bound-
aries coincident with the x- and y-axes. By imposing a heat flux of zero, these boundary
conditions imply symmetry about the x- and y-axes.

• Convection (Robin) boundary conditions are applied to the external boundaries of the
domain, the boundaries representative of the external surfaces of the furnace insulation
module. These boundary conditions account for natural convection heat transfer along
these surfaces.

Additionally, because the insulation materials utilized in high-temperature furnaces often


have highly temperature-dependent thermal conductivities, the thermal conductivity of the
domain is assumed to be a function of temperature. Based upon a review of temperature-
dependent thermal conductivity data in the literature [28]–[32], the thermal conductivity of the
domain was modeled using a quadratic equation of the form:

k (u) = au2 + bu + c (4.2)


where the coefficients a, b, and c are unique to each insulation material and can be computed
via quadratic regression given tabulated thermal conductivity vs. temperature data. The tem-
perature dependency of k described in Eq. 4.2 produces (by the product rule of differentiation)
the following nonlinear form of the problem:

k(∂²u/∂x² + ∂²u/∂y²) + (dk/du)[(∂u/∂x)² + (∂u/∂y)²] = 0,   x ∈ (Lint, Lext), y ∈ (0, Lint];
                                                           x ∈ (Lint, Lext), y ∈ (Lint, Lext);
                                                           x ∈ (0, Lint], y ∈ (Lint, Lext)
∂u/∂y = 0,   x ∈ (Lint, Lext), y = 0
u = u0,   x = Lint, y ∈ [0, Lint]          (4.3)
u = u0,   x ∈ [0, Lint], y = Lint
∂u/∂x = 0,   x = 0, y ∈ (Lint, Lext)
−k ∂u/∂y = h(u − u∞),   x ∈ [0, Lext], y = Lext
−k ∂u/∂x = h(u − u∞),   x = Lext, y ∈ [0, Lext],
where k = au² + bu + c and thus dk/du = 2au + b. This form of the problem is shown schematically in Fig. 4.3.

F IGURE 4.3: The nonlinear, dimensional form of the problem.

The values assigned to the dimensional parameters Lint , Lext , u0 , u∞ , a, b, c, and h were cho-
sen based upon realistic furnace dimensions, operating conditions, and insulation properties.
These dimensional parameter values are shown in Table 4.1, where:

• The values assigned to Lint and Lext are reasonable dimensions for this type of furnace
[27], [33];

• The value assigned to u0 is a reasonable setpoint temperature for this type of furnace [27],
[33];

• The value assigned to u∞ is room temperature, which is a conservative ambient tempera-


ture to assume from the perspective of furnace heat loss;

• The values assigned to a, b, and c are the quadratic equation coefficients computed by
applying quadratic regression to manufacturer-provided thermal conductivity data for
the eight commercially available high-temperature insulation materials listed in Table 4.2
[28]–[32]; and

• The value assigned to h is a reasonable estimate of the average convection HTC along the
external surfaces of a furnace insulation module. Justification for selecting this value is
provided in Appendix I using empirical correlations.

Parameter        Value(s)
Lint (m)         0.120
Lext (m)         0.200
u0 (°K)          1273
u∞ (°K)          293
a (W/(m·°K³))    8.482 × 10−8, 8.482 × 10−8, 1.027 × 10−7, 1.206 × 10−7,
                 9.780 × 10−8, 8.043 × 10−8, 4.974 × 10−8, 6.378 × 10−8
b (W/(m·°K²))    1.634 × 10−5, −3.366 × 10−5, −5.699 × 10−5, −4.936 × 10−5,
                 −3.021 × 10−5, −2.052 × 10−5, 5.726 × 10−5, 3.047 × 10−5
c (W/(m·°K))     3.221 × 10−2, 6.587 × 10−2, 1.429 × 10−1, 5.720 × 10−2,
                 5.152 × 10−2, 8.180 × 10−2, 1.672 × 10−1, 1.993 × 10−1
h (W/(m²·°K))    5
TABLE 4.1: The dimensional parameter space.

Material ID Product Name Manufacturer


1 Kaowool 1400LD Morgan Advanced Materials
2 Kaowool 1400MD Morgan Advanced Materials
3 Kaowool HS45 Morgan Advanced Materials
4 I-2600 Morgan Advanced Materials
5 I-2800 Morgan Advanced Materials
6 Alumina Type AL-30 ZIRCAR Ceramics
7 Alumina Type SALI ZIRCAR Ceramics
8 Alumina Type SALI-2 ZIRCAR Ceramics

TABLE 4.2: Product name and manufacturer for each of the eight reference insu-
lation materials [28]–[32].

The quadratic models used to approximate the thermal conductivities of the eight reference
insulation materials, which were generated using Python’s NumPy library [34], are shown in
Fig. 4.4-4.5. Collectively, these thermal conductivity models are assumed to be representative
of the range of insulation materials typically found in this type of furnace.
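The quadratic-regression step itself is a one-line fit with NumPy. A sketch of the procedure (the data points below are synthetic, generated from made-up coefficients, not the manufacturer-provided data):

```python
import numpy as np

# Synthetic thermal conductivity data k(u) = a*u^2 + b*u + c plus noise,
# standing in for tabulated manufacturer data.
a_true, b_true, c_true = 8.5e-8, 1.6e-5, 3.2e-2
u = np.linspace(300.0, 1600.0, 14)            # temperatures (K)
rng = np.random.default_rng(2)
k = a_true * u**2 + b_true * u + c_true + rng.normal(scale=1e-4, size=u.size)

# Quadratic regression (Eq. 4.2); polyfit returns the highest-order
# coefficient first, so the result unpacks directly as (a, b, c).
a, b, c = np.polyfit(u, k, deg=2)

# Coefficient of determination R^2 for the fit.
k_fit = a * u**2 + b * u + c
r2 = 1 - np.sum((k - k_fit) ** 2) / np.sum((k - np.mean(k)) ** 2)
```

Applying this to each material's tabulated data yields the coefficients and R² values reported in Fig. 4.5.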

F IGURE 4.4: Tabulated temperature-dependent thermal conductivity data and corresponding best-fit quadratic equations for the eight reference insulation materials.

Material 1: (a, b, c) = 8.482e-08, 1.634e-05, 3.221e-02;  R² = 0.999
Material 2: (a, b, c) = 8.482e-08, -3.366e-05, 6.587e-02;  R² = 0.999
Material 3: (a, b, c) = 1.027e-07, -5.699e-05, 1.429e-01;  R² = 0.999
Material 4: (a, b, c) = 1.206e-07, -4.936e-05, 5.720e-02;  R² = 1.000
Material 5: (a, b, c) = 9.780e-08, -3.021e-05, 5.152e-02;  R² = 1.000
Material 6: (a, b, c) = 8.043e-08, -2.052e-05, 8.180e-02;  R² = 0.986
Material 7: (a, b, c) = 4.974e-08, 5.726e-05, 1.672e-01;  R² = 0.999
Material 8: (a, b, c) = 6.378e-08, 3.047e-05, 1.993e-01;  R² = 0.999

F IGURE 4.5: Computed quadratic equation coefficients and corresponding coefficients of determination for the eight reference insulation materials.

4.3 Dimensionless Form of the Problem


Thus far, the heat conduction problem has been considered in dimensional form. It is advan-
tageous, however, to transform the problem into an equivalent dimensionless form prior to
solving it, regardless of whether it is solved conventionally using numerical methods or using
a neural network. Nondimensionalization is advantageous because it:

• Reduces the number of parameters in the problem and thus can reduce the effort required
to solve the problem [35], [36];

• Reveals relationships between dimensional parameters that indicate how they collectively
affect the problem’s solution [35], [36];

• Enables the simultaneous solution of all dimensionally similar problem instances [35],
[36]; and

• Can (but does not always) produce a normalized form of the problem, i.e., a form in which
the dimensionless variables are all on the order of one [35].

The dimensional form of the problem shown in Eq. 4.3 and Fig. 4.3 involves P = 11 dimen-
sional parameters: u, x, y, Lint , Lext , u0 , u∞ , a, b, c, and h; and D = 3 dimensions: length (m),
temperature (°K), and power (W). Per Buckingham’s Pi Theorem, [35]–[37], the dimensional
form of the problem can be reduced to an equivalent dimensionless form involving P − D = 8
dimensionless parameters. One equivalent form, obtained by applying the method of repeating
variables [35] with u0 , Lext , and c as repeating parameters, is:

κ(∂²u*/∂x*² + ∂²u*/∂y*²) + (dκ/du*)[(∂u*/∂x*)² + (∂u*/∂y*)²] = 0,   x* ∈ (λ, 1), y* ∈ (0, λ];
                                                                    x* ∈ (λ, 1), y* ∈ (λ, 1);
                                                                    x* ∈ (0, λ], y* ∈ (λ, 1)
∂u*/∂y* = 0,   x* ∈ (λ, 1), y* = 0
u* = 1,   x* = λ, y* ∈ [0, λ]          (4.4)
u* = 1,   x* ∈ [0, λ], y* = λ
∂u*/∂x* = 0,   x* = 0, y* ∈ (λ, 1)
−κ ∂u*/∂y* = η(u* − υ),   x* ∈ [0, 1], y* = 1
−κ ∂u*/∂x* = η(u* − υ),   x* = 1, y* ∈ [0, 1],
where κ = αu*² + βu* + 1; dκ/du* = 2αu* + β; and x* = x/Lext, y* = y/Lext, λ = Lint/Lext, υ = u∞/u0, α = au0²/c, β = bu0/c, η = hLext/c, and u* = u/u0 are the eight dimensionless parameters remaining after nondimensionalization. This dimensionless form of the problem is shown schematically in Fig. 4.6.
The values of the dimensionless parameters λ, υ, α, β, and η considered in this work are
shown in Table 4.3. These correspond directly to the values of the dimensional parameters
shown in Table 4.1. In combination with Eq. 4.4, these dimensionless parameter values fully
define the parametric heat conduction problem considered in this work.
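The mapping from the dimensional parameter space (Table 4.1) to the dimensionless one (Table 4.3) follows directly from the definitions above. For example, for Material 1 (the tabulated dimensional values are rounded, so the computed dimensionless values match Table 4.3 to about three decimal places):

```python
# Dimensional parameters for Material 1 (Table 4.1).
L_int, L_ext = 0.120, 0.200        # m
u0, u_inf = 1273.0, 293.0          # K
a, b, c = 8.482e-8, 1.634e-5, 3.221e-2
h = 5.0                            # W/(m^2 K)

# Dimensionless parameters per the definitions following Eq. 4.4.
lam = L_int / L_ext                # lambda ~= 0.6
ups = u_inf / u0                   # upsilon ~= 0.230
alpha = a * u0**2 / c              # ~= 4.268
beta = b * u0 / c                  # ~= 0.646
eta = h * L_ext / c                # ~= 31.048
```

Repeating this for each of the eight materials reproduces the α, β, and η columns of Table 4.3.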

F IGURE 4.6: The nonlinear, dimensionless form of the problem.

Parameter Value(s)
λ (1) 0.6
υ (1) 0.230
α (1) 4.268, 2.087, 1.164, 3.416, 3.077, 1.593, 0.482, 0.519
β (1) 0.646, −0.651, −0.508, −1.098, −0.746, −0.319, 0.436, 0.195
η (1) 31.048, 15.182, 6.998, 17.482, 19.412, 12.225, 5.982, 5.018

TABLE 4.3: The dimensionless parameter space.



5 Methodology & Implementation

This chapter presents the methods employed to generate the testing dataset, which is used to
evaluate the accuracy of the PINN, and describes the architecture, configuration, and training
regimen of the PINN used to solve the problem.

5.1 Generation of Testing Data


The PINN presented in this work was trained without any training data; however, testing data
was still required to evaluate the PINN’s prediction accuracy. The testing dataset was generated
by applying the FDM to solve each of eight problem instances described by the combination of
Eq. 4.4 and the parameter values in Table 4.3. This was accomplished by:

1. Discretizing the spatial domain by creating a grid composed of N × N grid points (x*_{i,j}, y*_{i,j}) with a uniform grid spacing of ∆x* = ∆y*, where i ∈ {0, ..., N − 1}, j ∈ {0, ..., N − 1}, and ∆x* = ∆y* = 1/(N − 1). To simplify the implementation of the grid, a square grid spanning x* ∈ [0, 1], y* ∈ [0, 1] was used, which spans the domain of interest and the interior region it encloses (the region representative of the furnace chamber).

2. Formulating a set of finite difference equations, written in residual form, to approximate


the differential equation and associated boundary conditions. The finite difference equa-
tion applicable to the domain nodes was derived by replacing the derivatives in the dif-
ferential equation with centered, finite difference approximations with truncation error
proportional to ∆x ∗ 2 = ∆y∗ 2 . The finite difference equations applicable to the boundary
nodes (both edge nodes and corner nodes) were derived by applying the law of conserva-
tion of energy to each physically unique node along the perimeter of the domain. This ap-
proach results in the replacement of derivatives with one-sided (i.e., forward/backward)
finite difference approximations with truncation error proportional to ∆x ∗ = ∆y∗ .1
3. Using a nonlinear solver to compute the discrete solution u*_{i,j} at each grid point (x*_{i,j}, y*_{i,j}).

4. Extracting and aggregating the discrete solutions that lie in the domain of interest to form
the complete numerical solution.

This finite difference scheme was implemented using Python’s NumPy library [34] and so-
lutions were computed using a nonlinear solver in Python’s SciPy library [38]. The complete set
1 The imposed temperature boundary conditions applied to the internal boundaries do not contain derivatives,
so the finite difference equations applied to the internal boundary nodes are exact.

of finite difference equations used to compute these numerical solutions is shown in Appendix
II, and the solutions obtained are shown in Fig. 5.1.
Because the solutions computed using this numerical model were used as testing data to
evaluate the prediction accuracy of the PINN presented in the following chapter, significant
effort was devoted to verifying their accuracy. First, a grid refinement test was conducted to
confirm the grid independence of each FDM solution. Then, each FDM solution was compared
to a corresponding solution computed using the FEM. These FEM solutions were computed
using COMSOL Multiphysics, a well-known, commercially available multiphysics simulation
software package.

5.2 Architecture of the PINN


The PINN used to solve this problem is a five-input, one-output FNN composed of 6 layers with
80 nodes per hidden layer. The hidden nodes apply the hyperbolic tangent activation function,
and the output node applies the identity activation function, i.e., no activation function. The
hidden layers therefore output:
a^(i) = (e^{z^(i)} − e^{−z^(i)}) / (e^{z^(i)} + e^{−z^(i)}),          (5.1)

where z^(i) = W^(i) a^(i−1) + b^(i), and the output layer therefore outputs:

a^(L) = z^(L),          (5.2)

where z^(L) = w^(L) a^(L−1) + b^(L).
This neural network architecture was determined empirically with the competing objec-
tives of having sufficient expressivity to output accurate predictions, while not being so large
that training requires a prohibitively large amount of computing resources and/or time. Pub-
lications in the literature, particularly [8], [18], were helpful in identifying a reasonable initial
architecture of approximately 5-10 layers and 20-200 hyperbolic tangent units per hidden layer.
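The forward pass described by Eqs. 5.1–5.2 can be sketched in a few lines of NumPy. Here the six computational layers are interpreted as five tanh hidden layers of 80 nodes plus a linear output layer (an assumption consistent with the stated architecture), and the weights are randomly initialized stand-ins; in the actual work the network was built and trained with PyTorch:

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [5, 80, 80, 80, 80, 80, 1]   # 5 inputs, five hidden layers of 80, 1 output
params = [(rng.normal(scale=0.1, size=(n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(v):
    # Hidden layers: a = tanh(W a_prev + b) (Eq. 5.1);
    # output layer: identity activation (Eq. 5.2).
    a = v
    for W, b in params[:-1]:
        a = np.tanh(W @ a + b)
    W, b = params[-1]
    return W @ a + b

# One prediction for an input (x*, y*, alpha, beta, eta).
u_hat = forward(np.array([0.7, 0.8, 4.268, 0.646, 31.048]))
```

The five inputs are the two spatial coordinates plus the three material/boundary parameters, which is what makes the trained network a parametric solution operator rather than a single-instance solver.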
To simplify the process of generating collocation points over the irregularly shaped spatial
domain, it was partitioned into three rectangular subdomains, as shown in Fig. 5.2. These
subdomains were defined geometrically as:

• Subdomain 1: x ∗ ∈ (λ, 1), y∗ ∈ (0, λ];

• Subdomain 2: x ∗ ∈ (λ, 1), y∗ ∈ (λ, 1); and

• Subdomain 3: x ∗ ∈ (0, λ], y∗ ∈ (λ, 1).



F IGURE 5.1: Numerical solution u∗ ( x ∗ , y∗ ) for each problem instance, i.e., each
reference material, included in the testing dataset.

The MSE loss function was used to compute the physics-based loss components; however,
instead of computing the composite loss as the sum of aggregated domain and boundary losses
as shown in Eq. 3.5, it was computed as the sum of individual subdomain and boundary losses.
Configuring the composite loss in this way does add some complexity from a formulation per-
spective; however, weighting each subdomain loss L Di and boundary loss L Bj enables the loss
on each subdomain and boundary to be penalized individually, which may provide better train-
ing results in some instances.2

F IGURE 5.2: Domain collocation points projected onto 2-D plane defined by x ∗ , y∗ .

The composite loss was therefore computed as:


L = Σ_{i=1}^{N_D} (λ_{D_i} L_{D_i}) + Σ_{j=1}^{N_B} (λ_{B_j} L_{B_j}),          (5.3)

2 This approach was used in this work, but it was not rigorously studied.

where λ_{D_i} L_{D_i} is the weighted loss on subdomain i, N_D is the number of subdomains, λ_{B_j} L_{B_j} is the weighted loss on boundary j, and N_B is the number of boundaries. The subdomain loss L_{D_i} and boundary loss L_{B_j} in Eq. 5.3 were computed as:

L_{D_i} = (1/N_{D_i}) Σ_{k=1}^{N_{D_i}} (R_{D_i}^[k])²
                                                          (5.4)
L_{B_j} = (1/N_{B_j}) Σ_{k=1}^{N_{B_j}} (R_{B_j}^[k])²,

respectively, where R_{D_i}^[k] is the residual associated with a given collocation point k on subdomain i, N_{D_i} is the number of collocation points on subdomain i, R_{B_j}^[k] is the residual associated with a given collocation point k on boundary j, and N_{B_j} is the number of collocation points on boundary j.
The differential equation and boundary condition residuals R_{D_i}^[k] and R_{B_j}^[k] in Eq. 5.4 were therefore computed as:3

R_{D_1} = κ(∂²û*/∂x*² + ∂²û*/∂y*²) + (dκ/dû*)[(∂û*/∂x*)² + (∂û*/∂y*)²],   x* ∈ (λ, 1), y* ∈ (0, λ]
R_{D_2} = κ(∂²û*/∂x*² + ∂²û*/∂y*²) + (dκ/dû*)[(∂û*/∂x*)² + (∂û*/∂y*)²],   x* ∈ (λ, 1), y* ∈ (λ, 1)
R_{D_3} = κ(∂²û*/∂x*² + ∂²û*/∂y*²) + (dκ/dû*)[(∂û*/∂x*)² + (∂û*/∂y*)²],   x* ∈ (0, λ], y* ∈ (λ, 1)
R_{B_1} = ∂û*/∂y*,   x* ∈ (λ, 1), y* = 0          (5.5)
R_{B_2} = û* − 1,   x* = λ, y* ∈ [0, λ]
R_{B_3} = û* − 1,   x* ∈ [0, λ], y* = λ
R_{B_4} = ∂û*/∂x*,   x* = 0, y* ∈ (λ, 1)
R_{B_5} = κ ∂û*/∂y* + η(û* − υ),   x* ∈ [0, 1], y* = 1
R_{B_6} = κ ∂û*/∂x* + η(û* − υ),   x* = 1, y* ∈ [0, 1],

where κ = αû*² + βû* + 1 and dκ/dû* = 2αû* + β.
Combining Eq. 5.3-5.5 with the previously described FNN structure yields the PINN used
to solve the problem. This PINN, which was implemented, trained, and tested using Python’s
PyTorch library [6], is shown schematically in Fig. 5.3.

5.3 Training the PINN


The PINN was trained using a total of 36,800 collocation points, each of the form (x*^[k], y*^[k], α^[k], β^[k], η^[k]). More specifically, the PINN was trained with N_{D_1} = 12000, N_{D_2} = 8000, N_{D_3} = 12000, and N_{B_1} = ... = N_{B_6} = 800. Each set of collocation points was generated by uniformly sampling (i.e., randomly sampling a uniform distribution over) the applicable 5-D parameter
3 Thesuperscript [k], which indicates an association with a collocation point k, is implied in Eq. 5.5 (instead of
included explicitly as done in Eq. 3.7) to keep the length of the equations reasonable.

space, where the range of values for each independent variable x ∗ and y∗ was determined by the
possible range of x ∗ and y∗ values in or along the subdomain or boundary respectively, and the
range of values for each parameter α, β, and η was taken directly from Table 4.3. The resultant
sets of collocation points can be partially visualized in the 3-D plots shown in Fig. 5.4-5.5.4
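The uniform sampling described above can be sketched as follows. The helper function and its name are illustrative, with the coordinate ranges taken from the subdomain/boundary definitions and the parameter ranges from Table 4.3:

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 0.6  # lambda, from Table 4.3

def sample_collocation(n, x_range, y_range,
                       a_range=(0.482, 4.268),    # alpha range, Table 4.3
                       b_range=(-1.098, 0.646),   # beta range, Table 4.3
                       e_range=(5.018, 31.048)):  # eta range, Table 4.3
    # Uniformly sample n points (x*, y*, alpha, beta, eta) over the ranges.
    cols = [rng.uniform(lo, hi, size=n)
            for lo, hi in (x_range, y_range, a_range, b_range, e_range)]
    return np.column_stack(cols)

# Subdomain 1: x* in (lam, 1), y* in (0, lam].
pts_D1 = sample_collocation(12000, x_range=(lam, 1.0), y_range=(0.0, lam))
# Boundary 5: x* in [0, 1], y* = 1 (degenerate y-range pins y* to 1).
pts_B5 = sample_collocation(800, x_range=(0.0, 1.0), y_range=(1.0, 1.0))
```

Because α, β, and η are drawn independently for each point, the training set covers combinations of parameter values that do not correspond to any single reference material, which is what produces the broad band of conductivity curves in Fig. 5.6.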

F IGURE 5.3: Schematic of the PINN used to approximate the solution to the prob-
lem of interest.

A subtle but noteworthy implication of generating collocation points in this manner is that each collocation point contains a combination of randomly sampled α, β, and η values. Consequently, while the range of each parameter α, β, and η in the testing dataset was limited to the corresponding range of values in Table 4.3, the effective range of thermal conductivity vs. temperature curves used in the training process was comparatively broad, as shown in Fig. 5.6.

4 Fully visualizing these sets of collocation points would require 5-D plots.

F IGURE 5.4: Domain collocation points projected onto the 3-D spaces defined by
x ∗ , y∗ , α (top), x ∗ , y∗ , β (middle), and x ∗ , y∗ , η (bottom).

F IGURE 5.5: Boundary collocation points projected onto the 3-D spaces defined
by x ∗ , y∗ , α (top), x ∗ , y∗ , β (middle), and x ∗ , y∗ , η (bottom).

F IGURE 5.6: Thermal conductivity vs. temperature curves associated with col-
location points generated by uniformly sampling the dimensionless parameter
space specified in Table 4.3. The number of training curves (black) included in
the top plot is 800, the minimum number of collocation points assigned to any
subdomain/boundary, and the number included in the bottom plot is 12000, the
maximum number of collocation points assigned to any subdomain/boundary.

The weight values used to compute the composite loss in Eq. 5.3 were determined em-
pirically. This approach was chosen over alternative approaches, such as estimating optimal
weights before training [39] or adapting weights during training [40], due to its relative sim-
plicity. The final weights used are shown in Table 5.1.

Weight Value
λ D1 0.3
λ D2 0.3
λ D3 0.3
λ B1 0.1
λ B2 1.0
λ B3 1.0
λ B4 0.1
λ B5 0.1
λ B6 0.1

TABLE 5.1: The weights used to compute the PINN’s composite loss.

The neural network parameters θ were optimized using the Adam optimizer, an extension
of the standard stochastic gradient descent algorithm that applies unique, adaptive learning
rates to each parameter based upon recent values of the gradient computed for that parameter
[41]. This optimizer was chosen over alternatives because of its combined effectiveness and
ease of use. A three-step training process was used consisting of:

1. 50,000 iterations with an initial learning rate of 1 × 10−3 (default value),

2. 50,000 iterations with an initial learning rate of 1 × 10−4 , and

3. 50,000 iterations with an initial learning rate of 1 × 10−5 .

This three-step training process is similar to the multi-step training process used in [18].

6 Results

This chapter compares the PINN’s solution predictions to the true solutions contained in the
testing dataset. The error in each solution prediction, as well as the aggregate error over the
entire testing dataset, is quantified and discussed.

6.1 Presentation of Results


After training, the PINN’s accuracy was evaluated using the testing dataset presented in the
previous chapter. The plots in Fig. 6.1 show the PINN's solution prediction for each of the problem instances in the testing dataset, i.e., for each of the eight reference insulation materials listed in Table 4.2. Corresponding plots showing the error in each solution prediction (relative to the corresponding true solution shown in Fig. 5.1) are shown in Fig. 6.2.
To quantify the accuracy of the PINN more generally, the relative root mean square error
(RRMSE) was computed for each problem instance in the testing dataset, as well as for the
full testing dataset. The RRMSE, which normalizes the RMS error to the RMS true value, is
an effective metric for evaluating and comparing the performance of predictive models and is
computed as [42], [43]:
RRMSE = √[ (1/N) Σ_{k=1}^{N} (u*^[k] − û*^[k])² / (u*^[k])² ],          (6.1)

where u∗ [k] and û∗[k] are the true and predicted values associated with a given point k in the test-
ing dataset; and N is the number of values considered from the dataset. The resultant RRMSE
values are shown in Table 6.1.
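Eq. 6.1 translates directly into NumPy (the arrays below are small illustrative values, not thesis results):

```python
import numpy as np

def rrmse(u_true, u_pred):
    # Relative root mean square error (Eq. 6.1): each pointwise error is
    # normalized by the corresponding true value before averaging.
    return np.sqrt(np.mean((u_true - u_pred) ** 2 / u_true ** 2))

u_true = np.array([1.0, 0.8, 0.5, 0.4])
u_pred = np.array([1.01, 0.79, 0.51, 0.40])
err = rrmse(u_true, u_pred)
```

Applying this over each problem instance, and over the concatenated testing dataset, yields the values in Table 6.1.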
32 Chapter 6. Results

[Plots omitted: eight filled-contour panels, û∗(x∗, y∗) for Materials 1-8, plotted over the unit-square domain with axes x∗ (1) and y∗ (1) and a color scale from 0.0 to 1.1.]

F IGURE 6.1: Solution prediction û∗ ( x ∗ , y∗ ) for each problem instance, i.e., each
reference material, included in the testing dataset.

[Plots omitted: eight contour panels, û∗(x∗, y∗) − u∗(x∗, y∗) for Materials 1-8, on a shared color scale from −0.10 to 0.10.]

F IGURE 6.2: Error û∗ ( x ∗ , y∗ ) − u∗ ( x ∗ , y∗ ) for each problem instance, i.e., each ref-
erence material, included in the testing dataset.

Problem Instance (Material ID)    RRMSE
1      1.138 × 10−2
2      7.699 × 10−3
3      7.148 × 10−3
4      1.150 × 10−2
5      8.199 × 10−3
6      7.684 × 10−3
7      6.367 × 10−3
8      5.334 × 10−3
All    9.063 × 10−3

TABLE 6.1: RRMSE of each solution prediction.

6.2 Discussion of Results


As shown in Table 6.1, the PINN’s solution predictions were generally accurate across the entire
testing dataset. The aggregate RRMSE was approximately 0.91% and the worst-case individ-
ual RRMSE (associated with Material 4) was approximately 1.15%. Six of the eight individual
RRMSE values were in the range 0.53-0.82%.
In each problem instance, the highest magnitude error occurred along the internal bound-
aries at which the imposed temperature boundary condition is applied, particularly at/near
the intersection of these two boundaries. This is illustrated in Fig. 6.3-6.4 using the solution
prediction associated with Material 1 as an example (corresponding plots for Materials 2-8 are
provided in Appendix III). From a physics perspective, this observed tendency is not surprising
because the magnitude of the temperature gradient is highest in this area.
Similarly, the RRMSE was generally higher in the problem instances associated with mate-
rials that have lower and more variable thermal conductivities over the relevant temperature
range. Again, this is not surprising from a physics perspective because:

• The temperature gradient in a material with a low thermal conductivity will generally be
larger than the temperature gradient in a material with a higher thermal conductivity.

• The maximum value of the temperature gradient will generally be higher in a material
whose highly temperature-dependent thermal conductivity averages k over the temper-
ature range of interest than in a material with a constant thermal conductivity equal to k.

Regarding the time and computing resources required to obtain these results, the duration of
the training process was approximately 3 hours using a Dell Precision 5680 mobile workstation
with an Intel i9-13900H CPU and an NVIDIA RTX 2000 Ada Laptop GPU. Given the general avail-
ability of desktop workstations with multiple, significantly more powerful GPUs, this training
time could presumably be reduced significantly while still using a single, off-the-shelf machine.

[Plots omitted: six panels, u∗ (true) and û∗ (predicted) along Boundaries 1-6 of the Material 1 problem instance, plotted against x∗ or y∗.]

F IGURE 6.3: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 1. A key for the boundary numbering
scheme used is provided in Fig. 6.4.

[Diagram omitted: unit-square domain in the (x∗, y∗) plane with the numbered boundaries indicated.]

F IGURE 6.4: Key for the boundary numbering scheme used in Fig. 6.3.

7 Conclusion

This chapter summarizes the contributions that this work makes to the computational physics
and physics-focused machine learning fields and identifies opportunities for future research.

7.1 Contributions of this Work


The results presented in the previous chapter demonstrate the feasibility of training a PINN,
without using any training data, to accurately solve heat conduction problems involving pa-
rameterized, temperature-dependent material properties models. To the author’s knowledge,
this is the first time a PINN has been applied to the solution of any computational physics prob-
lem involving parameterized, nonlinear material property models, irrespective of whether data
is used in the training process.
More generally, this work also shows how the Python programming language and existing
open-source libraries can be applied to practically all aspects of implementing a PINN to solve
this type of problem, including:

• The statistical analysis of empirically derived material property data and the creation of
representative material property models;

• The generation of simulation-based testing data;

• The initialization and configuration of a PINN;

• The training of a PINN;

• The evaluation of a PINN’s accuracy; and

• The postprocessing of a PINN’s solution predictions.

7.2 Opportunities for Future Research


This work is far from exhaustive and presents many opportunities for future research, some of
which are already active research areas. These opportunities for further research include, but
are not limited to:

• Investigating how the results obtained in this work would be affected by (i) varying the
architecture of the PINN, (ii) enforcing boundary conditions as “hard” constraints on the
solution [17], [25], [26]; (iii) generating collocation points using an alternative sampling
method (e.g., Latin hypercube sampling [44]), (iv) adapting loss weights during the train-
ing process [39], and/or (v) using alternative optimization algorithms to update neural
network parameters.

• Using PINNs to solve similar problems (i.e., problems that also involve parameterized,
nonlinear material property models) that involve additional geometric and/or physical
complexities. This includes 3-D problems, transient problems, multiphysics problems in-
volving systems of coupled differential equations, etc.

• Using PINNs to solve similar problems in which the range of values considered for each
parameter is significantly larger.

• Using PINNs to solve problems that involve multiple parameterized, nonlinear material
properties, e.g., applying PINNs to solve a transient heat conduction problem in which
temperature-dependent models are used for both the thermal conductivity and specific
heat capacity of the heat-conducting medium.

• Using PINNs to solve problems that are also parameterized in other ways, e.g., problems
that also involve parameterized geometries and/or boundary conditions.

• Investigating the practical limitations of using PINNs to solve parametric problems (e.g.,
the extent to which the curse of dimensionality limits the application of PINNs to parametric
problems, and how this may be mitigated).

• Investigating if/how PINNs, or select components of the PINN framework (e.g., au-
tomatic differentiation), could be leveraged to accelerate the solution of computational
physics problems using traditional numerical solution methods.

8 Appendix I

As specified in Chapter 4, the average convection HTC along the external surfaces of the furnace
insulation module was assumed to be 5 W/(m2 ◦ K). This value was derived from the calcula-
tions shown in Fig. 8.1-8.2, which estimate the natural convection HTC along each physically
unique external surface of the furnace insulation module considered in the 2-D cross-section
(i.e., the vertical side surfaces, the horizontal top surface, and the horizontal bottom surface).
These calculations apply well-known empirical correlations of the form [1], [2]:

h̄_L = Nu̅_L (k / L_c),    (8.1)

where h̄_L is the average HTC along the surface; Nu̅_L is the average Nusselt number along the
surface and is computed using empirically derived relationships based upon the configuration
of the heat transfer problem along the surface; k is the thermal conductivity of air evaluated at
the film temperature along the surface; and L_c is the characteristic length of the surface. The
empirical correlations used to compute Nu̅_L were taken from [2].
In these calculations:

• The length L of each surface was assumed to be 0.4 m, i.e., twice the half-length Lext ;

• The depth d of each surface was assumed to be 0.6 m, a reasonable assumption given the
typical aspect ratios of commercially available furnaces [27], [33]; and

• All properties of air were computed assuming an ambient pressure of 1 atm.

Two different cases were considered; in the first case (Fig. 8.1), the average surface temper-
ature us was assumed to be 440 °K (equivalent to a dimensionless temperature of u∗s = 0.346),
and in the second case (Fig. 8.2), us was assumed to be 610 °K (equivalent to a dimensionless
temperature of u∗s = 0.479). These values were selected deliberately to span the approximate
range of average surface temperatures computed for the eight problem instances that comprise
the testing data. As shown in Fig. 8.1-8.2, the calculated natural convection HTCs range from
3.82 to 8.76 W/(m2 ◦K), confirming that the assumed value of 5 W/(m2 ◦ K) is a reasonable estimate.
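As a spot-check of the worksheet calculations, the vertical-surface case of Fig. 8.1 can be reproduced with a short script. The air properties at the film temperature are hard-coded here from the figure (in practice they would be interpolated from property tables), and the Churchill-Chu correlation for vertical plates is taken from [2]:

```python
import math

# Vertical-surface case of Fig. 8.1 (u_s = 440 K)
u_s, u_inf = 440.0, 293.0     # surface and ambient temperatures (K)
g, L_c = 9.81, 0.40           # gravitational acceleration (m/s^2), characteristic length (m)
u_f = (u_s + u_inf) / 2       # film temperature: 366.5 K
beta = 1.0 / u_f              # volume-expansion coefficient of an ideal gas (1/K)
nu = 2.20e-5                  # kinematic viscosity of air at u_f (m^2/s)
k = 3.02e-2                   # thermal conductivity of air at u_f (W/(m K))
Pr = 0.713                    # Prandtl number of air at u_f

# Rayleigh number for natural convection along the surface
Ra = g * beta * (u_s - u_inf) * L_c**3 * Pr / nu**2

# Churchill-Chu correlation for a vertical plate [2]
Nu = (0.825 + 0.387 * Ra**(1 / 6)
      / (1 + (0.492 / Pr)**(9 / 16))**(8 / 27))**2

h = Nu * k / L_c              # Eq. (8.1): average HTC (W/(m^2 K))
```

Running this reproduces the worksheet's intermediate and output values for the vertical surface: Ra ≈ 3.71 × 10⁸, Nu̅ ≈ 90.7, and h̄ ≈ 6.85 W/(m² K).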
[Worksheet omitted: three calculation blocks, one per physically unique surface, each listing the inputs (us, u∞, g, L, d, orientation), the intermediate values (film temperature uf = (us + u∞)/2 = 366.5 ◦K, volume expansion coefficient β, kinematic viscosity ν, Prandtl number Pr, Rayleigh number RaL, and average Nusselt number Nu̅L from the empirical correlations of [2]), and the output h̄L = Nu̅L k/Lc. For the vertical side surfaces (Lc = 0.40 m), the horizontal top surface, and the horizontal bottom surface (Lc = 0.12 m), the resulting HTCs are approximately 6.85, 8.13, and 3.82 W/(m2 ◦K), respectively.]
F IGURE 8.1: Estimated natural convection HTCs along each physically unique
surface of the furnace insulation module, assuming us = 440◦ K and thus u∗s =
0.346.
[Worksheet omitted: the same natural convection HTC calculations repeated with us = 610 ◦K (film temperature uf = 451.5 ◦K); the resulting HTCs are approximately 7.73, 8.76, and 4.38 W/(m2 ◦K) for the vertical, top, and bottom surfaces, respectively.]

F IGURE 8.2: Estimated natural convection HTCs along each physically unique
surface of the furnace insulation module, assuming us = 610◦ K and thus u∗s =
0.479.

9 Appendix II

The complete set of finite difference equations used to compute the numerical solutions pre-
sented in Chapter 5 is:

κ_{i+1/2,j} Δ_i u∗ − κ_{i−1/2,j} ∇_i u∗ + κ_{i,j+1/2} Δ_j u∗ − κ_{i,j−1/2} ∇_j u∗ = 0,   i ∈ (i_λ, N−1), j ∈ (0, j_λ]
κ_{i+1/2,j} Δ_i u∗ − κ_{i−1/2,j} ∇_i u∗ + κ_{i,j+1/2} Δ_j u∗ − κ_{i,j−1/2} ∇_j u∗ = 0,   i ∈ (i_λ, N−1), j ∈ (j_λ, N−1)
κ_{i+1/2,j} Δ_i u∗ − κ_{i−1/2,j} ∇_i u∗ + κ_{i,j+1/2} Δ_j u∗ − κ_{i,j−1/2} ∇_j u∗ = 0,   i ∈ (0, i_λ], j ∈ (j_λ, N−1)
(η/(N−1)) (υ − u∗_{i,j})/2 − κ_{i−1/2,j} ∇_i u∗/2 + κ_{i,j+1/2} Δ_j u∗/2 − 0 = 0,   i = N−1, j = 0
κ_{i+1/2,j} Δ_i u∗/2 − κ_{i−1/2,j} ∇_i u∗/2 + κ_{i,j+1/2} Δ_j u∗ − 0 = 0,   i ∈ (i_λ, N−1), j = 0
u∗_{i,j} − 1 = 0,   i = i_λ, j ∈ [0, j_λ]
u∗_{i,j} − 1 = 0,   i ∈ [0, i_λ], j = j_λ
κ_{i+1/2,j} Δ_i u∗ − 0 + κ_{i,j+1/2} Δ_j u∗/2 − κ_{i,j−1/2} ∇_j u∗/2 = 0,   i = 0, j ∈ (j_λ, N−1)
κ_{i+1/2,j} Δ_i u∗/2 − 0 + (η/(N−1)) (υ − u∗_{i,j})/2 − κ_{i,j−1/2} ∇_j u∗/2 = 0,   i = 0, j = N−1
κ_{i+1/2,j} Δ_i u∗/2 − κ_{i−1/2,j} ∇_i u∗/2 + (η/(N−1)) (υ − u∗_{i,j}) − κ_{i,j−1/2} ∇_j u∗ = 0,   i ∈ (0, N−1), j = N−1
(η/(N−1)) (υ − u∗_{i,j})/2 − κ_{i−1/2,j} ∇_i u∗/2 + (η/(N−1)) (υ − u∗_{i,j})/2 − κ_{i,j−1/2} ∇_j u∗/2 = 0,   i = N−1, j = N−1
(η/(N−1)) (υ − u∗_{i,j}) − κ_{i−1/2,j} ∇_i u∗ + κ_{i,j+1/2} Δ_j u∗/2 − κ_{i,j−1/2} ∇_j u∗/2 = 0,   i = N−1, j ∈ (0, N−1),

where i and j are indexes indicating discrete nodes along the x-axis and y-axis, respectively; N is
the number of nodes along each axis; i_λ = j_λ = λ(N − 1); Δ is the forward difference operator
(i.e., Δ_i u∗ = u∗_{i+1,j} − u∗_{i,j}); ∇ is the backward difference operator (i.e., ∇_i u∗ = u∗_{i,j} − u∗_{i−1,j});
and the interface conductivities are arithmetic means of the adjacent nodal conductivity values:

κ_{i+1/2,j} = (1/2) [ (α(u∗_{i+1,j})² + β u∗_{i+1,j} + 1) + (α(u∗_{i,j})² + β u∗_{i,j} + 1) ];
κ_{i−1/2,j} = (1/2) [ (α(u∗_{i,j})² + β u∗_{i,j} + 1) + (α(u∗_{i−1,j})² + β u∗_{i−1,j} + 1) ];
κ_{i,j+1/2} = (1/2) [ (α(u∗_{i,j+1})² + β u∗_{i,j+1} + 1) + (α(u∗_{i,j})² + β u∗_{i,j} + 1) ]; and
κ_{i,j−1/2} = (1/2) [ (α(u∗_{i,j})² + β u∗_{i,j} + 1) + (α(u∗_{i,j−1})² + β u∗_{i,j−1} + 1) ].
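A minimal sketch of how the interior-node residual and the interface conductivities can be evaluated. The grid and conductivity coefficients below are illustrative, and this is not the solver code used to generate the Chapter 5 solutions:

```python
import numpy as np

def kappa(u, alpha, beta):
    """Dimensionless conductivity model: kappa(u*) = alpha u*^2 + beta u* + 1."""
    return alpha * u**2 + beta * u + 1.0

def interior_residual(u, i, j, alpha, beta):
    """Residual of the interior-node finite difference equation. The
    interface conductivities are arithmetic means of the adjacent nodal
    values; Delta/nabla denote forward/backward differences of u*_{i,j}."""
    k_ip = 0.5 * (kappa(u[i + 1, j], alpha, beta) + kappa(u[i, j], alpha, beta))
    k_im = 0.5 * (kappa(u[i, j], alpha, beta) + kappa(u[i - 1, j], alpha, beta))
    k_jp = 0.5 * (kappa(u[i, j + 1], alpha, beta) + kappa(u[i, j], alpha, beta))
    k_jm = 0.5 * (kappa(u[i, j], alpha, beta) + kappa(u[i, j - 1], alpha, beta))
    return (k_ip * (u[i + 1, j] - u[i, j]) - k_im * (u[i, j] - u[i - 1, j])
            + k_jp * (u[i, j + 1] - u[i, j]) - k_jm * (u[i, j] - u[i, j - 1]))

# A field varying linearly in x satisfies the constant-conductivity equation
# (alpha = beta = 0) exactly, so its residual vanishes; with a temperature-
# dependent conductivity (alpha != 0) the same field leaves a nonzero residual.
N = 5
u = np.tile(np.linspace(0.0, 1.0, N)[:, None], (1, N))
r_const = interior_residual(u, 2, 2, 0.0, 0.0)
r_nonlin = interior_residual(u, 2, 2, 0.2, 0.0)
```

Driving every nodal residual to zero simultaneously, for example with a damped Newton or successive-substitution iteration, yields the nonlinear finite difference solution.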

10 Appendix III

The predicted vs. true temperature distributions along each boundary in the problem instances
associated with Materials 2-8 are provided in Fig. 10.1-10.7. A key for the boundary numbering
scheme used is provided in Fig. 6.4.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 2.]

F IGURE 10.1: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 2.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 3.]

F IGURE 10.2: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 3.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 4.]

F IGURE 10.3: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 4.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 5.]

F IGURE 10.4: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 5.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 6.]

F IGURE 10.5: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 6.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 7.]

F IGURE 10.6: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 7.
[Plots omitted: u∗ (true) and û∗ (predicted) along Boundaries 1-6 for Material 8.]

F IGURE 10.7: Comparison of û∗ and u∗ along the domain boundaries in the prob-
lem instance associated with Material 8.

Bibliography

[1] Y. Cengel and A. Ghajar, Heat and Mass Transfer: Fundamentals & Applications, 5th ed. New
York: McGraw-Hill, 2015.
[2] F. P. Incropera, D. P. DeWitt, T. L. Bergman, and A. S. Lavine, Fundamentals of Heat and
Mass Transfer, 6th ed. Hoboken, NJ: John Wiley & Sons, 2007.
[3] K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are univer-
sal approximators,” Neural Networks, vol. 2, pp. 359–366, 1989.
[4] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of
Control, Signals, and Systems, vol. 2, pp. 303–314, 1989.
[5] CUDA C++ Programming Guide, Release 12.1, NVIDIA, 2023. [Online]. Available: https://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf.
[6] A. Paszke, S. Gross, F. Massa, et al., “Pytorch: An imperative style, high-performance deep
learning library,” in Advances in Neural Information Processing Systems 32 (NeurIPS 2019),
(Vancouver, Canada), H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alche Buc, E. Fox,
and R. Garnett, Eds., Curran Associates, Inc., 2019, pp. 8024–8035.
[7] M. Abadi, A. Agarwal, P. Barham, et al., TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. [Online]. Available: https://www.tensorflow.org/.
[8] M. Raissi, P. Perdikaris, and G. Karniadakis, “Physics-informed neural networks: A deep
learning framework for solving forward and inverse problems involving nonlinear partial
differential equations,” Journal of Computational Physics, vol. 378, pp. 686–707, 2019.
[9] K. Watanabe, I. Matsura, M. Abe, M. Kubota, and D. M. Himmelblau, “Incipient fault
diagnosis of chemical processes via artificial neural networks,” AIChE Journal, vol. 35,
no. 11, pp. 1803–1812, 1989.
[10] J. Thibault and B. Grandjean, “A neural network methodology for heat transfer data anal-
ysis,” International Journal of Heat and Mass Transfer, vol. 34, no. 8, pp. 2063–2070, 1991.
[11] M. W. M. G. Dissanayake and N. Phan-Thien, “Neural-network-based approximations
for solving partial differential equations,” Communications in Numerical Methods in Engi-
neering, vol. 10, pp. 195–201, 1994.
[12] K. Jambunathan, S. L. Hartle, S. Ashforth-Frost, and V. N. Fontama, “Evaluating convec-
tive heat transfer coefficients using neural networks,” International Journal of Heat and Mass
Transfer, vol. 39, no. 11, pp. 2329–2332, 1996.
[13] G. Díaz, M. Sen, K. T. Yang, and R. L. McClain, “Simulation of heat exchanger perfor-
mance by artificial neural networks,” HVAC&R Research, vol. 5, no. 3, pp. 195–208, 1999.
[14] M. Raissi, P. Perdikaris, and G. Karniadakis, “Physics informed deep learning (part i):
Data-driven solutions of nonlinear partial differential equations,” arXiv:1711.10561, 2017.
[15] M. Raissi, P. Perdikaris, and G. Karniadakis, “Physics informed deep learning (part ii):
Data-driven discovery of nonlinear partial differential equations,” arXiv:1711.10566, 2017.
[16] D. Psichogios and L. Ungar, “A hybrid neural network-first principles approach to pro-
cess modeling,” AIChE Journal, vol. 38, no. 10, pp. 1499–1511, 1992.
[17] I. Lagaris, A. Likas, and D. Fotiadis, “Artificial neural networks for solving ordinary and
partial differential equations,” IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 987–
1000, 1998.
[18] S. Cai, Z. Wang, S. Wang, P. Perdikaris, and G. Karniadakis, “Physics-informed neural
networks for heat transfer problems,” Journal of Heat Transfer, vol. 143, no. 6, 2021.
[19] O. Hennigh, S. Narasimhan, M. Nabian, et al., “NVIDIA SimNet™: An AI-accelerated
multi-physics simulation framework,” in Computational Science - ICCS 2021, (Krakow,
Poland), M. Paszynski, D. Kranzlmuller, V. Krzhizhanovskaya, J. Dongarra, and P. Sloot,
Eds., Springer, 2021, pp. 447–461.
[20] N. Zobeiry and K. Humfeld, “A physics-informed machine learning approach for solv-
ing heat transfer equations in advanced manufacturing and engineering applications,”
Engineering Applications of Artificial Intelligence, vol. 101, 2021.
[21] S. Liao, T. Xue, J. Jeong, S. Webster, K. Ehmann, and J. Cao, “Hybrid thermal modeling of
additive manufacturing processes using physics-informed neural networks for tempera-
ture prediction and parameter identification,” Computational Mechanics, vol. 72, pp. 499–
512, 2023.
[22] T. Wurth, C. Krauss, C. Zimmerling, and L. Karger, “Physics-informed neural networks
for data-free surrogate modelling and engineering optimization - An example from com-
posite manufacturing,” Materials & Design, vol. 231, 2023.
[23] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[24] A. Baydin, B. Pearlmutter, A. Radul, and J. Siskind, “Automatic differentiation in machine
learning: A survey,” Journal of Machine Learning Research, vol. 18, 2018.
[25] L. Sun, H. Gao, S. Pan, and J. Wang, “Surrogate modeling for fluid flows based on physics-
constrained deep learning without simulation data,” Computer Methods in Applied Mechan-
ics and Engineering, vol. 361, 2020.
[26] L. Lu, R. Pestourie, W. Yao, Z. Wang, F. Verdugo, and S. Johnson, “Physics-informed neu-
ral networks with hard constraints for inverse design,” SIAM Journal on Scientific Comput-
ing, vol. 43, 2021.
[27] Laboratory & Industrial Ovens & Furnaces, Carbolite-Gero. [Online]. Available: https://www.
carbolite-gero.com/files/12830/laboratory-industrial-ovens-furnaces.pdf.
[28] Kaowool® Organic Boards, Morgan Advanced Materials, 2021. [Online]. Available: https://www.morganadvancedmaterials.com/media/4xtlvoq0/kaowool-organic-boards_global_eng.pdf.
[29] Inorganic Boards, Morgan Advanced Materials, 2020. [Online]. Available: https://www.morganadvancedmaterials.com/media/mccfjp2d/inorganic-i-boards_eng.pdf.
[30] Alumina Type AL-30, ZIRCAR Ceramics. [Online]. Available: https://zircarceramics.
com/wp-content/uploads/2017/01/AL-30.pdf.
[31] Alumina Type SALI, ZIRCAR Ceramics. [Online]. Available: https://zircarceramics.
com/wp-content/uploads/2017/02/SALI.pdf.
[32] Alumina Type SALI-2, ZIRCAR Ceramics. [Online]. Available: https://zircarceramics.
com/wp-content/uploads/2017/01/SALI-2.pdf.
[33] Consistent Performance at a High Degree: Thermo Scientific Furnaces, Thermo Fisher Scientific,
2023. [Online]. Available: https : / / assets . thermofisher . com / TFS - Assets % 2FLED %
2Fbrochures%2FLED-FurnacesBrochure-BRFURNACE0316-EN.pdf.
[34] C. R. Harris, K. J. Millman, S. J. van der Walt, et al., “Array programming with NumPy,”
Nature, vol. 585, no. 7825, pp. 357–362, 2020.
[35] Y. A. Cengel and J. M. Cimbala, Fluid Mechanics: Fundamentals and Applications, 1st ed. New
York: McGraw-Hill, 2006.
[36] J. H. Lienhard IV and J. H. Lienhard V, A Heat Transfer Textbook, 4th ed. Cambridge, MA:
Phlogiston Press, 2017.
[37] E. Buckingham, “On physically similar systems: Illustrations of the use of dimensional
equations,” Physical Review, vol. 4, no. 4, pp. 345–376, 1914.
[38] P. Virtanen, R. Gommers, T. E. Oliphant, et al., “SciPy 1.0: Fundamental algorithms for
scientific computing in Python,” Nature Methods, vol. 17, pp. 261–272, 2020.
[39] Z. Xiang, W. Peng, X. Liu, and W. Yao, “Self-adaptive loss balanced physics-informed
neural networks,” Neurocomputing, vol. 496, pp. 11–34, 2022.
[40] R. van der Meer, C. W. Oosterlee, and A. Borovykh, “Optimally weighted loss functions
for solving PDEs with neural networks,” Journal of Computational and Applied Mathematics,
vol. 405, 2022.
[41] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint
arXiv:1412.6980, version 9, 2017.
[42] C. N. Kroll and J. R. Stedinger, “Estimation of moments and quantiles using censored
data,” Water Resources Research, vol. 32, no. 4, pp. 1005–1012, 1996.
[43] S. P. Wechsler and C. N. Kroll, “Quantifying DEM uncertainty and its effect on topo-
graphic parameters,” Photogrammetric Engineering & Remote Sensing, vol. 72, no. 9, pp. 1081–
1090, 2006.
[44] M. D. McKay, R. J. Beckman, and W. J. Conover, “A comparison of three methods for
selecting values of input variables in the analysis of output from a computer code,” Tech-
nometrics, vol. 42, no. 1, pp. 55–61, 2000.