DeepXDE Documentation
Release 0.13.6
Lu Lu
Contents

1 Features
2 User guide
  2.1 Install and Setup
  2.2 Demos of Forward Problems
  2.3 Demos of Inverse Problems
  2.4 Demos of Function Approximation
  2.5 FAQ
  2.6 Research
  2.7 Cite DeepXDE
  2.8 The Team
3 API reference
  3.1 deepxde
  3.2 deepxde.data
  3.3 deepxde.geometry
  3.4 deepxde.icbcs
  3.5 deepxde.nn
  3.6 deepxde.nn.tensorflow_compat_v1
  3.7 deepxde.nn.tensorflow
  3.8 deepxde.nn.pytorch
  3.9 deepxde.optimizers
  3.10 deepxde.utils
DeepXDE is a library for scientific machine learning. Use DeepXDE if you need a deep learning library that
• solves forward and inverse partial differential equations (PDEs) via physics-informed neural network (PINN),
• solves forward and inverse integro-differential equations (IDEs) via PINN,
• solves forward and inverse fractional partial differential equations (fPDEs) via fractional PINN (fPINN),
• approximates nonlinear operators via deep operator network (DeepONet),
• approximates functions from multi-fidelity data via multi-fidelity NN (MFNN),
• approximates functions from a dataset with/without constraints.
DeepXDE supports three tensor libraries as backends: TensorFlow 1.x (tensorflow.compat.v1 in TensorFlow 2.x),
TensorFlow 2.x, and PyTorch.
Documentation: ReadTheDocs, SIAM Rev., Slides, Video
Papers on algorithms
• Solving PDEs and IDEs via PINN: SIAM Rev.
• Solving fPDEs via fPINN: SIAM J. Sci. Comput.
• Solving stochastic PDEs via NN-arbitrary polynomial chaos (NN-aPC): J. Comput. Phys.
• Solving inverse design/topology optimization via hPINN: arXiv
• Learning nonlinear operators via DeepONet: Nat. Mach. Intell., J. Comput. Phys., J. Comput. Phys.
• Learning from multi-fidelity data via MFNN: J. Comput. Phys., PNAS
CHAPTER 1
Features
DeepXDE has implemented many algorithms as shown above and supports many features:
• complex domain geometries without the tyranny of mesh generation. The primitive geometries are interval, triangle, rectangle, polygon, disk, cuboid, and sphere. Other geometries can be constructed as constructive solid geometry (CSG) using three boolean operations: union, difference, and intersection (see the sketch after this list).
• multi-physics, i.e., (time-dependent) coupled PDEs.
• 5 types of boundary conditions (BCs): Dirichlet, Neumann, Robin, periodic, and a general BC, which can be
defined on an arbitrary domain or on a point set.
• different neural networks, such as (stacked/unstacked) fully connected neural networks, residual neural networks, and (spatio-temporal) multi-scale Fourier feature networks.
• 6 sampling methods: uniform, pseudorandom, Latin hypercube sampling, Halton sequence, Hammersley sequence, and Sobol sequence. The training points can be kept fixed during training, or be resampled every given number of iterations.
• conveniently save the model during training, and load a trained model.
• uncertainty quantification using dropout.
• many different (weighted) losses, optimizers, learning rate schedules, metrics, etc.
• callbacks to monitor the internal states and statistics of the model during training, such as early stopping.
• enables the user code to be compact, resembling closely the mathematical formulation.
All the components of DeepXDE are loosely coupled, and thus DeepXDE is well-structured and highly configurable.
It is easy to customize DeepXDE to meet new demands.
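For example, a minimal sketch of a CSG geometry built from the primitives above, using the Rectangle, Disk, and CSGDifference classes documented in the API reference:

import deepxde as dde

# A rectangle with a circular hole: CSG difference of two primitives
rect = dde.geometry.Rectangle([0, 0], [1, 1])
disk = dde.geometry.Disk([0.5, 0.5], 0.25)
geom = dde.geometry.csg.CSGDifference(rect, disk)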
CHAPTER 2
User guide
2.1 Install and Setup

2.1.1 Installation
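DeepXDE is distributed on PyPI, so for most users it can be installed with pip (assuming a standard Python environment):

pip install deepxde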
• For developers, you should clone the repository to your local machine and put it along with your project scripts, for example:
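git clone https://github.com/lululxvi/deepxde.git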
• Other dependencies
– Matplotlib
– NumPy
– scikit-learn
– scikit-optimize
– SciPy
DeepXDE supports TensorFlow 1.x (tensorflow.compat.v1 in TensorFlow 2.x), TensorFlow 2.x, and PyTorch backends. DeepXDE chooses the backend according to the following options (from high priority to low priority):
• Use the DDEBACKEND environment variable:
– You can use DDEBACKEND=BACKEND python pde.py ... to specify the backend
– Or export DDEBACKEND=BACKEND to set the global environment variable
• Modify the config.json file under “~/.deepxde”:
– You can use python -m deepxde.backend.set_default_backend BACKEND to set the default backend
Currently BACKEND can be chosen from “tensorflow.compat.v1” (TensorFlow 1.x backend), “tensorflow” (TensorFlow 2.x backend), and “pytorch” (PyTorch). The default backend is TensorFlow 1.x.
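To confirm which backend is active, a minimal check (assuming deepxde.backend exposes the selected backend's name as backend_name):

import deepxde as dde

# Prints the selected backend, e.g., "tensorflow.compat.v1"
print(dde.backend.backend_name)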
We note that
• Different backends support slightly different features; switch to another backend if DeepXDE raises a backend-related error. Currently, the number of features supported is: TensorFlow 1.x > TensorFlow 2.x > PyTorch. Some features can be implemented easily (basically translating from one framework to another), and we welcome your contributions.
• Different backends also differ in computational speed; switch to another backend if speed is an issue in your case.
TensorFlow 1.x backend

Export DDEBACKEND as tensorflow.compat.v1 to specify the TensorFlow 1.x backend. The required TensorFlow version is 2.2.0 or later. Essentially, the TensorFlow 1.x backend uses the API tensorflow.compat.v1 in TensorFlow 2.x and disables eager execution:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
In addition, DeepXDE will set TF_FORCE_GPU_ALLOW_GROWTH to true to prevent TensorFlow from taking over the whole GPU memory.
TensorFlow 2.x backend

Export DDEBACKEND as tensorflow to specify the TensorFlow 2.x backend. The required TensorFlow version is 2.2.0 or later. In addition, DeepXDE will set TF_FORCE_GPU_ALLOW_GROWTH to true to prevent TensorFlow from taking over the whole GPU memory.
PyTorch backend
Export DDEBACKEND as pytorch to specify the PyTorch backend. In addition, if a GPU is available, DeepXDE will set the default tensor type to cuda, so that all tensors will be created on the GPU by default:
if torch.cuda.is_available():
torch.set_default_tensor_type(torch.cuda.FloatTensor)
2.2 Demos of Forward Problems

2.2.1 ODEs
• ODE system
• Lotka-Volterra equation
Poisson equation in 1D with Dirichlet boundary conditions

Problem setup

We will solve a Poisson equation

−𝑢'' = 𝜋² sin(𝜋𝑥), 𝑥 ∈ [−1, 1],

with the Dirichlet boundary conditions

𝑢(−1) = 0, 𝑢(1) = 0.

The exact solution is 𝑢(𝑥) = sin(𝜋𝑥).
Implementation
This description goes through the implementation of a solver for the above described Poisson equation step-by-step.
First, the DeepXDE and TensorFlow (tf) modules are imported:
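These are the same imports as in the complete code below:

import deepxde as dde
import numpy as np
# Import tf if using backend tensorflow.compat.v1 or tensorflow
from deepxde.backend import tf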
We begin by defining a computational geometry. We can use a built-in class Interval as follows
geom = dde.geometry.Interval(-1, 1)
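Next, we express the PDE residual. The exact code was lost in extraction; the following is a sketch consistent with the reference solution sin(𝜋𝑥) used below, i.e., the residual of −𝑢'' = 𝜋² sin(𝜋𝑥):

def pde(x, y):
    # d2y/dx2 via DeepXDE's lazy-evaluation Hessian
    dy_xx = dde.grad.hessian(y, x)
    return -dy_xx - np.pi ** 2 * tf.sin(np.pi * x)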
The first argument to pde is the network input, i.e., the 𝑥-coordinate. The second argument is the network output, i.e.,
the solution 𝑢(𝑥), but here we use y as the name of the variable.
Next, we consider the Dirichlet boundary condition. A simple Python function, returning a boolean, is used to define
the subdomain for the Dirichlet boundary condition ({−1, 1}). The function should return True for those points
inside the subdomain and False for the points outside. In our case, the points 𝑥 of the Dirichlet boundary condition
are 𝑥 = −1 and 𝑥 = 1. (Note that because of round-off errors, it is often wise to use np.isclose to test whether two floating-point values are equivalent.)
The argument x to boundary is the network input and is a 𝑑-dim vector, where 𝑑 is the dimension and 𝑑 = 1 in this
case. To facilitate the implementation of boundary, a boolean on_boundary is used as the second argument. If
the point x (the first argument) is on the entire boundary of the geometry (the left and right endpoints of the interval in
this case), then on_boundary is True, otherwise, on_boundary is False. Thus, we can also define boundary
in a simpler way:
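A sketch of this simpler definition, which just returns on_boundary:

def boundary(x, on_boundary):
    return on_boundary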
Next, we define a function to return the value of 𝑢(𝑥) for the points 𝑥 on the boundary. In this case, it is 𝑢(𝑥) = 0.
def func(x):
return 0
If the function value is not a constant, we can use NumPy to compute it. For example, sin(𝜋𝑥) is 0 on the boundary, and thus we can also use
def func(x):
return np.sin(np.pi * x)
Now, we have specified the geometry, PDE residual, and Dirichlet boundary condition. We then define the PDE
problem as
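This is the same line as in the complete code below:

data = dde.data.PDE(geom, pde, bc, 16, 2, solution=func, num_test=100)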
The number 16 is the number of training residual points sampled inside the domain, and the number 2 is the number
of training points sampled on the boundary. The argument solution=func is the reference solution to compute the
error of our solution, and can be ignored if we don’t have a reference solution. We use 100 residual points for testing
the PDE residual.
Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and
width 50:
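A sketch of such a network, following the dde.maps.FNN pattern used in the Burgers demo below (the initializer choice is an assumption):

net = dde.maps.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")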
Now, we have the PDE problem and the network. We build a Model and choose the optimizer and learning rate:
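A minimal sketch; the metric matches the L2 relative error mentioned next:

model = dde.Model(data, net)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])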
We also compute the 𝐿2 relative error as a metric during training. We can also use callbacks to save the model
and the movie during training, which is optional.
checkpointer = dde.callbacks.ModelCheckpoint(
"model/model.ckpt", verbose=1, save_better_only=True
)
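The training call itself was lost in extraction; a sketch passing the callback defined above (the number of epochs is an assumption):

losshistory, train_state = model.train(epochs=10000, callbacks=[checkpointer])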
Complete code
Documentation: https://deepxde.readthedocs.io/en/latest/demos/poisson.1d.dirichlet.html
"""
import deepxde as dde
import matplotlib.pyplot as plt
import numpy as np
# Import tf if using backend tensorflow.compat.v1 or tensorflow
from deepxde.backend import tf
# Import torch if using backend pytorch
# import torch
def func(x):
return np.sin(np.pi * x)
geom = dde.geometry.Interval(-1, 1)
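# The pde and boundary definitions were lost in extraction; a sketch consistent
# with the step-by-step description above:
def pde(x, y):
    dy_xx = dde.grad.hessian(y, x)
    return -dy_xx - np.pi ** 2 * tf.sin(np.pi * x)


def boundary(x, on_boundary):
    return on_boundary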
bc = dde.DirichletBC(geom, func, boundary)
data = dde.data.PDE(geom, pde, bc, 16, 2, solution=func, num_test=100)
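# The network, model, and training lines were lost in extraction; a sketch
# following the Burgers complete code below, with the layer sizes from the text:
net = dde.maps.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])
losshistory, train_state = model.train(epochs=10000)
dde.saveplot(losshistory, train_state, issave=True, isplot=True)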
# Optional: Restore the saved model with the smallest training loss
# model.restore("model/model.ckpt-" + str(train_state.best_step), verbose=1)
# Plot PDE residual
x = geom.uniform_points(1000, True)
y = model.predict(x, operator=pde)
plt.figure()
plt.plot(x, y)
plt.xlabel("x")
plt.ylabel("PDE residual")
plt.show()
Burgers equation
Problem setup
∂𝑢/∂𝑡 + 𝑢 ∂𝑢/∂𝑥 = 𝜈 ∂²𝑢/∂𝑥², 𝑥 ∈ [−1, 1], 𝑡 ∈ [0, 1],

with the Dirichlet boundary conditions and initial condition

𝑢(−1, 𝑡) = 𝑢(1, 𝑡) = 0, 𝑢(𝑥, 0) = −sin(𝜋𝑥).
Implementation
This description goes through the implementation of a solver for the above described Burgers equation step-by-step.
First, the DeepXDE and TensorFlow (tf) modules are imported:
We begin by defining a computational geometry and time domain. We can use the built-in classes Interval and TimeDomain, and we combine both domains using GeometryXTime as follows
geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 0.99)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)
The first argument to pde is a 2-dimensional vector, where the first component (x[:, 0]) is the 𝑥-coordinate and the second component (x[:, 1]) is the 𝑡-coordinate. The second argument is the network output, i.e., the solution 𝑢(𝑥, 𝑡), but here we use y as the name of the variable.
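The residual code was lost in extraction; a sketch matching the equation above, with 𝜈 = 0.01/𝜋 as in the standard Burgers benchmark (an assumption):

def pde(x, y):
    dy_x = dde.grad.jacobian(y, x, i=0, j=0)  # du/dx
    dy_t = dde.grad.jacobian(y, x, i=0, j=1)  # du/dt
    dy_xx = dde.grad.hessian(y, x, i=0, j=0)  # d2u/dx2
    return dy_t + y * dy_x - 0.01 / np.pi * dy_xx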
Next, we consider the boundary/initial conditions. on_boundary is chosen here so that the whole boundary of the computational domain is used for the boundary condition. We pass the geomtime space-time geometry created above and on_boundary to DeepXDE's DirichletBC. We also define an IC, the initial condition for the Burgers equation, using the computational domain, the initial function, and on_initial, as sketched below.
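A sketch of these two conditions; the initial function −sin(𝜋𝑥) matches the problem setup above:

bc = dde.DirichletBC(geomtime, lambda x: 0, lambda _, on_boundary: on_boundary)
ic = dde.IC(
    geomtime, lambda x: -np.sin(np.pi * x[:, 0:1]), lambda _, on_initial: on_initial
)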
Now, we have specified the geometry, PDE residual, and boundary/initial condition. We then define the TimePDE
problem as
The number 2540 is the number of training residual points sampled inside the domain, and the number 80 is the number
of training points sampled on the boundary. We also include 160 initial residual points for the initial conditions.
Next, we choose the network. Here, we use a fully connected neural network of depth 4 (i.e., 3 hidden layers) and
width 20:
Now, we have the PDE problem and the network. We build a Model and choose the optimizer and learning rate:
After we train the network using Adam, we continue to train the network using L-BFGS to achieve a smaller loss:
model.compile("L-BFGS-B")
losshistory, train_state = model.train()
Complete code
Documentation: https://deepxde.readthedocs.io/en/latest/demos/burgers.html
"""
import deepxde as dde
import numpy as np
def gen_testdata():
data = np.load("dataset/Burgers.npz")
t, x, exact = data["t"], data["x"], data["usol"].T
xx, tt = np.meshgrid(x, t)
X = np.vstack((np.ravel(xx), np.ravel(tt))).T
y = exact.flatten()[:, None]
return X, y
geom = dde.geometry.Interval(-1, 1)
timedomain = dde.geometry.TimeDomain(0, 0.99)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)
data = dde.data.TimePDE(
geomtime, pde, [bc, ic], num_domain=2540, num_boundary=80, num_initial=160
)
net = dde.maps.FNN([2] + [20] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(epochs=15000)
model.compile("L-BFGS")
losshistory, train_state = model.train()
dde.saveplot(losshistory, train_state, issave=True, isplot=True)
X, y_true = gen_testdata()
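The evaluation lines after this point were lost in extraction; a sketch that computes the PDE residual and the L2 relative error on the test data loaded above:

y_pred = model.predict(X)
f = model.predict(X, operator=pde)
print("Mean residual:", np.mean(np.absolute(f)))
print("L2 relative error:", dde.metrics.l2_relative_error(y_true, y_pred))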
• Diffusion equation
• Diffusion equation with hard initial and boundary conditions
• Diffusion equation with training points resampling
• Heat equation
• Burgers equation with residual-based adaptive refinement (RAR)
• Beltrami flow
• Kovasznay flow
• Wave propagation with spatio-temporal multi-scale Fourier feature architecture
• Integro-differential equation
• Volterra IDE
2.3 Demos of Inverse Problems

2.3.1 ODEs
Inverse problem for the Lorenz system

Problem setup
Implementation
Complete code
Jupyter notebook
Documentation: https://deepxde.readthedocs.io/en/latest/demos/lorenz.inverse.html
"""
import deepxde as dde
import numpy as np
def gen_traindata():
data = np.load("dataset/Lorenz.npz")
return data["t"], data["y"]
C1 = dde.Variable(1.0)
C2 = dde.Variable(1.0)
C3 = dde.Variable(1.0)
geom = dde.geometry.TimeDomain(0, 3)
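# The ODE system, boundary function, and observation points were lost in
# extraction; a sketch of the Lorenz residuals with the unknown parameters
# C1, C2, C3, and of the per-component observations via PointSetBC:
def boundary(_, on_initial):
    return on_initial


def Lorenz_system(x, y):
    y1, y2, y3 = y[:, 0:1], y[:, 1:2], y[:, 2:3]
    dy1_x = dde.grad.jacobian(y, x, i=0)
    dy2_x = dde.grad.jacobian(y, x, i=1)
    dy3_x = dde.grad.jacobian(y, x, i=2)
    return [
        dy1_x - C1 * (y2 - y1),
        dy2_x - y1 * (C2 - y3) + y2,
        dy3_x - y1 * y2 + C3 * y3,
    ]


observe_t, ob_y = gen_traindata()
observe_y0 = dde.PointSetBC(observe_t, ob_y[:, 0:1], component=0)
observe_y1 = dde.PointSetBC(observe_t, ob_y[:, 1:2], component=1)
observe_y2 = dde.PointSetBC(observe_t, ob_y[:, 2:3], component=2)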
# Initial conditions
ic1 = dde.IC(geom, lambda X: -8, boundary, component=0)
ic2 = dde.IC(geom, lambda X: 7, boundary, component=1)
ic3 = dde.IC(geom, lambda X: 27, boundary, component=2)
data = dde.data.PDE(
geom,
Lorenz_system,
[ic1, ic2, ic3, observe_y0, observe_y1, observe_y2],
num_domain=400,
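    # The remaining lines were lost in extraction; a sketch completing the
    # data call and the training, tracking the unknown variables. Depending on
    # the DeepXDE version, C1, C2, C3 may also need to be passed to compile()
    # via external_trainable_variables.
    anchors=observe_t,
)
net = dde.maps.FNN([1] + [40] * 3 + [3], "tanh", "Glorot uniform")
model = dde.Model(data, net)
model.compile("adam", lr=0.001)
variable = dde.callbacks.VariableValue(
    [C1, C2, C3], period=600, filename="variables.dat"
)
losshistory, train_state = model.train(epochs=60000, callbacks=[variable])
dde.saveplot(losshistory, train_state, issave=True, isplot=True)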
2.3.2 PDEs
2.5 FAQ
If you have any questions about DeepXDE, first read the papers/slides and watch the video at the DeepXDE homepage
and also check the following list of frequently asked DeepXDE questions. To get further help, you can open an issue
in the GitHub “Issues” section.
• Q: DeepXDE failed to run.
A: #2, #3, #5
• Q: What is the output of DeepXDE? How can I visualize the results?
A: #4, #9, #17, #48, #53, #73, #77, #171, #217, #218, #223, #274, #276
• Q: More details and examples about geometry.
A: #32, #38, #161, #264, #278, #332
• Q: How can I implement new ODEs/PDEs, e.g., compute derivatives, complicated PDEs?
A: #12, #13, #21, #22, #74, #78, #79, #124, #172, #185, #193, #194, #246, #302
• Q: More details and examples about initial conditions.
A: #19, #75, #104, #134
• Q: More details and examples about boundary conditions.
A: #6, #10, #15, #16, #22, #26, #33, #38, #40, #44, #49, #115, #140, #156
• Q: By default, initial/boundary conditions are enforced in DeepXDE as soft constraints. How can I enforce
them as hard constraints?
A: #36, #90, #92, #252
• Q: I failed to train the network or get the right solution, e.g., large training loss, unbalanced losses.
A: #15, #22, #33, #41, #61, #62, #80, #84, #85, #108, #126, #141, #188, #247, #305, #321
• Q: Implement certain features for the input, such as Fourier features.
A: #277
• Q: Implement new losses/constraints.
A: #286, #311
• Q: How can I implement new IDEs?
A: #95, #198
• Q: Solve PDEs with complex numbers.
A: #284
• Q: Solve inverse problems with unknown parameters/fields in the PDEs or initial/boundary conditions.
A: #55, #76, #86, #114, #120, #125, #178, #208, #235
• Q: Solve parametric PDEs.
A: #273, #299
• Q: How does DeepXDE choose the training points? How can I use some specific training points?
A: #32, #57, #64
• Q: How can I give different weights to different residual points?
A: #45
• Q: I want to customize network training/optimization, e.g., mini-batch.
A: #166, #307, #320, #331
• Q: How can I use a trained model for new predictions?
A: #10, #18, #93, #177
• Q: How can I save a trained model and then load the model later?
A: #54, #57, #58, #63, #103, #206, #254
• Q: Residual-based adaptive refinement (RAR).
A: #63
• Q: By default, DeepXDE uses float32. How can I use float64?
A: #28
• Q: More details about DeepXDE source code, and want to modify DeepXDE.
A: #35, #39, #66, #68, #69, #91, #99, #131, #163, #175, #202
• Q: Examples collected from users.
A: Lotka–Volterra, Potential flow around a cylinder, Laminar Incompressible flow passing a step, Shallow
water equations
• Q: Questions about multi-fidelity neural networks.
A: #94, #195, #324
2.6 Research
Here is a list of research papers that used DeepXDE. If you would like your paper to appear here, open an issue in the
GitHub “Issues” section.
2.6.1 PINN
1. S. Lee, & T. Kadeethum. Physics-informed neural networks for solving coupled flow and transport system.
2021.
2. Y. Chen, & L. Dal Negro. Physics-informed neural networks for imaging and parameter retrieval of photonic
nanostructures from near-field data. arXiv preprint arXiv:2109.12754, 2021.
3. A. M. Ncube, G. E. Harmsen, & A. S. Cornell. Investigating a new approach to quasinormal modes: Physics-informed neural networks. arXiv preprint arXiv:2108.05867, 2021.
4. M. Almajid, & M. Abu-Alsaud. Prediction of porous media fluid flow using physics informed neural networks.
Journal of Petroleum Science and Engineering, 109205, 2021.
5. M. Merkle. Boosting the training of physics-informed neural networks with transfer learning. 2021. [Code]
6. L. Lu, R. Pestourie, W. Yao, Z. Wang, F. Verdugo, & S. G. Johnson. Physics-informed neural networks with
hard constraints for inverse design. arXiv preprint arXiv:2102.04626, 2021. [Code]
7. L. Lu, X. Meng, Z. Mao, & G. E. Karniadakis. DeepXDE: A deep learning library for solving differential
equations. SIAM Review, 63(1), 208–228, 2021. [Code]
8. A. Yazdani, L. Lu, M. Raissi, & G. E. Karniadakis. Systems biology informed deep learning for inferring
parameters and hidden dynamics. PLoS Computational Biology, 16(11), e1007575, 2020. [Code]
9. Q. Zhang, Y. Chen, & Z. Yang. Data driven solutions and discoveries in mechanics using physics informed
neural network. Preprints, 2020060258, 2020.
10. Y. Chen, L. Lu, G. E. Karniadakis, & L. D. Negro. Physics-informed neural networks for inverse problems
in nano-optics and metamaterials. Optics Express, 28(8), 11618–11633, 2020.
11. G. Pang, L. Lu, & G. E. Karniadakis. fPINNs: Fractional physics-informed neural networks. SIAM Journal
on Scientific Computing, 41(4), A2603–A2626, 2019. [Code]
12. D. Zhang, L. Lu, L. Guo, & G. E. Karniadakis. Quantifying total uncertainty in physics-informed neural
networks for solving forward and inverse stochastic problems. Journal of Computational Physics, 397,
108850, 2019.
2.6.2 DeepONet
1. Z. Mao, L. Lu, O. Marxen, T. A. Zaki, & G. E. Karniadakis. DeepM&Mnet for hypersonics: Predicting the coupled flow and finite-rate chemistry behind a normal shock using neural-network approximation of operators. Journal of Computational Physics, 447, 110698, 2021.
2. P. Clark Di Leoni, L. Lu, C. Meneveau, G. E. Karniadakis, & T. A. Zaki. DeepONet prediction of linear
instability waves in high-speed boundary layers. arXiv preprint arXiv:2105.08697, 2021.
3. S. Cai, Z. Wang, L. Lu, T. A. Zaki, & G. E. Karniadakis. DeepM&Mnet: Inferring the electroconvection multiphysics fields based on operator approximation by neural networks. Journal of Computational Physics, 436, 110296, 2021.
4. L. Lu, P. Jin, G. Pang, Z. Zhang, & G. E. Karniadakis. Learning nonlinear operators via DeepONet based on
the universal approximation theorem of operators. Nature Machine Intelligence, 3, 218–229, 2021. [Code]
5. C. Lin, Z. Li, L. Lu, S. Cai, M. Maxey, & G. E. Karniadakis. Operator learning for predicting multiscale
bubble growth dynamics. The Journal of Chemical Physics, 154(10), 104118, 2021.
2.6.3 Multi-fidelity NN
1. L. Lu, M. Dao, P. Kumar, U. Ramamurty, G. E. Karniadakis, & S. Suresh. Extraction of mechanical properties
of materials through deep learning from instrumented indentation. Proceedings of the National Academy
of Sciences, 117(13), 7052–7062, 2020. [Code]
2. X. Meng, & G. E. Karniadakis. A composite neural network that learns from multi-fidelity data: Application
to function approximation and inverse PDE problems. Journal of Computational Physics, 401, 109020,
2020.
2.7 Cite DeepXDE

If you use DeepXDE for academic research, you are encouraged to cite the following paper:
@article{lu2021deepxde,
author = {Lu, Lu and Meng, Xuhui and Mao, Zhiping and Karniadakis, George Em},
title = {{DeepXDE}: A deep learning library for solving differential equations},
journal = {SIAM Review},
volume = {63},
number = {1},
pages = {208-228},
year = {2021},
doi = {10.1137/19M1274067}
}
2.8 The Team

DeepXDE was originally developed by Lu Lu at Brown University under the supervision of Prof. George Karniadakis, supported by PhILMs.

DeepXDE is currently maintained by Lu Lu at the University of Pennsylvania, with major contributions coming from several talented individuals in various forms and means. A non-exhaustive but growing list includes: Shunyuan Mao, Zongren Zou.
CHAPTER 3

API reference
If you are looking for information on a specific function, class or method, this part of the documentation is for you.
3.1 deepxde
class deepxde.callbacks.Callback
Bases: object
Callback base class.
model
instance of Model. Reference of the model being trained.
init()
Init after setting a model.
on_batch_begin()
Called at the beginning of every batch.
on_batch_end()
Called at the end of every batch.
on_epoch_begin()
Called at the beginning of every epoch.
on_epoch_end()
Called at the end of every epoch.
on_predict_begin()
Called at the beginning of prediction.
on_predict_end()
Called at the end of prediction.
on_train_begin()
Called at the beginning of model training.
on_train_end()
Called at the end of model training.
set_model(model)
class deepxde.callbacks.CallbackList(callbacks=None)
Bases: deepxde.callbacks.Callback
Container abstracting a list of callbacks.
Parameters callbacks – List of Callback instances.
append(callback)
on_batch_begin()
Called at the beginning of every batch.
on_batch_end()
Called at the end of every batch.
on_epoch_begin()
Called at the beginning of every epoch.
on_epoch_end()
Called at the end of every epoch.
on_predict_begin()
Called at the beginning of prediction.
on_predict_end()
Called at the end of prediction.
on_train_begin()
Called at the beginning of model training.
on_train_end()
Called at the end of model training.
set_model(model)
class deepxde.callbacks.DropoutUncertainty(period=1000)
Bases: deepxde.callbacks.Callback
Uncertainty estimation via MC dropout.
Reference: https://arxiv.org/abs/1506.02142
Warning: This cannot be used together with other techniques that have different behaviors during training
and testing, such as batch normalization.
on_epoch_end()
Called at the end of every epoch.
on_train_end()
Called at the end of model training.
class deepxde.callbacks.EarlyStopping(min_delta=0, patience=0, baseline=None)
Bases: deepxde.callbacks.Callback
Stop training when a monitored quantity (training loss) has stopped improving. Only checked at validation step
according to display_every in Model.train.
Parameters
• min_delta – Minimum change in the monitored quantity to qualify as an improvement, i.e., an absolute change of less than min_delta will count as no improvement.
• patience – Number of epochs with no improvement after which training will be stopped.
• baseline – Baseline value for the monitored quantity to reach. Training will stop if the
model doesn’t show improvement over the baseline.
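A typical usage sketch (the thresholds here are assumptions):

early_stopping = dde.callbacks.EarlyStopping(min_delta=1e-8, patience=20000)
losshistory, train_state = model.train(epochs=60000, callbacks=[early_stopping])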
get_monitor_value()
on_epoch_end()
Called at the end of every epoch.
on_train_begin()
Called at the beginning of model training.
on_train_end()
Called at the end of model training.
class deepxde.callbacks.FirstDerivative(x, component_x=0, component_y=0)
Bases: deepxde.callbacks.OperatorPredictor
Generates the first order derivative of the outputs with respect to the inputs.
Parameters x – The input data.
class deepxde.callbacks.ModelCheckpoint(filepath, verbose=0, save_better_only=False, period=1)
Bases: deepxde.callbacks.Callback
Save the model after every epoch.
Parameters
• filepath (string) – Path to save the model file.
• verbose – Verbosity mode, 0 or 1.
• save_better_only – If True, only save a better model according to the quantity monitored. Model is only checked at validation step according to display_every in Model.train.
• period – Interval (number of epochs) between checkpoints.
on_epoch_end()
Called at the end of every epoch.
class deepxde.callbacks.MovieDumper(filename, x1, x2, num_points=100, period=1, component=0, save_spectrum=False, y_reference=None)
Bases: deepxde.callbacks.Callback
Dump a movie to show the training progress of the function along a line.
Parameters spectrum – If True, dump the spectrum of the Fourier transform.
init()
Init after setting a model.
on_epoch_end()
Called at the end of every epoch.
on_train_begin()
Called at the beginning of model training.
on_train_end()
Called at the end of model training.
class deepxde.callbacks.OperatorPredictor(x, op)
Bases: deepxde.callbacks.Callback
Generates operator values for the input samples.
Parameters
• x – The input data.
• op – The operator with inputs (x, y).
get_value()
init()
Init after setting a model.
on_predict_end()
Called at the end of prediction.
class deepxde.callbacks.PDEResidualResampler(period=100)
Bases: deepxde.callbacks.Callback
Resample the training points for PDE losses every given period.
on_epoch_end()
Called at the end of every epoch.
on_train_begin()
Called at the beginning of model training.
class deepxde.callbacks.Timer(available_time)
Bases: deepxde.callbacks.Callback
Stop training when training time reaches the threshold. This Timer starts after the first call of on_train_begin.
Parameters available_time (float) – Total time (in minutes) available for the training.
on_epoch_end()
Called at the end of every epoch.
on_train_begin()
Called at the beginning of model training.
class deepxde.callbacks.VariableValue(var_list, period=1, filename=None, precision=2)
Bases: deepxde.callbacks.Callback
Get the variable values.
Parameters
• var_list – A TensorFlow Variable or a list of TensorFlow Variable.
• period (int) – Interval (number of epochs) between checking values.
• filename (string) – Output the values to the file filename. The file is kept open to
allow instances to be re-used. If None, output to the screen.
• precision (int) – The precision of variables to display.
get_value()
Return the variable values.
on_epoch_end()
Called at the end of every epoch.
on_train_begin()
Called at the beginning of model training.
deepxde.config.default_float()
Returns the default float type, as a string.
deepxde.config.set_default_float(value)
Sets the default float type.
The default floating point type is ‘float32’.
Parameters value (String) – ‘float32’ or ‘float64’.
deepxde.gradients.clear()
Clear cached Jacobians and Hessians.
deepxde.gradients.hessian(ys, xs, component=None, i=0, j=0, grad_y=None)
Compute Hessian matrix H: H[i][j] = d^2y / dx_i dx_j, where i, j = 0, ..., dim_x - 1.
Use this function to compute second-order derivatives instead of tf.gradients() or torch.autograd.grad(), because
• It is lazy evaluation, i.e., it only computes H[i][j] when needed.
• It will remember the gradients that have already been computed to avoid duplicate computation.
Parameters
• ys – Output Tensor of shape (batch_size, dim_y).
• xs – Input Tensor of shape (batch_size, dim_x).
• component – If dim_y > 1, then ys[:, component] is used as y to compute the Hessian. If
dim_y = 1, component must be None.
• i (int) –
• j (int) –
• grad_y – The gradient of y w.r.t. xs. Provide grad_y if known to avoid duplicate compu-
tation. grad_y can be computed from jacobian. Even if you do not provide grad_y, there
is no duplicate computation if you use jacobian to compute first-order derivatives.
Returns H[i][j].
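For example, for a network output y and input x of shape (batch_size, 2), a usage sketch via the dde.grad alias used in the demos:

dy_xx = dde.grad.hessian(y, x, i=0, j=0)  # d^2y / dx_0^2
dy_xt = dde.grad.hessian(y, x, i=0, j=1)  # d^2y / (dx_0 dx_1)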
deepxde.gradients.jacobian(ys, xs, i=0, j=None)
Compute Jacobian matrix J: J[i][j] = dy_i / dx_j, where i = 0, ..., dim_y - 1 and j = 0, ..., dim_x - 1. Like hessian above, it is computed lazily and cached to avoid duplicate computation.
Parameters
• ys – Output Tensor of shape (batch_size, dim_y).
• xs – Input Tensor of shape (batch_size, dim_x).
• i (int) –
• j (int or None) –
Returns J[i][j] in Jacobian matrix J. If j is None, returns the gradient of y_i, i.e., J[i].
deepxde.losses.get(identifier)
deepxde.losses.mean_absolute_error(y_true, y_pred)
deepxde.losses.mean_absolute_percentage_error(y_true, y_pred)
deepxde.losses.mean_squared_error(y_true, y_pred)
deepxde.losses.softmax_cross_entropy(y_true, y_pred)
deepxde.losses.zero(*_)
deepxde.metrics.absolute_percentage_error_std(y_true, y_pred)
deepxde.metrics.accuracy(y_true, y_pred)
deepxde.metrics.get(identifier)
deepxde.metrics.l2_relative_error(y_true, y_pred)
deepxde.metrics.max_absolute_percentage_error(y_true, y_pred)
deepxde.metrics.mean_absolute_percentage_error(y_true, y_pred)
deepxde.metrics.mean_l2_relative_error(y_true, y_pred)
Compute the average of L2 relative error along the first axis.
deepxde.metrics.mean_squared_error(y_true, y_pred)
deepxde.metrics.nanl2_relative_error(y_true, y_pred)
Return the L2 relative error treating Not a Numbers (NaNs) as zero.
deepxde.postprocessing.plot_best_state(train_state)
Plot the best result of the smallest training loss.
This function only works for 1D and 2D problems. For other problems and to better customize the figure, use
save_best_state().
deepxde.postprocessing.plot_loss_history(loss_history, fname=None)
Plot the training and testing loss history.
class deepxde.real.Real(precision)
Bases: object
set_float32()
set_float64()
3.2 deepxde.data
class deepxde.data.data.Data
Bases: object
Data base class.
get_matrix(sparse=False)
get_matrix_dynamic(sparse)
get_matrix_static()
get_x()
get_x_dynamic()
get_x_static()
class deepxde.data.fpde.Scheme(meshtype, resolution)
Bases: object
Fractional Laplacian discretization.
Discretize the fractional Laplacian using a quadrature rule for the integral with respect to the directions and the Grunwald-Letnikov (GL) formula for the Riemann-Liouville directional fractional derivative.
Parameters
• meshtype (string) – “static” or “dynamic”.
• resolution – A list of integer. The first number is the number of quadrature points in
the first direction, . . . , and the last number is the GL parameter.
class deepxde.data.fpde.TimeFPDE(geometryxtime, fpde, alpha, ic_bcs, resolution, meshtype='dynamic', num_domain=0, num_boundary=0, num_initial=0, train_distribution='Sobol', anchors=None, solution=None, num_test=None)
Bases: deepxde.data.fpde.FPDE
Time-dependent fractional PDE solver.
D-dimensional fractional Laplacian of order alpha/2 (1 < alpha < 2) is defined as: (-Delta)^(alpha/2) u(x) = C(alpha, D) int_{||theta||=1} D_theta^alpha u(x) d theta, where C(alpha, D) = gamma((1-alpha)/2) * gamma((D+alpha)/2) / (2 pi^((D+1)/2)), D_theta^alpha is the Riemann-Liouville directional fractional derivative, and theta is the differentiation direction vector. The solution u(x) is assumed to be identically zero on the boundary and exterior of the domain. When D = 1, C(alpha, D) = 1 / (2 cos(alpha * pi / 2)).
This solver does not consider C(alpha, D) in the fractional Laplacian, and only discretizes int_{||theta||=1} D_theta^alpha u(x) d theta. D_theta^alpha is approximated by the Grunwald-Letnikov formula.
get_int_matrix(training)
test()
Return a test dataset.
train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.
train_points()
deepxde.data.helper.one_function(dim_outputs)
deepxde.data.helper.zero_function(dim_outputs)
The current version only supports 1D problems with the integral int_0^x K(x, t) y(t) dt.
Parameters kernel – (x, t) –> R.
get_int_matrix(training)
losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.
quad_points(X)
test()
Return a test dataset.
test_points()
train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.
Warning: The testing points include points inside the domain and points on the boundary, and they may
not have the same density, and thus the entire testing points may not be uniformly distributed. As a result, if
you have a reference solution (solution) and would like to compute a metric such as
Model.compile(metrics=["l2 relative error"])
then the metric may not be very accurate. To better compute a metric, you can sample the points manually, and then use Model.predict() to predict the solution on these points and compute the metric:
x = geom.uniform_points(num, boundary=True)
y_true = ...
y_pred = model.predict(x)
error = dde.metrics.l2_relative_error(y_true, y_pred)
train_x_all
A Numpy array of all points for training. train_x_all is unordered, and does not have duplication.
train_x
A Numpy array of the points fed into the network for training. train_x is constructed from train_x_all,
ordered from BCs to PDE, and may have duplicate points.
train_x_bc
A Numpy array of the training points for BCs. train_x_bc is constructed from train_x_all at the first step
of training, by default it won’t be updated when train_x_all changes. To update train_x_bc, set it to None
and call bc_points, and then update the loss function by model.compile().
num_bcs
num_bcs[i] is the number of points for bcs[i].
Type list
test_x
A Numpy array of the points fed into the network for testing, ordered from BCs to PDE. The BC points
are exactly the same points in train_x_bc.
train_aux_vars
Auxiliary variables that associate with train_x.
test_aux_vars
Auxiliary variables that associate with test_x.
add_anchors(anchors)
Add new points for training PDE losses. The BC points will not be updated.
bc_points()
losses(targets, outputs, loss, model)
Return a list of losses, i.e., constraints.
resample_train_points()
Resample the training points for PDEs. The BC points will not be updated.
test()
Return a test dataset.
test_points()
train_next_batch(batch_size=None)
Return a training dataset of the size batch_size.
train_points()
class deepxde.data.pde.TimePDE(geometryxtime, pde, ic_bcs, num_domain=0, num_boundary=0, num_initial=0, train_distribution='Sobol', anchors=None, exclusions=None, solution=None, num_test=None, auxiliary_var_function=None)
Bases: deepxde.data.pde.PDE
Time-dependent PDE solver.
Parameters num_initial (int) – The number of training points sampled on the initial location.
train_points()
indices = tf.data.Dataset.range(num_samples)
indices = indices.repeat().shuffle(num_samples).batch(batch_size)
iterator = iter(indices)
batch_indices = iterator.get_next()
3.3 deepxde.geometry
union(other)
CSG Union.
class deepxde.geometry.geometry_1d.Interval(l, r)
Bases: deepxde.geometry.geometry.Geometry
background_points(x, dirn, dist2npt, shift)
Parameters
• dirn – -1 (left), or 1 (right), or 0 (both direction).
• dist2npt – A function which converts distance to the number of extra points (not in-
cluding x).
• shift – The number of shift.
boundary_normal(x)
Compute the unit normal at x for Neumann or Robin boundary conditions.
distance2boundary(x, dirn)
inside(x)
Check if x is inside the geometry (including the boundary).
log_uniform_points(n, boundary=True)
mindist2boundary(x)
on_boundary(x)
Check if x is on the geometry boundary.
periodic_point(x, component=0)
Compute the periodic image of x for periodic boundary condition.
random_boundary_points(n, random=’pseudo’)
Compute the random point locations on the boundary.
random_points(n, random=’pseudo’)
Compute the random point locations in the geometry.
uniform_boundary_points(n)
Compute the equispaced point locations on the boundary.
uniform_points(n, boundary=True)
Compute the equispaced point locations in the geometry.
distance2boundary_unitdirn(x, dirn)
https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection
inside(x)
Check if x is inside the geometry (including the boundary).
mindist2boundary(x)
on_boundary(x)
Check if x is on the geometry boundary.
random_boundary_points(n, random=’pseudo’)
Compute the random point locations on the boundary.
random_points(n, random=’pseudo’)
http://mathworld.wolfram.com/DiskPointPicking.html
uniform_boundary_points(n)
Compute the equispaced point locations on the boundary.
class deepxde.geometry.geometry_2d.Polygon(vertices)
Bases: deepxde.geometry.geometry.Geometry
Simple polygon.
Parameters vertices – The order of vertices can be in a clockwise or counterclockwise direction.
The vertices will be re-ordered in counterclockwise (right hand rule).
boundary_normal(x)
Compute the unit normal at x for Neumann or Robin boundary conditions.
inside(x)
Check if x is inside the geometry (including the boundary).
on_boundary(x)
Check if x is on the geometry boundary.
random_boundary_points(n, random=’pseudo’)
Compute the random point locations on the boundary.
random_points(n, random=’pseudo’)
Compute the random point locations in the geometry.
uniform_boundary_points(n)
Compute the equispaced point locations on the boundary.
class deepxde.geometry.geometry_2d.Rectangle(xmin, xmax)
Bases: deepxde.geometry.geometry_nd.Hypercube
Parameters
• xmin – Coordinate of bottom left corner.
• xmax – Coordinate of top right corner.
static is_valid(vertices)
Check if the geometry is a Rectangle.
random_boundary_points(n, random=’pseudo’)
Compute the random point locations on the boundary.
uniform_boundary_points(n)
Compute the equispaced point locations on the boundary.
deepxde.geometry.geometry_2d.is_rectangle(vertices)
Check if the geometry is a rectangle. https://stackoverflow.com/questions/2303278/find-if-4-points-on-a-plane-form-a-rectangle/2304031
1. Find the center of mass of corner points: cx = (x1 + x2 + x3 + x4)/4, cy = (y1 + y2 + y3 + y4)/4
2. Test if square of distances from center of mass to all 4 corners are equal
deepxde.geometry.geometry_2d.polygon_signed_area(vertices)
The (signed) area of a simple polygon.
If the vertices are in the counterclockwise direction, then the area is positive; if they are in the clockwise
direction, the area is negative.
Shoelace formula: https://en.wikipedia.org/wiki/Shoelace_formula
random_points(n, random=’pseudo’)
Compute the random point locations in the geometry.
uniform_points(n, boundary=True)
Compute the equispaced point locations in the geometry.
class deepxde.geometry.geometry_nd.Hypersphere(center, radius)
Bases: deepxde.geometry.geometry.Geometry
background_points(x, dirn, dist2npt, shift)
boundary_normal(x)
Compute the unit normal at x for Neumann or Robin boundary conditions.
distance2boundary(x, dirn)
distance2boundary_unitdirn(x, dirn)
https://en.wikipedia.org/wiki/Line%E2%80%93sphere_intersection
inside(x)
Check if x is inside the geometry (including the boundary).
mindist2boundary(x)
on_boundary(x)
Check if x is on the geometry boundary.
random_boundary_points(n, random=’pseudo’)
http://mathworld.wolfram.com/HyperspherePointPicking.html
random_points(n, random=’pseudo’)
https://math.stackexchange.com/questions/87230/picking-random-points-in-the-volume-of-sphere-with-uniform-probabilit
deepxde.geometry.sampler.pseudo(n_samples, dimension)
Pseudo random.
deepxde.geometry.sampler.quasirandom(n_samples, dimension, sampler)
deepxde.geometry.sampler.sample(n_samples, dimension, sampler=’pseudo’)
Generate random or quasirandom samples in [0, 1]^dimension.
Parameters
• n_samples (int) – The number of samples.
• dimension (int) – Space dimension.
• sampler (string) – One of the following: “pseudo” (pseudorandom), “LHS” (Latin
hypercube sampling), “Halton” (Halton sequence), “Hammersley” (Hammersley sequence),
or “Sobol” (Sobol sequence).
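A usage sketch:

from deepxde.geometry.sampler import sample

x = sample(100, 2, sampler="Sobol")  # 100 Sobol points in [0, 1]^2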
on_initial(x)
periodic_point(x, component)
random_boundary_points(n, random=’pseudo’)
random_initial_points(n, random=’pseudo’)
random_points(n, random=’pseudo’)
uniform_boundary_points(n)
Uniform boundary points on the spatio-temporal domain.
Geometry surface area ~ bbox. Time surface area ~ diam.
uniform_initial_points(n)
uniform_points(n, boundary=True)
Uniform points on the spatio-temporal domain.
Geometry volume ~ bbox. Time volume ~ diam.
class deepxde.geometry.timedomain.TimeDomain(t0, t1)
Bases: deepxde.geometry.geometry_1d.Interval
on_initial(t)
3.4 deepxde.icbcs
Boundary conditions.
class deepxde.icbcs.boundary_conditions.BC(geom, on_boundary, component)
Bases: abc.ABC
Boundary condition base class.
Parameters
• geom – A deepxde.geometry.Geometry instance.
• on_boundary – A function: (x, Geometry.on_boundary(x)) -> True/False.
• component – The output component satisfying this BC.
collocation_points(X)
error(X, inputs, outputs, beg, end)
Returns the loss.
filter(X)
normal_derivative(X, inputs, outputs, beg, end)
class deepxde.icbcs.boundary_conditions.DirichletBC(geom, func, on_boundary, component=0)
Bases: deepxde.icbcs.boundary_conditions.BC
Dirichlet boundary conditions: y(x) = func(x).
error(X, inputs, outputs, beg, end)
Returns the loss.
Initial conditions.
class deepxde.icbcs.initial_conditions.IC(geom, func, on_initial, component=0)
Bases: object
Initial conditions: y([x, t0]) = func([x, t0]).
collocation_points(X)
error(X, inputs, outputs, beg, end)
filter(X)
3.5 deepxde.nn
deepxde.nn.activations.get(identifier)
Returns function.
Parameters identifier – Function or string.
Returns Function corresponding to the input string or input function.
deepxde.nn.activations.layer_wise_locally_adaptive(activation, n=1)
Layer-wise locally adaptive activation functions (L-LAAF).
Examples:
To define a L-LAAF ReLU with the scaling factor n = 10:
n = 10
activation = f"LAAF-{n} relu" # "LAAF-10 relu"
With distribution=”uniform”, samples are drawn from a uniform distribution within [-limit, limit], with limit =
sqrt(3 * scale / n).
Parameters
• scale – Scaling factor (positive float).
• mode – One of “fan_in”, “fan_out”, “fan_avg”.
• distribution – Random distribution to use. One of “normal”, “uniform”.
• seed – A Python integer. Used to create random seeds. See tf.set_random_seed for behavior.
• dtype – Default data type, used if no dtype argument is provided when calling the initializer. Only floating point types are supported.
Raises ValueError – In case of an invalid value for the “scale”, “mode”, or “distribution” arguments.
deepxde.nn.initializers.get(identifier)
Retrieve an initializer by the identifier.
Parameters identifier – String that contains the initializer name or an initializer function.
Returns Initializer instance base on the input identifier.
deepxde.nn.initializers.initializer_dict_tf()
deepxde.nn.initializers.initializer_dict_torch()
deepxde.nn.regularizers.get(identifier)
3.6 deepxde.nn.tensorflow_compat_v1
class deepxde.nn.tensorflow_compat_v1.deeponet.DeepONet(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer, regularization=None, use_bias=True, stacked=False, trainable_branch=True, trainable_trunk=True)
Bases: deepxde.nn.tensorflow_compat_v1.nn.NN
Deep operator network.
Lu et al. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators.
Nat Mach Intell, 2021.
Parameters
• layer_sizes_branch – A list of integers as the width of a fully connected network, or
(dim, f) where dim is the input dimension and f is a network function. The width of the last
layer in the branch and trunk net should be equal.
class deepxde.nn.tensorflow_compat_v1.deeponet.FourierDeepONetCartesianProd(layer_size_Fourier_branch, output_shape, layer_size_branch, layer_size_trunk, activation, kernel_initializer, regularization=None)
Bases: deepxde.nn.tensorflow_compat_v1.deeponet.DeepONetCartesianProd
Deep operator network with a Fourier trunk net for dataset in the format of Cartesian product.
There are two pairs of trunk and branch nets. One pair is the vanilla DeepONet, and the other one uses Fourier
basis as the trunk net. Because the dataset is in the format of Cartesian product, the Fourier branch-trunk nets
are implemented via the inverse FFT.
Parameters
• layer_size_Fourier_branch – A list of integers as the width of a fully connected
network, or (dim, f) where dim is the input dimension and f is a network function.
• output_shape (tuple[int]) – Shape of the output.
build()
Construct the network.
class deepxde.nn.tensorflow_compat_v1.mfnn.MfNN(layer_sizes_low_fidelity, layer_sizes_high_fidelity, activation, kernel_initializer, regularization=None, residue=False, trainable_low_fidelity=True, trainable_high_fidelity=True)
Bases: deepxde.nn.tensorflow_compat_v1.nn.NN
Multifidelity neural networks.
build()
Construct the network.
inputs
Return the net inputs (placeholders).
outputs
Return the net outputs (tf.Tensor).
targets
Return the targets of the net outputs (placeholders).
Parameters sigmas – List of standard deviations of the distribution of Fourier feature embeddings.
build()
Construct the network.
class deepxde.nn.tensorflow_compat_v1.msffn.STMsFFN(layer_sizes, activation, kernel_initializer, sigmas_x, sigmas_t, regularization=None, dropout_rate=0, batch_normalization=None, layer_normalization=None, kernel_constraint=None, use_bias=True)
Bases: deepxde.nn.tensorflow_compat_v1.msffn.MsFFN
Spatio-temporal multi-scale Fourier feature networks.
References:
• https://arxiv.org/abs/2012.10047
• https://github.com/PredictiveIntelligenceLab/MultiscalePINNs
build()
Construct the network.
class deepxde.nn.tensorflow_compat_v1.nn.NN
Bases: object
Base class for all neural network modules.
apply_feature_transform(transform)
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).
apply_output_transform(transform)
Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
auxiliary_vars
Return additional variables needed (placeholders).
build()
Construct the network.
built
feed_dict(training, inputs, targets=None, auxiliary_vars=None)
Construct a feed_dict to feed values to TensorFlow placeholders.
inputs
Return the net inputs (placeholders).
outputs
Return the net outputs (tf.Tensor).
targets
Return the targets of the net outputs (placeholders).
3.7 deepxde.nn.tensorflow
class deepxde.nn.tensorflow.deeponet.DeepONetCartesianProd(layer_sizes_branch, layer_sizes_trunk, activation, kernel_initializer)
Bases: deepxde.nn.tensorflow.nn.NN
Deep operator network for dataset in the format of Cartesian product.
Parameters
• layer_size_branch – A list of integers as the width of a fully connected network, or
(dim, f) where dim is the input dimension and f is a network function. The width of the last
layer in the branch and trunk net should be equal.
• layer_size_trunk (list) – A list of integers as the width of a fully connected network.
• activation – If activation is a string, then the same activation is used in both trunk and branch nets. If activation is a dict, then the trunk net uses the activation activation[“trunk”], and the branch net uses activation[“branch”].
call(inputs, training=False)
Calls the model on new inputs.
In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph
from the provided inputs).
Note: This method should not be called directly. It is only meant to be overridden when subclassing
tf.keras.Model. To call a model on an input, always use the __call__ method, i.e. model(inputs), which
relies on the underlying call method.
Parameters
• inputs – Input tensor, or dict/list/tuple of input tensors.
• training – Boolean or boolean scalar tensor, indicating whether to run the Network in
training mode or inference mode.
• mask – A mask or list of masks. A mask can be either a tensor or None (no mask).
Returns A tensor if there is a single output, or a list of tensors if there are more than one outputs.
class deepxde.nn.tensorflow.nn.NN
Bases: keras.engine.training.Model
Base class for all neural network modules.
apply_feature_transform(transform)
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).
apply_output_transform(transform)
Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
auxiliary_vars
Any additional variables needed.
Type Tensors
inputs
Return the net inputs (Tensors).
3.8 deepxde.nn.pytorch
Note: Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks
while the latter silently ignores them.
class deepxde.nn.pytorch.nn.NN
Bases: torch.nn.modules.module.Module
Base class for all neural network modules.
apply_feature_transform(transform)
Compute the features by applying a transform to the network inputs, i.e., features = transform(inputs). Then, outputs = network(features).
apply_output_transform(transform)
Apply a transform to the network outputs, i.e., outputs = transform(inputs, outputs).
3.9 deepxde.optimizers
Warning: If L-BFGS stops earlier than expected, set the default float type to ‘float64’:
dde.config.set_default_float("float64")
3.10 deepxde.utils
External utilities.
class deepxde.utils.external.PointSet(points)
Bases: object
A set of points.
Parameters points – A NumPy array of shape (N, dx). A list of dx-dim points.
inside(x)
Returns True if x is in this set of points, otherwise, returns False.
Parameters x – A NumPy array. A single point, or a list of points.
Returns
If x is a single point, returns True or False. If x is a list of points, returns a list of
True or False.
values_to_func(values, default_value=0)
Convert the pairs of points and values to a callable function.
Parameters
• values – A NumPy array of shape (N, dy). values[i] is the dy-dim function value of the
i-th point in this point set.
• default_value (float) – The function value of the points not in this point set.
Returns
A callable function. The input of this function should be a NumPy array of shape (?,
dx).
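A usage sketch with hypothetical points and values:

import numpy as np
from deepxde.utils.external import PointSet

points = np.array([[0.0], [0.5], [1.0]])
values = np.array([[1.0], [2.0], [3.0]])
pset = PointSet(points)
f = pset.values_to_func(values)  # f returns the stored value for known points, 0 elsewhere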
deepxde.utils.external.apply(func, args=None, kwds=None)
Launch a new process to call the function.
This can be used to clear TensorFlow GPU memory after model execution: https://stackoverflow.com/questions/39758094/clearing-tensorflow-gpu-memory-after-model-execution
deepxde.utils.external.standardize(X_train, X_test)
Standardize features by removing the mean and scaling to unit variance.
The mean and std are computed from the training data X_train using sklearn.preprocessing.StandardScaler, and
then applied to the testing data X_test.
Parameters
• X_train – A NumPy array of shape (n_samples, n_features). The data used to compute
the mean and standard deviation used for later scaling along the features axis.
• X_test – A NumPy array.
Returns Instance of sklearn.preprocessing.StandardScaler. X_train: Transformed
training data. X_test: Transformed testing data.
Return type scaler
deepxde.utils.external.uniformly_continuous_delta(X, Y, eps)
Compute the supremum of delta in the definition of uniform continuity.
Parameters X – N x d, equispaced points.