
Parallel Codes for Solving PDEs

Dasika Sunder, Dipak Vaghani, Ratnesh Shukla

The code developed in the current work can solve general time-dependent hyperbolic-parabolic partial differential equations with non-stiff source and non-conservative product terms on Cartesian meshes. The general form of the equations is
∂Q/∂t + ∇ · F(Q, ∇Q) + B(Q) · ∇Q = S(Q)    (1)
In the above equation, Q is the vector of conservative variables, F(Q, ∇Q) is the conservative flux, B(Q) · ∇Q is the non-conservative term and S(Q) is the non-stiff source term. A large number of partial differential equations of practical significance can be cast in this form. These include the advection equation, Burgers' equation, the compressible Navier-Stokes equations, various multiphase flow models including the Baer-Nunziato model for deflagration-to-detonation transition, the shallow water equations, acoustic wave equations, diffuse interface models and the magnetohydrodynamic equations, to name a few.
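As a concrete illustration (added here for clarity; the specific example is not drawn from the original text), the one-dimensional linear advection equation fits the general form (1) with

∂u/∂t + a ∂u/∂x = 0,    Q = u,    F(Q, ∇Q) = aQ,    B(Q) = 0,    S(Q) = 0.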

Salient features of the code


1. The code is written in the C programming language and parallelized using the PETSc
   toolkit, and can be run conveniently on thousands of cores with very good scalability

2. 1D, 2D and 3D versions of the code are available

3. First to fourth order of accuracy for discretization in both space and time can be
chosen using a single parameter

4. The spatial discretization is based on the WENO method and the temporal discretization
   is based on the SSP-RK method. Therefore, problems containing both smooth features
   and strong shocks can be solved robustly

5. To solve any PDE, only definitions of Q, F(Q, ∇Q), B(Q) and S(Q) need to be
given. The discretization details are handled automatically.

6. The code can be accessed from GitHub

Scalability Study
To study the scalability of the code, we solve the two-dimensional Baer-Nunziato equations for compressible multiphase flows. The test case considered is a smooth vortex in a domain [x, y] ∈ [−10, 10] × [−10, 10] with periodic boundary conditions on all the boundaries. In this study, the problem size is fixed (a mesh with 1500 × 1500 cells and a final time of t = 10, which corresponds to one cycle of the vortex through the domain) and the number of processors is varied. The time taken for each simulation is tabulated below.

Number of Processors    Total Time (seconds)
        48                  62660.8
        72                  41704.5
        96                  30546.0
       120                  24817.8
       144                  21560.8
       240                  12457.5
       480                   6355.98
       960                   3216.87
      1920                   1667.26
      3840                    845.508
      7680                    443.615
     15360                    250.339

Table 1: Time taken for the simulation when different numbers of processors are used

[Plot of total time (seconds) versus number of processors]

Figure 1: Scalability test

As can be seen from the above figure, very good scalability is achieved. This test was
performed on the CRAY-XC40 machine at the SERC department of IISc.
