
Palabos-npFEM: Software for the Simulation of Cellular Blood Flow (Digital Blood)

SOFTWARE METAPAPER

CHRISTOS KOTSALOS
JONAS LATT
BASTIEN CHOPARD
*Author affiliations can be found in the back matter of this article

CORRESPONDING AUTHOR: Christos Kotsalos, Computer Science Department, University of Geneva, CH ([email protected])

KEYWORDS: Palabos-npFEM; cellular blood flow simulations; digital blood; Palabos; npFEM; GPUs

TO CITE THIS ARTICLE: Kotsalos C, Latt J, Chopard B 2021 Palabos-npFEM: Software for the Simulation of Cellular Blood Flow (Digital Blood). Journal of Open Research Software, 9: 16. DOI: https://doi.org/10.5334/jors.343

ABSTRACT

Palabos-npFEM is a computational framework for the simulation of blood flow with fully resolved constituents. The software resolves the trajectories and deformed state of blood cells, such as red blood cells and platelets, and the complex interaction between them. The tool combines the lattice Boltzmann solver Palabos for the simulation of blood plasma (fluid phase), a finite element method (FEM) solver for the resolution of blood cells (solid phase), and an immersed boundary method (IBM) for the coupling of the two phases. Palabos-npFEM provides, on top of a CPU-only version, the option to simulate the deformable bodies on GPUs, thus the code is tailored for the fastest supercomputers. The software is integrated in the Palabos core library, and is available on the Git repository https://gitlab.com/unigespc/palabos. It offers the possibility to simulate various setups, e.g. several geometries and blood parameters, and due to its modular design, it allows external solvers to readily replace the provided ones.

(1) OVERVIEW

INTRODUCTION
Palabos-npFEM is a highly versatile computational tool for the simulation of cellular blood flow (at the micrometre scale), focusing on high performance computing (HPC) without compromising accuracy or complexity.

Blood plays a vital role in living organisms, transporting oxygen, nutrients, waste products, and various kinds of cells to tissues and organs. Human blood is a complex suspension of red blood cells (RBCs), platelets (PLTs), and white blood cells, submerged in a Newtonian fluid, the plasma. At physiological hematocrit (RBC volume fraction), i.e. 35–45%, in just a blood drop (about a mm³) there are a few million RBCs, a few hundred thousand PLTs, and a few thousand white blood cells. An adult person has on average five litres of blood, and the cardiovascular system spans a length of 100,000 km, 80% of which consists of the capillaries (the smallest blood vessels). Additionally, our blood vessels are characterised by a variety of scales, i.e. the diameter of arteries/veins ranges from a few millimetres to a few centimetres, the diameter of arterioles/venules ranges from a few micrometres to a few hundred micrometres, and the capillaries are about the size of an RBC diameter (about eight micrometres). It is obvious that a simulation at the micrometre scale of such a system (even a tiny part of it, e.g. an arteriole segment) is a multi-physics/multi-scale problem with an extremely high computational cost. Despite remarkable advances in experimental in-vivo and in-vitro techniques [16], the type and detail of information provided remains limited. For example, while it is possible to track the collective behaviour of blood cells, it is up to now impossible to track individual trajectories and study the underlying transport mechanisms. Given the cellular nature of blood, its rheology can be deciphered by understanding how the various cells are moving and interacting with each other in both healthy and non-healthy humans. Numerical tools have significantly assisted in in-depth investigations of blood rheology, as they offer a controlled environment for testing a large number of parameters and classifying their effect [4, 8]. Furthermore, the amount of detail coming from numerical simulations at the microscopic level (following individual cells) is unparalleled compared to the in-vitro/in-vivo counterparts.

The multi-physics nature of blood can be numerically described by decomposing this complex suspension into the fluid and solid phases.

Regarding the fluid phase, we are interested in solving the Navier-Stokes equations for the description of blood plasma. Thus the spatial velocity v(x, t) and pressure p(x, t) fields in an isothermal and incompressible Newtonian fluid must satisfy the following equations [6]

ρ0 [∂v/∂t + (∇x v) v] = μ Δx v − ∇x p + ρ0 b,    ∇x ⋅ v = 0,   (1)

where ∇x, Δx refer to the gradient and Laplacian with respect to the spatial coordinates x for any fixed time t ≥ 0, ρ0 is the fluid density, μ is the fluid dynamic viscosity, and b is a prescribed spatial body force field per unit mass (e.g. gravity and immersed surfaces like blood cells).

Regarding the solid phase, for the resolution of the deformable blood cells and their trajectories, we are solving the elastodynamics equation [6], a non-linear equation referring to any kind of solid body. By convention, we call B the reference (undeformed) configuration, and B′ the deformed configuration. The points X ∈ B are called material coordinates, and the points x ∈ B′ are called spatial coordinates. The deformation of a body from a configuration B onto another configuration B′ is described by a function ϕ : B → B′, which maps each point X ∈ B to a point x = ϕ(X) ∈ B′. We call ϕ the deformation map. The motion ϕ(X, t) of an elastic body must satisfy the following equation for all X ∈ B and t ≥ 0 [6]

ρ0 ∂²ϕ/∂t² = ∇X ⋅ P + ρ0 bm,   (2)

where ∇X refers to derivatives of material fields with respect to the material coordinates Xi for any fixed t ≥ 0, ρ0(X) denotes the mass density of the elastic body in its reference configuration, P(X, t) is the first Piola-Kirchhoff stress field, and bm(X, t) is the material description of the spatial body force field b(x, t) (e.g. gravity and interaction with a fluid). Equation (2) is essentially the conservation of linear momentum (Newton's 2nd law of motion), while the balance of angular momentum is automatically satisfied from the symmetry of the Cauchy stress field. For non-Cosserat-like elastic bodies, i.e. elastic bodies where microscopic moments are zero, the balance of angular momentum is implicitly satisfied. With minor modifications of equation (2), one can readily simulate a viscoelastic material, i.e. a body that exhibits both viscous and elastic characteristics. Overall, the changes in (2) are small because (2) does not explicitly contain the law describing material behaviour. Globally, the changes required might not be minor, and this depends on the actual material model chosen. The fact that (2) does not explicitly contain the material behaviour provides modularity to the model, in its theoretical form as well as in its numerical implementation. For more details on the continuum equations for both fluids and solids, the reader should consult the work by Gonzalez and Stuart [6].

Palabos-npFEM solves equations (1) & (2) in a modular way, and performs the Fluid-Solid Interaction (FSI) through the Immersed Boundary Method (IBM). By modularity, we mean the complete decoupling of the solvers, i.e. the resolution of the fluid phase is "unaware" of the resolution of the solid phase and vice-versa. Our software framework takes care of communication whenever needed in relationship with the FSI.

Figure 1 Digital Blood simulated in Palabos-npFEM (a simulation snapshot). The red bodies are the RBCs, while the yellow bodies
represent the PLTs. One can observe the different phases composing this complex suspension. We provide the possibility to run the
solid solver either on CPUs or GPUs (as depicted by the GPU/CPU icons), while the rest of the solvers are CPU-only versions.

Figure 1 presents a snapshot from a Palabos-npFEM simulation, and shows the different solvers involved in the numerical representation of blood. The Navier-Stokes equations (1) are solved indirectly through the lattice Boltzmann method (LBM) as implemented in Palabos¹ [10]. Palabos stands for Parallel Lattice Boltzmann Solver. It is an open source software maintained by the Scientific and Parallel Computing Group (SPC) at the University of Geneva. The elastodynamics equation (2) is solved by the nodal projective finite elements method (npFEM) [7] (part of the Palabos core library as well). The npFEM is a mass-lumped linear FE solver that resolves both the trajectories and deformations of blood cells with high accuracy. The solver has the capability of capturing the rich and non-linear viscoelastic behaviour of any type of blood cells, as shown and validated in Kotsalos et al. [7]. The IBM [14, 13], for the coupling of the solid & fluid phases, is implemented in the Palabos library. The IBM imposes a no-slip boundary condition, so that each point of the surface and the ambient fluid moves with the same velocity. The advantage of the IBM is that the fluid solver does not have to be modified except for the addition of a forcing term fimm (incorporated in the last term of equation (1), i.e. in b). Moreover, the deformable bodies and their discrete representations do not need to conform to the discrete fluid mesh, which leads to a very efficient fluid-solid coupling. A thorough presentation of all the theoretical aspects, i.e. fluid/solid phases and FSI, can be found in Kotsalos et al. [7, 9].

The modular system allows different spatial discretisations, i.e. the fluid domain is discretised into a regular grid with spacing Δx in all directions, and the solid bodies are discretised into triangular surface meshes. The data exchange between solvers of different discretisation is handled through interpolation kernels, as dictated by the immersed boundary method. Moreover, the temporal discretisation is solver dependent. Given the explicit nature of LBM and the implicit/semi-implicit nature of npFEM, the latter can in principle handle larger time steps.
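
To make the data exchange concrete, the following is a minimal, generic sketch of the two ingredients an immersed boundary method needs between a regular fluid grid and the surface vertices of a deformable body: interpolation of the fluid velocity at a vertex, and spreading of the vertex force back onto the grid, both weighted by Peskin's 4-point discrete delta function. All names (FluidGrid, interpolateVelocity, spreadForce) are hypothetical and do not correspond to the Palabos API; bounds checks are omitted by assuming the vertex lies well inside the domain.

#include <array>
#include <cmath>
#include <vector>

struct FluidGrid {
    int nx, ny, nz;                          // grid dimensions, spacing dx = 1 lattice unit
    std::vector<std::array<double,3>> u;     // velocity field, size nx*ny*nz
    std::vector<std::array<double,3>> f;     // body-force field (contributes to the term b), same size
    int idx(int i, int j, int k) const { return (i * ny + j) * nz + k; }
};

// Peskin's 4-point kernel phi(r): support of four grid nodes per direction.
double phi(double r) {
    r = std::abs(r);
    if (r < 1.0) return 0.125 * (3.0 - 2.0 * r + std::sqrt(1.0 + 4.0 * r - 4.0 * r * r));
    if (r < 2.0) return 0.125 * (5.0 - 2.0 * r - std::sqrt(-7.0 + 12.0 * r - 4.0 * r * r));
    return 0.0;
}

// Interpolate the fluid velocity at a Lagrangian vertex position x (lattice units).
std::array<double,3> interpolateVelocity(FluidGrid const& g, std::array<double,3> const& x) {
    std::array<double,3> v = {0.0, 0.0, 0.0};
    for (int i = (int)std::floor(x[0]) - 1; i <= (int)std::floor(x[0]) + 2; ++i)
        for (int j = (int)std::floor(x[1]) - 1; j <= (int)std::floor(x[1]) + 2; ++j)
            for (int k = (int)std::floor(x[2]) - 1; k <= (int)std::floor(x[2]) + 2; ++k) {
                double w = phi(x[0] - i) * phi(x[1] - j) * phi(x[2] - k);
                for (int d = 0; d < 3; ++d) v[d] += w * g.u[g.idx(i, j, k)][d];
            }
    return v;
}

// Spread the force carried by a vertex onto the neighbouring fluid nodes.
void spreadForce(FluidGrid& g, std::array<double,3> const& x, std::array<double,3> const& force) {
    for (int i = (int)std::floor(x[0]) - 1; i <= (int)std::floor(x[0]) + 2; ++i)
        for (int j = (int)std::floor(x[1]) - 1; j <= (int)std::floor(x[1]) + 2; ++j)
            for (int k = (int)std::floor(x[2]) - 1; k <= (int)std::floor(x[2]) + 2; ++k) {
                double w = phi(x[0] - i) * phi(x[1] - j) * phi(x[2] - k);
                for (int d = 0; d < 3; ++d) g.f[g.idx(i, j, k)][d] += w * force[d];
            }
}

With this kind of kernel the solid mesh never needs to conform to the fluid grid: only the weights w change as the vertices move.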

Our modular design contrasts with monolithic approaches, which solve both phases through one system of discretised equations, and thus use one solver. Monolithic approaches also include computational frameworks whose design entangles the various solvers for performance purposes. The former approach, i.e. a single solver to deal with every phase, includes mainly tools that use dissipative particle dynamics [15] (in the field of blood flow simulation). The main advantage of a monolithic design is the performance gain and a more straightforward FSI (in terms of coupling difficulty). However, a single solver potentially falls short of satisfactorily addressing all the physics present in a complex phenomenon. Furthermore, one needs to develop a new monolithic solver for every specific choice of model/material/accuracy of each phase, i.e. monolithic solvers are very specific and development time is high. In the modular approach, there is the freedom to choose well optimised solvers to address the different phases, which leads to higher fidelity models. Of course, the coupling of completely independent solver streams can introduce performance penalties and possibly an over-complicated code. However, we have shown [9] that our modular design results in a minimal performance loss.

The development of such a solver requires a multi-physics and multi-scale approach, as it involves fluid and solid components that may be optimised through a description at different temporal and spatial scales. To ensure flexibility and efficiency, code performance and re-usability are central issues of our modular tool. According to the principles proposed in MMSF (Multi-scale Modelling and Simulation Framework) [5, 2, 1], our cellular blood flow computational tool is built on a fluid solver and a deformable solid bodies solver, whose implementation is potentially left to the preference of the scientists. Here, however, we propose a specific choice. The coupling of the two solvers is realised through a separate interface, which handles all the needed communication. Note that this coupling interface acts independently of the details of the fluid and solid solvers. It only requires data representing physical quantities which are computed by the two solvers, thus ensuring the independence with respect to the chosen numerical methods [9].

For a detailed presentation of the numerical methods used in Palabos-npFEM and its HPC-centric design, the reader should consult Kotsalos et al. [7, 9]. Here we focus on the software issues.

IMPLEMENTATION AND ARCHITECTURE
Palabos-npFEM is built on top of two independent solvers, i.e. fluid and solid solvers, and couples them through the FSI module (see Figure 1). The choice of the particular solvers, namely Palabos and npFEM, is crucial for high fidelity and performant simulations. However, other users of the code could extend it by replacing the particular choices by alternatives, e.g. opting for a mass-spring-system solid solver instead of an FEM one. The alternative solvers need to be similarly parallelisable through domain decomposition and allow interaction with solid particles through an immersed boundary method.

Palabos [10] is an open-source library for general-purpose computational fluid dynamics based on the lattice Boltzmann method. Palabos is written in C++ with extensive use of the Message Passing Interface (MPI). MPI is the library that handles parallelisation across multiple cores and nodes of a supercomputer/cluster. Palabos supports CPU-only hardware.

The npFEM solver [7] is an open-source finite element solver written in C++ with support for OpenMP (for multi-core machines) and CUDA (GPU support). CUDA is a general purpose parallel computing platform and programming model for NVIDIA GPUs. Thus npFEM provides two actively supported branches, i.e. CPU and GPU versions (as summarised in Figure 1). The npFEM solver is derived from a heavily modified version of the open-source library ShapeOp² [3]. The different naming originates from the fact that the modifications make the solver an FEM solver instead of a computer graphics tool, as ShapeOp was initially intended to be. In more detail:

• We have radically changed the original kernel for advancing the bodies in time. Our approach follows the redesigned projective dynamics approach as described in Liu et al. [11] (a schematic sketch of the resulting local/global iteration is given below).
• In computer graphics, the solvers approximate Newton's equations (conservation of linear and angular momenta) to reduce the computational cost. Our solver does not approximate Newton's equations; we provide a converged solution, focusing on accuracy and physically correct states.

For legacy reasons, we have decided to keep the ShapeOp code structure and file naming. This approach allows the users to better understand our extensions, perform their own, and most importantly to benefit from both communities, namely the computational science and computer graphics ones.
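
As a rough illustration of what the redesigned kernel does, the sketch below shows the local/global alternation at the heart of projective dynamics: a local step projects the current positions onto each constraint independently, and a global step solves a prefactorised sparse system for the new positions. It is a heavily simplified, hypothetical sketch (names such as Constraint, ProjectiveSolver, momentumTerm are not the npFEM API), and it omits the quasi-Newton acceleration of Liu et al. [11], damping, collisions, and convergence control.

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <memory>
#include <vector>

// One constraint (e.g. one triangle/finite element with its elastic energy).
struct Constraint {
    virtual ~Constraint() = default;
    // Local step: project the element described by the positions q (n x 3)
    // onto the constraint manifold and accumulate its weighted contribution
    // into the right-hand side.
    virtual void project(Eigen::MatrixXd const& q, Eigen::MatrixXd& rhs) const = 0;
};

struct ProjectiveSolver {
    // Factorisation of the constant global matrix (mass/inertia term plus the
    // weighted constraint terms); computed once and reused at every step.
    Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> globalFactor;
    Eigen::MatrixXd momentumTerm;   // inertial contribution built from previous positions, velocities, external forces
    std::vector<std::unique_ptr<Constraint>> constraints;

    Eigen::MatrixXd step(Eigen::MatrixXd q, int iterations) const {
        for (int it = 0; it < iterations; ++it) {
            Eigen::MatrixXd rhs = momentumTerm;
            for (auto const& c : constraints)
                c->project(q, rhs);              // local step: independent per constraint, easy to parallelise
            q = globalFactor.solve(rhs);         // global step: one sparse solve with a constant matrix
        }
        return q;                                // positions at the end of the (quasi-)time step
    }
};

Because the global matrix does not change over time, the expensive factorisation is paid only once, which is one reason this family of solvers maps well onto both CPUs and GPUs.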

[Figure 2 schematic: the Fluid Solver advances through t, t + Δt, …, t + kΔt while the Solid Solver advances from t to t + kΔt. Fluid-to-solid, the external forces and colliding neighbours are communicated; solid-to-fluid, the positions and velocities; both through MPI non-blocking point-to-point communication.]

Figure 2 Modular design of Palabos-npFEM, where the different solvers are independent execution streams (see arrows through time).
The coupling demands data exchange (two-way arrows) which is performed mainly through MPI communication. The inset shows
a simulation snapshot and the static allocation of the different domains in the available infrastructure. Data that reside in different
memory domains must be communicated through MPI.

Figure 2 presents the realisation of this modular design of Palabos-npFEM. Indeed, the solvers possess independent execution streams (their implementation details are black-boxes to the end user). Their coupling consists of exchanging data, i.e. external forces & colliding neighbours from fluid-to-solid, and positions & velocities from solid-to-fluid. Our framework takes care of this exchange and quietly handles the parallelisation through MPI. By parallelisation, we mean the load balancing and the allocation of the solvers to the available hardware. The load balancing follows a straightforward and static (not changing with time) allocation to the available hardware. Regarding the fluid solver, the domain is decomposed into multiple non-overlapping sub-domains, where each one is handled by a single CPU-core. Regarding the solid solver, the blood cells are distributed to the available CPU-cores or GPUs, depending on the npFEM version that the user chooses. Additionally, every blood cell fits entirely in one hardware unit, i.e. either a CPU-core or a GPU-CUDA-block [9], but every hardware unit may receive more than one blood cell. Both solvers exploit all the available hardware at the same time, i.e. there is no infrastructure grouping for the fluid or solid solvers. The data exchange is performed through MPI non-blocking point-to-point communication, and this happens for the data that do not belong in the same MPI-task (see inset in Figure 2 – simulation snapshot). For the data belonging in the same memory space (same MPI-task), the framework skips any MPI-communication and retrieves them immediately from the local memory.
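
The sketch below illustrates the kind of non-blocking point-to-point exchange described above, here for one peer rank and a flat buffer of vertex data; it is a generic MPI illustration with hypothetical buffer names and tags, not the actual communication code of Palabos-npFEM.

#include <mpi.h>
#include <vector>

// Exchange a flat array of doubles (e.g. packed vertex positions) with one
// peer rank, overlapping the transfer with local work.
void exchangeWithPeer(std::vector<double> const& sendBuf, std::vector<double>& recvBuf,
                      int peerRank, MPI_Comm comm) {
    const int tag = 0;                        // hypothetical message tag
    MPI_Request requests[2];

    // Post the receive first, then the send; neither call blocks.
    MPI_Irecv(recvBuf.data(), (int)recvBuf.size(), MPI_DOUBLE, peerRank, tag, comm, &requests[0]);
    MPI_Isend(sendBuf.data(), (int)sendBuf.size(), MPI_DOUBLE, peerRank, tag, comm, &requests[1]);

    // ... local work that does not depend on the remote data can proceed here ...

    MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);   // complete both transfers
}

Data that stay within the same MPI task simply bypass this path and are read directly from local memory, as described above.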
The coupling of the two solvers and the subsequent communication introduces an order in which the various solvers should be executed. The execution steps of Palabos-npFEM can be found in Figure 3. Figure 4 summarises the average computational load per operation in Palabos-npFEM. The CPU-only version needs approximately 3× more CPU-cores to cover the GPU absence. A detailed performance analysis can be found in [9].

QUALITY CONTROL
Palabos-npFEM uses the default quality control tools integrated in Palabos. Palabos is hosted in GitLab, which provides continuous integration tools. From these tools, we have a test checking that the latest Palabos version compiles successfully. Currently, no unit testing framework is implemented. Nevertheless, we provide a number of example applications that any user can go through to check that the various solvers are operational. Palabos-npFEM is extensively documented, and there are instructions for installing the software on any supported system, as well as instructions for testing various example applications. See the Example Applications sections below for more details on testing the Palabos-npFEM library and verifying its operational status. Furthermore, an extensive validation and verification of our framework has been performed in Kotsalos et al. [7, 9].

EXAMPLE APPLICATIONS: INSTRUCTIONS
Palabos-npFEM can be downloaded by cloning Palabos from the GitLab repository,³ and accessing the folder examples → showCases → bloodFlowDefoBodies. It contains the principal application (bloodFlowDefoBodies.cpp) of Palabos-npFEM.

[Figure 3 schematic, one coupled cycle: (1) compute the macroscopic fluid properties (density, momentum, stress tensor) at time t; (2) use the stress tensor to compute the external forces exerted on the solids by the fluid (FSI, computation of the term b, bodies at time t); (3) find colliding neighbours for the solid bodies; (4) run the fluid and solid solvers and advance the system from t to t + Δt, with fluid-to-solid and solid-to-fluid communication around these steps.]
Figure 3 Execution order for various operations in Palabos-npFEM. The execution order is dictated by the fluid-solid interaction (FSI),
i.e. the computation of term b in equations (1) & (2). In case of different temporal discretisation, some steps are automatically
deactivated.
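
Read as code, one iteration of this cycle could look roughly like the following outline; the types and function names are placeholders standing in for the corresponding Palabos, npFEM, and IBM calls, not the actual API, and the case of a larger solid time step (where some steps are deactivated) is ignored.

#include <vector>

// Placeholder types and no-op functions; only the call order matters here.
struct Fluid {};
struct Cell {};
void computeMacroscopicFields(Fluid&) {}
void computeForcesOnSolids(Fluid&, std::vector<Cell>&) {}
void detectCollidingNeighbours(std::vector<Cell>&) {}
void communicateFluidToSolid(std::vector<Cell>&) {}
void advanceFluid(Fluid&) {}
void advanceSolids(std::vector<Cell>&) {}
void communicateSolidToFluid(std::vector<Cell>&) {}

// One coupled iteration, mirroring the cycle of Figure 3.
void advanceOneStep(Fluid& fluid, std::vector<Cell>& cells) {
    computeMacroscopicFields(fluid);      // (1) density, momentum, stress tensor at time t
    computeForcesOnSolids(fluid, cells);  // (2) FSI: external forces on the solids (the term b)
    detectCollidingNeighbours(cells);     // (3) colliding neighbours for the solid bodies
    communicateFluidToSolid(cells);       // forces & colliding neighbours to the solid solver
    advanceFluid(fluid);                  // (4) fluid solver advances t -> t + dt
    advanceSolids(cells);                 // (4) solid solver (npFEM) advances t -> t + dt
    communicateSolidToFluid(cells);       // positions & velocities back to the fluid solver
}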

[Figure 4 bar chart: computational load per operation/action (%), from 0 to 100, for the CPU-GPU and the CPU-only version; legend categories: Other, Communication, IBM, LBM, npFEM.]

Figure 4 Average computational load per operation/action (%) in both CPU-only and CPU-GPU versions. The “Other” category gathers
the collision detection, the computation of forces on solids, and various book-keeping operations. The graph refers to cases at 35%
hematocrit.

The principal application shows how to use Palabos & npFEM in the FSI context, covering meticulously all the possible applications of the library. Instructions are provided to compile the software.

The example applications presented below can be reproduced by using the different xml files provided in the bloodFlowDefoBodies folder. For this, type the following in the command line after compiling the framework:

# CPU-only version
mpirun/mpiexec -n MPI_Tasks bloodFlowDefoBodies Example_Application.xml

# Hybrid CPU-GPU version
mpirun/mpiexec -n MPI_Tasks bloodFlowDefoBodies_gpu Example_Application.xml \
    NumberOfNodes NumberOfGPUsPerNode

The locally provided README.md file helps the user run the applications in a step-by-step manner.

EXAMPLE APPLICATION: CELL PACKING
To perform a cellular blood flow simulation, one needs to prepare the initial conditions, i.e. a flow field packed with blood cells (cell packing). There exist various approaches, which usually require the use of an external tool to perform this initialisation. A standard solution is to place shrunken blood cells randomly in the flow field, and then let them grow back to their rest configuration. Another approach is to use tools that pack spheres or ellipsoids, and then replace the packed bodies with the blood cells (less accurate cell packing).

Our approach falls closer to the standard solution, but instead of shrinking the blood cells, we randomly place them (random positions and orientations) at their rest configuration (a schematic sketch of this placement is given below). Obviously, this leads to large overlappings/interpenetrations, which we anticipate to be resolved by the robust nature of the npFEM solver. In more detail, npFEM is an implicit solver, which makes it capable of resolving very high deformations/interpenetrations with unconditional stability for arbitrary time steps. Taking advantage of this property, we run the framework for a few thousand time steps, which indeed resolves the overlappings. Figure 5 presents the cell packing application as performed in a box at 35% hematocrit. The cell packing application deactivates the branches of the code that deal with the fluid and the FSI (since there is no need for them), and instead uses the framework for efficiently detecting colliding neighbours and for executing the npFEM solver. The cellPacking_params.xml file (found in the bloodFlowDefoBodies folder) provides options to perform the cell packing under various geometries and at different hematocrit. This novel approach introduces no further complexity to the framework, leading to a clean and efficient cell packing application (no need for an external tool or additional code).
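
The random placement itself is conceptually simple. The sketch below shows one way it could be written, with hypothetical types and deliberately without any overlap checks, since the interpenetrations are left for the npFEM solver to resolve; it illustrates the idea only and is not the code of the cell packing application.

#include <array>
#include <cmath>
#include <random>
#include <vector>

using Vec3 = std::array<double, 3>;
using Mesh = std::vector<Vec3>;   // surface vertices of one template cell (RBC or PLT), centred at the origin

// Rotate a point by Euler angles about x, y, z (a simple, non-uniform way to
// sample orientations; good enough for an illustration).
Vec3 rotate(Vec3 const& p, double a, double b, double c) {
    double ca = std::cos(a), sa = std::sin(a);
    double cb = std::cos(b), sb = std::sin(b);
    double cc = std::cos(c), sc = std::sin(c);
    Vec3 q1 = {p[0], ca * p[1] - sa * p[2], sa * p[1] + ca * p[2]};        // about x
    Vec3 q2 = {cb * q1[0] + sb * q1[2], q1[1], -sb * q1[0] + cb * q1[2]};  // about y
    return {cc * q2[0] - sc * q2[1], sc * q2[0] + cc * q2[1], q2[2]};      // about z
}

// Place 'count' copies of the template mesh at random positions and
// orientations inside a box; overlaps are NOT checked here on purpose.
std::vector<Mesh> randomlyPlace(Mesh const& templateMesh, int count, Vec3 const& boxSize, unsigned seed = 0) {
    const double pi = 3.141592653589793;
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> angle(0.0, 2.0 * pi);
    std::vector<Mesh> cells;
    for (int n = 0; n < count; ++n) {
        double a = angle(rng), b = angle(rng), c = angle(rng);
        Vec3 centre;
        for (int d = 0; d < 3; ++d) {
            std::uniform_real_distribution<double> pos(0.0, boxSize[d]);
            centre[d] = pos(rng);
        }
        Mesh cell;
        for (Vec3 const& v : templateMesh) {
            Vec3 r = rotate(v, a, b, c);
            cell.push_back({r[0] + centre[0], r[1] + centre[1], r[2] + centre[2]});
        }
        cells.push_back(std::move(cell));
    }
    return cells;
}

The number of copies follows from the target hematocrit (total cell volume over box volume); after the placement, the framework is run for a few thousand npFEM steps to resolve the interpenetrations, as described above.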

Figure 5 Cell packing as performed by Palabos-npFEM (case at 35% hematocrit). The left inset shows the initial setup with unresolved interpenetrations, while the right one shows the resulting setup (resolved overlappings) after running the framework for a few thousand iterations.

Figure 6 Simulation snapshots for a Couette (left, 20% hematocrit) and a Poiseuille (right, 35% hematocrit) flow. Through the
provided xml files, one can choose the geometry, the hematocrit, and various other parameters.

EXAMPLE APPLICATION: SIMULATION OF MULTIPLE BLOOD CELLS
Having generated an initialised blood cell flow field, one can proceed to simulations of various flow regimes, e.g. tubular and shear flows. Figure 6 presents two simulation snapshots as generated by running the bloodFlowDefoBodies application using the shear_params.xml and poiseuille_params.xml files (found in the bloodFlowDefoBodies folder). It should be highlighted that the cell packing application is a prerequisite for running an actual simulation.

EXAMPLE APPLICATION: RBC COLLISION AT AN OBSTACLE
This application (using the obstacle_params.xml file, found in the bloodFlowDefoBodies folder) simulates a single RBC interacting with an obstacle. The position and orientation of the RBC can be tuned through an external file (see initialPlacing.pos). Various parameters can be tuned from the xml file, e.g. the RBC viscoelasticity (Calpha parameter) and the collision forces intensity (collisions_weight parameter). This lightweight application provides a simple quality control for the robustness of the RBC material. Figure 7 shows a snapshot from this application.

Figure 7 Simulation snapshot of a colliding RBC at an obstacle.

RHINO-GRASSHOPPER ENVIRONMENT FOR SETTING UP NEW MATERIALS
The framework/environment presented in this section is intended for Windows only.

Setting up new materials in a platform-agnostic way is presented in the next section.

The npFEM library (derived from ShapeOp) can be used as a stand-alone library independently of Palabos. The library can be found by cloning Palabos from the GitLab repository.³ The source code is located in the folder coupledSimulators → npFEM. Inside this folder, one can navigate further to the npFEM_StandAlone_RhinoGH folder, where we provide the option to compile npFEM as a stand-alone dynamic library. The produced dll can later be used in the Rhino⁴-Grasshopper⁵ environment. Rhino is a computer aided design software, and Grasshopper is a plug-in for Rhino intended for parametric design. Grasshopper uses the Python language for scripting various operations, and by using the Python standard foreign function library ctypes, one can call npFEM from within Rhino-Grasshopper. The locally provided README.md file helps the user set up the framework in a step-by-step manner.

The ShapeOp library provides a complete description in its documentation⁶ on how to set up the Rhino-Grasshopper framework; instead of ShapeOp, we will be using the compiled npFEM dynamic library. For legacy reasons with ShapeOp, we are using Rhino version 5, but the same principles apply to newer versions.

After setting up the environment, one can open the Rhino & Grasshopper files provided in npFEM_StandAlone_RhinoGH → npFEM_RhinoGH. Figure 8 presents the environment with a setup that shows a stretched RBC in the left pane (Rhino window), and in the right pane there is the Grasshopper window. The user can load various meshes, set up the material properties, add forces, and eventually run the npFEM solver. Figure 9 presents a closer look at some critical Grasshopper components. The generation of RBC/PLT meshes can be done either in Rhino-Grasshopper (or any other CAD software), or through scripts using formulas to generate the investigated surfaces [4].

This environment offers an easy way to test the npFEM library and familiarise the user with it. The users can experiment with different materials and solver parameters, and observe their impact on the deformed shape of an RBC. Additionally, one can modify the npFEM solver, and graphically observe whether the modifications work as expected or not.

[Figure 8 Grasshopper canvas components: Load Mesh, Material Setup, Force Setup, npFEM solver parameters & RUN, and data output for Palabos-npFEM.]

Figure 8 Rhino-Grasshopper environment calling the npFEM stand-alone dynamic library.



Figure 9 Components in the Grasshopper environment, from which one can modify the body’s material, and run the npFEM solver.

SETTING UP NEW MATERIALS: MULTI-PLATFORM
Setting up new materials in a platform-agnostic way is more tedious than the previous solution; the Rhino-Grasshopper environment is the preferred way. The idea is to modify the file that encodes the material properties, which is located in examples → showCases → bloodFlowDefoBodies → Case_Studies. In the current version of the library we provide template files for RBCs and PLTs discretised with 258 and 66 surface vertices, respectively. Extending to other bodies is a straightforward process (possible automation with minor scripting could be an option). The material is encoded in the constraints.csv file (found in Case_Studies → RBC/PLT). This file encodes the various potential energies per finite element/triangle (one per row). Summing the contributions of the elemental potential energies, one gets the potential energy of the whole body, which describes the body's response to deformations. Every elemental potential energy (a row in constraints.csv) has a weight and various parameters. Therefore, by tuning these parameters one can make the material stiffer or softer.
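
Schematically, and in the spirit of the projective dynamics formulation that npFEM builds on [7, 3], the response of a body is governed by a weighted sum of elemental energies,

E_body(x) = Σ_i w_i E_i(x),

where x collects the surface vertex positions, E_i is the potential energy of finite element/triangle i (one row of constraints.csv), and w_i its weight; larger weights and stiffness parameters make the material stiffer. This is a schematic summary, not the literal file format.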
The tuning of all the other parameters is done through the provided xml files. Parameters of interest could be the ones responsible for the viscoelastic behaviour of the bodies (Calpha, Cbeta in the files), and the fluid characteristics (viscosity, density). A detailed explanation of the mechanics can be found in [7].

(2) AVAILABILITY

OPERATING SYSTEM
Linux & Windows

PROGRAMMING LANGUAGE
C++11 and above

ADDITIONAL SYSTEM REQUIREMENTS
Memory and disk space dependent on usage case.

DEPENDENCIES
A complete list of prerequisites can be found in the Palabos GitLab repository.

Linux:
• A C++ compiler (C++11 and above)
• make
• CMake (≥3.0)
• MPI (Message Passing Interface)

Windows:
• Microsoft Visual Studio Community (≥2015)
• Microsoft C++ compiler, installed with Visual Studio Community
• CMake (≥3.0)
• Microsoft MPI

The GPU branch of the npFEM solver supports NVIDIA GPUs with compute capability ≥6.0 (extensively tested and validated). However, the code could support GPUs with compute capability <6.0 (tested but not fully validated), by replacing the atomic operations (e.g. atomicAdd) with ones based on atomicCAS (Compare and Swap). For more information, one should consult the CUDA toolkit documentation. In this case, the user should also modify the CMake file to target the correct GPU architecture (currently set to -arch=sm_60).

The periodic boundary conditions introduce a dependency on parallelisation. In more detail, the directions with periodicity should be subdivided into at least two sub-domains (each sub-domain belongs to a different MPI task). This is due to how the area per surface vertex is computed, i.e. if there is no domain subdivision, then the crossing bodies (from the outlet to the inlet) are considered as stretched (erroneous deformation, leading to a code crash). On the contrary, when the outlet and inlet belong to different sub-domains (MPI-wise), then the crossing bodies are duplicated in memory and thus they are not considered as stretched/deformed.

This means that Palabos-npFEM depends strictly on MPI, which is not a hard constraint given the computational intensity of blood flow simulations. Of course, this dependency could be eliminated by modifying/extending the library (area computation part).

LIST OF CONTRIBUTORS
In addition to the paper authors, we wish in particular to acknowledge the contribution from the following person:

• Joel Beny (University of Geneva), for the development of a major part of the GPU implementation of the npFEM solver.

SOFTWARE LOCATION
Archive
Name: Palabos-npFEM
Persistent identifier: https://doi.org/10.5281/zenodo.3965928
Licence: Palabos → AGPL v3 & npFEM → MPL v2
Publisher: Christos Kotsalos
Version published: 2.2.0
Date published: 03/07/2020

Code repository
Name: Palabos
Persistent identifier: https://gitlab.com/unigespc/palabos.git
Licence: Palabos → AGPL v3 & npFEM → MPL v2
Date published: 03/07/2020 (v2.2.0)

LANGUAGE
English

(3) REUSE POTENTIAL

The Palabos-npFEM library gives special attention to modularity and low complexity. In more detail, the software is designed based on a plug-and-play approach, where it is up to the user's preference to choose the individual solvers for the resolution of the various phases of blood. The Computational Biomedicine community is a vibrant and dynamic community, with numerous research contributions in various directions, thus we expect other researchers to possibly plug their own solvers into our platform and experiment with it. The starting point for extending and reusing Palabos-npFEM is the principal application located in examples → showCases → bloodFlowDefoBodies (utilised for the example applications). The end user can either deploy the library as is by executing this provided application, or build further functionalities/alterations on top of it.

Currently, the principal application that we provide treats simple geometries, i.e. box and tubular flows. However, its extension to more complicated geometries is well supported both by Palabos and npFEM. We consider that promising application areas for our software are provided by microfluidic devices and lab-on-a-chip systems, for which a large interest can be observed in the community.

Our library uses the CMake⁷ tool for building and compiling. CMake is an open-source and cross-platform tool, which allows the libraries using it to be compiled on any supported platform. Thus, the users can deploy Palabos-npFEM in cross-platform environments (from personal computers/workstations to supercomputers) and speed up their development & research.

Cellular blood flow simulations are extremely computationally expensive. For example [9], to simulate a box of dimensions 50³ μm³ under a shear flow at 35% hematocrit, for a physical time of 1 s, we need about 5 days on a high-end supercomputer (using 5 compute nodes, i.e. 12 cores and 1 GPU per node). However, an allocation of 5 consecutive days is rarely available in supercomputing centres. For this reason, we have developed an efficient check-pointing system, which allows the user to pause the simulation at any time and restart seamlessly from where it previously stopped. This feature offers an attractive advantage for other researchers to actively use our library.

The library is specialised in cellular blood flow simulations, but its methodology could easily be applied to the simulation of other complex suspensions, and to fluid-structure/solid interaction applications in general. A recent example is the simulation of paragliders [12], where the researchers used Palabos and a structural solver similar to npFEM. Thus we strongly believe that our library could be used as a building component for other research topics.

The Palabos library has a large and active community. Integrating npFEM into Palabos serves the purpose of sharing and exposing all the details with this global and dynamic group of researchers and engineers. The users can find support in the Palabos forum,⁸ and thus our library benefits from the same high-quality support mechanism that is already in place for Palabos.

NOTES
1 https://palabos.unige.ch
2 https://www.shapeop.org/
3 https://gitlab.com/unigespc/palabos
4 https://www.rhino3d.com/
5 https://www.grasshopper3d.com/
6 https://www.shapeop.org/documentation.php
7 https://cmake.org/
8 https://palabos-forum.unige.ch/

ACKNOWLEDGEMENTS

We acknowledge support from the Swiss National Supercomputing Centre (CSCS, Piz-Daint supercomputer), the National Supercomputing Centre in the Netherlands (Surfsara, Cartesius supercomputer), and the HPC Facilities of the University of Geneva (Baobab cluster).

FUNDING STATEMENT

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 823712 (CompBioMed2 project), and by the Swiss PASC project "Virtual Physiological Blood: an HPC framework for blood flow simulations in vasculature and in medical devices".

COMPETING INTERESTS

The authors have no competing interests to declare.

AUTHOR AFFILIATIONS
Christos Kotsalos  orcid.org/0000-0003-4323-0087
Computer Science Department, University of Geneva, CH
Jonas Latt  orcid.org/0000-0001-6627-5689
Computer Science Department, University of Geneva, CH
Bastien Chopard
Computer Science Department, University of Geneva, CH

AUTHOR CONTRIBUTIONS

C.K. performed the research, developed the majority of the computational framework, carried out the simulations and wrote the paper.
J.L. wrote part of the computational framework, supervised the research and revised the manuscript.
B.C. conceived and supervised the research and revised the manuscript.

REFERENCES

1. Borgdorff J, et al. Performance of distributed multiscale simulations. In: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2014; 372(2021): 20130407. DOI: https://doi.org/10.1098/rsta.2013.0407
2. Borgdorff J, et al. Foundations of distributed multiscale computing: Formalization, specification, and analysis. In: Journal of Parallel and Distributed Computing. 2013; 73(4): 465–483. DOI: https://doi.org/10.1016/j.jpdc.2012.12.011
3. Bouaziz S, et al. Projective Dynamics: Fusing Constraint Projections for Fast Simulation. In: ACM Trans. Graph. 2014 July; 33(4): 154:1–154:11. DOI: https://doi.org/10.1145/2601097.2601116
4. Boudjeltia KZ, et al. Spherization of red blood cells and platelet margination in COPD patients. In: Annals of the New York Academy of Sciences; 2020. DOI: https://doi.org/10.1111/nyas.14489
5. Chopard B, Borgdorff J, Hoekstra AG. A framework for multi-scale modelling. In: Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2014; 372(2021): 20130378. DOI: https://doi.org/10.1098/rsta.2013.0378
6. Gonzalez O, Stuart AM. A First Course in Continuum Mechanics. Cambridge Texts in Applied Mathematics. Cambridge University Press; 2008. DOI: https://doi.org/10.1017/CBO9780511619571
7. Kotsalos C, Latt J, Chopard B. Bridging the computational gap between mesoscopic and continuum modeling of red blood cells for fully resolved blood flow. In: Journal of Computational Physics. 2019; 398: 108905. ISSN: 10902716. DOI: https://doi.org/10.1016/j.jcp.2019.108905
8. Kotsalos C, et al. Anomalous Platelet Transport & Fat-Tailed Distributions; 2020. eprint: arXiv:2006.11755.
9. Kotsalos C, et al. Digital blood in massively parallel CPU/GPU systems for the study of platelet transport. In: Interface Focus. 2021; 11(1): 20190116. DOI: https://doi.org/10.1098/rsfs.2019.0116
10. Latt J, et al. Palabos: Parallel Lattice Boltzmann Solver. In: Computers and Mathematics with Applications; 2020. ISSN: 08981221. DOI: https://doi.org/10.1016/j.camwa.2020.03.022
11. Liu T, Bouaziz S, Kavan L. Quasi-Newton Methods for Real-Time Simulation of Hyperelastic Materials. In: ACM Trans. Graph. 2017 May; 36(4). DOI: https://doi.org/10.1145/3072959.2990496
12. Lolies T, et al. Numerical Methods for Efficient Fluid-Structure Interaction Simulations of Paragliders. In: Aerotecnica Missili & Spazio. 2019; 98(3): 221–229. ISSN: 0365-7442. DOI: https://doi.org/10.1007/s42496-019-00017-2
13. Ota K, Suzuki K, Inamuro T. Lift generation by a two-dimensional symmetric flapping wing: Immersed boundary-lattice Boltzmann simulations. In: Fluid Dynamics Research. 2012; 44(4). DOI: https://doi.org/10.1088/0169-5983/44/4/045504
14. Peskin CS. Flow patterns around heart valves: A numerical method. In: Journal of Computational Physics. 1972 Oct.; 10(2): 252–271. DOI: https://doi.org/10.1016/0021-9991(72)90065-4
15. Rossinelli D, et al. The in-silico lab-on-a-chip: petascale and high-throughput simulations of microfluidics at cell resolution. In: SC '15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. 2015; 1–12. DOI: https://doi.org/10.1145/2807591.2807677
16. Tomaiuolo G. Biomechanical properties of red blood cells in health and disease towards microfluidics. In: Biomicrofluidics. 2014 Sept.; 8(5): 51501. DOI: https://doi.org/10.1063/1.4895755

TO CITE THIS ARTICLE:
Kotsalos C, Latt J, Chopard B 2021 Palabos-npFEM: Software for the Simulation of Cellular Blood Flow (Digital Blood). Journal of Open Research Software, 9: 16. DOI: https://doi.org/10.5334/jors.343

Submitted: 17 August 2020    Accepted: 02 June 2021    Published: 24 June 2021

COPYRIGHT:
© 2021 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.

Journal of Open Research Software is a peer-reviewed open access journal published by Ubiquity Press.
