TITAN: Built for Science: Preparing for Exascale: Six Critical Codes

Titan is a new supercomputer being built at the Oak Ridge Leadership Computing Facility. It will have a novel hybrid CPU-GPU architecture requiring applications to be adapted to utilize the GPUs. Six critical applications were chosen to pioneer this effort: S3D for combustion simulations, WL-LSMS for materials modeling, NRDF for radiation transport, Denovo for nuclear reactor simulations, CAM-SE for climate modeling, and LAMMPS for molecular dynamics. Teams were assembled to work on adapting each application to the hybrid architecture. Their goal is to have the applications ready to effectively use Titan when it comes online and help the US achieve exascale computing.

TITAN: Built for Science

The first thousand nodes of Titan are scheduled for installation in late 2011, but the Oak Ridge Leadership Computing Facility (OLCF) began preparing for the arrival of its next leadership resource long before the hardware was purchased. Titan’s novel architecture alone is no high-performance computing (HPC) game-changer without applications capable of utilizing its innovative computing environment.

In 2009 the OLCF began compiling a list of candidate applications that were to be the vanguards of Titan—the first codes that would be adapted to take full advantage of its mixed architecture. This list was gleaned from research done for the 2009 OLCF report Preparing for Exascale: OLCF Application Requirements and Strategy as well as from responses from current and former INCITE awardees. Initially 50 applications were considered, but this list was eventually pared down to a set of six critical codes from various domain sciences:

S3D, used by a team led by Jacqueline Chen of Sandia National Laboratories, is a direct numerical simulation code that models combustion. In 2009, a team led by Chen used Jaguar to create the world’s first fully resolved simulation of small lifted autoigniting hydrocarbon jet flames, allowing for representation of some of the fine-grained physics relevant to stabilization in a direct-injection diesel engine.

WL-LSMS calculates the interactions between electrons and atoms in magnetic materials—such as those found in computer hard disks and the permanent magnets found in electric motors. It uses two methods.

Preparing for Exascale: Six Critical Codes

WL-LSMS: Role of material disorder, statistics, and fluctuations in nanoscale materials and systems.

LAMMPS: A multiple-capability molecular dynamics code.

S3D: Combustion simulations to enable the next generation of diesel/bio fuels to burn more efficiently.

CAM-SE: Answers questions about specific climate change adaptation and mitigation scenarios.

NRDF: Radiation transport – important in astrophysics, laser fusion, combustion, atmospheric dynamics, and medical imaging – computed on AMR grids.

Denovo: High-fidelity radiation transport calculations that can be used in a variety of nuclear energy and technology applications.



The first is locally self-consistent multiple scattering, which describes the journeys of scattered electrons at the lowest possible temperature by applying density functional theory to solve the Dirac equation, a relativistic wave equation for electron behavior. The second is the Monte Carlo Wang-Landau method, which guides calculations to explore system behavior at all temperatures, not just absolute zero. The two methods were combined in 2008 by a team of researchers from Oak Ridge National Laboratory (ORNL) to calculate magnetic materials at a finite temperature without adjustable parameters. The combined code was one of the first codes to break the petascale barrier on Jaguar.
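The flat-histogram idea behind Wang-Landau sampling is compact enough to sketch. The toy below estimates the density of states of a small two-dimensional Ising model in pure Python; it is a minimal illustration of the random walk WL-LSMS performs (there, each energy evaluation is a first-principles LSMS calculation rather than a spin-lattice sum), and every parameter choice here is illustrative rather than anything the ORNL team used.

```python
import math
import random

def wang_landau_ising(L=4, flatness=0.8, ln_f_final=1e-3, seed=1):
    """Toy Wang-Landau walk: estimate ln g(E) for an L x L Ising model."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def total_energy():
        # Nearest-neighbor coupling with periodic boundaries.
        return -sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                    for i in range(L) for j in range(L))

    E = total_energy()
    ln_g = {E: 0.0}            # running estimate of ln g(E)
    hist = {E: 0}              # visit histogram for the flatness test
    ln_f = 1.0                 # modification factor, halved each stage

    while ln_f > ln_f_final:
        for _ in range(2000 * L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            E_new = E + 2 * spins[i][j] * nb
            # Accept with probability min(1, g(E)/g(E_new)): the walk is
            # steered toward energies it has visited least often.
            if rng.random() < math.exp(min(0.0, ln_g.get(E, 0.0) -
                                                ln_g.get(E_new, 0.0))):
                spins[i][j] = -spins[i][j]
                E = E_new
            ln_g[E] = ln_g.get(E, 0.0) + ln_f
            hist[E] = hist.get(E, 0) + 1
        # Once every visited energy is seen roughly equally often,
        # reset the histogram and refine the modification factor.
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            hist = dict.fromkeys(hist, 0)
            ln_f /= 2.0
    return ln_g
```

From the estimated ln g(E), averages at any temperature follow by reweighting, which is what lets the combined method probe system behavior away from absolute zero.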
The Non-Equilibrium Radiation Diffusion (NRDF) application models the journey of noncharged particles. NRDF has applications in astrophysics, nuclear fusion, and atmospheric radiation, while the algorithms being developed should prove valuable in many other areas, such as fluid dynamics, radiation transport, groundwater transport, nuclear reactors, and energy storage.

Developed by ORNL’s Tom Evans, Denovo allows fully consistent multi-step approaches to high-fidelity nuclear reactor simulations.

CAM-SE represents two models that work in conjunction to simulate global atmospheric conditions. CAM (Community Atmosphere Model) is a global atmosphere model for weather and climate research. HOMME, the High Order Method Modeling Environment, is an atmospheric dynamical core, which solves fluid and thermodynamic equations on resolved scales. In order for HOMME to be a useful tool for atmospheric scientists, it is necessary to couple this core to physics packages—such as CAM—regularly employed by the climate modeling community.

LAMMPS, the Large-scale Atomic/Molecular Massively Parallel Simulator, was developed by a group of researchers at SNL. It is a classical molecular dynamics code that can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scales.
Once the applications were chosen, teams of experts were assembled to work on each. These teams included one liaison from the OLCF’s Scientific Computing Group who is familiar with the code; a number of people from the applications’ development teams; one or more representatives from hardware developer Cray; and one or more individuals from NVIDIA, which will be supplying the graphics processing units (GPUs) to be used in Titan.

“The main goal is to get these six codes ready so that when Titan hits the floor, researchers can effectively use the system,” said Bronson Messer, head of the porting and optimization team.
Guiding principles

Before work began on these six codes, the development teams acknowledged some fundamental principles intended to guide their work. First, because these applications are current INCITE codes, they are under constant development and will continue to be after Titan is in production—any changes made must be flexible. Second, and perhaps most important, these applications are used by research teams the world over on various platforms.
“We had to make sure we made changes to the codes that won’t just die on the vine,” said Messer. “We had to ensure that our changes at the very least do no harm while they are running on other, non-GPU platforms.” The teams discovered that some of their modifications not only made the codes functional on hybrid systems, but actually helped performance on non-GPU architectures. A prime example is Denovo, which has experienced a twofold increase in performance on traditional central processing unit (CPU)-structured systems since being adapted.

“We’ve made changes that we’re sure are going to remain within the ‘production trunk’ of all these codes—there won’t be one version that runs on Titan and another version for traditional architectures,” said Messer. It’s essential for developers to be able to check a code out of a repository that can be compiled on any architecture; otherwise, the work done by these teams won’t survive time.
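What a single cross-architecture code path can look like is easy to sketch. The pattern below is purely illustrative (the six applications are compiled HPC codes with their own build systems; CuPy and the `kinetic_energy` routine are stand-ins invented for this example): one source file that exploits a GPU when one is present and, doing no harm, falls back to the CPU when one is not.

```python
# Illustrative only: one source tree, two architectures. CuPy mirrors the
# NumPy API, so the same routine runs on a GPU if one is available and on
# the CPU otherwise -- the "do no harm" property Messer describes.
try:
    import cupy as xp          # GPU arrays (requires a CUDA device)
    ON_GPU = True
except ImportError:
    import numpy as xp         # CPU fallback with the identical interface
    ON_GPU = False

def kinetic_energy(masses, velocities):
    """Same source for both platforms; xp dispatches to whichever
    device the arrays live on."""
    return 0.5 * xp.sum(masses * xp.sum(velocities * velocities, axis=1))
```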
The development teams have learned plenty from their work thus far. First, adapting an application for a GPU architecture changes its data structures (the way information is stored and organized in a computer so that it can be used efficiently), and this has proved the most difficult part of the teams’ work. GPUs have to be fed information correctly: like voracious animals, they can’t be sated with little “bites” of information but require huge amounts of data at once.

Second, the teams have learned how to program the GPUs themselves, which requires developers to understand the GPU hardware in order to code effectively.
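The data-structure point can be made concrete with a toy particle code (hypothetical data, with NumPy standing in for a device library). Reorganizing an “array of structures” into a “structure of arrays” is a typical example of the change: each field becomes one contiguous block of memory, so the whole problem can be shipped to the accelerator in a few large transfers and processed in bulk.

```python
import numpy as np

n = 100_000  # hypothetical particle count

# Array of structures: each particle's fields live together. Convenient
# on a CPU, but it forces many small, scattered reads.
particles_aos = [{"x": 0.1 * i, "y": 0.2 * i, "z": 0.3 * i} for i in range(n)]

# Structure of arrays: each field is one contiguous block -- the big
# "meal" a GPU wants, moved and processed in bulk.
particles_soa = {
    "x": np.fromiter((p["x"] for p in particles_aos), dtype=np.float64, count=n),
    "y": np.fromiter((p["y"] for p in particles_aos), dtype=np.float64, count=n),
    "z": np.fromiter((p["z"] for p in particles_aos), dtype=np.float64, count=n),
}

# One vectorized operation over all 100,000 particles at once.
radii = np.sqrt(particles_soa["x"]**2 +
                particles_soa["y"]**2 +
                particles_soa["z"]**2)
```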
These two discoveries are the key to enabling strong scaling of applications at the exascale (solving a problem of fixed size faster as more processors are added). “What we’ve discovered at the petascale is that many research teams have run out of weak scaling,” said Messer, referring to the practice of increasing the size of the problem along with the number of processors. Users have reached the point where they can make quantitative predictions with their codes and don’t have to increase the problem size, or they’ve simply reached the limit of where the code base can take them.

Ultimately, said Messer, all of the applications running on Titan will need to be able to exploit the GPUs to achieve a level of simulation virtually unthinkable just a few years ago. Simply put, standing up and operating Titan will be America’s first step toward the exascale and will cement its reputation as the world leader in supercomputing. With Titan, America can continue to solve the world’s greatest scientific challenges one simulation at a time.
olcf.ornl.gov/titan
