Synopsis on High Performance Computing vs. High Throughput Computing

High performance computing (HPC) uses the fastest systems available to solve complex problems, while high throughput computing (HTC) uses many distributed resources over long periods to complete computational tasks. HTC focuses on completing many jobs over time rather than speed, using technologies like HTCondor that can distribute work across idle machines. As more sectors need both HPC and HTC capabilities, systems are converging the two approaches, but traditional HPC systems are not well-suited for HTC workloads that have short runtimes and can be parallelized across resources. Existing scheduling solutions address either HPC or HTC needs separately but do not support the growing convergence of these job types on shared systems.


SYNOPSIS

ON
HIGH PERFORMANCE COMPUTING
VS
HIGH THROUGHPUT COMPUTING

Submitted To: Mr. Deepak Kumar

Submitted By:
Ankita Chawla
B39
K1427
INTRODUCTION TO HPC

High Performance Computing (HPC) is computing using the
fastest computer systems of any type available at any given
time; the structures and designs of these computers change as
technology improves.
HPC is largely a performance-driven term on the application
side of the house, otherwise called computational science as
opposed to computer science. These are usually tough non-
linear problem spaces.
INTRODUCTION TO HTC
High-throughput computing (HTC) is a computer science
term to describe the use of many computing resources over
long periods of time to accomplish a computational task.
CHALLENGES
The HTC community is also concerned with the robustness and
reliability of jobs over a long time scale: that is, with being
able to create a reliable system from unreliable components.
This research is similar to transaction processing, but at a
much larger and more distributed scale.
Some HTC systems, such as HTCondor and PBS, can run
tasks on opportunistic resources. It is a difficult problem,
however, to operate in this environment. On one hand the
system needs to provide a reliable operating environment for
the user's jobs, but at the same time the system must not
compromise the integrity of the execute node, and it must
allow the owner full control of their resources at all times.
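The idea of building a reliable whole from unreliable parts can be sketched as a simple resubmission loop. The function names below (`run_job`, `run_reliably`) are illustrative placeholders, not part of any real scheduler; here the unreliable worker deterministically loses each job once before succeeding, standing in for preemption or node loss:

```python
attempts_seen = {}

def run_job(job_id):
    """Stand-in for dispatching a job to an unreliable worker:
    every job fails on its first attempt (the node was
    reclaimed by its owner), then succeeds on resubmission."""
    attempts_seen[job_id] = attempts_seen.get(job_id, 0) + 1
    if attempts_seen[job_id] == 1:
        raise RuntimeError(f"worker lost job {job_id}")
    return f"result-{job_id}"

def run_reliably(job_id, max_attempts=5):
    """Reliability from unreliable components: requeue the job
    until it succeeds or the attempt budget is exhausted."""
    for _ in range(max_attempts):
        try:
            return run_job(job_id)
        except RuntimeError:
            continue  # the scheduler simply resubmits the job
    raise RuntimeError(f"job {job_id} failed after {max_attempts} attempts")

results = [run_reliably(i) for i in range(10)]
```

Every job completes despite each one failing once, which is exactly the guarantee an HTC scheduler aims to give the user.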
HIGH-THROUGHPUT VS. HIGH-PERFORMANCE VS. MANY-TASK COMPUTING
There are many differences between high-throughput
computing, high-performance computing (HPC), and many-task
computing (MTC).
HPC tasks are characterised as needing large amounts of
computing power for short periods of time, whereas HTC tasks
also require large amounts of computing, but for much longer
times (months and years, rather than hours and days).[1] HPC
environments are often measured in terms of FLOPS.
The HTC community, however, is not concerned about
operations per second, but rather operations per month or per
year. Therefore, the HTC field is more interested in how many
jobs can be completed over a long period of time instead of
how fast an individual job can complete.
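The contrast in metrics can be made concrete with a small back-of-the-envelope calculation. The machine count and job runtime below are illustrative assumptions, not figures from the text:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # a 30-day month

def jobs_per_month(num_machines, job_runtime_seconds):
    """The HTC metric: how many independent jobs a pool of
    machines finishes in a month, irrespective of how fast
    any single job runs."""
    return num_machines * SECONDS_PER_MONTH // job_runtime_seconds

# e.g. 200 scavenged desktops, each running one-hour jobs:
print(jobs_per_month(200, 3600))  # 144000 jobs per month
```

Doubling the speed of one machine barely moves this number; adding more idle machines moves it directly, which is why HTC systems chase capacity rather than peak FLOPS.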
HTCondor is an open-source high-throughput computing
software framework for coarse-grained distributed
parallelisation of computationally intensive tasks.[1] It can be
used to manage workload on a dedicated cluster of computers,
and/or to farm out work to idle desktop computers, so-called
cycle scavenging. HTCondor runs on Linux, Unix, Mac OS X,
FreeBSD, and contemporary Windows operating systems.
HTCondor can seamlessly integrate both dedicated resources
(rack-mounted clusters) and non-dedicated desktop machines
(cycle scavenging) into one computing environment.
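As an illustration, a minimal HTCondor submit description file can farm one program out as many independent jobs. The executable name `simulate` is a placeholder for whatever program the user wants to run:

```
universe   = vanilla
executable = simulate
arguments  = $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = simulate.log
queue 100
```

Submitting this file with `condor_submit` queues 100 copies of the job, each distinguished by its `$(Process)` number, and HTCondor matches them to whatever dedicated or scavenged machines become available.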
HPC CLUSTERS
As use of HPC clusters becomes more diversified, the industry
is witnessing a convergence of high-throughput computing
(HTC) with high-performance computing (HPC). Sectors once
focused on HPC, such as electronic design automation (EDA),
finance and insurance, chemistry, life sciences, oil and gas,
manufacturing, and defense and intelligence, now need to
optimize systems for both types of computing jobs.
While HPC workloads are compute- and data-intensive and
can sometimes take several months to complete, HTC jobs by
nature have extremely short runtimes, usually in the
millisecond range. Nearly all HTC jobs can be classified as
embarrassingly parallel, which means the workload can be
divided into multiple autonomous pieces, each of which can
be executed independently. HTC jobs include some Monte
Carlo simulations, molecular dynamics simulations, chip
design, fraud detection, risk management, and many others.
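An embarrassingly parallel job can be sketched with a toy Monte Carlo estimate of pi: the workload splits into autonomous chunks that share nothing and are combined only at the end. This is a minimal sketch using Python's standard process pool, not any particular HTC system's API:

```python
from concurrent.futures import ProcessPoolExecutor
import random

def count_hits(n_samples, seed):
    """One autonomous piece of work: count random points in
    the unit square that land inside the quarter-circle."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

def estimate_pi(n_tasks=8, samples_per_task=100_000):
    """Divide the workload into independent chunks, execute
    them in parallel, and merge the partial counts."""
    with ProcessPoolExecutor() as pool:
        hits = pool.map(count_hits,
                        [samples_per_task] * n_tasks,
                        range(n_tasks))
        return 4 * sum(hits) / (n_tasks * samples_per_task)

if __name__ == "__main__":
    print(estimate_pi())
```

Because the chunks never communicate, the same decomposition works whether the pieces run on local cores or are farmed out to hundreds of scavenged desktops.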
Traditional HPC systems have focused on tackling scalability
in the form of large batch jobs, or large computing
environments, not necessarily in the form of speed and
throughput. Consequently, requests for HTC support have
plagued HPC administrators for quite some time, as their
systems are not designed for HTC's special needs. These pain
points are accentuated by the ever-increasing call for elevating
application performance to get more results with existing
resources.

Existing software-based solutions are centered on managing
larger workloads and increasing the efficiency of HPC systems
through scheduling and optimization. Depending on which
policies govern prioritisation, resource allocation and
accounting, many schedulers address HPC problems relatively
well. There are plenty of high-throughput schedulers and
solutions available in the marketplace. However, they serve
only dedicated high-throughput systems, leaving HPC
administrators with inadequate tools to respond to the growing
job convergence.
