Supercomputer: Quadrillion

A supercomputer is a computer with a very high computational capacity compared to regular computers. Performance is measured in floating point operations per second rather than instructions per second. The most powerful supercomputers can perform quadrillions of floating point operations per second. Supercomputers are used for computationally intensive tasks like quantum mechanics, weather forecasting, and physical simulations. They have massive numbers of processors organized into clusters or distributed across networks. Maintaining and powering supercomputers requires large amounts of electricity and cooling.


SUPERCOMPUTER

A supercomputer is a computer with a high-level computational capacity compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS.

Quadrillion
Quadrillion may mean either of two numbers (see long and short scales for more detail):

• 1,000,000,000,000,000 (one thousand million million; 10^15; SI prefix: peta-) for all short-scale countries
• 1,000,000,000,000,000,000,000,000 (one million million million million; 10^24; SI prefix: yotta-) for all long-scale countries
Supercomputers play an important role in the field of computational science, and are used for a
wide range of computationally intensive tasks in various fields, including quantum mechanics, weather
forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and
properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical
simulations (such as simulations of the early moments of the universe, airplane and spacecraft
aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they
have been essential in the field of cryptanalysis.

Systems with massive numbers of processors generally take one of the two paths: in one
approach (e.g., in distributed computing), a large number of discrete computers (e.g., laptops) distributed
across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each
individual computer (client) receives and completes many small tasks, reporting the results to a central
server which integrates the task results from all the clients into the overall solution.[6][7] In another
approach, a large number of dedicated processors are placed in close proximity to each other (e.g. in
a computer cluster); this saves considerable time moving data around and makes it possible for the
processors to work together (rather than on separate tasks), for example
in mesh and hypercube architectures.
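The client–server division of labour described above can be sketched in a few lines of Python. This is an illustrative single-machine stand-in, with threads in place of networked clients; it is not any particular distributed-computing framework:

```python
# Illustrative sketch of the distributed-computing path: a central server
# splits one big problem into many small tasks, independent clients each
# complete some tasks, and the server integrates the partial results.
# Threads on one machine stand in for computers spread across a network.
from concurrent.futures import ThreadPoolExecutor

def small_task(chunk):
    # Work unit a single client completes: sum of squares over its chunk.
    return sum(n * n for n in chunk)

def server_integrate(data, chunk_size=1000, workers=4):
    # Server side: split the problem, farm out chunks, combine the results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial_results = pool.map(small_task, chunks)  # clients report back
    return sum(partial_results)  # server integrates into the overall solution

total = server_integrate(list(range(10_000)))
print(total == sum(n * n for n in range(10_000)))  # True: same answer as one machine
```

In a real volunteer-computing system the chunks travel over the Internet and clients come and go, but the split–compute–integrate shape is the same.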

The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.

Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of a large number of computers, organised as distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[6] In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[33][34] The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.

A typical supercomputer consumes large amounts of electrical power, almost all of which is
converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of
electricity.[49] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400
an hour or about $3.5 million per year.
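The cost arithmetic above can be checked with a short script, assuming the flat $0.10/kWh tariff the text uses:

```python
# Back-of-the-envelope check of the figures quoted above: a 4 MW system
# at a flat $0.10/kWh tariff (the rate assumed in the text).
def power_cost(megawatts, dollars_per_kwh=0.10):
    """Return (hourly, yearly) electricity cost in dollars."""
    kilowatts = megawatts * 1000
    hourly = kilowatts * dollars_per_kwh
    yearly = hourly * 24 * 365
    return hourly, yearly

hourly, yearly = power_cost(4.0)
print(f"${hourly:,.0f}/hour, ${yearly:,.0f}/year")  # $400/hour, $3,504,000/year
```

At Tianhe-1A's actual 4.04 MW draw, the yearly figure comes to about $3.54 million, consistent with the "about $3.5 million per year" quoted above.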
Since modern massively parallel supercomputers typically separate computations from other
services by using multiple types of nodes, they usually run different operating systems on different nodes,
e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger
system such as a Linux-derivative on server and I/O nodes.
Tianhe-2

Sponsors: 863 Program
Storage: 12.4 PB
Speed: 33.86 PFLOPS
Cost: 2.4 billion yuan (US$390 million)
Purpose: Simulation, analysis, and government security applications

IBM's Blue Gene/P supercomputer at Argonne National Laboratory runs over 250,000 processors using normal data center air conditioning, grouped in 72 racks/cabinets connected by a high-speed optical network.[1]
Supercomputing in India

PARAM Yuva

India's supercomputer program was started in the late 1980s because Cray supercomputers were denied for import due to an arms embargo imposed on India, as supercomputing was a dual-use technology that could be used for developing nuclear weapons.[1][2]
PARAM 8000 is considered India's first supercomputer. It was indigenously built by the Centre for Development of Advanced Computing, unveiled in 1991, and was replicated and installed at ICAD Moscow in 1991 under Russian collaboration.

PARAM Timeline

Name            Release Year      CPUs         Technology                                               Speed               Main Contributor
PARAM 8000      1991              64           Inmos T800 transputers, distributed-memory MIMD
PARAM 8600                        256          Intel i860                                               5 GFLOPS
PARAM 9900/SS                     32 to 200    SuperSPARC II, Clos network
PARAM 9900/US                     32 to 200    UltraSPARC, Clos network
PARAM 9900/AA                     32 to 200    DEC Alpha, Clos network
PARAM 10000     1998              160          Sun Enterprise 250, 400 MHz UltraSPARC II                6.4 GFLOPS
PARAM Padma     2003 - April      248          IBM Power4 1 GHz, 1 TB storage, IBM AIX 5.1L, PARAMnet   1024 GFLOPS
PARAM Yuva      2008 - November   4608 cores   Intel 73XX 2.9 GHz, 25 to 200 TB, PARAMnet 3             38.1 to 54 TFLOPS
PARAM Yuva II   2013 - February 08                                                                      524 TFLOPS          CDAC

PARAM Series
PARAM 8000

Unveiled in 1991, PARAM 8000 used Inmos T800 transputers. Transputers were a fairly new and
innovative microprocessor architecture designed for parallel processing at the time. It was a distributed
memory MIMD architecture with a reconfigurable interconnection network.[6] It had 64 CPUs.
PARAM 8600

PARAM 8600 was an improvement over PARAM 8000. It was a 256 CPU computer. For every four Inmos
T800, it employed an Intel i860 coprocessor.[6] The result was over 5 GFLOPS at peak for vector
processing. Several of these models were exported.
PARAM 9900/SS

PARAM 9900/SS was designed to be an MPP system. It used the SuperSPARC II processor. The design
was changed to be modular so that newer processors could be easily accommodated. Typically, it used
32 to 40 processors, but it could be scaled up to 200 CPUs using the Clos network topology.[6] PARAM
9900/US was the UltraSPARC variant and PARAM 9900/AA was the DEC Alpha variant.
PARAM 10000

In 1998, the PARAM 10000 was unveiled. PARAM 10000 used several independent nodes, each based
on the Sun Enterprise 250 server; each such server contained two 400 MHz UltraSPARC II processors.
The base configuration had three compute nodes and a server node. The peak speed of this base
system was 6.4 GFLOPS.[7] A typical system would contain 160 CPUs and be capable of 100 GFLOPS,[8]
but it was easily scalable to the TFLOPS range.
PARAM Padma

PARAM Padma (Padma means Lotus in Sanskrit) was introduced in April 2003.[4] It had a peak speed of
1024 GFLOPS (about 1 TFLOP) and a peak storage of 1 TB. It used 248 IBM Power4 CPUs of 1 GHz
each. The operating system was IBM AIX 5.1L. It used PARAMnet II as its primary interconnect.[8] It was
the first Indian supercomputer to break the 1 TFLOP barrier.[9]

PARAM Yuva

Unveiled in November 2008, PARAM Yuva has 4,608 cores based on Intel 73XX processors of 2.9 GHz each. It has a storage capacity of 25 TB, expandable up to 200 TB.[11] It uses PARAMnet 3 as its primary interconnect.

Supercomputers
Aaditya

Indian Institute of Tropical Meteorology, Pune, has a machine with a theoretical peak of 790.7 teraflop/s, called Aaditya, which is used for climate research and operational forecasting. It ranked 96th in the June 2013 Top500 list.
Anupam

Anupam is a series of supercomputers designed and developed by Bhabha Atomic Research Centre (BARC) for their internal use. It is mainly used for molecular dynamics simulations, reactor physics, theoretical physics, computational chemistry, computational fluid dynamics, and finite element analysis. The latest in the series is Anupam-Aagra, clocked at 150 TFLOPS.
PARAM Yuva II

Unveiled on 8 February 2013, this supercomputer was made by the Centre for Development of Advanced Computing in a period of three months, at a cost of ₹160 million (US$2 million). It performs at a peak of 524 TFLOPS, about 10 times faster than the present facility, and consumes 35% less energy compared to the existing facility. According to CDAC, the supercomputer can deliver a sustained performance of 360.8 TFLOPS on the community-standard LINPACK benchmark, and would have been ranked 62nd in the November 2012 Top500 list. In terms of power efficiency, it would have been ranked 33rd in the November 2012 Green500 list of the world's supercomputers. It is the first Indian supercomputer to achieve more than 500 teraflops.[11][12]

Param Yuva II will be used for research in space, bioinformatics, weather forecasting, seismic data
analysis, aeronautical engineering, scientific data processing and pharmaceutical development.
Educational institutes like the Indian Institutes of Technology and National Institutes of Technology can be
linked to the computer through the national knowledge network. This computer is a stepping stone
towards building the future petaflop-range supercomputers in India.
SAGA-220

SAGA-220, built by ISRO, is capable of performing at 220,000 gigaflop/s (220 teraflop/s). It uses about 400 NVIDIA Tesla 2070 GPUs and 400 Intel Quad Core Xeon CPUs.[14]
EKA

EKA is a supercomputer built by the Computational Research Laboratories with hardware provided by Hewlett-Packard. It was developed by Tata Sons. It is capable of performing at 132,800 gigaflop/s (132.8 teraflop/s).
Virgo

Indian Institute of Technology, Madras has a 91.1 teraflop/s machine called Virgo. It was ranked 364 in the November 2012 Top500 list. It has 292 compute nodes, 2 master nodes, and 4 storage nodes, with a total computing power of 97 TFLOPS. According to LINPACK performance, Virgo is the fastest cluster in an academic institution in India. In terms of performance, it has an Rmax of 91.126 TFLOPS and an Rpeak of 97.843 TFLOPS. The computing efficiency is 932 MFLOPS/watt. Earlier in 2012, Virgo stood at 224th position in the world (Top500), and was the 5th-ranked energy-efficient machine in the world and the 1st-ranked energy-efficient machine in India.
Vikram-100

Inaugurated on 26 June 2015 by Prof. U. R. Rao at the Physical Research Laboratory,[15] the Vikram-100 is a High Performance Computing (HPC) cluster (named after the eminent scientist Dr Vikram Sarabhai) with more than 100 teraflops of sustained performance. Vikram-100[16] has 97 compute nodes, each with two 12-core Intel Xeon E5-2670 v3 (Haswell) CPUs at 2.30 GHz (for a total of 2,328 CPU cores), 256 GB RAM, and 500 GB of local scratch storage. 20 of these nodes also have two Nvidia Tesla K40 GPU cards (for a total of 115,200 GPU cores), each card capable of 1.66 TFLOPS (double precision).

Currently, the Vikram-100 HPC is 13th fastest supercomputer in India.[17]
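The core counts in the Vikram-100 configuration can be cross-checked with simple arithmetic. The 2,880 figure below is the published CUDA core count of one Tesla K40 card, which is how the GPU total is reached:

```python
# Cross-checking the Vikram-100 core counts from the configuration stated
# above: 97 nodes with two 12-core CPUs each, and 20 nodes carrying two
# Tesla K40 cards each (one K40 has 2,880 CUDA cores).
nodes, cpus_per_node, cores_per_cpu = 97, 2, 12
cpu_cores = nodes * cpus_per_node * cores_per_cpu
print(cpu_cores)  # 2328 — matches the stated CPU-core total

gpu_nodes, cards_per_node, cuda_cores_per_card = 20, 2, 2880
gpu_cores = gpu_nodes * cards_per_node * cuda_cores_per_card
print(gpu_cores)  # 115200 — matches the stated GPU-core total
```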


PARAM Yuva

PARAM Yuva belongs to the PARAM series of supercomputers developed by the Centre for Development of Advanced Computing. It is capable of performing at about 54,000 gigaflop/s (54 teraflop/s).
Cray XC40

SERC at IISc has procured the XC40 supercomputer from Cray Inc. It was under trials until 25 January 2015. It has not yet appeared on the supercomputers list and is expected in the next listing, due in June 2015.
Bhaskara

The supercomputer, which runs high-resolution regional models, will be dedicated to the nation on June 2 by Union Minister for Earth Sciences Harsh Vardhan.

Future supercomputers
The Indian Government has proposed to commit 2.5 billion USD to supercomputing research during the 12th five-year plan period (2012-2017). The project will be handled by the Indian Institute of Science (IISc), Bangalore.[18][19] Additionally, it was later revealed that India plans to develop a supercomputer with processing power in the exaflop range. It will be developed by C-DAC within 5 years of approval.

The Supercomputer project has the backing of the Indian Government, which has set aside
approximately $2 bn for its development, apart from support to the other major initiative of building and
installing 100-150 supercomputers at the local, district and national levels under an Indian national
programme.

In March 2015, the Indian government approved a seven-year supercomputing program worth $730 million (Rs. 4,500 crore). The National Supercomputing grid will consist of 73 geographically distributed high-performance computing centres linked over a high-speed network. The mission involves both capacity and capability machines and includes standing up three petascale supercomputers.
Computer performance
In computing, FLOPS or flops (an acronym for floating-point operations per second) is a measure
of computer performance, useful in fields of scientific calculations that make heavy use of floating-
point calculations. For such cases it is a more accurate measure than the generic instructions per second.

FLOPS can be calculated using this equation:

FLOPS = sockets × (cores per socket) × (clock cycles per second) × (FLOPs per cycle)

Most microprocessors today can carry out 4 FLOPs per clock cycle;[1] thus a single-core 2.5 GHz processor
has a theoretical performance of 10 billion FLOPS = 10 GFLOPS.
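The equation and the 2.5 GHz worked example above can be sketched as:

```python
# The theoretical-peak FLOPS equation above, as a function. The value of
# 4 FLOPs per clock cycle is the typical figure the text cites.
def theoretical_flops(sockets, cores_per_socket, clock_hz, flops_per_cycle):
    return sockets * cores_per_socket * clock_hz * flops_per_cycle

# Single-core 2.5 GHz processor at 4 FLOPs per cycle:
peak = theoretical_flops(sockets=1, cores_per_socket=1,
                         clock_hz=2.5e9, flops_per_cycle=4)
print(peak / 1e9, "GFLOPS")  # 10.0 GFLOPS, as stated
```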

Name         Abbr.    FLOPS
kiloFLOPS    kFLOPS   10^3
megaFLOPS    MFLOPS   10^6
gigaFLOPS    GFLOPS   10^9
teraFLOPS    TFLOPS   10^12
petaFLOPS    PFLOPS   10^15
exaFLOPS     EFLOPS   10^18
zettaFLOPS   ZFLOPS   10^21
yottaFLOPS   YFLOPS   10^24

As of November 2015, India has 11 systems on the Top500 list ranking 96, 119, 145, 166, 251,
286, 300, 313, 316, 380 and 397.

Rank   Site                                                             Name                                Rmax (TFlop/s)   Rpeak (TFlop/s)
96     Indian Institute of Science                                      SahasraT (SERC - Cray XC40)         901.5            1,244.2
119    Indian Institute of Tropical Meteorology                         Aaditya (iDataPlex DX360M4)         719.2            790.7
145    Tata Institute of Fundamental Research                           TIFR - Cray XC30                    558.8            730.7
166    Indian Institute of Technology Delhi                             HP Apollo 6000 XL230/250            524.4            1,170.1
251    Centre for Development of Advanced Computing                     PARAM Yuva - II                     388.4            520.4
286    Indian Institute of Technology Kanpur                            Cluster Platform SL230s Gen8        344.3            359.6
300    CSIR Centre for Mathematical Modelling and Computer Simulation   Cluster Platform 3000 BL460c Gen8   334.3            362.0
313    National Centre for Medium Range Weather Forecasting             iDataPlex DX360M4                   318.4            350.1
316    IT Services Provider                                             Cluster Platform SL250s Gen8        316.8            373.2
380    Network Company                                                  Cluster Platform 3000 BL460c Gen8   271.0            388.6
397    IT Services Provider                                             Cluster Platform SL210T             256.3            372.7
