Supercomputer: Quadrillion
Quadrillion
Quadrillion may mean either of two numbers (see long and short scales for more detail):
1,000,000,000,000,000 (one thousand million million; 10^15; SI prefix: peta-) for all short scale countries
1,000,000,000,000,000,000,000,000 (one million million million million; 10^24; SI prefix: yotta-) for all
long scale countries
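The two scales can be checked directly as powers of ten; a quick illustration:

```python
# Short-scale and long-scale values of "quadrillion", written as powers of ten.
short_scale = 10 ** 15  # 1,000,000,000,000,000 (SI prefix peta-)
long_scale = 10 ** 24   # 1,000,000,000,000,000,000,000,000 (SI prefix yotta-)

print(f"{short_scale:,}")  # 1,000,000,000,000,000
print(f"{long_scale:,}")   # 1,000,000,000,000,000,000,000,000
```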
Supercomputers play an important role in the field of computational science, and are used for a
wide range of computationally intensive tasks in various fields, including quantum mechanics, weather
forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and
properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical
simulations (such as simulations of the early moments of the universe, airplane and spacecraft
aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they
have been essential in the field of cryptanalysis.
Systems with massive numbers of processors generally take one of two paths: in one
approach (e.g., in distributed computing), a large number of discrete computers (e.g., laptops) distributed
across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each
individual computer (client) receives and completes many small tasks, reporting the results to a central
server which integrates the task results from all the clients into the overall solution.[6][7] In another
approach, a large number of dedicated processors are placed in close proximity to each other (e.g. in
a computer cluster); this saves considerable time moving data around and makes it possible for the
processors to work together (rather than on separate tasks), for example
in mesh and hypercube architectures.
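The client/server pattern described above can be sketched in a few lines. This is an illustrative toy: the function names and the partial-sum task are invented for the example, not taken from any real distributed-computing framework.

```python
# Sketch of the client/server pattern: a server splits a problem into small
# tasks, each client completes its tasks, and the server integrates the
# partial results into the overall solution.

def split_into_tasks(data, n_clients):
    """Partition the problem into one chunk of work per client."""
    size = (len(data) + n_clients - 1) // n_clients
    return [data[i:i + size] for i in range(0, len(data), size)]

def client_solve(task):
    """Each client computes a partial result (here, a partial sum)."""
    return sum(task)

def server_integrate(partial_results):
    """The central server combines the clients' results."""
    return sum(partial_results)

tasks = split_into_tasks(list(range(100)), n_clients=4)
partials = [client_solve(t) for t in tasks]
total = server_integrate(partials)
print(total)  # 4950, the same answer a single machine would compute
```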
In the grid computing approach, the processing power of a large number of computers, organised as
distributed, diverse administrative domains, is used opportunistically whenever a computer is
available.[6] In a centralized massively parallel system such as a computer cluster, by contrast, the
speed and flexibility of the interconnect become very important, and modern supercomputers have used
various approaches ranging from enhanced InfiniBand systems to three-dimensional torus
interconnects.[33][34] The use of multi-core processors combined with centralization is an emerging
direction, e.g. as in the Cyclops64 system.
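As an illustration of the three-dimensional torus interconnect mentioned above, the following sketch computes a node's neighbours in a cubic torus, where coordinates wrap around at the edges; the function name and grid size are hypothetical.

```python
# In a 3D torus interconnect, each node at coordinates (x, y, z) links to
# its six nearest neighbours, with indices wrapping around at the edges.

def torus_neighbors(x, y, z, dim):
    """Return the six neighbours of node (x, y, z) in a dim x dim x dim torus."""
    return [
        ((x + 1) % dim, y, z), ((x - 1) % dim, y, z),
        (x, (y + 1) % dim, z), (x, (y - 1) % dim, z),
        (x, y, (z + 1) % dim), (x, y, (z - 1) % dim),
    ]

# A corner node still has six neighbours because the torus wraps around.
print(torus_neighbors(0, 0, 0, dim=4))
```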
A typical supercomputer consumes large amounts of electrical power, almost all of which is
converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of
electricity.[49] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400
an hour or about $3.5 million per year.
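The cost arithmetic in the example above can be reproduced directly:

```python
# A 4 MW machine billed at $0.10 per kWh.
power_kw = 4_000           # 4 MW expressed in kW
rate_per_kwh = 0.10        # dollars per kWh

cost_per_hour = power_kw * rate_per_kwh      # about $400 per hour
cost_per_year = cost_per_hour * 24 * 365     # about $3.5 million per year

print(round(cost_per_hour, 2), round(cost_per_year / 1e6, 2))
```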
Since modern massively parallel supercomputers typically separate computations from other
services by using multiple types of nodes, they usually run different operating systems on different nodes,
e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger
system such as a Linux derivative on server and I/O nodes.
PARAM Timeline
Name          | CPUs      | Technology
PARAM 9900/SS | 32 to 200 | SuperSPARC II, Clos network
PARAM 9900/US | 32 to 200 | UltraSPARC, Clos network
PARAM 9900/AA | 32 to 200 | DEC Alpha, Clos network
PARAM Series
PARAM 8000
Unveiled in 1991, PARAM 8000 used Inmos T800 transputers. Transputers were a fairly new and
innovative microprocessor architecture designed for parallel processing at the time. It was a distributed
memory MIMD architecture with a reconfigurable interconnection network.[6] It had 64 CPUs.
PARAM 8600
PARAM 8600 was an improvement over PARAM 8000. It was a 256 CPU computer. For every four Inmos
T800, it employed an Intel i860 coprocessor.[6] The result was over 5 GFLOPS at peak for vector
processing. Several of these models were exported.
PARAM 9900/SS
PARAM 9900/SS was designed to be an MPP system. It used the SuperSPARC II processor. The design
was changed to be modular so that newer processors could be easily accommodated. Typically it used
32 to 40 processors, but it could be scaled up to 200 CPUs using the Clos network topology.[6] PARAM
9900/US was the UltraSPARC variant and PARAM 9900/AA was the DEC Alpha variant.
PARAM 10000
In 1998, the PARAM 10000 was unveiled. PARAM 10000 used several independent nodes, each based
on the Sun Enterprise 250 server, and each such server contained two 400 MHz UltraSPARC
II processors. The base configuration had three compute nodes and a server node. The peak speed of
this base system was 6.4 GFLOPS.[7] A typical system would contain 160 CPUs and be capable of
100 GFLOPS,[8] but it was easily scalable to the TFLOPS range.
PARAM Padma
PARAM Padma (Padma means Lotus in Sanskrit) was introduced in April 2003.[4] It had a peak speed of
1024 GFLOPS (about 1 TFLOP) and a peak storage of 1 TB. It used 248 IBM POWER4 CPUs of 1 GHz
each. The operating system was IBM AIX 5.1L. It used PARAMnet II as its primary interconnect.[8] It was
the first Indian supercomputer to break the 1 TFLOP barrier.[9]
on Intel 73XX of 2.9 GHz each. It has a storage capacity of 25 TB up to 200 TB.[11] It uses PARAMnet 3
as its primary interconnect.
Supercomputers
Aaditya
Indian Institute of Tropical Meteorology, Pune, has a machine with a theoretical peak of 790.7 teraflop/s,
called Aaditya, which is used for climate research and operational forecasting. It ranked 96th in the
June 2013 Top500 list of the world's supercomputers.
PARAM Yuva II
Unveiled on 8 February 2013, this supercomputer was made by the Centre for Development of Advanced
Computing in a period of three months, at a cost of ₹160 million (US$2 million). It performs at a peak of 524
TFLOPS, about 10 times faster than the previous facility, and consumes 35% less energy compared
with the existing facility. According to C-DAC, the supercomputer can deliver a sustained performance of 360.8
TFLOPS on the community-standard LINPACK benchmark, and would have been ranked 62nd in the
November 2012 Top500 list. In terms of power efficiency, it would have been ranked 33rd in
the November 2012 Green500 list of the world's supercomputers. It is the first Indian
supercomputer to achieve more than 500 TFLOPS.[11][12]
Param Yuva II will be used for research in space, bioinformatics, weather forecasting, seismic data
analysis, aeronautical engineering, scientific data processing and pharmaceutical development.
Educational institutes like the Indian Institutes of Technology and National Institutes of Technology can be
linked to the computer through the national knowledge network. This computer is a stepping stone
towards building the future petaflop-range supercomputers in India.
SahasraT
SERC, IISc has procured the Cray XC40 supercomputer from Cray Inc. It was under trials until 25 January
2015. It has not yet appeared on the Top500 list, and is expected to appear in the next listing, due
in June 2015.
Bhaskara
The supercomputer, with high-resolution regional models, will be dedicated to the nation on 2 June by the
Union Minister for Earth Sciences, Harsh Vardhan.
Future supercomputers
The Indian Government has proposed to commit US$2.5 billion to supercomputing research
during the 12th five-year plan period (2012-2017). The project will be handled by the Indian Institute of
Science (IISc), Bangalore.[18][19] Additionally, it was later revealed that India plans to develop a
supercomputer with processing power in the exaflop range. It will be developed by C-DAC within
five years of approval.
The supercomputer project has the backing of the Indian Government, which has set aside
approximately $2 billion for its development, in addition to supporting the other major initiative of building
and installing 100-150 supercomputers at the local, district and national levels under an Indian national
programme.
In March 2015, the Indian government approved a seven-year supercomputing program
worth $730 million (Rs. 4,500 crore). The National Supercomputing grid will consist of 73 geographically
distributed high-performance computing centers linked over a high-speed network. The mission involves
both capacity and capability machines and includes standing up three petascale supercomputers.
Computer performance
In computing, FLOPS or flops (an acronym for floating-point operations per second) is a measure
of computer performance, useful in fields of scientific calculations that make heavy use of floating-
point calculations. For such cases it is a more accurate measure than the generic instructions per second.
Most microprocessors today can carry out 4 FLOPs per clock cycle;[1] thus a single-core 2.5 GHz processor
has a theoretical performance of 10 billion FLOPS = 10 GFLOPS.
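The peak-performance arithmetic above can be reproduced directly:

```python
# Theoretical peak of a single core: 4 floating-point operations per
# clock cycle at 2.5 GHz.
flops_per_cycle = 4
clock_hz = 2.5e9           # 2.5 GHz

peak_flops = flops_per_cycle * clock_hz
print(peak_flops / 1e9)    # 10.0 (GFLOPS)
```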
As of November 2015, India has 11 systems on the Top500 list, ranked 96, 119, 145, 166, 251,
286, 300, 313, 316, 380 and 397.
Rank | Site                                                           | Name                              | Rmax (TFlop/s) | Rpeak (TFlop/s)
119  | Indian Institute of Tropical Meteorology                       | Aaditya (iDataPlex DX360M4)       | 719.2          | 790.7
145  | Tata Institute of Fundamental Research                         | TIFR - Cray XC30                  | 558.8          | 730.7
166  | Indian Institute of Technology Delhi                           | HP Apollo 6000 XL230/250          | 524.4          | 1,170.1
251  | Centre for Development of Advanced Computing                   | PARAM Yuva - II                   | 388.4          | 520.4
286  | Indian Institute of Technology Kanpur                          | Cluster Platform SL230s Gen8      | 344.3          | 359.6
300  | CSIR Centre for Mathematical Modelling and Computer Simulation | Cluster Platform 3000 BL460c Gen8 | 334.3          | 362.0
313  | National Centre for Medium Range Weather Forecasting           | iDataPlex DX360M4                 | 318.4          | 350.1