
Submitted by: GROUP 1 IV-2

Submitted to: Mr. Virgilio Ariola


Albay
Aquino
Arquisola
Almanzor
Atad
Aure
Azores
Baja
Balandra
A supercomputer is a computer that is at the frontline of current processing capacity, particularly
speed of calculation. Supercomputers introduced in the 1960s were designed primarily by
Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray
left to form his own company, Cray Research.
He then took over the supercomputer market with his new designs, holding the top spot in
supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors
entered the market, in parallel to the creation of the minicomputer market a decade earlier, but
many of these disappeared in the mid-1990s "supercomputer market crash".

Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as Cray, IBM and Hewlett-Packard, who had purchased many of the 1980s companies to gain their experience. As of May 2008, the IBM Roadrunner, located at Los Alamos National Laboratory, is the fastest supercomputer in the world.

The term supercomputer itself is rather fluid, and today's supercomputer tends to become
tomorrow's ordinary computer. CDC's early machines were simply very fast scalar processors,
some ten times the speed of the fastest machines offered by other companies. In the 1970s most
supercomputers were dedicated to running a vector processor, and many of the newer players
developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range of four to sixteen. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some of them off-the-shelf units and others custom designs. Today, parallel designs are based on "off the shelf" server-class microprocessors, such as the PowerPC, Opteron, or Xeon, and most modern supercomputers are highly tuned computer clusters using commodity processors combined with custom interconnects.
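Since modern machines are essentially clusters of commodity processors exchanging messages over an interconnect, a small message-passing sketch gives a feel for how such a system is programmed. The snippet below is an added illustration only, not something described above; it assumes Python with the mpi4py package and an MPI runtime (for example mpirun) are available on the cluster.

# Minimal message-passing sketch using mpi4py (illustrative only).
# Run with, for example:  mpirun -n 4 python partial_sums.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # all processes started by the MPI runtime
rank = comm.Get_rank()     # this process's id (0 .. size-1)
size = comm.Get_size()     # total number of processes in the job

# Each process sums an interleaved slice of the work, then the partial
# results are combined on rank 0 with a single reduction.
partial = sum(range(rank, 1_000_000, size))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"sum computed by {size} processes:", total)

In a real supercomputer the same pattern runs across thousands of nodes, and the custom interconnect exists precisely to keep the cost of operations like reduce() low.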

Common uses
Supercomputers are used for highly calculation-intensive tasks such as problems involving
quantum mechanical physics, weather forecasting, climate research, molecular modeling
(computing the structures and properties of chemical compounds, biological macromolecules,
polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels,
simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis,
and the like. A particular class of problems, known as Grand Challenge problems, are problems whose full solution requires semi-infinite computing resources. Capability computing is
typically thought of as using the maximum computing power to solve a large problem in the
shortest amount of time. Often a capability system is able to solve a problem of a size or
complexity that no other computer can. Capacity computing in contrast is typically thought of as
using efficient cost-effective computing power to solve somewhat large problems or many small
problems or to prepare for a run on a capability system.
 A supercomputer generates large amounts of heat and must be cooled. Cooling most
supercomputers is a major HVAC problem.

 Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds; a rough back-of-the-envelope calculation follows this list. Seymour Cray's supercomputer designs attempted to keep cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers. In modern supercomputers built of many conventional CPUs running in parallel, latencies of 1–5 microseconds to send a message between CPUs are typical.

 Supercomputers consume and produce massive amounts of data in a very short period of
time. According to Ken Batcher, "A supercomputer is a device for turning compute-bound
problems into I/O-bound problems." Much work on external storage bandwidth is needed
to ensure that this information can be transferred quickly and stored/retrieved correctly.
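As a quick sanity check on the speed-of-light point above, the short sketch below (an illustration added here, not part of the original text) computes the minimum one-way signal delay for a few cable lengths, taking the speed of light in vacuum as an upper bound on propagation speed; real signals in copper or fibre travel somewhat slower, so actual latencies are higher.

# Lower bound on one-way signal latency over a straight cable run,
# assuming propagation at the speed of light in vacuum.
C = 299_792_458.0  # metres per second

def min_latency_ns(distance_m: float) -> float:
    """Return the minimum one-way latency in nanoseconds."""
    return distance_m / C * 1e9

for metres in (1, 5, 10, 30):
    print(f"{metres:>2} m  ->  at least {min_latency_ns(metres):5.1f} ns")

# A machine tens of metres across therefore cannot get component-to-component
# latencies much below roughly 100 ns, which is why Cray kept cable runs short.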

Technologies developed for supercomputers include:

 Vector processing
 Liquid cooling
 Non-Uniform Memory Access (NUMA)
 Striped disks (the first instance of what was later called RAID)
 Parallel file systems

Processing techniques
Vector processing techniques were first developed for supercomputers and continue to be used in
specialist high-performance applications. Vector processing techniques have trickled down to the
mass market in DSP architectures and SIMD (Single Instruction Multiple Data) processing
instructions for general-purpose computers.
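To make the trickle-down concrete, the sketch below (an added illustration, not part of the original text) contrasts an element-by-element scalar loop with the same computation expressed over whole arrays in NumPy, whose array operations are dispatched to optimised, SIMD-capable machine code.

# Scalar loop versus vectorised (SIMD-style) computation using NumPy.
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar style: one multiply-add per iteration in a plain Python loop.
result_scalar = np.empty(n)
for i in range(n):
    result_scalar[i] = 2.0 * a[i] + b[i]

# Vector style: the same operation written over entire arrays; NumPy
# executes it in compiled code that can use SIMD instructions.
result_vector = 2.0 * a + b

assert np.allclose(result_scalar, result_vector)

The second form is typically one to two orders of magnitude faster, which is the same trade-off that made dedicated vector processors attractive for supercomputing workloads.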

Modern video game consoles in particular use SIMD extensively and this is the basis for some
manufacturers' claim that their game machines are themselves supercomputers. Indeed, some
graphics cards have the computing power of several teraFLOPS. The applications to which this power could be applied were initially limited by the special-purpose nature of early video processing. As video processing has become more sophisticated, graphics processing units (GPUs) have evolved to become more useful as general-purpose vector processors, and an entire computer science sub-discipline has arisen to exploit this capability: General-Purpose Computing on Graphics Processing Units (GPGPU).
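As a rough illustration of the GPGPU idea (again an addition, not something described in the document), the sketch below uses the CuPy library, which exposes a NumPy-like interface backed by GPU kernels; it assumes a CUDA-capable GPU and a working CuPy installation.

# GPGPU sketch: the same array arithmetic as above, offloaded to a GPU.
# Assumes a CUDA-capable GPU and the CuPy package are available.
import cupy as cp

n = 1_000_000
a = cp.random.rand(n)    # arrays live in GPU memory
b = cp.random.rand(n)

result = 2.0 * a + b     # element-wise work executes as GPU kernels

# Copy a few values back to the host for inspection.
print(cp.asnumpy(result[:5]))
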
Special-purpose supercomputers
Special-purpose supercomputers are high-performance computing devices with a hardware
architecture dedicated to a single problem. This allows the use of specially programmed FPGA
chips or even custom VLSI chips, allowing higher price/performance ratios by sacrificing
generality. They are used for applications such as astrophysics computation and brute-force
codebreaking. Historically a new special-purpose supercomputer has occasionally been faster than
the world's fastest general-purpose supercomputer, by some measure. For example, GRAPE-6
was faster than the Earth Simulator in 2002 for a particular special set of problems.

Examples of special-purpose supercomputers:

• Belle, Deep Blue, and Hydra, for playing chess
• Reconfigurable computing machines or parts of machines
• GRAPE, for astrophysics and molecular dynamics
• Deep Crack, for breaking the DES cipher
• MDGRAPE-3, for protein structure computation
• D. E. Shaw Research Anton, a special-purpose supercomputer for molecular dynamics simulation

Research and development


IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".

Other PFLOPS projects include one by Narendra Karmarkar in India, a CDAC effort targeted for
2010, and the Blue Waters Petascale Computing System funded by the NSF ($200 million) that is
being built by the NCSA at the University of Illinois at Urbana-Champaign (slated to be
completed by 2011). In May 2008 a collaboration was announced between NASA, SGI and Intel to build a 1 PFLOPS computer, Pleiades, in 2009, scaling up to 10 PFLOPS by 2012. Meanwhile, IBM is constructing a 20 PFLOPS supercomputer at Lawrence Livermore National Laboratory, named Sequoia, which is scheduled to go online in 2011.

Given the current speed of progress, supercomputers are projected to reach 1 exaflops (10^18, one quintillion FLOPS) in 2019. Futurist Ray Kurzweil expects supercomputers capable of human brain neural simulations, which he estimates would require 10 exaflops (10^19), to arrive by 2025.

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaflops (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately. Such systems might be built around 2030.
The Cray-2, the world's fastest computer from 1985 to 1989

Mainframe

Mainframes (often colloquially referred to as Big Iron) are computers used mainly by large
organizations for critical applications, typically bulk data processing such as census, industry and
consumer statistics, ERP, and financial transaction processing.

The term probably originated with the early mainframes, which were housed in enormous, room-sized metal boxes or frames. Later the term was used to distinguish high-end commercial machines from less powerful units.

Today in practice, the term usually refers to computers compatible with the IBM System/360 line,
first introduced in 1965. (IBM System z10 is the latest incarnation.) Otherwise, large systems that
are not based on the System/360 but are used for similar tasks are usually referred to as servers or
even supercomputers. However, "server", "supercomputer" and "mainframe" are not synonymous
(see client-server).

Some non-System/360-compatible systems derived from or compatible with older (pre-Web) server technology may also be considered mainframes. These include the Burroughs large systems, the UNIVAC 1100/2200 series systems, and the pre-System/360 IBM 700/7000 series. Most large-scale computer system architectures were firmly established in the 1960s, and most large computers were based on architecture established during that era up until the advent of Web servers in the 1990s. (Interestingly, the first Web server running anywhere outside Switzerland ran on an IBM mainframe at Stanford University as early as 1990. See History of the World Wide Web for details.)

Several minicomputer operating systems and architectures arose in the 1970s and 1980s, but minicomputers are generally not considered mainframes. (UNIX arose as a minicomputer operating system and has scaled up over the years to acquire some mainframe characteristics.)

Many defining characteristics of "mainframe" were established in the 1960s, but those
characteristics continue to expand and evolve to the present day.

Market
IBM mainframes dominate the mainframe market at well over 90% market share. Unisys
manufactures ClearPath mainframes, based on earlier Sperry and Burroughs product lines. In
2002, Hitachi co-developed the zSeries z800 with IBM to share expenses, but subsequently the
two companies have not collaborated on new Hitachi models. Hewlett-Packard sells its unique
NonStop systems, which it acquired with Tandem Computers and which some analysts classify as
mainframes. Groupe Bull's DPS, Fujitsu (formerly Siemens) BS2000, and Fujitsu-ICL VME
mainframes are still available in Europe. Fujitsu, Hitachi, and NEC (the "JCMs") still maintain
nominal mainframe hardware businesses in their home Japanese market, although they have been
slow to introduce new hardware models in recent years.

The amount of vendor investment in mainframe development varies with market share. Unisys,
HP, Groupe Bull, Fujitsu, Hitachi, and NEC now rely primarily on commodity Intel CPUs rather
than custom processors in order to reduce their development expenses, and they have also cut
back their mainframe software development. (However, Unisys still maintains its own unique
CMOS processor design development for certain high-end ClearPath models but contracts chip
manufacturing to IBM.) In stark contrast, IBM continues to pursue a different business strategy of
mainframe investment and growth. IBM has its own large research and development organization
designing new, homegrown CPUs, including mainframe processors such as 2008's 4.4 GHz quad-
core z10 mainframe microprocessor. IBM is rapidly expanding its software business, including its
mainframe software portfolio, to seek additional revenue and profits. IDC and Gartner server
market share measurements show IBM System z mainframes continuing their long-running market share gains among high-end servers of all types, and IBM continues to report increasing
mainframe revenues even while steadily reducing prices.

Differences to supercomputers
The distinction between supercomputers and mainframes is not a hard and fast one, but
supercomputers generally are used for problems which are limited by calculation speed, while
mainframes are used for problems which are limited by input/output and reliability and for
solving multiple business problems concurrently (mixed workload). The differences and
similarities are as follows:

• Both types of systems offer parallel processing. Supercomputers typically expose it to the programmer in complex ways, while mainframes typically use it to run multiple tasks.
One result of this difference is that adding processors to a mainframe often speeds up the
entire workload transparently.

• Supercomputers are optimized for complicated computations that take place largely in
memory, while mainframes are optimized for comparatively simple computations
involving huge amounts of external data. For example, weather forecasting is suited to
supercomputers, and insurance business or payroll processing applications are more suited
to mainframes.

• Supercomputers are often purpose-built for one or a very few specific institutional tasks
(e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g.
data processing, warehousing). Consequently, most supercomputers can be one-off
designs, whereas mainframes typically form part of a manufacturer's standard model
lineup.

• Mainframes tend to have numerous ancillary service processors assisting their main
central processors (for cryptographic support, I/O handling, monitoring, memory handling,
etc.) so that the actual "processor count" is much higher than would otherwise be obvious.
Supercomputer design tends not to include as many service processors since they don't
appreciably add to raw number-crunching power. This distinction is perhaps blurring over
time as Moore's Law constraints encourage more specialization in server components.

• Mainframes are exceptionally adept at batch processing, such as billing, owing to their
heritage, decades of increasing customer expectations for batch improvements, and
throughput-centric design. Supercomputers generally perform quite poorly in batch
processing.

Statistics
• 90% of IBM's mainframes have CICS transaction processing software installed.[8] Other
software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere
Application Server middleware.

• As of 2004, IBM claimed over 200 new (21st century) mainframe customers — customers
that had never previously owned a mainframe.

• Most mainframes run continuously at over 70% busy. A 90% figure is typical, and modern
mainframes tolerate sustained periods of 100% CPU utilization, queuing work according
to business priorities without disrupting ongoing execution.

• Mainframes have a historical reputation for being "expensive," but the modern reality is
much different. As of late 2006, it is possible to buy and configure a complete IBM
mainframe system (with software, storage, and support), under standard commercial use
terms, for about $50,000 (U.S.). The price of z/OS starts at about $1,500 (U.S.) per year,
including 24x7 telephone and Web support.

• In the unlikely event a mainframe needs repair, it is typically repaired without interruption
to running applications. Also, memory, storage, and processor modules or chips can be added or hot-swapped without interrupting applications. It is not unusual for a mainframe
to be continuously switched on for months or years at a stretch.
An IBM 704 mainframe

An IBM zSeries 800
