Chapter 03: HISTORY AND CLASSIFICATION OF COMPUTERS

EARLY COMPUTING DEVICES


Abacus: The abacus is an ancient counting device, often regarded as the earliest computing tool; its Japanese form is called the Soroban. It represents numbers with beads on a rack, and addition and subtraction can be done quickly by moving the beads. It was invented around 600 BC.
Napier's Bones: Napier's Bones is a calculating tool devised in the early 1600s by John Napier, a Scottish scientist. It consists of eleven rods inscribed with numbers. Improved versions of it were still in use as late as 1890.
Pascaline: The Pascaline, built by Blaise Pascal in 1642, was the first mechanical calculating machine. It used gears, cogwheels, and dials to add and subtract numbers easily.

Punched cards: In the 1800s, a Frenchman named Joseph Jacquard built a loom that used punched cards to weave patterned cloth automatically. This idea of storing instructions on cards anticipates the way computer programs are stored today.
Difference Engine: In 1822, Charles Babbage, an English mathematics professor, designed a mechanical computer called the Difference Engine. Based on the idea of geared wheels, it could handle complex calculations and accurately generate mathematical tables of up to 20 digits.
Analytical Engine: In 1833, Charles Babbage, encouraged by the success of the Difference Engine, conceived the Analytical Engine. It was designed to store 1,000 numbers of 50 decimal digits each and to perform basic arithmetic at an average speed of 60 additions per minute. Although he could not build a working model with the limited precision engineering of his time, his ideas influenced the design of future computers. Features such as punched-card instructions, internal memory, and an arithmetic unit were incorporated into computers designed 100 years later.
ADA: Ada Lovelace, a talented mathematician and collaborator of Charles Babbage, wrote what is widely regarded as the first program for Babbage's Analytical Engine. She is therefore often called the world's first programmer, and the programming language Ada is named after her. Babbage's engines are closer to modern computers than any earlier device, and many consider Charles Babbage the true father of the computer.
Punched Cards: Herman Hollerith popularized the punched card as an input medium for data processing; cards remained in wide use well into the computer era. In the late 1800s, business machines and calculators using this technology emerged in Europe and America.
Hollerith Machine: In the 1880s, Herman Hollerith invented a machine that used punched cards to process census data quickly, cutting the processing time from 8 years to less than 3. The innovation was adopted by other countries and later used by insurance companies. Hollerith founded the Tabulating Machine Company, which merged into what became IBM in 1911.

In the late 1930s and early 1940s, Bell Relay Computers were developed at Bell Telephone Laboratories under the direction of George Stibitz, using electromechanical relays for fast, accurate calculation. The first electromechanical computer, the Mark-I, was designed by Dr. Howard Aiken at Harvard University and built by IBM in 1944. Capable of rapid calculation, it was in effect a realization of Babbage's Analytical Engine. IBM later developed the Mark-II through Mark-IV computers.
The Computer Generations

1. First-Generation Computers (1942-1955): Massive computers built from vacuum tubes, which generated great heat and required frequent maintenance. They had limited memory and processing speed, and used magnetic drums or tape for secondary storage. Examples include ENIAC, EDVAC, UNIVAC-I, IBM-701, IBM-650, and the IAS Machine.
2. Second-Generation Computers (1955-1964): Introduced the transistor, a semiconductor device that made computers smaller, more reliable, cooler-running, and less expensive. They used magnetic cores for main memory; many second-generation computers had main memory capacities of less than 100 kilobytes and processing speeds measured in microseconds. Examples include IBM-1620, IBM-7094, CDC-1604, CDC-3600, UNIVAC-1108, PDP-1, and NCR-304.

3. Third-Generation Computers (1964-1975): Used integrated circuits for larger memory and faster processing. Main memory capacities increased to several megabytes, and processing speeds jumped to millions of instructions per second (MIPS). Telecommunications links became common, and operating systems automated much of the machines' operation. Examples include the IBM-360 Series, IBM-370 Series, HCL-2900 Series, Honeywell-6000 Series, PDP-8, and VAX.

4. Fourth-Generation Computers (1975-2000): Used LSI and VLSI technologies to build microprocessors, which downsized computing systems. Microcomputers, which combine a microprocessor CPU with a variety of peripheral devices and easy-to-use software packages, formed the small personal computer (PC). Main memory capacities increased further. Examples include DEC-10, STAR-1000, PDP-11, CRAY-1, CRAY X-MP, CRAY-2, and the IBM PC/AT.

5. Fifth-Generation Computers (2000-…): Focus on developing 'intelligent' machines with artificial intelligence, automatic programming, and other advanced capabilities. Fifth-generation computers are envisioned as highly complex knowledge-processing machines; Japan, the USA, and many other countries are working on systems that use artificial intelligence. They are anticipated to execute billions of instructions per second and to have extensive storage capacities, and their development is ongoing worldwide.

Moore’s Law

In 1965, Gordon E. Moore predicted that the number of transistors in integrated circuits would double at regular intervals, roughly every two years. This prediction, known as "Moore's Law," has proven remarkably accurate; in practice the transistor count has doubled roughly every 18 months to two years. The growth is evident in both Dynamic Random Access Memory (DRAM) chips and microprocessor chips. The increase in memory capacity and processor speed has enabled larger and more complex software applications. Interestingly, disk capacity in PCs followed a similar trend, also doubling about every 18 months. The implication of Moore's Law is that we can expect ever more powerful computers at reasonable cost, and it will be up to our creativity to use this increased computing power effectively.
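The exponential growth the law describes can be sketched with a short calculation. This is an illustration only, assuming a fixed 18-month doubling period; real doubling intervals have varied over the decades.

```python
def transistor_estimate(start_count, years, doubling_period_years=1.5):
    """Project a transistor count forward, assuming it doubles
    once every `doubling_period_years` (18 months by default)."""
    doublings = years / doubling_period_years
    return start_count * 2 ** doublings

# Example: a chip with 1 million transistors, projected 15 years ahead.
# 15 / 1.5 = 10 doublings, so the count grows by a factor of 2**10 = 1024.
print(int(transistor_estimate(1_000_000, 15)))  # 1024000000
```

Note how quickly the factor compounds: ten doublings already multiply the count by more than a thousand.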

Classification of Computers

Computers can be divided into the following categories by functional criteria (data representation):
1. Digital Computers
2. Analog Computers
3. Hybrid Computers

Digital Computers: Digital computers work with digits, specifically binary digits (0s and 1s). They are fast counting devices whose basic operation is addition; other operations such as subtraction, multiplication, and division are carried out through addition. Digital computer circuits are complex and follow programmed instructions written in a specific language.
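The claim that subtraction can be reduced to addition can be illustrated with two's-complement arithmetic, the scheme binary circuits commonly use. This is a sketch in software of what an adder circuit does in hardware; the 8-bit word size is chosen just for the example.

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0b11111111: keeps results within an 8-bit word

def twos_complement(x):
    """Negate x using only bit inversion and addition."""
    return ((x ^ MASK) + 1) & MASK

def subtract(a, b):
    """Compute a - b with no subtraction: a plus the two's
    complement of b, as a binary adder circuit would."""
    return (a + twos_complement(b)) & MASK

print(subtract(9, 4))  # 5
```

Multiplication can likewise be built from repeated addition, and division from repeated subtraction, which is why addition is described above as the fundamental operation.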

Analog Computers: Analog computers represent numbers by measuring physical quantities, such as length or voltage. They derive data from measurements, and their accuracy depends on measurement precision. Examples include speedometers, voltmeters, and flight simulators.

Hybrid Computers: Hybrid computers combine features of both analog and digital computers, providing the speed of analog and the accuracy of digital systems. They are used for special problems in which measurement data are converted into digits and processed. Examples include radar control systems for national defense and passenger flights, as well as hospital ICU systems that measure vital signs and convert them into digital signals for analysis.

We can also classify computer systems into the following categories using capacity and performance criteria (size, cost, speed, and memory):
 Supercomputers
 Mainframe computers
 Minicomputers, or Midrange computers
 Workstations
 Microcomputers, or Personal computers

Supercomputers: Supercomputers are the largest and most powerful computers. They process massive amounts of data, performing over a trillion calculations per second. Examples include the Cray T-90 system and the Japanese supercomputer Fugaku. They are used for complex tasks such as weather prediction, aircraft design, drug development, and molecular modeling.

Mainframe Computers: Mainframes are the largest computers in common use, found in organizations such as banks and insurance companies. They manage large databases and are increasingly used as specialized servers for secure online transactions, such as purchasing airline tickets.

Minicomputers: Minicomputers, introduced in the 1960s, are mid-sized computers positioned between mainframes and personal computers. They handle more input and output than personal computers.

Microcomputers or Personal Computers (PCs): Microcomputers, or PCs, are small, personal machines. IBM released its first microcomputer, the IBM PC, in 1981, which popularized the term PC. PCs, including IBM compatibles, owe their popularity to rapid technological improvement: continual gains in speed and capacity while size and price remain stable or fall.
