
Unit - 1 : History and Classification of Computer

Structure of Unit:
1.0 Objectives
1.1 Introduction
1.2 Classification of Computer
1.3 The Computer Evolution
1.4 Block Diagram of Computer
1.5 Area of Application
1.6 Summary
1.7 Self Assessment Questions
1.8 Reference Books

1.0 Objectives
After completing this unit you will be able to:
 Understand the classification of computers
 Learn about development and evolution of computer
 Understand the application areas of computer
 Understand the working of computer system.
1.1 Introduction
The word ‘computer’ is an old word that has changed its meaning several times in the last few centuries.
Originating from the Latin, by the mid-17th century it meant ‘someone who computes’. The American
Heritage Dictionary (1980) gives its first computer definition as “a person who computes.” The computer
remained associated with human activity until about the middle of the 20th century when it became applied
to “a programmable electronic device that can store, retrieve, and process data” as Webster’s Dictionary
(1980) defines it. Today, the word computer refers to computing devices, whether or not they are electronic,
programmable, or capable of ‘storing and retrieving’ data.
The Techencyclopedia (2003) defines computer as “a general purpose machine that processes data according
to a set of instructions that are stored internally either temporarily or permanently.” The computer and all
equipment attached to it are called hardware. The instructions that tell it what to do are called “software” or
“program”. A program is a detailed set of humanly prepared instructions that directs the computer to function
in specific ways. Furthermore, the Encyclopedia Britannica (2003) defines computers as “the contribution
of major individuals, machines, and ideas to the development of computing.” This implies that the computer
is a system. A system is a group of computer components that work together as a unit to perform a common
objective.
The term ‘history’ means past events. The encyclopedia Britannica (2003) defines it as “the discipline that
studies the chronological record of events (as affecting a nation or people), based on a critical examination
of source materials and usually presenting an explanation of their causes.” The Oxford Advanced Learner’s
Dictionary (1995) simply defines history as “the study of past events.…” In discussing the history of
computers, the chronological record of events, particularly in the area of technological development, will be
explained. The history of computers is considered in terms of technological development because it is usually
technological advancement in computers that brings about economic and social advancement. A faster
computer brings about faster operation, and that in turn drives economic development. This unit will discuss
classes of computers and computer evolution, and highlight some roles played by individuals in these
developments.
1.2 Classification of Computer
Computing machines can be classified in many ways and these classifications depend on their functions and
definitions. They can be classified by the technology from which they were constructed, the uses to which
they are put, their capacity or size, the era in which they were used, their basic operating principle and by the
kinds of data they process. Some of these classification techniques are discussed as follows:
1.2.1 Classification by Technology
This classification is a historical one, based on what performs the computer operation, that is, the
technology behind the computing capability.
I Flesh: Before the advent of any kind of computing device at all, human beings performed computation by
themselves. This involved the use of fingers, toes and any other part of the body.
II Wood: Wood became a computing device when it was first used to design the abacus. Schickard in 1621
and Polini in 1709 were both instrumental in this development.
III Metals: Metals were used in the early machines of Pascal, Thomas, and in the production versions from
firms such as Brunsviga, Monroe, etc.
IV Electro-Mechanical Devices: Devices such as differential analyzers were present in the early machines of
Zuse, Aiken, Stibitz and many others.
V Electronic Elements: These were used in the Colossus, ABC, ENIAC, and the stored program computers.
This classification does not really apply to developments of the last sixty years, because several kinds of new
electro-technological devices have been used since then.
1.2.2 Classification by Capacity
Computers can be classified according to their capacity. The term ‘capacity’ refers to the volume of work
or the data processing capability a computer can handle. Their performance is determined by the amount of
data that can be stored in memory, the speed of the computer's internal operation, the number and type of
peripheral devices, and the amount and type of software available for use with the computer.
The capacity of early generation computers was determined by their physical size - the larger the size, the
greater the volume. Recent computer technology, however, tends to create smaller machines, making it
possible to package equivalent speed and capacity in a smaller format. Computer capacity is currently
measured by the number of applications that it can run rather than by the volume of data it can process. This
classification is therefore done as follows:

I Micro Computers: The Microcomputer has the lowest level capacity. The machine has memories that
are generally made of semiconductors fabricated on silicon chips. Large-scale production of silicon chips
began in 1971 and this has been of great use in the production of microcomputers. The microcomputer is a
digital computer system that is controlled by a stored program and uses a microprocessor, programmable
read-only memory (ROM) and random-access memory (RAM). The ROM holds the fixed instructions to be
executed by the computer, while the RAM serves as its working memory.

The Apple IIe, the Radio Shack TRS-80, and the Genie III are examples of microcomputers; they are
essentially fourth-generation devices. Microcomputers have from 4K to 64K storage locations and are capable
of handling small, single-business applications such as sales analysis, inventory, billing and payroll.
II Mini Computers: In the 1960s, the growing demand for a smaller stand-alone machine brought about
the manufacture of the minicomputer, to handle tasks that large computers could not perform economically.
Minicomputer systems provide faster operating speeds and larger storage capacities than microcomputer
systems. Operating systems developed for minicomputer systems generally support both multiprogramming
and virtual storage. This means that many programs can be run concurrently. This type of computer system
is very flexible and can be expanded to meet the needs of users.
Minicomputers usually have from 8K to 256K memory storage locations, and relatively established application
software. The PDP-8, the IBM System/3 and the Honeywell 200 and 1200 computers are typical examples
of minicomputers.
III Medium-size Computers: Medium-size computer systems provide faster operating speeds and larger
storage capacities than mini computer systems. They can support a large number of high-speed input/output
devices and several disk drives can be used to provide online access to large data files as required for direct
access processing, and their operating systems also support both multiprogramming and virtual storage. This
allows a variety of programs to run concurrently. A medium-size computer can support a management
information system and can therefore serve the needs of a large bank, insurance company or university.
They usually have memory sizes ranging from 32k to 512k. The IBM System 370, Burroughs 3500 System
and NCR Century 200 system are examples of medium-size computers.
IV Large Computers: Large computers are next to supercomputers and have bigger capacity than
medium-size computers. They usually contain full control systems with minimal operator intervention. Large
computer systems range from single-processing configurations to nationwide computer-based networks
involving general large computers. Large computers have storage capacities from 512K to 8192K, and their
internal operating speeds are measured in nanoseconds, as compared to small computers, whose speeds are
measured in microseconds. Expandability to 8 or even 16 million characters is possible with some of these
systems. Such characteristics permit many data processing jobs to be accomplished concurrently.
Large computers are usually used in government agencies, large corporations and computer services
organizations. They are used in complex modeling, or simulation, business operations, product testing,
design and engineering work and in the development of space technology. Large computers can serve as
server systems where many smaller computers can be connected to it to form a communication network.
V Super Computers: Supercomputers are the biggest and fastest machines today, used when billions or
even trillions of calculations are required. These machines are applied in nuclear weapons development,
accurate weather forecasting, and as host processors for local computers and time-sharing networks.
Supercomputers have capabilities far beyond even the traditional large-scale systems. Their speed ranges
from 100 million instructions per second to well over three billion. Because of their size, supercomputers
sacrifice a certain amount of flexibility and are therefore not ideal for providing a variety of user services.
For this reason, a supercomputer may need the assistance of a medium-size general-purpose machine
(usually called a front-end processor) to handle minor programs or perform slower-speed or smaller-volume
operations.
1.2.3 Classification by their Basic Operating Principle
Using this classification technique, computers can be divided into Analog, Digital and Hybrid systems. They
are explained as follows:

I Analog Computers: Analog computers were well known in the 1940s although they are now uncommon.
In such machines, numbers to be used in some calculation were represented by physical quantities - such as
electrical voltages. According to the Penguin Dictionary of Computers (1970), “an analog computer must
be able to accept inputs which vary with respect to time and directly apply these inputs to various devices
within the computer which performs the computing operations of additions, subtraction, multiplication, division,
integration and function generation….” The computing units of analog computers respond immediately to
the changes which they detect in the input variables. Analog computers excel at solving differential equations
and were faster than the digital computers of their era.

II Digital Computers: Most computers today are digital. They represent information discretely and use a
binary (two-step) system that represents each piece of information as a series of zeroes and ones.

The Pocket Webster School & Office Dictionary (1990) simply defines Digital computers as “a computer
using numbers in calculating.” Digital computers manipulate most data more easily than analog computers.
They are designed to process data in numerical form and their circuits perform directly the mathematical
operations of addition, subtraction, multiplication, and division. Because digital information is discrete, it can
be copied exactly, whereas it is difficult to make exact copies of analog information.
III Hybrid Computers: These are machines that can work as both analog and digital computers.
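The discrete, two-step representation used by digital computers, and the exact-copy property it provides, can be sketched in Python (the value 156 is an arbitrary example):

```python
# Each piece of information is stored as a series of zeros and ones.
n = 156
bits = format(n, "08b")   # 8-bit binary string for the number
print(bits)               # 10011100

# Because the representation is discrete, decoding recovers the
# original value exactly: a perfect copy, unlike analog signals.
assert int(bits, 2) == n
```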

1.3 The Computer Evolution


The computer evolution is an interesting topic that has been explained in different ways over the years by
many authors. According to The Computational Science Education Project, US, the computer has evolved
through the following stages:
The Mechanical Era (1623-1945)

Attempts to use machines to solve mathematical problems can be traced to the early 17th century. Wilhelm
Schickard, Blaise Pascal, and Gottfried Leibniz were among the mathematicians who designed and implemented
calculators capable of addition, subtraction, multiplication, and division. The first multi-purpose or
programmable computing device was probably Charles Babbage’s Difference Engine, which was begun in
1823 but never completed. In 1842, Babbage designed a more ambitious machine, called the Analytical
Engine, but unfortunately it too was only partially completed. Babbage, together with Ada Lovelace,
recognized several important programming techniques, including conditional branches, iterative loops and
index variables.

Babbage designed the machine which is arguably the first to be used in computational science. In 1833,
George Scheutz and his son Edvard began work on a smaller version of the difference engine, and by 1853
they had constructed a machine that could process 15-digit numbers and calculate fourth-order differences.
The US Census Bureau was one of the first organizations to use mechanical computers; it used
punch-card equipment designed by Herman Hollerith to tabulate data for the 1890 census. In 1911 Hollerith’s
company merged with a competitor to found the corporation which in 1924 became International Business
Machines (IBM).
First Generation Electronic Computers (1937-1953)
These devices used electronic switches, in the form of vacuum tubes, instead of electromechanical relays.
The earliest attempt to build an electronic computer was by J. V. Atanasoff, a professor of physics and
mathematics at Iowa State in 1937. Atanasoff set out to build a machine that would help his graduate
students solve systems of partial differential equations. By 1941 he and graduate student Clifford Berry had
succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. However,
the machine was not programmable, and was more of an electronic calculator.

A second early electronic machine was Colossus, designed by Tommy Flowers for the British military in 1943.
The first general purpose programmable electronic computer was the Electronic Numerical Integrator and
Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania.
Research work began in 1943, funded by the Army Ordnance Department, which needed a way to compute
ballistics during World War II. The machine was completed in 1945 and it was used extensively for
calculations during the design of the hydrogen bomb. Eckert, Mauchly, and John von Neumann, a consultant
to the ENIAC project, began work on a new machine before ENIAC was finished. The main contribution
of EDVAC, their new project, was the notion of a stored program. ENIAC was controlled by a set of
external switches and dials; to change the program required physically altering the settings on these controls.
EDVAC was able to run orders of magnitude faster than ENIAC and by storing instructions in the same
medium as data, designers could concentrate on improving the internal structure of the machine without
worrying about matching it to the speed of an external control. Eckert and Mauchly later designed what was
arguably the first commercially successful computer, the UNIVAC, in 1951. Software technology during
this period was very primitive.
Second Generation (1954-1962)

The second generation witnessed several important developments at all levels of computer system design,
ranging from the technology used to build the basic circuits to the programming languages used to write
scientific applications. Electronic switches in this era were based on discrete diode and transistor technology
with a switching time of approximately 0.3 microseconds. The first machines to be built with this technology
include TRADIC at Bell Laboratories in 1954 and TX-0 at MIT’s Lincoln Laboratory. Index registers
were designed for controlling loops and floating point units for calculations based on real numbers.
A number of high level programming languages were introduced and these include FORTRAN (1956),
ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704
and its successors, the 709 and 7094. In the 1950s the first two supercomputers were designed specifically
for numeric processing in scientific applications.
Third Generation (1963-1972)
Technology changes in this generation include the use of integrated circuits (ICs), semiconductor devices
with several transistors built into one physical component; semiconductor memories; microprogramming as
a technique for efficiently designing complex processors; and the introduction of operating systems and time-
sharing. The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per
circuit (‘chip’), and evolved to the use of medium-scale integration (MSI) circuits, which had up to 100
devices per chip. Multilayered printed circuits were developed and core memory was replaced by faster,
solid-state memories.
In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use functional
parallelism. By using 10 separate functional units that could operate simultaneously and 32 independent
memory banks, the CDC 6600 was able to attain a computation rate of one million floating point operations
per second (Mflops). Five years later CDC released the 7600, also developed by Seymour Cray. The
CDC 7600, with its pipelined functional units, is considered to be the first vector processor and was
capable of executing at ten Mflops. The IBM 360/91, released during the same period, was roughly twice
as fast as the CDC 6600.
Early in this third generation, Cambridge University and the University of London cooperated in the
development of CPL (Combined Programming Language, 1963). CPL was, according to its authors, an
attempt to capture only the important features of the complicated and sophisticated ALGOL. However, like
ALGOL, CPL was large, with many features that were hard to learn. In an attempt at further simplification,
Martin Richards of Cambridge developed a subset of CPL called BCPL (Basic Combined Programming
Language, 1967). In 1970 Ken Thompson of Bell Labs developed yet another simplification of CPL, called
simply B, in connection with an early implementation of the UNIX operating system.

Fourth Generation (1972-1984)

Large scale integration (LSI - 1000 devices per chip) and very large scale integration (VLSI - 100,000
devices per chip) were used in the construction of the fourth generation computers. Whole processors
could now fit onto a single chip, and for simple systems the entire computer (processor, main memory, and
I/O controllers) could fit on one chip. Gate delays dropped to about 1ns per gate. Core memories were
replaced by semiconductor memories.

Machines with large main memories, such as the CRAY 2, began to replace the older high-speed vector
processors such as the CRAY 1, CRAY X-MP and CYBER. In 1972, Dennis Ritchie developed the C language
from the design of CPL and Thompson’s B. Thompson and Ritchie then used C to write a version of UNIX for the DEC
PDP-11. Other developments in software include very high level languages such as FP (functional
programming) and Prolog (programming in logic).

IBM worked with Microsoft during the 1980s to launch what we now call the PC (Personal Computer) era.
The IBM PC was introduced in October 1981 and ran an operating system called Microsoft Disk Operating
System (MS-DOS) 1.0. Development of MS-DOS began in October 1980, when IBM began searching the
market for an operating system for the then-proposed IBM PC; major contributors were Bill Gates, Paul
Allen and Tim Paterson. In 1983 Microsoft Windows was announced, and it has since seen numerous
improvements and revisions.

Fifth Generation (1984-1990)


This generation brought about the introduction of machines with hundreds of processors that could all be
working on different parts of a single program. The scale of integration in semiconductors continued at a
great pace; by 1990 it was possible to build chips with a million components, and semiconductor
memories became standard on all computers. Computer networks and single-user workstations also became
popular.

Parallel processing started in this generation. The Sequent Balance 8000 connected up to 20 processors to
a single shared memory module, though each processor had its own local cache. The machine was designed
to compete with the DEC VAX-780 as a general-purpose Unix system, with each processor working on a
different user’s job. However, Sequent also provided a library of subroutines that allowed programmers to
write programs using more than one processor, and the machine was widely used to explore parallel
algorithms and programming techniques. The Intel iPSC-1, also known as ‘the hypercube’, connected
each processor to its own memory and used a network interface to connect processors. This distributed-
memory architecture meant memory was no longer a bottleneck, and large systems with more processors (as
many as 128) could be built. Also introduced were data-parallel, or SIMD, machines, in which several
thousand very simple processors work under the direction of a single control unit.
Both wide area network (WAN) and local area network (LAN) technology developed rapidly.
Sixth Generation (1990 -)
Most of the developments in computer systems since 1990 have not been fundamental changes but have
been gradual improvements over established systems. This generation brought about gains in parallel
computing in both the hardware and in improved understanding of how to develop algorithms to exploit
parallel architectures. Workstation technology continued to improve, with processor designs now using a
combination of RISC, pipelining, and parallel processing.
Wide area networks, network bandwidth, speed of operation and networking capabilities have kept
developing tremendously. Personal computers (PCs) now operate with gigahertz processors,
multi-gigabyte disks, hundreds of megabytes of RAM, colour printers, high-resolution graphic monitors, stereo
sound cards and graphical user interfaces. Thousands of software products (operating systems and application
software) exist today, and Microsoft Inc. has been a major contributor.
Finally, this generation has brought about microcontroller technology. Microcontrollers are ‘embedded’
inside other devices (often consumer products) so that they can control the features or actions of the
product. They work as small computers inside devices and now serve as essential components in most
machines.

1.4 Block Diagram of Computer

Figure 1.1 : Block Diagram of Computer

A computer can process data, pictures, sound and graphics, and can solve highly complicated problems
quickly and accurately.

Input Unit:
Computers need to receive data and instructions in order to solve any problem, so we must input the data
and instructions into the computer. The input unit consists of one or more input devices. The keyboard
is one of the most commonly used input devices. Other commonly used input devices are the mouse,
floppy disk drive, magnetic tape, etc. All input devices perform the following functions.
 Accept data and instructions from the outside world.
 Convert them to a form that the computer can understand.
 Supply the converted data to the computer system for further processing.
Storage Unit:
The storage unit of the computer holds data and instructions that are entered through the input unit, before
they are processed. It preserves the intermediate and final results before these are sent to the output devices.
It also saves the data for later use. The various storage devices of a computer system are divided into
two categories.
1. Primary Storage: Stores and provides data very fast. This memory is generally used to hold the
program currently being executed in the computer, the data being received from the input unit, and
the intermediate and final results of the program. Primary memory is temporary in nature: the data
is lost when the computer is switched off. In order to store data permanently, it has to be
transferred to secondary memory. A very small portion of primary storage is permanent in nature,
e.g. ROM, which retains its data even when the power is off.
The cost of primary storage is higher than that of secondary storage, so most computers have
limited primary storage capacity.
2. Secondary Storage: Secondary storage is used like an archive. It stores programs,
documents, databases, etc. The programs that you run on the computer are first transferred to
primary memory before they are actually run. Whenever results are saved, they are stored again in
secondary memory. Secondary memory is slower and cheaper than primary memory.
Some commonly used secondary memory devices are the hard disk, CD, etc.
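The interplay described above, where data in volatile primary storage must be copied to secondary storage to survive a power-off, can be sketched in Python (the file name and contents are arbitrary examples):

```python
import os
import tempfile

# Primary storage: fast but volatile; its contents vanish when the
# power is off (here, when the process ends).
primary = {"document": "Unit 1 notes"}

# Saving: transfer the data from primary to secondary storage (a disk file).
path = os.path.join(tempfile.gettempdir(), "notes.txt")
with open(path, "w") as f:
    f.write(primary["document"])

# Loading: a later run transfers the data back into primary storage
# before working on it.
with open(path) as f:
    restored = f.read()
print(restored)   # Unit 1 notes
os.remove(path)
```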

Memory Size:
All digital computers use the binary system, i.e. 0’s and 1’s. Each character or number is represented by
an 8-bit code.

 The set of 8 bits is called a byte.
 A character occupies 1 byte of space.
 A numeric value occupies 2 bytes of space.
 The byte is the unit of space occupied in memory.

The size of primary storage is specified in KB (kilobytes) or MB (megabytes). One KB is equal to 1024
bytes and one MB is equal to 1024 KB. The size of primary storage in a typical PC usually starts at
16 MB. PCs having 32 MB, 48 MB, 128 MB or 256 MB of memory are quite common.
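The unit arithmetic above can be checked directly; a small Python sketch:

```python
KB = 1024            # one kilobyte = 1024 bytes
MB = 1024 * KB       # one megabyte = 1024 KB

# A character occupies one byte, so a 5-character word needs 5 bytes.
word = "HELLO"
print(len(word.encode("ascii")))   # 5

# A typical PC's 16 MB of primary storage, expressed in bytes.
print(16 * MB)                     # 16777216
```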

Output Unit:

The output unit of a computer provides the information and results of a computation to the outside world.
Printers and the Visual Display Unit (VDU) are the most commonly used output devices. Other commonly
used output devices are the floppy disk drive, hard disk drive, and magnetic tape drive.
Arithmetic Logical Unit:
All calculations are performed in the Arithmetic Logic Unit (ALU) of the computer. It also performs
comparisons and makes decisions. The ALU can perform basic operations such as addition, subtraction,
multiplication and division, and logical operations such as >, < and =. Whenever calculations are required,
the control unit transfers the data from the storage unit to the ALU. Once the computations are done, the
results are transferred back to the storage unit by the control unit and then sent to the output unit for display.
Control Unit:
It controls all other units in the computer. The control unit instructs the input unit where to store the data
after receiving it from the user. It controls the flow of data and instructions from the storage unit to the ALU,
and the flow of results from the ALU back to the storage unit. The control unit is generally referred to as the
central nervous system of the computer, controlling and synchronizing its working.
Central Processing Unit:
The control unit and ALU of the computer are together known as the Central Processing Unit (CPU). The
CPU, like a brain, performs the following functions:
• It performs all calculations.
• It takes all decisions.
• It controls all units of the computer.
A PC may have a CPU such as the Intel 8088, 80286, 80386, 80486, Celeron, Pentium, Pentium Pro,
Pentium II, Pentium III, Pentium IV or Dual Core, or an AMD processor.
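The division of labour described above, with the control unit moving data between the storage unit and the ALU, can be illustrated with a toy Python sketch. This is not any real CPU's instruction set; the operation names and memory labels are hypothetical, chosen only for illustration:

```python
# Storage unit: holds data before, during and after processing.
memory = {"A": 7, "B": 5, "RESULT": None}

def alu(op, x, y):
    # ALU: performs arithmetic and logical (comparison) operations.
    ops = {"add": x + y, "sub": x - y, "gt": x > y}
    return ops[op]

def control_unit(program):
    # Control unit: steps through each instruction, moves operands from
    # storage to the ALU, and transfers the result back to storage.
    for op, src1, src2, dest in program:
        memory[dest] = alu(op, memory[src1], memory[src2])

control_unit([("add", "A", "B", "RESULT")])
print(memory["RESULT"])   # 12
```

The point of the sketch is the separation of roles: the ALU only computes, the storage unit only holds values, and the control unit alone decides what moves where.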

1.5 Area of Application


Digital Media
Digital media, including both graphics and sound, have become central both to our culture and our science.
There are (at least) three general areas that might serve as a focus for certificate students interested in these
computer applications.
Graphics
Courses for a graphics media track might include COS 426 (Computer Graphics) or COS 429 (Computer
Vision), plus COS 436 (Human Computer Interface Technology) or COS 479 (Pervasive Information
Systems). The choices are wide and will vary with the student. Those interested in a graphics track for the
applications certificate should see Prof. Adam Finkelstein.
Music
The collaboration between Music and Computer Science at Princeton has a long and rich history. Specific
cross-listed COS/MUS courses include MUS/COS 314 (Introduction to Computer Music) and COS
325/MUS 315 (Transforming Reality by Computer). A music track for the certificate might include one of
these two, plus COS 436 (Human Computer Interface Technology) or COS 479 (Pervasive Information
Systems). Again, a wide range of choices is possible.
Policy and Intellectual Property
The legal and political aspects of digital media are becoming increasingly important in our society. A track
for the certificate focused on this area might typically include COS 491 (Information Technology and
The Law), plus any one of many other possible courses, depending on the student’s particular interests.
1.6 Summary
The development of computer systems has been twofold: technological advancement on one side, and
ever-wider use on the other. Expectations and demand keep increasing in business, and users are becoming
more mature in their use of computers, which in turn drives demand further.
1.7 Self Assessment Questions
1. Write the differences between fifth- and sixth-generation computer systems.
2. Explain the drawbacks of the first generation.
3. Describe the application areas of computers in education.
4. List input devices.
5. Differentiate between primary and secondary storage.
6. Write an essay on the “Evolution of the Computer”.
7. Explain the block diagram of the computer in detail, and also explain the classification of computers
in detail.
1.8 Reference Books
- Henry C. Lucas, Jr. (2001); 'Information Technology'; Tata McGraw-Hill Publishing Company
Limited, New Delhi.
- D. P. Sharma (2008); 'Information Technology'; College Book Centre, Jaipur.
- Jain, Jalan, Ranga, Chouhan (2009); 'Information Technology'; Ramesh Book Depot, Jaipur.
- P. K. Sinha, Priti Sinha (2007); 'Computer Fundamentals'; BPB Publication, New Delhi.
