INTRODUCTION TO COMPUTER SCIENCE Part 1
COMPUTER: A computer is basically defined as a tool or machine used for processing data to
give required information. It is capable of:
accepting data (input),
storing and processing the data, and
giving out the result (output) on the screen or the Visual Display Unit (VDU).
Data: The term data refers to facts about a person, object or place, e.g. name, age, complexion,
school, class, height etc.
Information: This is referred to as processed data or a meaningful statement, e.g. net pay of
workers, examination results of students, list of successful candidates in an examination or
interview etc.
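The difference between data and information can be illustrated with a short Python sketch (the worker names and pay figures below are invented purely for illustration):

```python
# Raw data: isolated facts about workers; the figures alone carry little meaning.
workers = [
    {"name": "Ade", "gross_pay": 50000, "deductions": 8000},
    {"name": "Bola", "gross_pay": 42000, "deductions": 5500},
]

# Processing turns the raw data into information: the net pay of each worker.
def net_pay(worker):
    return worker["gross_pay"] - worker["deductions"]

for w in workers:
    print(f"{w['name']}: net pay = {net_pay(w)}")
```

The raw figures are the data; the computed net pay list is the information.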
The following are the three major methods that have been widely used for data processing over
the years:
The Manual Method
The manual method of data processing involves the use of chalk, wall, pen, pencil and the like.
These devices, machines or tools facilitate human efforts in recording, classifying, manipulating,
sorting and presenting data or information. The manual data processing operations entail
considerable manual efforts. Thus, the manual method is cumbersome, tiresome, boring,
frustrating and time consuming. Furthermore, the processing of data by the manual method is
likely to be affected by human errors. When there are errors, then the reliability, accuracy,
neatness, tidiness, and validity of the data would be in doubt. The manual method does not allow
for the processing of large volumes of data on a regular and timely basis.
The Mechanical Method
The mechanical method of data processing involves the use of machines such as the typewriter,
roneo machines, adding machines and the like. These machines facilitate human efforts in
recording, classifying, manipulating, sorting and presenting data or information. The mechanical
operations are basically routine in nature. There is virtually no creative thinking. Mechanical
operations are noisy, hazardous, error prone and untidy. The mechanical method does not allow
for the processing of large volumes of data continuously and timely.
The computer method of carrying out data processing has the following major features:
There is a store where data and instructions can be stored temporarily and permanently.
Output reports are usually very neat and can be produced in various forms such as
graphs, diagrams and pictures.
Characteristics of a Computer
Speed: The computer can manipulate large volumes of data at incredible speed, and its response
time can be very fast.
Accuracy: Its accuracy is very high and its consistency can be relied upon. Errors committed
in computing are mostly due to human rather than technological weakness.
Storage: It has both internal and external storage facilities for holding data and instructions.
This capacity varies from one machine to the other.
Automatic: Once a program is in the computer’s memory, it runs automatically each time it
is invoked, with little or no further instruction from the user.
Reliability: Being a machine, a computer does not suffer human traits of tiredness and lack of
concentration. It will perform the last job with the same speed and accuracy as the first job every
time even if ten million jobs are involved.
Flexibility: It can perform any type of task once it can be reduced to logical steps. Modern
computers can be used to perform a variety of functions like on-line processing, multi-
programming, real time processing etc.
The computing system is made up of the computer system, the user and the environment in
which the computer is operated.
The Computer System
The Hardware
The computer hardware comprises the input unit, the processing unit and the output unit.
The input unit comprises those media through which data is fed into the computer. Examples
include the keyboard, mouse, joystick, trackball and scanner.
The processing unit is made up of the Arithmetic and Logic Unit (ALU), the control unit and the
main memory. The main memory also known as the primary memory is made up of the Read
Only Memory (ROM) and the Random Access Memory (RAM).
The output unit is made up of those media through which data, instructions for processing the
data (program), and the result of the processing operation are displayed for the user to see.
Examples of the output unit are the monitor (Visual Display Unit) and the printer.
Software
Computer software is the series of instructions that enable the computer to perform a task or
group of tasks. A program is made up of a group of instructions to perform a task. Series of
programs linked together make up software. Computer programs could be categorised into
system software, utility software, and application programs.
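The idea that a program is a group of instructions for one task can be made concrete with a minimal Python sketch (the task chosen, averaging a list of scores, is an illustrative example, not one named in the text):

```python
# A "program": a group of instructions that together perform one task --
# here, computing the average score of a class.
def average_score(scores):
    total = 0                       # instruction: initialise an accumulator
    for s in scores:                # instruction: visit each data item
        total += s                  # instruction: add it to the running total
    return total / len(scores)      # instruction: produce the result

print(average_score([65, 72, 80]))
```

A collection of such programs, linked together to serve a common purpose, is what the text calls software.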
Computer Users
Computer users are the different categories of personnel that operate the computer. We have
expert users and casual users. The expert users could be further categorised into computer
engineers, computer programmers and computer operators.
The computing environment includes the building housing the other elements of the computing
system namely the computer and the users, the furniture, auxiliary devices such as the voltage
stabiliser, the Uninterruptible Power Supply System (UPS), the fans, the air conditioners etc. The
schematic diagram of the computing system is presented in Fig. 2a. to Fig. 2d.
A HISTORICAL OVERVIEW OF THE COMPUTER
A complete history of computing would include a multitude of diverse devices such as the
ancient Chinese abacus, the Jacquard loom (1805) and Charles Babbage’s “analytical engine”
(1834). It would also include a discussion of mechanical, analog and digital computing
architectures. As late as the 1960s, mechanical devices, such as the Marchant calculator, still
found widespread application in science and engineering. During the early days of electronic
computing devices, there was much discussion about the relative merits of analog vs. digital
computers. In fact, as late as the 1960s, analog computers were routinely used to solve systems
of finite difference equations arising in oil reservoir modeling. In the end, digital computing
devices proved to have the power, economics and scalability necessary to deal with large scale
computations. Digital computers now dominate the computing world in all areas ranging from
the hand calculator to the supercomputer and are pervasive throughout society. Therefore, this
brief sketch of the development of scientific computing is limited to the area of digital, electronic
computers.
The evolution of digital computing is often divided into generations. Each generation is
characterised by dramatic improvements over the previous generation in the technology used to
build computers, the internal organisation of computer systems, and programming languages.
Although not usually associated with computer generations, there has been a steady
improvement in algorithms, including algorithms used in computational science. The following
history has been organised using these widely recognized generations as mileposts.
Three machines have been promoted at various times as the first electronic computers. These
machines used electronic switches, in the form of vacuum tubes, instead of electromechanical
relays. In principle the electronic switches were more reliable, since they would have no moving
parts that would wear out, but the technology was still new at that time and the tubes were
comparable to relays in reliability. Electronic components had one major benefit, however: they
could “open” and “close” about 1,000 times faster than mechanical switches. The earliest attempt
to build an electronic computer was by J. V. Atanasoff, a professor of physics and mathematics
at Iowa State, in 1937. Atanasoff set out to build a machine that would help his graduate students
solve systems of partial differential equations. By 1941, he and graduate student Clifford Berry
had succeeded in building a machine that could solve 29 simultaneous equations with 29
unknowns. However, the machine was not programmable, and was more of an electronic
calculator. A second early electronic machine was Colossus, designed by Alan Turing for the
British military in 1943. This machine played an important role in breaking codes used by the
German army in World War II. Turing’s main contribution to the field of computer science was
the idea of the Turing Machine, a mathematical formalism widely used in the study of
computable functions. The first general-purpose programmable electronic computer was the
Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V.
Mauchly at the University of Pennsylvania. Work began in 1943, funded by the Army Ordnance
Department, which needed a way to compute ballistics during World War II. The machine
wasn’t completed until 1945, but then it was used extensively for calculations during the design
of the hydrogen bomb. By the time it was decommissioned in 1955 it had been used for research
on the design of wind tunnels, random number generators, and weather prediction. Eckert,
Mauchly, and John Von Neumann, a consultant to the ENIAC project, began work on a new
machine before ENIAC was finished. The main contribution of EDVAC, their new project, was
the notion of a stored program.
Software technology during this period was very primitive. The first programs were written out
in machine code, i.e. programmers directly wrote down the numbers that corresponded to the
instructions they wanted to store in memory. By the 1950s programmers were using a symbolic
notation, known as assembly language, then hand-translating the symbolic notation into machine
code. Later programs known as assemblers performed the translation task. As primitive as they
were, these first electronic machines were quite useful in applied science and engineering.
Atanasoff estimated that it would take eight hours to solve a set of equations with eight
unknowns using a Marchant calculator, and 381 hours to solve 29 equations for 29 unknowns.
The Atanasoff-Berry computer was able to complete the task in under an hour. The first problem
run on the ENIAC, a numerical simulation used in the design of the hydrogen bomb, required 20
seconds, as opposed to forty hours using mechanical calculators. Eckert and Mauchly later
developed what was arguably the first commercially successful computer, the UNIVAC; in 1952,
45 minutes after the polls closed and with 7% of the vote counted, UNIVAC predicted
Eisenhower would defeat Stevenson with 438 electoral votes (he ended up with 442).
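What an assembler does, translating symbolic notation into the numbers a first-generation programmer once wrote by hand, can be sketched with a toy example (the mnemonics and opcodes below are invented for illustration and belong to no real processor):

```python
# A toy assembler for a made-up instruction set. Real assemblers map the
# symbolic mnemonics of an actual processor to its numeric machine codes.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Translate symbolic assembly lines into a list of machine-code numbers."""
    code = []
    for line in lines:
        parts = line.split()
        code.append(OPCODES[parts[0]])           # mnemonic -> opcode
        code.extend(int(p) for p in parts[1:])   # operands copied as numbers
    return code

program = ["LOAD 10", "ADD 11", "STORE 12", "HALT"]
print(assemble(program))   # the machine code a 1940s programmer wrote by hand
```

Before assemblers existed, the programmer had to produce the numeric output list directly.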
The second generation saw several important developments at all levels of computer system
design, from the technology used to build the basic circuits to the programming languages used
to write scientific applications. Electronic switches in this era were based on discrete diode and
transistor technology with a switching time of approximately 0.3 microseconds. The first
machines to be built with this technology include TRADIC at Bell Laboratories in 1954 and TX-
0 at MIT’s Lincoln Laboratory. Memory technology was based on magnetic cores which could
be accessed in random order, as opposed to mercury delay lines, in which data was stored as an
acoustic wave that passed sequentially through the medium and could be accessed only when the
data moved by the I/O interface. Important innovations in computer architecture included index
registers for controlling loops and floating point units for calculations based on real numbers.
Prior to this, accessing successive elements in an array was quite tedious and often involved
writing self-modifying codes (programs which modified themselves as they ran; at the time
viewed as a powerful application of the principle that programs and data were fundamentally the
same, this practice is now frowned upon as extremely hard to debug and is impossible in most
high level languages). Floating point operations were performed by libraries of software routines
in early computers, but were done in hardware in second generation machines. During this
second generation many high level programming languages were introduced, including
FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of
this era include the IBM 704 and 7094. The latter introduced I/O processors for better throughput
between I/O devices and main memory. The second generation also saw the first two
supercomputers designed specifically for numeric processing in scientific applications. The term
“supercomputer” is generally reserved for a machine that is an order of magnitude more
powerful than other machines of its era. Two machines of the 1950s deserve this title. The
Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early
examples of machines that overlapped memory operations with processor operations and had
primitive forms of parallel processing.
The third generation brought huge gains in computational power. Innovations in this era include
the use of integrated circuits, or ICs (semiconductor devices with several transistors built into
one physical component), semiconductor memories starting to be used instead of magnetic cores,
microprogramming as a technique for efficiently designing complex processors, the coming of
age of pipelining and other forms of parallel processing, and the introduction of operating
systems and time-sharing. The first ICs were based on small-scale integration (SSI) circuits,
which had around 10 devices per circuit (or “chip”), and evolved to the use of medium-scale
integration (MSI) circuits, which had up to 100 devices per chip. Multilayered printed circuits
were developed and core memory was replaced by faster, solid state memories. Computer
designers began to take advantage of parallelism by using multiple functional units, overlapping
CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and
the data stream. In 1964, Seymour Cray developed the CDC 6600, which was the first
architecture to use functional parallelism. By using 10 separate functional units that could
operate simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a
computation rate of 1 million floating point operations per second (1 Mflops). Five years later
CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its pipelined
functional units, is considered to be the first vector processor and was capable of executing at 10
Mflops. The IBM 360/91, released during the same period, was roughly twice as fast as the CDC
6600. It employed instruction look ahead, separate floating point and integer functional units and
pipelined instruction stream. The IBM 360-195 was comparable to the CDC 7600, deriving
much of its performance from a very fast cache memory.
The next generation of computer systems saw the use of large scale integration (LSI – 1,000
devices per chip) and very large scale integration (VLSI – 100,000 devices per chip) in the
construction of computing elements. At this scale entire processors could fit onto a single chip,
and for simple systems the entire computer (processor, main memory, and I/O controllers) could
fit on one chip. Gate delays dropped to about 1 ns per gate. Semiconductor memories replaced core
memories as the main memory in most systems; until this time the use of semiconductor memory
in most systems was limited to registers and cache. During this period, high speed vector
processors, such as the CRAY 1, CRAY X-MP and CYBER 205 dominated the high
performance computing scene. Computers with large main memory, such as the CRAY 2, began
to emerge. A variety of parallel architectures began to appear; however, during this period the
parallel computing efforts were of a mostly experimental nature and most computational science
was carried out on vector processors. Microcomputers and workstations were introduced and saw
wide use as alternatives to time-shared mainframe computers. Developments in software include
very high level languages such as FP (functional programming) and Prolog (programming in
logic). These languages tend to use a declarative programming style as opposed to the imperative
style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical
specification of what should be computed, leaving many details of how it should be computed to
the compiler and/or runtime system. These languages are not yet in wide use, but are very
promising as notations for programs that will run on massively parallel computers (systems with
over 1,000 processors). Compilers for established languages started to use sophisticated
optimisation techniques to improve codes, and compilers for vector processors were able to
vectorise simple loops (turn loops into single instructions that would initiate an operation over an
entire vector).
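The contrast between the imperative and declarative styles can be sketched in Python (the dot-product task is chosen only as an illustration):

```python
# Imperative style: spell out *how* to compute, step by step.
def dot_imperative(xs, ys):
    total = 0.0
    for i in range(len(xs)):        # explicit loop over indices
        total += xs[i] * ys[i]
    return total

# Declarative style: state *what* is wanted and leave the mechanics to the
# language runtime -- much as a vectorising compiler turns a simple loop
# into a single vector operation.
def dot_declarative(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

xs, ys = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(dot_imperative(xs, ys), dot_declarative(xs, ys))
```

Both functions compute the same result; only the amount of "how" the programmer must specify differs.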
The development of the next generation of computer systems is characterised mainly by the
acceptance of parallel processing. Until this time, parallelism was limited to pipelining and
vector processing, or at most to a few processors sharing jobs. The fifth generation saw the
introduction of machines with hundreds of processors that could all be working on different parts
of a single program. The scale of integration in semiconductors continued at an incredible pace,
so that by 1990 it was possible to build chips with a million components – and semiconductor
memories became standard on all computers. Other new developments were the widespread use
of computer networks and the increasing use of single-user workstations. Prior to 1985, large
scale parallel processing was viewed as a research goal, but two systems introduced around this
time are typical of the first commercial products to be based on parallel processing. The Sequent
Balance 8000 connected up to 20 processors to a single shared memory module (but each
processor had its own local cache). The machine was designed to compete with the DEC VAX-
780 as a general purpose Unix system, with each processor working on a different user’s job.
However, Sequent provided a library of subroutines that would allow programmers to write
programs that would use more than one processor, and the machine was widely used to explore
parallel algorithms and programming techniques. The Intel iPSC-1, nicknamed “the hypercube”,
took a different approach. Instead of using one memory module, Intel connected each processor
to its own memory and used a network interface to connect processors. This distributed memory
architecture meant memory was no longer a bottleneck and large systems (using more
processors) could be built. The largest iPSC-1 had 128 processors. Toward the end of this period,
a third type of parallel processor was introduced to the market.
CLASSIFICATION OF COMPUTERS
There are basically three types of electronic computers. These are the Digital, Analog and Hybrid
computers.
The Digital Computer
This represents its variables in the form of digits. The data it deals with, whether representing
numbers, letters or other symbols, are converted into binary form on input to the computer. The
data undergoes processing, after which the binary digits are converted back to alphanumeric
form for output for human use. Because business applications like inventory
control, invoicing and payroll deal with discrete values (separate, disunited, discontinuous), they
are best processed with digital computers. As a result of this, digital computers are mostly used
in commercial and business places today.
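The conversion to binary on input and back to alphanumeric form on output can be demonstrated with a few lines of Python (the sample text "Hi" and the 8-bit width are illustrative choices):

```python
# On input, a digital computer represents each character as binary digits.
text = "Hi"
bits = [format(ord(ch), "08b") for ch in text]   # each character -> 8 binary digits
print(bits)

# On output, the binary digits are converted back to alphanumeric form.
decoded = "".join(chr(int(b, 2)) for b in bits)
print(decoded)
```

Internally the machine works only with the binary form; the alphanumeric text exists for the human user.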
The Analog Computer
It measures rather than counts. This type of computer sets up a model of a system. The common
type represents its variables in terms of electrical voltage and sets up circuit analog to the
equation connecting the variables. The answer can be either by using a voltmeter to read the
value of the variable required, or by feeding the voltage into a plotting device. Analog computers
hold data in the form of physical variables rather than numerical quantities. In theory, analog
computers give an exact answer because the answer has not been approximated to the nearest
digit. In practice, however, when we read the answer with a voltmeter, the accuracy
obtained is less than the exact value the machine holds. The analog computer is
almost never used in business systems. It is used by scientists and engineers to solve systems of
partial differential equations. It is also used in controlling and monitoring of systems in such
areas as hydrodynamics and rocketry in production. There are two useful properties of this
computer once it is programmed:
It is simple to change the value of a constant or coefficient and study the effect of such
changes.
It is possible to link certain variables to a time pulse to study changes with time as a variable,
and chart the result on an X-Y plotter.
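The two properties above can be mimicked digitally with a short numerical sketch: solving a simple equation while varying a coefficient, with time as the independent variable (the equation dy/dt = -k*y, the step size and the coefficient values are all arbitrary illustration choices):

```python
# A digital sketch of an analog-computer study: solve dy/dt = -k*y over time
# and observe how changing the coefficient k changes the outcome.
def simulate(k, y0=1.0, dt=0.001, t_end=1.0):
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-k * y)   # Euler step for dy/dt = -k*y
        t += dt
    return y

for k in (0.5, 1.0, 2.0):    # property 1: vary a coefficient, study the effect
    print(k, simulate(k))    # larger k -> faster decay toward zero
```

Property 2 corresponds to the `t` variable driving the solution forward; on a real analog machine both happen continuously rather than in discrete steps.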
In some cases, the computer user may wish to obtain the output from an analog computer as
processed by a digital computer or vice versa. To achieve this, he sets up a hybrid machine where
the two are connected and the analog computer may be regarded as a peripheral of the digital
computer. In such a situation, a hybrid system attempts to gain the advantage of both the digital
and the analog elements in the same machine. This kind of machine is usually a special-purpose
device which is built for a specific task. It needs a conversion element which accepts analog
inputs and outputs digital values. Such converters are called digitisers. There is a need for a
converter from digital to analog also. It has the advantage of giving real-time response on a
continuous basis. Complex calculations can be dealt with by the digital elements, thereby
requiring a large memory, and giving accurate results after programming. They are mainly used
in aerospace and process control applications.
Classification by Purpose
Depending on their flexibility in operation, computers are classified as either special purpose or
general purpose.
Special-Purpose Computers
A special purpose computer is one that is designed to solve a restricted class of problems. Such
computers may even be designed and built to handle only one job. In such machines, the steps or
operations that the computer follows may be built into the hardware. Most of the computers used
for military purposes fall into this class. Other examples of special purpose computers include:
Computers used as robots in factories like vehicle assembly plants and glass industries.
Special-purpose computers are usually very efficient for the tasks for which they are specially
designed. They are very much less complex than the general-purpose computers. The simplicity
of the circuiting stems from the fact that provision is made only for limited facilities. They are
very much cheaper than the general-purpose type since they involve fewer components and are
less complex.
General-Purpose Computers
A general-purpose computer is one designed to solve a wide range of different problems. In a business environment, for example, it can handle tasks such as:
Payroll
Banking
Billing
Sales analysis
Cost accounting
Manufacturing scheduling
Inventory control
General-purpose computers are more flexible than special purpose computers. Thus, the
former can handle a wide spectrum of problems.
They are less efficient than the special-purpose computers because provision has to be made
for a wide range of facilities rather than for one restricted class of tasks.
Classification by Capacity
In the past, the capacity of computers was measured in terms of physical size. Today, however,
physical size is not a good measure of capacity because modern technology has made it possible
to achieve compactness.
A better measure of capacity today is the volume of work that a computer can handle. The
volume of work that a given computer handles is closely tied to the cost and to the memory size
of the computer. Therefore, most authorities today accept rental price as the standard for ranking
computers. Here, both memory size and cost shall be used to rank (classify) computers into three
main categories as follows:
Microcomputers
Medium/mini/small computers
Large computer/mainframes.
Microcomputers
Microcomputers, also known as single board computers, are the cheapest class of computers. In
the microcomputer, we do not have a Central Processing Unit (CPU) as we have in the larger
computers. Rather we have a microprocessor chip as the main data processing unit. They are the
cheapest and smallest, and can operate under normal office conditions. Examples are machines
from IBM, Apple, Compaq, Hewlett-Packard (HP), Dell and Toshiba.
Normally, personal computers are placed on the desk; hence they are referred to as desktop
personal computers. Still other types are available under the categories of personal computers.
They are:
Laptop Computers: These are small size types that are battery-operated. The screen is used to
cover the system while the keyboard is installed flat on the system unit. They could be carried
about like a box when closed after operation and can be operated in vehicles while on a journey.
Notebook Computers: These are like laptop computers but smaller in size. Though small, the
notebook computer comprises all the components of a full system.
Palmtop Computers: The palmtop computer is far smaller in size. All the components are
complete as in any of the above, but it is made smaller so that it can be held on the palm.
The personal computer offers the following advantages:
It can be used to produce documents like memos, reports, letters and briefs.
It can assist in searching for specific information from lists or from reports.
It can attend to several users at the same time and can therefore process several jobs at
once.
It is possible to network personal computers, that is, linking of two or more computers.
However, the personal computer also has some disadvantages:
With inventions and innovations every day, the personal computer runs the risk of becoming
obsolete.
Some computers cannot function properly without the aid of a cooling system, e.g. air
conditioners or fans in some locations.
Mini Computers
Mini computers have memory capacity in the range of 128-256 Kbytes. They are not
expensive, but are reliable and smaller in size compared to mainframes. They were first
introduced in 1965, when DEC (Digital Equipment Corporation) built the PDP-8. Other mini
computers include the WANG VS.
Mainframe Computers
The mainframe computers, often called number crunchers, have very large memory capacities
and are very expensive. They can execute up to 100 MIPS (Million Instructions Per Second).
They are large systems and are used by many people for a variety of purposes.