CSC 1201 - Introduction To Computer Science-1
1 INTRODUCTION ..........................................................................................................2
2 HISTORY OF COMPUTERS.......................................................................................2
2.1 EARLY HISTORY .................................................................................................2
2.2 DEVELOPMENT IN THE 20TH CENTURY .........................................................3
2.2.1 Early Electronic Computers...................................................................3
2.2.2 EDVAC, EDSAC, ENIAC and UNIVAC.............................................4
2.2.3 The Transistor and Integrated Circuits ..................................................4
2.2.4 Future Trends.........................................................................................5
3 BASIC IDEAS AND TERMS........................................................................................5
4 COMPUTER ORGANISATION ..................................................................................6
4.1 INPUT UNIT.........................................................................................................6
4.2 OUTPUT UNIT.....................................................................................................6
4.3 MEMORY UNIT ...................................................................................................6
4.4 ARITHMETIC & LOGIC UNIT (ALU) ................................................................6
4.5 CENTRAL PROCESSING UNIT (CPU)...............................................................7
4.6 SECONDARY STORAGE UNIT ...........................................................................7
5 CLASSIFICATION OF COMPUTERS.......................................................................7
5.1 DIGITAL, ANALOG and HYBRID COMPUTERS...............................................8
5.2 PROCESSING POWER and SIZE........................................................................8
5.3 GENERATIONS OF COMPUTERS .....................................................................8
6 HARDWARE and SOFTWARE.................................................................................10
6.1 HARDWARE.......................................................................................................10
6.1.1 Peripherals ...........................................................................................10
6.2 SOFTWARE ........................................................................................................11
6.2.1 System Software (Operating System)..................................................11
6.2.2 Application Software...........................................................................11
6.2.3 Other Categories ..................................................................................11
7 PROBLEM SOLVING.................................................................................................11
7.1 SOLVING PROBLEMS WITH A COMPUTER ..................................................11
7.2 ALGORITHMS....................................................................................................12
7.2.1 Properties of Algorithms .....................................................................12
7.3 FLOWCHARTS...................................................................................................13
8 PROGRAMMING LANGUAGES .............................................................................13
8.1 MACHINE LANGUAGES...................................................................................14
8.2 ASSEMBLY (LOW-LEVEL) LANGUAGES ........................................................14
8.3 HIGH-LEVEL LANGUAGES .............................................................................14
9 REFERENCE................................................................................................................15
Compiled by
Mansur Babagana
Department of Mathematical Sciences
Bayero University, Kano
CSC 1201/2210: INTRODUCTION TO COMPUTER SCIENCE
1 INTRODUCTION
From the name "Computer Science", many people assume that computer science is
concerned with answering questions about what computers are, how they work and how they are used.
It is, but there is more to computer science than that. Computers are indeed some of the most
interesting and complex items of technology in everyday use, but they are only around in such
numbers because they are useful tools. Often, however, they are more trouble than they are worth.
When that happens it is usually largely the fault of those who designed and built the computer system.
Why is it their fault, you may ask?
A common reason is that those involved did not properly understand how to find out what
was really required and therefore did not know how to build a system that met the requirement.
Gaining the understanding necessary to carry out such tasks successfully is a goal of computer
science. Now the following intuitive definition of computer science can be given.
A more precise definition of what a computer is will be given later, but for now the following
will suffice.
2 HISTORY OF COMPUTERS
2.1 EARLY HISTORY
The history of computers begins about 2,000 years ago with the birth of the abacus, a wooden
rack holding two horizontal wires with beads strung on them. When these beads are moved around,
according to certain rules memorized by the user, all regular arithmetic problems can be done.
Calculating devices took a different turn when John Napier, a Scottish mathematician,
published his discovery of logarithms in 1614. As any person can attest, adding two 10-digit numbers
is much simpler than multiplying them together, and the transformation of a multiplication problem
into an addition problem is exactly what logarithms enable. This simplification is possible because of
the following logarithmic property: the logarithm of the product of two numbers is equal to the sum of
the logarithms of the numbers. By 1624 tables with 14 significant digits were available for the
logarithms of numbers from 1 to 20,000, and scientists quickly adopted the new labour-saving tool for
tedious astronomical calculations.
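This property can be checked numerically. A small sketch in Python (our choice of language for illustration; the course text itself does not use Python), showing that multiplying via logarithms agrees with direct multiplication:

```python
import math

a, b = 1234567890, 9876543210

# log10(a*b) = log10(a) + log10(b): the multiplication becomes an addition
log_sum = math.log10(a) + math.log10(b)
product_via_logs = 10 ** log_sum

# Compare with direct multiplication; floating point makes this approximate
assert abs(product_via_logs - a * b) / (a * b) < 1e-6
```

Tables of logarithms played the role of `math.log10` here: look up the two logarithms, add them, and then look up the antilogarithm of the sum.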
Most significant for the development of computing, the transformation of multiplication into
addition greatly simplified the possibility of mechanization. Analog calculating devices based on
Napier's logarithms—representing digital values with analogous physical lengths—soon appeared. In
1620 Edmund Gunter, an English mathematician who coined the terms cosine and cotangent, built a
device for performing navigational calculations: the Gunter Scale or, as navigators simply called it,
the Gunter. Around 1632 an English clergyman and mathematician named William Oughtred built the
first slide rule, drawing on Napier's ideas. That first slide rule was circular, but Oughtred also built the
first rectangular one in 1633.
The slide rule is a device consisting of graduated scales capable of relative movement, by means
of which simple calculations may be carried out mechanically. Typical slide rules contain scales for
multiplying, dividing, and extracting square roots, and some also contain scales for calculating
trigonometric functions and logarithms. The slide rule remained an essential tool in science and
engineering and was widely used in business and industry until it was superseded by the portable
electronic calculator late in the 20th century.
The logarithmic slide rule is a compact device for rapidly performing calculations with
limited accuracy. The invention of logarithms, and the computation and publication of tables of
logarithms, made it possible to effect multiplication and division by the simpler operations of addition
and subtraction. Napier's early appreciation of the importance of simplifying mathematical calculations
resulted in his invention of logarithms, and this invention made possible the slide rule.
The French philosopher, mathematician, and physicist Blaise Pascal is usually credited with
building the first digital computer in 1642. The machine added and subtracted numbers with dials and
was made to help his father, a tax collector. In 1671 the German mathematician Gottfried Wilhelm von
Leibniz invented a special gearing system, built in 1694, that enabled multiplication on a machine of
Pascal's type.
In the early 19th century, French inventor Joseph-Marie Jacquard devised a specialized type of
computer: a silk loom. Jacquard’s loom used punched cards to program patterns that helped the loom
create woven fabrics. Although Jacquard was rewarded and admired by French emperor Napoleon I
for his work, he fled for his life from the city of Lyon pursued by weavers who feared their jobs were
in jeopardy due to Jacquard’s invention. The loom prevailed, however: when Jacquard died, more
than 30,000 of his looms existed in Lyon. The looms are still used today, especially in the
manufacture of fine furniture fabrics.
Another early mechanical computer was the Difference Engine, designed in the early 1820s
by British mathematician and scientist Charles Babbage. Although never completed by Babbage, the
Difference Engine was intended as a machine with a 20-decimal capacity that could solve
mathematical problems. Babbage also made plans for another machine, the Analytical Engine,
considered the mechanical precursor of the modern computer. The Analytical Engine was designed to
perform all arithmetic operations efficiently; however, Babbage’s lack of political skills kept him
from obtaining the approval and funds to build it.
Augusta Ada Byron, countess of Lovelace, was a personal friend and student of Babbage. She
was the daughter of the famous poet Lord Byron and one of only a few women mathematicians of her
time. She prepared extensive notes concerning Babbage's ideas and the Analytical Engine. Lovelace's
conceptual programs for the machine led to the naming of a programming language (Ada) in her
honour. Although the Analytical Engine was never built, its key concepts, such as the capacity to store
instructions, the use of punched cards as a primitive memory, and the ability to print, can be found in
modern computers.
Harvard professor Howard Aiken designed a computer that achieved reliability through strict
separation of program instructions from data. His computer had to read instructions from punched
cards, which could be stored away from the computer. He also urged the U.S.-based National Bureau
of Standards not to support the development of computers, insisting that there would never be a need
for more than five or six of them nationwide.
The first commercially successful personal computer, the Altair 8800, was introduced in 1975
by Micro Instrumentation and Telemetry Systems (MITS). The Altair used an 8-bit Intel 8080
microprocessor, had 256 bytes of RAM, received input through switches on the front panel, and
displayed output on rows of light-emitting diodes (LEDs). Refinements in the PC continued with the
inclusion of video displays, better storage devices, and CPUs with more computational abilities.
Graphical User Interfaces (GUIs) were first designed by the Xerox Corporation, then later used
successfully by Apple Computer, Inc. Today the development of sophisticated operating systems such
as Windows, the Mac OS, and Linux enable computer users to run programs and manipulate data in
ways that were unimaginable in the mid-20th century.
3 BASIC IDEAS AND TERMS

[Diagram: data, as input, flows into a process stage and emerges as output; the process stage is
connected to storage.]
• Program: A program is a set of instructions that tells the computer exactly how to manipulate the
input data and produce the desired output.
• Information: A distinction is sometimes made between data and information. When data is
converted into a more useful or intelligible form then it is said to be processed into
information.
• Hardware: is the general term used to describe all the electronic and mechanical elements of the
computer, together with those devices used with the computer.
• Software: is the general term used to describe all the various programs that may be used on a
computer system together with their associated documentation.
• Bit (Binary Digit): The smallest unit of information storable in a computer, expressed as 0 or 1.
• Byte: A set of 8 adjacent bits, which represents the unit of computer memory needed to store a
single character.
• Memory: A physical device used to store information such as data or programs on a
temporary or permanent basis for use in a computer.
• Semi-conductor Memory: Any of a class of computer memory devices consisting of one or more
integrated circuits.
• Random Access Memory (RAM): is a volatile type of memory, i.e., data is lost if the power
supply is removed.
• Read Only Memory (ROM): is a non-volatile type of memory, i.e., data is not lost when the
power supply is removed.
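The terms bit and byte can be made concrete with a short Python sketch (Python is our choice of illustration language here; the character 'A' is an arbitrary example):

```python
# A single character occupies one byte: 8 adjacent bits, each 0 or 1
ch = 'A'
code = ord(ch)              # the character's numeric code
bits = format(code, '08b')  # the same value written as 8 binary digits

print(code)   # 65
print(bits)   # 01000001

# 8 bits allow 2**8 = 256 distinct values per byte
assert len(bits) == 8
assert 0 <= code < 2 ** 8
```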
4 COMPUTER ORGANISATION
Virtually every computer, regardless of differences in physical appearance, can be divided into
six logical units, or sections: the input unit, the output unit, the memory unit, the arithmetic and logic
unit (ALU), the central processing unit (CPU), and the secondary storage unit. The ALU contains the
computer's decision mechanisms, allowing the computer to perform such tasks as determining whether
two items stored in memory are equal.
CONTROL: interprets stored instructions in sequence and issues commands to all elements of the
computer.
ARITHMETIC & LOGIC: performs arithmetic and logical operations.
INPUT: receives data and instructions.
OUTPUT: delivers information (the result of processing).
MAIN MEMORY: holds data, instructions and results of processing.
SECONDARY MEMORY: supplements main storage.
5 CLASSIFICATION OF COMPUTERS
There are several methods of classifying computers. First the main distinction between digital
and analog devices is given, followed by a classification by processing power and size, and finally a
classification by generation of technology is given.
1 For example, here in Kano you can get a 120GB (≈120,000MB) hard disk (secondary storage) at around N15,000, but
512MB of RAM (primary storage) can cost anywhere between N12,000 and N15,000.
(b) Mainframes: These are large general-purpose computers with extensive processing, storage
and input/output capabilities. They are used in centralized computing environments, and
normally data input is achieved via terminals wired to the mainframe computer. Mainframe
computers usually need a specialized environment in which to operate, with dust,
temperature and humidity carefully controlled. Mainframes are usually owned by large
organizations, such as universities, research institutes and big banks. Mainframes
are usually sophisticated and large; thus they call for a great deal of support from their
manufacturers and representatives. Examples of mainframes are the IBM 360/370 system and
the NCR V-8800 system. The market for mainframes is dominated by IBM.
(c) Minicomputers (minis): a name originally given to computers that physically fitted within a
single equipment cabinet, i.e. on the order of a few cubic feet. Compared with large
computers, minicomputers were cheaper and smaller, with smaller memories. The word
minicomputer is no longer used very specifically; it predates the term microcomputer, and the
boundary between these two classes of device is unclear. Examples of minicomputers are the
PDP-11, VAX 750/6000, NCR 9300, DEC, HP 3000, IBM System/38 and MV400.
(d) Microcomputers: computer systems that use a microprocessor as their central arithmetic
and logic element. The personal computer (PC) is one form. The power and price of a
microcomputer are determined partly by the speed and power of the processor and partly by the
characteristics of the other components of the system, i.e. the memory, the disk units, the display,
the keyboard, the flexibility of the hardware, and the operating system and other software.
Examples include the IBM PC and its compatibles and the Apple Macintosh.
Computers were developed at different times, in different countries, to solve different problems, so it
is difficult and not very profitable to try to establish where 'generations' start and finish.
(a) First Generation: These are a series of calculating and computing devices whose designs
were started between 1940 (approximately) and 1955. These machines are characterized by
• electronic tube (valve) circuitry
• being huge
• having instructions coded in machine language
• being slow and often unreliable
Despite these seeming handicaps, impressive computations in weather forecasting, atomic
energy calculations, and similar scientific applications were routinely performed on them.
Important first-generation development machines include the Manchester Mark I,
EDSAC, EDVAC, SEAC, Whirlwind, IAS and ENIAC, while the earliest commercially
available computers include the Ferranti Mark I, UNIVAC I, and LEO I.
(b) Second Generation: These are machines whose designs were started after 1955
(approximately). The second generation saw the replacement of vacuum tubes in computer
circuits with the transistor. Second generation computers were characterized by the following:
• more reliable than the first generation
• could perform more calculations
• used symbolic languages such as Fortran for coding
• more efficient storage
• faster input and output
Examples of second generation computers include LEO mark III, ATLAS and IBM 7000
series.
(c) Third Generation: These are machines whose design was initiated after 1960
(approximately). Probably the most significant criterion of difference between the second and
third generations lies in the concept of computer architecture. Individual transistors were no longer
used; they were replaced by very small electronic circuits placed onto a small piece of material called
silicon. The circuits contain many tiny transistors and are called Integrated Circuits (ICs). The ICs in
the third generation are classified into SSI and
MSI.
• SSI (small-scale integration): this is an integration of generally less than 100
transistors on the single silicon chip.
• MSI (medium-scale integration): this is an integration in the range of 100 to
10,000 transistors on a single silicon chip.
Examples of third generation of computers are ICL 1900 series and the IBM 360 series.
(d) Fourth Generation: A designation covering machines that were designed after 1970
(approximately), i.e. the current generation. Fourth-generation computers used LSI and
VLSI levels of integration on the silicon chip.
• LSI (large-scale integration): an IC fabrication technology that allows a very
large number of components (at least 10,000 transistors) to be integrated on a
single silicon chip.
• VLSI (very large-scale integration): an IC fabrication technology that allows over
100,000 transistors to be integrated on a single silicon chip.
The development of LSI and VLSI led to the development of the modern
microprocessor. Modern microprocessors can contain more than 40 million transistors.
(e) Fifth Generation: These are the types of computer currently under development in a number
of countries, especially Japan, and predicted as becoming available early in the 21st century.
Their features are conjectural at present but point toward "intelligent" machines which may have
massively parallel processing, widespread use of intelligent knowledge-based systems, and
natural language interfaces. Progress has not been as fast as originally planned although some
significant advances have been made.
6 HARDWARE and SOFTWARE
6.1 HARDWARE
6.1.1 Peripherals
Peripheral is a term used for devices such as disk drives, printers, modems, and joysticks, that
are connected to a computer and are controlled by its microprocessor. Although peripheral often
implies “additional but not essential”, many peripheral devices are critical elements of a fully
functioning and useful computer system. Few people, for example, would argue that disk drives are
nonessential, although computers can function without them.
Keyboards, monitors and mice are also, strictly, considered peripheral devices, but because they
represent the primary sources of input and output in most computer systems, they can be considered
more as extensions of the system unit than as peripherals.
(a) Keyboard: (input) is a keypad device with buttons or keys (similar to typewriters) that a user
presses to enter data characters and commands into the computer.
(b) Disk Drives: (storage) A disk drive is a device that reads from, writes to, or both, a disk
medium. The disk medium may be either magnetic, as with floppy disks or hard drives; optical, as
with CD-ROM (compact disc read-only memory) discs; or a combination of the two, as with
magneto-optical disks.
(c) Monitor: (output) is a device connected to a computer that displays information on a screen
(like a TV). Modern computer monitors can display a wide variety of information, including
text, icons (pictures representing commands), photographs, computer rendered graphics,
video and animation.
(d) Mouse: (input) A common pointing device: a pointer on the screen (cursor) is controlled by
moving the device, which has one or more push buttons that transmit instructions to the
computer.
(e) Modem: (input/output) A modem is used to translate information transferred through telephone
lines or cable. The term stands for modulator/demodulator: the device changes the signal from
digital, which computers use, to analog, which telephones use, and then back again.
(f) Printer: (output) The printer takes the information on your screen and transfers it to paper,
or hard copy. There are many different types of printer with various levels of quality. The
three basic types of printer are dot matrix, inkjet and laser.
• Dot matrix printers work like a typewriter, transferring ink from a ribbon to paper
with a matrix of tiny pins.
• Inkjet printers work like dot matrix printers but fire a stream of ink from a
cartridge directly onto the paper.
• Laser printers use the same technology as a photocopier, using heat to transfer
toner onto paper.
6.2 SOFTWARE
Software, on the other hand, is the set of instructions a computer uses to manipulate data, such
as a word-processing program or a video game. These programs are usually stored and transferred via
the computer’s hardware to and from the CPU. Software also governs how the hardware is utilized:
for example, how information is retrieved from a storage device. The interaction between the input
and output hardware is controlled by the Basic Input Output System (BIOS) software.
Software as a whole can be divided into a number of categories based on the types of work
done by the programs. The two primary software categories are Operating Systems (System
Software), which control the workings of the computer, and application software, which addresses the
multitude of tasks for which people use computers.
7 PROBLEM SOLVING
7.1 SOLVING PROBLEMS WITH A COMPUTER
The computer itself is useless without a program to control it. A computer works by obeying a
sequence of instructions, which constitutes a program. Hence, to solve a problem with a computer
there must be a program to solve it.
Writing a program requires careful planning and organization. First, you must have a clear
idea of what the problem is, and what the program is intended to achieve. Until this is clear, it is
effectively impossible to design the strategy to follow in writing the program instructions. Secondly,
the input to be processed must be known, as well as the information that will be generated.
To solve a problem using a computer, you need to:
• have a clear idea of what the problem is;
• know the input (data) to be processed;
• know the output (information) to be generated;
• know the strategy to be used to transform input to output, and
• know what data (if any) are to be generated for further processing.
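The steps above can be illustrated with a small sketch in Python (our choice of language; the task of averaging examination marks is an invented example, not from the text):

```python
# Problem: find the average of a set of examination marks.
# Input (data): the marks.  Output (information): the average.

def average(marks):
    # Strategy: sum the marks, then divide by how many there are
    return sum(marks) / len(marks)

data = [55, 70, 62, 81]    # the input to be processed
result = average(data)     # apply the strategy
print(result)              # the output: 67.0
```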
Evolving the method of solving a problem (called Problem Solving Strategy) is a human task,
not that of a computer. You must know the strategy to solve the problem; the computer merely
manipulates your data according to your instructions. Hence, if the logic of your strategy is wrong, the
output will be wrong. Great care must therefore be taken in specifying the detailed steps that the
computer would take in solving the problem. Such a step-by-step procedure required by a computer is
called an algorithm.
7.2 ALGORITHMS
Any computing problem can be solved by executing a series of actions in a specific order. A
procedure for solving a class of problems in terms of:
1. the actions to be taken, and
2. the order in which these actions are executed
is called an algorithm.
The word "class" in the definition needs some elaboration. A problem to be solved may have
many (sometimes an infinite number of) instances. For example, consider the problem of finding the
square of a number. Each different number for which we want to find the square represents an
instance of the square problem. Therefore an algorithm is always designed for the purpose of solving
a problem in all its instances. Algorithms designed this way can be re-used in other programs. This
property is called reusability. Note that a computer program is an algorithm expressed in a
programming language.
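The square example can be written as a tiny algorithm in Python (our choice of language). The same procedure, designed once, handles every instance of the problem and can be re-used by any program:

```python
def square(x):
    # One algorithm covers the whole class of "square" problems
    return x * x

# Three different instances of the problem, one algorithm
for n in [3, -5, 12]:
    print(n, square(n))
```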
7.3 FLOWCHARTS
One of the neatest ways of describing an algorithm is to illustrate it as a flowchart. A flowchart
is an important tool for planning the sequence of operations before writing a program.
A flowchart consists of a set of boxes indicating the nature of the operations to be performed,
along with connecting lines and arrows that show the flow of control between the various operations.
Flowcharts are helpful as they provide a graphical representation of an algorithm. Where they are used
consistently they will make algorithms easy to write, easy to refine and easy to follow. They contain a
number of symbols which should be noted.
Terminal symbols labelled "start" and "stop" mark the beginning and end of a flowchart. Rectangular
boxes are used to indicate manipulation of information in the memory of the computer.
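The correspondence between flowchart symbols and program steps can be sketched in Python, with comments marking where each symbol would appear (the summing task is an invented example, and the symbol names follow common flowchart convention):

```python
# start (terminal symbol)
marks = [40, 65, 90]        # parallelogram: input of data
total = 0                   # rectangle: manipulation in memory
for m in marks:             # diamond: decision (any marks left?)
    total = total + m       # rectangle: manipulation in memory
avg = total / len(marks)    # rectangle: manipulation in memory
print(avg)                  # parallelogram: output of information
# stop (terminal symbol)
```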
8 PROGRAMMING LANGUAGES
The words that make up instructions and the rules which the instructions must obey form the
computer language you must use to talk to the computer. A computer language is used to write or
code computer programs. For this reason, computer languages are also called computer programming
languages or simply, programming languages.
Programmers write instructions in various programming languages, some directly
understandable by the computer and others that require intermediate translation steps. Hundreds of
computer languages are in use today. These may be divided into three general types:
(i) Machine languages,
(ii) Assembly languages (also called low-level languages)
(iii) High-level languages.
8.1 MACHINE LANGUAGES
In a machine language program, 0110 and 1001 might be the machine operation codes for
"ADD" and "STORE" respectively, while 001110, 010101 and 011010 might be the addresses of x, y,
and z respectively. Each instruction has two parts:
• operation code (also known as opcode): the numerical value that represents the
instruction to be carried out.
• operand: denotes the memory address containing the data to be used.
A typical machine language statement looks like:

    0011     00100110
    opcode   operand

The machine language instructions are executed in the sequence in which they occur in memory.
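The opcode/operand split can be made concrete with a toy decoder in Python. The machine below is hypothetical: its opcode table extends the two codes given in the text with an invented 0011 = LOAD, and the 4-bit/8-bit field widths are our assumption:

```python
# Hypothetical machine: a 4-bit opcode followed by an 8-bit operand (an address)
OPCODES = {'0011': 'LOAD', '0110': 'ADD', '1001': 'STORE'}

def decode(instruction):
    # Split the 12-bit instruction into its two fields
    opcode, operand = instruction[:4], instruction[4:]
    return OPCODES[opcode], int(operand, 2)  # operand read as a binary number

print(decode('001100100110'))   # ('LOAD', 38)
```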
It is important to note that programming in machine language is hardly done nowadays;
instead, easier approaches are used, based on assembly languages or high-level languages.
8.3 HIGH-LEVEL LANGUAGES
High-level languages consist of instructions that look almost like everyday English and contain
commonly used mathematical notations. A program that adds x to y and stores the result in z, written
in a high-level language, might contain a statement such as
z = x + y
Obviously, high level languages are much more desirable (from the programmer’s stand point) than
either machine languages or assembly languages.
The process of compiling a high-level language program into machine language can take a
considerable amount of time. Interpreter programs were developed that can directly execute high-
level language programs without the need for compiling them into machine language.
Although compiled programs execute faster than interpreted programs, interpreters are
popular in program-development environments, in which programs are changed frequently as new
features are added and errors corrected. Once a program is developed, a compiled version can be
produced to run most efficiently.
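Python itself illustrates this translate-then-execute pattern: its built-in `compile` turns source text into an internal form once, and `exec` runs that form with particular data (the variable names here are our own):

```python
source = "z = x + y"

# Translate the source once (the analogue of compilation)...
code = compile(source, "<example>", "exec")

# ...then execute the translated form with particular data
env = {"x": 2, "y": 3}
exec(code, env)
print(env["z"])   # 5
```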
Next we look at computer programming using a popular, easy-to-learn programming language
called BASIC, where we apply the concepts described in Section 7 (Problem Solving).
9 REFERENCE
1) Computer Studies for Beginners, Book 1 & Book 2.
5) French, C. S., Computer Science, 5th Edition, © 1996 ELST Continuum.
6) Microsoft Corporation, Encarta Reference Library Premium 2005 DVD ("Computer";
"Computer Science"), © 1993-2004 Microsoft Corporation.