CSC 1201/2210: INTRODUCTION TO COMPUTER SCIENCE

Table of Contents

1 INTRODUCTION
2 HISTORY OF COMPUTERS
  2.1 EARLY HISTORY
  2.2 DEVELOPMENT IN THE 20TH CENTURY
    2.2.1 Early Electronic Computers
    2.2.2 EDVAC, EDSAC, ENIAC and UNIVAC
    2.2.3 The Transistor and Integrated Circuits
    2.2.4 Future Trends
3 BASIC IDEAS AND TERMS
4 COMPUTER ORGANISATION
  4.1 INPUT UNIT
  4.2 OUTPUT UNIT
  4.3 MEMORY UNIT
  4.4 ARITHMETIC & LOGIC UNIT (ALU)
  4.5 CENTRAL PROCESSING UNIT (CPU)
  4.6 SECONDARY STORAGE UNIT
5 CLASSIFICATION OF COMPUTERS
  5.1 DIGITAL, ANALOG and HYBRID COMPUTERS
  5.2 PROCESSING POWER and SIZE
  5.3 GENERATIONS OF COMPUTERS
6 HARDWARE and SOFTWARE
  6.1 HARDWARE
    6.1.1 Peripherals
  6.2 SOFTWARE
    6.2.1 System Software (Operating System)
    6.2.2 Application Software
    6.2.3 Other Categories
7 PROBLEM SOLVING
  7.1 SOLVING PROBLEMS WITH A COMPUTER
  7.2 ALGORITHMS
    7.2.1 Properties of Algorithms
  7.3 FLOWCHARTS
8 PROGRAMMING LANGUAGES
  8.1 MACHINE LANGUAGES
  8.2 ASSEMBLY (LOW-LEVEL) LANGUAGES
  8.3 HIGH-LEVEL LANGUAGES

Compiled by
Mansur Babagana
Department of Mathematical Sciences
Bayero University, Kano

1 INTRODUCTION
From the name “Computer Science”, many people assume that the subject is concerned with answering questions about what computers are, how they work and how they are used. It is, but there is more to computer science than that. Computers are indeed some of the most interesting and complex items of technology in everyday use, but they are only around in such numbers because they are useful tools. Often, however, they are more trouble than they are worth, and when that happens it is usually largely the fault of those who designed and built the computer system. Why is it their fault, you may ask?
A common reason is that those involved did not properly understand how to find out what was really required, and therefore did not know how to build a system that met the requirement. Gaining the understanding necessary to carry out such tasks successfully is a goal of computer science. With this in mind, the following intuitive definition of computer science can be given.

Definition (Computer Science): Computer Science is concerned with the application of scientific principles to the design, construction and maintenance of systems based upon the use of computers.

A more precise definition of what a computer is will be given later, but for now the following
will suffice.

Definition (Computer 1): A computer is an electronic device that accepts, processes, stores, and outputs data at high speed according to programmed instructions.

2 HISTORY OF COMPUTERS
2.1 EARLY HISTORY
The history of computers starts about 2,000 years ago with the birth of the abacus, a wooden rack holding two horizontal wires with beads strung on them. When these beads are moved around, according to certain rules memorized by the user, all regular arithmetic problems can be done.
Calculating devices took a different turn when John Napier, a Scottish mathematician,
published his discovery of logarithms in 1614. As any person can attest, adding two 10-digit numbers
is much simpler than multiplying them together, and the transformation of a multiplication problem
into an addition problem is exactly what logarithms enable. This simplification is possible because of
the following logarithmic property: the logarithm of the product of two numbers is equal to the sum of
the logarithms of the numbers. By 1624 tables with 14 significant digits were available for the
logarithms of numbers from 1 to 20,000, and scientists quickly adopted the new labour-saving tool for
tedious astronomical calculations.
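As a quick aside (an illustration, not part of the original account – it uses Python’s math library in place of printed tables), the property log(ab) = log(a) + log(b) is easy to demonstrate: one addition of logarithms stands in for a multiplication.

    import math

    a, b = 4_296_813_274, 9_034_217_658   # two arbitrary 10-digit numbers

    # One addition of logarithms stands in for the multiplication:
    log_product = math.log10(a) + math.log10(b)

    # Undoing the logarithm recovers the product, with the limited
    # accuracy typical of log tables and slide rules.
    print(f"{10 ** log_product:.6e}")  # approximate product
    print(f"{a * b:.6e}")              # exact product, for comparison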
Most significant for the development of computing, the transformation of multiplication into
addition greatly simplified the possibility of mechanization. Analog calculating devices based on
Napier's logarithms—representing digital values with analogous physical lengths—soon appeared. In
1620 Edmund Gunter, an English mathematician who coined the terms cosine and cotangent, built a
device for performing navigational calculations: the Gunter Scale or, as navigators simply called it,
the Gunter. Around 1632 an English clergyman and mathematician named William Oughtred built the
first slide rule, drawing on Napier's ideas. That first slide rule was circular, but Oughtred also built the
first rectangular one in 1633.
A slide rule is a device consisting of graduated scales capable of relative movement, by means
of which simple calculations may be carried out mechanically. Typical slide rules contain scales for
multiplying, dividing, and extracting square roots, and some also contain scales for calculating
trigonometric functions and logarithms. The slide rule remained an essential tool in science and
engineering and was widely used in business and industry until it was superseded by the portable
electronic calculator late in the 20th century.

The logarithmic slide rule is a compact device for rapidly performing calculations with limited accuracy. The invention of logarithms, and the computation and publication of tables of logarithms, made it possible to effect multiplication and division by the simpler operations of addition and subtraction. Napier's early appreciation of the importance of simplifying mathematical calculations resulted in his invention of logarithms, and this invention in turn made the slide rule possible.
French philosopher, mathematician, and physicist Blaise Pascal is usually credited with building the first digital computer, in 1642. The machine added and subtracted numbers with dials and was made to help his father, a tax collector. In 1671 the German mathematician Gottfried Wilhelm von Leibniz invented a special gearing system, eventually built in 1694, that enabled multiplication on Pascal’s machine.
In the early 19th century, French inventor Joseph-Marie Jacquard devised a specialized type of
computer: a silk loom. Jacquard’s loom used punched cards to program patterns that helped the loom
create woven fabrics. Although Jacquard was rewarded and admired by French emperor Napoleon I
for his work, he fled for his life from the city of Lyon pursued by weavers who feared their jobs were
in jeopardy due to Jacquard’s invention. The loom prevailed, however: when Jacquard died, more
than 30,000 of his looms existed in Lyon. The looms are still used today, especially in the
manufacture of fine furniture fabrics.
Another early mechanical computer was the Difference Engine, designed in the early 1820s
by British mathematician and scientist Charles Babbage. Although never completed by Babbage, the
Difference Engine was intended as a machine with a 20-decimal capacity that could solve
mathematical problems. Babbage also made plans for another machine, the Analytical Engine,
considered the mechanical precursor of the modern computer. The Analytical Engine was designed to
perform all arithmetic operations efficiently; however, Babbage’s lack of political skills kept him
from obtaining the approval and funds to build it.
Augusta Ada Byron, countess of Lovelace, was a personal friend and student of Babbage. She
was the daughter of the famous poet Lord Byron and one of only a few women mathematicians of her
time. She prepared extensive notes concerning Babbage's ideas and the Analytical Engine. Lovelace's conceptual programs for the machine led to the naming of a programming language (Ada) in her
honour. Although the Analytical Engine was never built, its key concepts, such as the capacity to store
instructions, the use of punched cards as a primitive memory, and the ability to print, can be found in
modern computers.

2.2 DEVELOPMENT IN THE 20TH CENTURY

2.2.1 Early Electronic Computers


American inventor Herman Hollerith used an idea similar to Jacquard’s when he combined the use of punched cards with devices that could create and electronically read the cards. Hollerith’s tabulator was used for the 1890 U.S. census, and it cut the computational time to a third or a quarter of that previously needed for hand counts. Hollerith’s Tabulating Machine Company eventually merged with two other companies to form the Computing-Tabulating-Recording Company. In 1924 the company changed its name to International Business Machines (IBM).
In 1936 British mathematician Alan Turing proposed the idea of a machine that could process equations without human direction. The machine (now known as a Turing Machine) resembled an
automatic typewriter that used symbols for math and logic instead of letters. Turing intended the
machine to be a Universal Machine that could be used to duplicate or represent the function of any
other existing machine. Turing’s machine was the theoretical precursor to the modern digital
computer. The Turing Machine model is still used by modern computational theorists.
In the 1930s American mathematician Howard Aiken developed the Mark I calculating
machine, which was built by IBM. This electronic calculating machine used relays and
electromagnetic components to replace mechanical components. In later machines, Aiken used
vacuum tubes and “solid-state transistors” (tiny electrical switches) to manipulate binary numbers.
Aiken also introduced computers to universities by establishing the first computer science program at
Harvard University in Cambridge, Massachusetts. Aiken obsessively mistrusted the concept of storing
a program within a computer, insisting that the integrity of the machine could be maintained only
through strict separation of program instructions from data. His computer had to read instructions
from punched cards, which could be stored away from the computer. He also urged the U.S. National Bureau of Standards not to support the development of computers, insisting that there would never be a need for more than five or six of them nationwide.

2.2.2 EDVAC, EDSAC, ENIAC and UNIVAC


At the Institute for Advanced Study in Princeton, New Jersey, Hungarian-American
mathematician John von Neumann developed one of the first computers used to solve problems in
mathematics, meteorology, economics and hydrodynamics. Von Neumann’s 1945 design for the
Electronic Discrete Variable Automatic Computer (EDVAC) – in stark contrast to the designs of
Aiken, his contemporary – was the first electronic computer designed to incorporate a program stored
entirely within its memory.
At the University of Cambridge, meanwhile, Maurice Wilkes and others built what is
recognized as the first true stored-program computer with significant calculational ability. The
Electronic Delay Storage Automatic Calculator (EDSAC) was built on von Neumann's principles and,
like the Manchester Mark I, became operational in 1949. Wilkes built the machine chiefly to study
programming issues, which he realized were becoming more important than the hardware details.
American physicist John Mauchly proposed the electronic digital computer called the
Electronic Numerical Integrator And Computer (ENIAC). He helped build it along with American
engineer John Presper Eckert, Jr. at the Moore School of Engineering at the University of
Pennsylvania in Philadelphia. ENIAC was operational in 1945 and introduced to the public in 1946. It
is regarded as the first successful, general-purpose digital computer. It occupied 167 m² (1,800 ft²), weighed more than 27,000 kg (60,000 lb), and contained more than 18,000 vacuum tubes. Roughly 2,000 of the computer’s vacuum tubes were replaced each month by a team of six technicians. ENIAC could multiply two 10-decimal-digit numbers at a rate of 300 products per second. Many of ENIAC’s first tasks were for military purposes, such as calculating ballistic firing tables and designing atomic weapons. Since ENIAC was initially not a stored-program machine, it had to be reprogrammed for each task.
Eckert and Mauchly eventually formed their own company, which was later bought by Remington Rand. They produced the Universal Automatic Computer (UNIVAC), which was used
for a broader variety of commercial applications. The first UNIVAC was delivered to the United
States Census Bureau in 1951. By 1957, there were 46 UNIVACs in use.
Between 1937 and 1939, while teaching at Iowa State College, American physicist John
Vincent Atanasoff built a prototype computing device called the Atanasoff-Berry Computer, or ABC,
with the help of his assistant, Clifford Berry. Atanasoff developed the concepts that were later used in
the design of the ENIAC. Atanasoff’s device was the first computer to separate data processing from
memory, but it is not clear whether a functional version was ever built. Atanasoff did not receive
credit for his contributions until 1973, when a lawsuit regarding the patent of ENIAC was settled.

2.2.3 The Transistor and Integrated Circuits


In 1948, at Bell Telephone Laboratories, American physicists Walter Houser Brattain, John
Bardeen and William Bradford Shockley developed the transistor, a device that can act as an electric
switch. The transistor had a tremendous impact on computer design, replacing costly, energy-
inefficient, and unreliable vacuum tubes.
In the late 1960s Integrated Circuits (IC) (tiny transistors and other electrical components
arranged on a single chip of silicon) replaced individual transistors in computers. Integrated circuits
resulted from the simultaneous, independent work of Jack Kilby at Texas Instruments and Robert
Noyce of the Fairchild Semiconductor Corporation in the late 1950s. As Integrated Circuits became
miniaturized, more components could be designed into a single computer circuit. In the 1970s refinements in integrated-circuit technology led to the development of the modern microprocessor, an integrated circuit containing thousands of transistors. Modern microprocessors can contain more than 200 million transistors.
Manufacturers used IC technology to build smaller and cheaper computers. The first of these so-called Personal Computers (PCs) – the Altair 8800 – appeared in 1975, sold by Micro
Instrumentation Telemetry Systems (MITS). The Altair used an 8-bit Intel 8080 microprocessor, had
256 bytes of RAM, received input through switches on the front panel, and displayed output on rows
of Light-Emitting Diodes (LEDs). Refinements in the PC continued with the inclusion of video
display, better storage devices, and CPUs with more computational abilities.
Graphical User Interfaces (GUIs) were first designed by the Xerox Corporation and later used successfully by Apple Computer, Inc. Today sophisticated operating systems such as Windows, the Mac OS, and Linux enable computer users to run programs and manipulate data in ways that were unimaginable in the mid-20th century.

2.2.4 Future Trends


Several researchers have claimed the “record” for the largest single calculation ever performed. One large single calculation was accomplished by physicists at IBM in 1995. They solved one million trillion (1,000,000,000,000,000,000 = 1 × 10¹⁸) mathematical sub-problems by continuously running 448 computers for two years. Their analysis demonstrated the existence of a previously hypothetical subatomic particle called a glueball. Japan, Italy, and the United States are collaborating to develop new supercomputers that will run these types of calculations 100 times faster.
In 1996 IBM challenged Garry Kasparov, the reigning world chess champion, to a chess match with a supercomputer called Deep Blue. The computer could examine more than 100 million chess positions per second, yet Deep Blue lost the match to Kasparov. In a 1997 rematch Deep Blue defeated Kasparov, becoming the first computer to win a match against a reigning world chess champion under regulation time controls. Many experts predict these types of parallel-processing
machines will soon surpass human chess playing ability, and some speculate that massive calculating
power will one day replace intelligence.
Deep Blue serves as a prototype for future computers that will be required to solve complex
problems. At issue, however, is whether a computer can be developed with the ability to learn to solve
problems on its own, rather than one programmed to solve a specific set of tasks.
The computer field continues to experience huge growth. Advances in technologies continue
to produce cheaper and more powerful computers, offering the promise that in the near future computers will reside in most, if not all, homes, offices and schools.

3 BASIC IDEAS AND TERMS


Some of the basic ideas and terminology of computing are given below.
• Data: “Data” is the name given to basic facts.
• Computer: A computer is a device that works under the control of stored programs,
automatically accepting, storing and processing data to produce information that is the
result of the processing.
When a computer processes data it actually performs a number of separate functions as
follows:
• Input: The computer accepts data from outside for processing within.
• Storage: The computer holds data internally before, during and after processing.
• Processing: The computer performs operations on the data it holds within.
• Output: The computer produces data from within for external use.
This is summarized in the following figure.

[Figure: Data enters the computer as Input, is held in Storage, is operated on by the Process function, and leaves as Output.]
Figure 3-1: The basic functions of a computer
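As a minimal sketch of these four functions in program form (the use of Python and the squaring task are illustrative assumptions, not part of the figure):

    # Input: accept data from outside the computer.
    number = float(input("Enter a number: "))

    # Storage: the value is held internally in a variable.
    # Processing: the computer operates on the data it holds.
    square = number * number

    # Output: produce data from within for external use.
    print("The square of", number, "is", square)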

• Program: A program is a set of instructions that tells the computer exactly how to manipulate the
input data and produce the desired output.
• Information: A distinction is sometimes made between data and information. When data is
converted into a more useful or intelligible form then it is said to be processed into
information.
• Hardware: is the general term used to describe all the electronic and mechanical elements of the
computer, together with those devices used with the computer.
• Software: is the general term used to describe all the various programs that may be used on a
computer system together with their associated documentation.
• Bit (Binary Digit): The smallest unit of information storable in a computer, expressed as 0 or 1.
• Byte: A set of 8 adjacent bits, which represents a unit of computer memory equal to that needed to store a single character (see the short sketch after this list).
• Memory: A physical device used to store information, such as data or programs, on a temporary or permanent basis for use in a computer.
• Semiconductor Memory: Any of a class of computer memory devices consisting of one or more integrated circuits.
• Random Access Memory (RAM): is a volatile type of memory, i.e., data is lost if the power
supply is removed.
• Read Only Memory (ROM): is a non-volatile type of memory, i.e., data is not lost when the
power supply is removed.
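The following short sketch shows the bit/byte/character relationship, assuming a simple single-byte character code such as ASCII (modern encodings may use more than one byte per character):

    ch = 'A'
    code = ord(ch)              # the character's numeric code: 65
    bits = format(code, '08b')  # the same value written as 8 bits (one byte)
    print(code, bits)           # 65 01000001
    print(chr(0b01000001))      # and back from the bit pattern to 'A'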

4 COMPUTER ORGANISATION
Virtually every computer, regardless of differences in physical appearance, can be divided into six logical units, or sections:

4.1 INPUT UNIT


This is the receiving section of the computer: it obtains information (data and computer programs) from various input devices, such as the keyboard, scanner and mouse. The input unit then places this information at the disposal of the other units to facilitate its processing.
Today, most users enter information into computers via keyboards and mouse devices. Other input
devices include microphones (for speaking to the computer) and digital cameras (for taking
photographs and making videos).

4.2 OUTPUT UNIT


This is the shipping section of the computer: it takes information that the computer has processed and places it on various output devices, like monitors, printers, and speakers, making the
information available for use outside the computer. Computers can output information in various
ways, including displaying the output on screens, playing it on audio/video devices, printing it on
paper or using the output to control other devices.

4.3 MEMORY UNIT


This is the rapid-access, relatively low-cost storage-house section of the computer; it facilitates the temporary storage of data. The memory unit retains information that has been entered
through the input unit, enabling that information to be immediately available for processing. In
addition, the unit retains processed information until that information can be transmitted to output
devices. Often the memory unit is called either memory or primary memory – random access memory
(RAM) is an example of primary memory. Primary memory is usually volatile, which means that it is
erased when the machine is powered off.

4.4 ARITHMETIC & LOGIC UNIT (ALU)


The ALU is the manufacturing section of the computer. It is responsible for the performance
of calculations such as addition, subtraction, multiplication and division. It also contains decision
mechanisms, allowing the computer to perform such tasks as determining whether two items stored in
memory are equal.

4.5 CENTRAL PROCESSING UNIT (CPU)


The CPU serves as the administrative section of the computer. This is the computer’s
coordinator, responsible for supervising the operation of the other sections. The CPU alerts the input
unit when information should be read into the memory unit, instructs the ALU about when to use
information from the memory unit in calculations and tells the output unit when to send the
information from the memory unit to certain output devices. Sometimes the ALU & CPU are regarded
as a single unit.

4.6 SECONDARY STORAGE UNIT


This unit is the long-term, high-capacity storage house of the computer. Secondary storage devices, such as hard drives and disks, normally hold programs or data that other units are not actively using; the computer can then retrieve this information when it is needed – hours, days, months or even years later. Information in secondary storage takes much longer to access than information in primary storage. However, the price per unit of secondary storage is much less than the price per unit of primary memory.¹ Secondary storage is usually non-volatile – it retains information even when the computer is switched off.
[Figure: block diagram of a computer system. The processor comprises a Control unit, which interprets stored instructions in sequence and issues commands to all elements of the computer, and an Arithmetic & Logic unit, which performs arithmetic and logical operations. Data and instructions enter through the Input unit, and information – the result of processing – leaves through the Output unit. Main Memory holds data, instructions and the results of processing, supplemented by Secondary Memory. The key distinguishes data/instruction flow from command/signal flow.]
Figure 4-1: Computer System Organization showing its Logical Units

5 CLASSIFICATION OF COMPUTERS
There are several methods of classifying computers. First the main distinction, between digital and analog devices, is given; this is followed by classification in order of decreasing power and size, and finally by classification according to the age of the technology.

¹ For example, here in Kano you can get a 120GB (≈120,000MB) hard disk (secondary storage) for around N15,000, but a 512MB RAM module (primary storage) can cost anywhere between N12,000 and N15,000.

5.1 DIGITAL, ANALOG and HYBRID COMPUTERS


(a) Digital Computer: The word “digital” as used here means whole numbers (discrete). A digital computer stores data as discrete signals interpreted as numbers, usually in binary notation, and performs series of mathematical and logical operations on that data (see the short sketch after this list).
(b) Analog Computer: Analog computers are akin to measuring instruments such as thermometers and voltmeters with pointers and dials. They process data in the form of continuously variable quantities, such as electric voltages or the positions of a pointer on a dial. The output from analog computers is often in the form of smooth graphs from which information can be read.
(c) Hybrid Computer: A hybrid computer employs both analog and digital techniques. Hybrid computers are not that common; they are used in specialized fields such as robotics.
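The sketch below illustrates “discrete signals interpreted as numbers, usually in binary notation”, using Python’s binary literals and conversions (an illustration, not part of the original text):

    n = 13
    print(format(n, 'b'))   # 1101 : thirteen written in binary notation
    print(int('1101', 2))   # 13   : binary digits interpreted as a number
    print(0b1101 + 0b0011)  # 16   : arithmetic on binary-coded values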

5.2 PROCESSING POWER and SIZE


The following classification is in order of decreasing power and size. However, there are no
sharp dividing lines in that, for example, a model at the top of a manufacturer’s range of
minicomputers might well be more powerful than the model at the bottom of a range of mainframes.
(a) Supercomputers: These are the largest, fastest and most expensive computers. They are very
powerful computers that have extremely fast processors, capable of performing 30-50 Gflops,
i.e. 30–50 billion floating–point operations per second. They are very expensive (several
million dollars) and are typically used for the toughest computational tasks. Supercomputers
are used, for example, in meteorology, engineering, nuclear physics, and astronomy. Several hundred are in operation worldwide at present. Examples of supercomputers include the CRAY X-MP and CRAY 2 supercomputers, which cost over $5 million each. Principal manufacturers are Cray Research, and NEC, Fujitsu, and Hitachi of Japan.

(b) Mainframes: These are large general-purpose computers with extensive processing, storage and input/output capabilities. They are used in centralized computing environments, and normally data input is achieved via terminals wired to the mainframe computer. Mainframe computers usually need a specialized environment in which to operate – with dust, temperature and humidity carefully controlled. Mainframes are usually owned by large organizations, such as universities, research institutes and giant banks. Mainframes are usually sophisticated and large; thus they call for a great deal of support from their manufacturers and representatives. Examples of mainframes are the IBM 360/370 system and the NCR V-8800 system. The mainframe market is dominated by IBM.

(c) Minicomputers (minis): This is a name originally given to computers that physically fitted within a single equipment cabinet, i.e. on the order of a few cubic feet. Compared with large computers, minicomputers were cheaper and smaller, with smaller memory. The word minicomputer is no longer used very specifically; it predates the term microcomputer, and the boundary between these two classes of device is unclear. Examples of minicomputers are the DEC PDP-11, VAX 750/6000, NCR 9300, HP 3000, IBM System/38 and MV400.

(d) Microcomputers: These are computer systems that utilize a microprocessor as their central processing and arithmetic element. The personal computer (PC) is one form. The power and price of a microcomputer are determined partly by the speed and power of the processor, and partly by the characteristics of the other components of the system, i.e. the memory, the disk units, the display, the keyboard, the flexibility of the hardware, and the operating system and other software. Examples include the IBM PC and its compatibles and the Apple Macintosh.

5.3 GENERATIONS OF COMPUTERS


“Generations” of computers is an informal system of classifying computer systems as advances have been made in electronic technology and, latterly, in software. Since the design of digital computers has been a continuous process for the past 50 years – carried out by a wide variety of people in
different countries, faced with different problems – it is difficult and not very profitable to try to establish where ‘generations’ start and finish.
(a) First Generation: These are a series of calculating and computing devices whose designs were started between 1940 (approximately) and 1955. These machines are characterized by
• electronic tube (valve) circuitry
• being huge
• having instructions coded in machine language
• being slow and often unreliable
Despite these seeming handicaps, impressive computations in weather forecasting, atomic energy calculations, and similar scientific applications were routinely performed on them.
Important first-generation development machines include the Manchester Mark I, EDSAC, EDVAC, SEAC, Whirlwind, IAS and ENIAC, while the earliest commercially available computers include the Ferranti Mark I, UNIVAC I, and LEO I.

(b) Second Generation: These are machines whose designs were started after 1955
(approximately). The second generation saw the replacement of vacuum tubes in computer
circuits with the transistor. Second generation computers were characterized by the following:
• more reliable than the first generation
• could perform more calculations
• used symbolic languages such as Fortran for coding
• more efficient storage
• faster input and output
Examples of second generation computers include the LEO Mark III, ATLAS and the IBM 7000 series.

(c) Third Generation: These are machines whose designs were initiated after 1960 (approximately). Probably the most significant criterion of difference between the second and third generations lies in the concept of computer architecture. Individual transistors were no longer used; they were replaced by very small electric circuits put onto a small piece of material called silicon. The circuits contain many tiny transistors, and are called Integrated Circuits (ICs). The ICs of the third generation are classified into SSI and MSI.
• SSI (small-scale integration): an integration of generally fewer than 100 transistors on a single silicon chip.
• MSI (medium-scale integration): an integration in the range of 100 to 10,000 transistors on a single silicon chip.
Examples of third generation computers are the ICL 1900 series and the IBM 360 series.

(d) Fourth Generation: A designation covering machines that were designed after 1970 (approximately), i.e. the current generation. Fourth generation computers use LSI and VLSI levels of integration on the silicon chip.
• LSI (large-scale integration): an IC fabrication technology that allows a very large number of components (at least 10,000 transistors) to be integrated on a single silicon chip.
• VLSI (very large-scale integration): an IC fabrication technology that allows over 100,000 transistors to be integrated on a single silicon chip.
The development of LSI and VLSI led to the development of the modern microprocessor. Modern microprocessors can contain more than 40 million transistors.

(e) Fifth Generation: These are the types of computer currently under development in a number of countries, especially Japan, and predicted as becoming available early in the 21st century. The features are conjectural at present but point toward “intelligent” machines which may have massively parallel processing, widespread use of intelligent knowledge-based systems, and
natural language interfaces. Progress has not been as fast as originally planned although some
significant advances have been made.

6 HARDWARE and SOFTWARE


6.1 HARDWARE
Computer hardware consists of the components that can be physically handled. The function of these components is basically divided into four main categories: input, processing, storage and output. The functions of these units were already discussed in Section 4.
The four units are interdependent (i.e. the function of one unit depends on the function of the others). They interact harmoniously to provide the full function of the computer’s hardware. The units connect to the microprocessor, specifically the computer’s central processing unit (CPU) – the electronic circuitry that provides the computational ability and control of the computer.
For most computers, the principal input device is the keyboard. Storage devices include external floppy disks and internal hard disks, while output devices that display data include monitors and printers.

6.1.1 Peripherals
Peripheral is a term used for devices, such as disk drives, printers, modems, and joysticks, that are connected to a computer and are controlled by its microprocessor. Although peripheral often
implies “additional but not essential”, many peripheral devices are critical elements of a fully
functioning and useful computer system. Few people, for example, would argue that disk drives are
nonessential, although computers can function without them.
Keyboards, monitors and mice are also, strictly speaking, peripheral devices, but because they represent the primary sources of input and output in most computer systems, they can be considered more as extensions of the system unit than as peripherals.
(a) Keyboard: (input) is a keypad device with buttons or keys (similar to typewriters) that a user
presses to enter data characters and commands into the computer.
(b) Disk Drive: (storage) is a device that reads or writes, or both, on a disk medium. The disk medium may be either magnetic, as with floppy disks or hard drives; optical, as with CD-ROM (compact disc read-only memory) disks; or a combination of the two, as with magneto-optical disks.
(c) Monitor: (output) is a device connected to a computer that displays information on a screen
(like a TV). Modern computer monitors can display a wide variety of information, including
text, icons (pictures representing commands), photographs, computer rendered graphics,
video and animation.
(d) Mouse: (input) is a common pointing device. A pointer on the screen (the cursor) is controlled by moving the device, which has one or more push buttons that transmit instructions to the computer.
(e) Modem: (input/output) is used to translate information transferred through telephone lines or cable. The term stands for modulator/demodulator: the device changes the signal from digital, which computers use, to analog, which telephone lines use, and then back again.
(f) Printer: (output) The printer takes the information on your screen and transfers it to paper, producing a hard copy. There are many different types of printer with various levels of quality. The three basic types of printer are dot matrix, inkjet and laser.
• Dot matrix printers work like a typewriter, transferring ink from a ribbon to paper with a matrix of tiny pins.
• Inkjet printers work like dot matrix printers but fire a stream of ink from a cartridge directly onto the paper.
• Laser printers use the same technology as a photocopier, using heat to transfer toner onto paper.

6.2 SOFTWARE
Software, on the other hand, is the set of instructions a computer uses to manipulate data, such as a word-processing program or a video game. These programs are usually stored and transferred via
the computer’s hardware to and from the CPU. Software also governs how the hardware is utilized:
for example, how information is retrieved from a storage device. The interaction between the input
and output hardware is controlled by the Basic Input Output System (BIOS) software.
Software as a whole can be divided into a number of categories based on the types of work
done by the programs. The two primary software categories are Operating Systems (System
Software), which control the workings of the computer, and application software, which addresses the
multitude of tasks for which people use computers.

6.2.1 System Software (Operating System)


The Operating System (OS) is the basic software that controls a computer. The OS has three
major functions:
• it coordinates and manipulates computer hardware, such as computer memory, printer, disks,
keyboard, mouse, and monitor;
• it organizes files on a variety of storage media such as floppy disks, hard drives, compact
disks, digital video discs and tape;
• it manages hardware errors and the loss of data.

6.2.2 Application Software


Application software directs the computer to execute commands given by the user, and may be said to include any program that processes data for a user. Application software thus includes word processors, spreadsheets, database management systems, and many other applications.

6.2.3 Other Categories


Two additional categories, which are neither system nor application software although they contain elements of both, are:
(a) Network Software: which enables groups of computers to communicate; and
(b) Language Software: which provides programmers with the tools they need to write programs. Language software will be discussed later.
In addition to these task-based categories, several types of software are described based on their method of distribution. These include the so-called
(c) Canned Programs or Packaged Software: which are developed and sold primarily through retail outlets;
(d) Freeware and Public-Domain Software: which is made available without cost by its developer;
(e) Shareware: which is similar to freeware but usually carries a small fee for those who like the program;
and, lastly, the infamous
(f) Vapourware: software that either never reaches the market or appears much later than promised.

7 PROBLEM SOLVING
7.1 SOLVING PROBLEMS WITH A COMPUTER
The computer itself is useless without a program to control it. A computer works by obeying a
sequence of instructions, which constitutes a program. Hence, to solve a problem with a computer
there must be a program to solve it.
Writing a program requires careful planning and organization. First, you must have a clear
idea of what the problem is, and what the program is intended to achieve. Until this is clear, it is
effectively impossible to design the strategy to follow in writing the program instructions. Secondly,
the input to be processed must be known, as well as the information that will be generated.
To solve a problem using a computer, you need to:
• have a clear idea of what the problem is;
• know the input (data) to be processed;
• know the output (information) to be generated;
• know the strategy to be used to transform input to output, and
• know what data (if any) are to be generated for further processing.
Evolving the method of solving a problem (called Problem Solving Strategy) is a human task,
not that of a computer. You must know the strategy to solve the problem; the computer merely
manipulates your data according to your instructions. Hence, if the logic of your strategy is wrong, the
output will be wrong. Great care must therefore be taken in specifying the detailed steps that the computer should take in solving the problem. Such a step-by-step procedure required by a computer is called an algorithm.

7.2 ALGORITHMS
Any computing problem can be solved by executing a series of actions in a specific order. A
procedure for solving a problem in terms of:
1. the actions to be taken, and
2. the order in which these actions execute is called an algorithm.

Definition (Algorithm): Professionals often define an algorithm as a step-by-step procedure that will solve a specific class of problems.

The word “class” in the definition needs some elaboration. A problem to be solved may have
many (sometimes an infinite number of) instances. For example, consider the problem of finding the
square of a number. Each different number for which we want to find the square represents an
instance of the square problem. An algorithm is therefore always designed for the purpose of solving a problem in all its instances. Algorithms designed this way can be re-used in other programs; this property is called reusability. Note that a computer program is an algorithm.
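A minimal sketch of the square example makes the point concrete: one procedure solves every instance of the class, and so can be re-used (the use of Python is an assumption; the text introduces BASIC later):

    def square(x):
        """Step-by-step procedure: multiply the input by itself."""
        return x * x

    # Three different instances of the same class of problem, one algorithm:
    for instance in (2, -7, 3.5):
        print(instance, "->", square(instance))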

7.2.1 Properties of Algorithms


An algorithm must have the following properties:
(i) An algorithm must be able to determine after a finite number of steps whether a solution to a specific problem exists or not. Every algorithm must terminate, whether it takes a long or a short time to complete. For this reason, the process of listing all the positive integers, one by one, is not an algorithm, since it goes on forever.
(ii) The sequence of steps leading to a solution must be clear, not subject to misinterpretation, and capable of being rigorously followed from beginning to end. A computer can process only those algorithms whose individual steps involve tasks it can understand.
(iii) The input data must be clear and well-defined. To achieve this we need to know and specify the set of inputs that the algorithm is allowed to manipulate. For example, a square algorithm will only deal with numbers as input, whereas an algorithm to sort things in order should be able to accept numbers as well as names of people as input.
(iv) Given correct input, an algorithm must give correct output.
(v) An algorithm should be efficient. For any particular problem there are usually several alternative algorithms that can serve as solutions. Some algorithms, when carried out, can be completed in a short time, while others take a very long time. Some require little computer memory, while others require more. Some algorithms are very simple, while others are complex in logic. Also, many algorithms work well for some input cases and poorly for others. It is desirable that an algorithm yield the desired result without waste for any set of allowable inputs, as the sketch after this list illustrates.
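A small sketch of property (v): two correct algorithms for the same problem – summing the integers 1 to n – that do very different amounts of work (the problem and the names are illustrative assumptions):

    def sum_by_loop(n):
        # Performs n additions: the work grows with the size of the input.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        # Uses the formula n(n + 1)/2: the same answer in a single step.
        return n * (n + 1) // 2

    n = 1_000_000
    assert sum_by_loop(n) == sum_by_formula(n) == 500_000_500_000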

7.3 FLOWCHARTS
One of the neatest ways of describing an algorithm is to illustrate it as a flowchart. The flowchart is an important tool for planning the sequence of operations before writing the program.

Definition (Flowchart): A flowchart is a diagrammatic representation of an algorithm.

A flowchart consists of a set of boxes indicating the nature of the operations to be performed, along with connecting lines and arrows that show the flow of control between the various operations. Flowcharts are helpful because they provide a graphical representation of an algorithm. Where they are used consistently they make algorithms easy to write, easy to refine and easy to follow. They contain a number of symbols, which should be noted.

(i) The Terminal symbol: Ovals, labelled start or stop, are used to indicate the starting and stopping points of an algorithm.

(ii) The Process symbol: Rectangular boxes are used to indicate manipulation of information in the memory of the computer.

(iii) The Decision symbol: A diamond-shaped box is used to indicate a logical decision or comparison.

(iv) The Input/Output symbol: Parallelograms are used where data input or output is to be performed.

There are many other symbols, but the four above are the ones we will mostly be using. The short sketch below shows how these symbols map onto program statements.
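As a minimal sketch (the larger-of-two-numbers algorithm is an illustrative assumption), here is how the four symbols correspond to statements in a program:

    # Terminal symbol: start
    a = float(input("First number: "))   # Input/Output symbol (input)
    b = float(input("Second number: "))  # Input/Output symbol (input)

    if a > b:                            # Decision symbol
        larger = a                       # Process symbol
    else:
        larger = b                       # Process symbol

    print("The larger number is", larger)  # Input/Output symbol (output)
    # Terminal symbol: stop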

8 PROGRAMMING LANGUAGES
The words that make up instructions and the rules which the instructions must obey form the
computer language you must use to talk to the computer. A computer language is used to write or
code computer programs. For this reason, computer languages are also called computer programming
languages or simply, programming languages.
Programmers write instructions in various programming languages, some directly
understandable by the computer and others that require intermediate translation steps. Hundreds of
computer languages are in use today. These may be divided into three general types:
(i) Machine languages,
(ii) Assembly languages (also called low-level languages)
(iii) High-level languages.

8.1 MACHINE LANGUAGES


Machine language is the only language that a computer understands; any computer can directly understand only its own machine language. Machine language is the natural language of a particular computer and is defined by the hardware design of that computer. Machine languages generally consist of strings of numbers (ultimately reduced to 1s and 0s) that instruct computers to perform their most elementary operations one at a time. Machine languages are machine-dependent, i.e., a particular machine language can be used on only one type of computer.
Machine languages are cumbersome for humans, as can be seen from the following section of a machine-language program that adds “x” to “y” and stores the result in “z” (where “x”, “y” and “z” are labels representing values stored in the computer’s storage):

0110 001110 010101 1001 011010

where

0110 and 1001 are the machine operation codes for “ADD” and “STORE” respectively, and
001110, 010101 and 011010 are the addresses of x, y and z respectively.

A machine-language program statement normally has two parts:

operation-code operand

• operation-code (also known as the opcode): the numerical value that represents the instruction to be carried out.
• operand: denotes the memory address containing the data to be used.
A typical machine-language statement looks like

0011 00100110
(opcode) (operand)

The machine-language instructions are executed in the sequence in which they occur in memory.
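To make the opcode/operand idea concrete, here is a small Python sketch of a toy machine executing the add-and-store program above. The ADD (0110) and STORE (1001) codes and the addresses of x, y and z come from the text; the LOAD opcode (0001), the single-accumulator design and the stored values are illustrative assumptions:

    # Toy memory: address -> value (addresses of x, y, z from the text).
    MEMORY = {0b001110: 5,   # x (the value 5 is an assumed example)
              0b010101: 7,   # y (the value 7 is an assumed example)
              0b011010: 0}   # z (will receive the result)

    PROGRAM = [
        (0b0001, 0b010101),  # LOAD  y  (hypothetical opcode 0001)
        (0b0110, 0b001110),  # ADD   x  (opcode 0110, from the text)
        (0b1001, 0b011010),  # STORE z  (opcode 1001, from the text)
    ]

    acc = 0  # a single accumulator register
    for opcode, operand in PROGRAM:   # executed in sequence, as in memory
        if opcode == 0b0001:          # LOAD: copy memory into the accumulator
            acc = MEMORY[operand]
        elif opcode == 0b0110:        # ADD: add memory to the accumulator
            acc += MEMORY[operand]
        elif opcode == 0b1001:        # STORE: copy the accumulator to memory
            MEMORY[operand] = acc

    print(MEMORY[0b011010])  # z = x + y = 12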
It is important to note that programming in machine language is hardly ever done nowadays; instead, easier methods are used, based on assembly languages or high-level languages.

8.2 ASSEMBLY (LOW-LEVEL) LANGUAGES


As computers became more popular, it became apparent that machine-language programming was too slow, tedious and error-prone. Instead of using the strings of numbers that computers could directly understand, programmers began using English-like abbreviations, which formed the basis of assembly languages. Translator programs called assemblers were developed to convert assembly-language programs to machine language at computer speeds.
The following section of an assembly-language program also adds x to y and stores the result in z, but is clearer than its machine-language equivalent:
LOAD y
ADD x
STORE z
Although such code is clearer to humans, it is incomprehensible to computers until translated to
machine language by assemblers.
Computer usage rapidly increased with the advent of assembly languages, but these still
required many instructions to accomplish even the simplest tasks.

8.3 HIGH-LEVEL LANGUAGES


To speed the programming process, high-level languages were developed, in which single statements accomplish substantial tasks. Translator programs called compilers convert high-level-language programs into machine language. High-level languages allow programmers to write
instructions that look almost like everyday English and contain commonly used mathematical notations. A program that adds x to y and stores the result in z, written in a high-level language, might contain a statement such as
z = x + y
Obviously, high-level languages are much more desirable (from the programmer’s standpoint) than either machine languages or assembly languages.
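For comparison with the machine-language and assembly-language fragments above, a complete runnable version of the same program might look like this (the concrete values are illustrative):

    x = 5
    y = 7
    z = x + y   # one statement replaces LOAD y / ADD x / STORE z
    print(z)    # prints 12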
The process of compiling a high-level-language program into machine language can take a considerable amount of time. Interpreter programs were therefore developed that can directly execute high-level-language programs without the need to compile them into machine language.
Although compiled programs execute faster than interpreted programs, interpreters are
popular in program-development environments, in which programs are changed frequently as new
features are added and errors corrected. Once a program is developed, a compiled version can be produced to run most efficiently.
Next we will look at computer programming using a popular, easy-to-learn programming language called BASIC, where we will apply the concepts described in Section 7 – Problem Solving.
