CSC101 - Introduction To Computer
IN COLLABORATION WITH
EDUCATIONAL CONSULT
ADEALU IYANA-IPAJA STUDY CENTRE
STUDY PACK
ON
INTRODUCTION TO COMPUTER (CSC101 )
PREPARED BY
PRINCE ASADE MOJEED ADENIYI (B.Sc. M.Sc. Computer Science, MBA Finance)
MODULE ONE
1.0 History and Development of Computer Technology
1.1 Introduction
Living life, let alone running a business, without computers seems almost unthinkable today. In
2011, the United Nations declared access to the internet a basic human right because of the
difference it makes in how people live. With business, it is arguable that companies that use computers
best are most inclined to succeed simply because the possibilities and versatility of technology
today are so expansive. The advantages of computers in business go far beyond better inventory
tracking and record keeping.
The term computer is derived from the Latin word 'computare', which means to calculate. A
computer is an electronic device that can systematically and sequentially follow a set of
instructions, called a program, to perform high-speed arithmetic and
logical operations on data. Nothing epitomizes modern life better than the computer. In other
words, a computer is an advanced electronic device that takes raw data as input from the user,
processes these data under the control of a set of instructions (called a program), gives the result
(output), and saves the output for future use. It can perform both numerical and non-numerical
(arithmetic and logical) operations.
Today computers do much more than simply compute: supermarket scanners calculate our grocery
bill while keeping store inventory, and Automatic Teller Machines (ATMs) let us conduct banking
transactions from virtually anywhere in the world. But where did all this technology come from
and where is it heading? To fully understand and appreciate the impact computers have on
accounting and the promises they hold for the future, it is important to understand their historical
evolution.
1.2 Evolution of Computer
The abacus, which emerged about 5,000 years ago in Asia and is still in use today in places like
China, Japan and Russia, may be considered the first computer. The word Abacus is said to have
been derived from the Greek word ‘abax’, meaning ‘calculating board’ or ‘calculating table’. This
device allows users to make computations using a system of sliding beads arranged on a rack.
Early merchants used the abacus to record trading transactions. But as the use of paper and pencil
spread, particularly in Europe, the abacus lost its importance. It took nearly 12 centuries, however,
for the next significant advance in computing devices to emerge.
Figure I. Abacus Device
In 1642, Blaise Pascal (1623-1662), the 18-year-old son of a French tax collector, invented what
he called a numerical wheel calculator to help his father with his duties. This brass rectangular
box, popularly known as the Pascaline, used eight movable dials to add sums of up to eight
figures.
Pascal's device used a base of ten to accomplish this. The drawback to the Pascaline, however, was
its limitation to addition and its inability to perform other arithmetic operations of multiplication,
division and subtraction.
Figure II: Pascaline
In 1694, a German mathematician and philosopher, Gottfried Wilhelm von Leibniz (1646-1716),
improved on the Pascaline by creating a machine that could also multiply. Like its predecessor,
Leibniz's mechanical multiplier worked by a system of gears and dials. Partly by studying Pascal's
original notes and drawings, Leibniz was able to refine his machine.
It was not until 1820, however, that mechanical calculators
gained widespread use. Charles Xavier Thomas de Colmar, a Frenchman, invented a machine that
could perform the four basic arithmetic functions. Colmar's mechanical calculator, the
arithmometer, presented a more practical approach to computing because it could add, subtract,
multiply and divide. With its enhanced versatility, the arithmometer was widely used up until the
First World War. Although later inventors refined Colmar's calculator, together with fellow
inventors Pascal and Leibniz, he helped define the age of mechanical computation.
Figure III. Leibniz Machine
The real beginnings of computers as we know them today, however, lay with an English
mathematics professor, Charles Babbage (1791-1871), who is called the "Grandfather" of the
computer. Frustrated by the many errors he found while examining calculations for the Royal
Astronomical Society, Babbage began the automation of computation by 1812. Babbage noticed a
natural harmony between machines and mathematics: machines were best at performing tasks
repeatedly without mistake; while mathematics, particularly the production of mathematics tables,
often required the simple repetition of steps. The problem centered on applying the ability of
machines to the needs of mathematics. Babbage's first attempt at solving this problem was in 1822
when he proposed a machine to perform differential equations, called a Difference Engine. After
working on the Difference Engine for 10 years, Babbage was suddenly inspired to begin work on
the first general-purpose computer, which he called the Analytical Engine. Babbage's assistant,
Augusta Ada King, Countess of Lovelace (1815-1852), was instrumental in the machine's design.
One of the few people who understood the Engine's design as well as Babbage, she helped revise
plans, secure funding from the British government, and communicate the specifics of the
Analytical Engine to the public. Also, Lady Lovelace's fine understanding of the machine allowed
her to create the instruction routines to be fed into the computer, making her the first female
computer programmer. In the 1980's, the U.S. Defense Department named a programming
language ADA in her honor.
Figure IV. Babbage Machine (the Analytical Engine)
Babbage's engine outlined the basic elements of a modern general purpose computer and was a
breakthrough concept. Consisting of over 50,000 components, the basic design of the Analytical
Engine included input devices in the form of perforated cards containing operating instructions
and a "store" for memory of 1,000 numbers of up to 50 decimal digits long. It also contained a
"mill" with a control unit that allowed processing instructions in any sequence, and output devices
to produce printed results.
In 1889, an American inventor, Herman Hollerith (1860-1929) found a faster way to compute the
U.S. census. The previous census in 1880 had taken nearly seven years to count and with an
expanding population, the bureau feared it would take 10 years to count the latest census. Unlike
Babbage's idea of using perforated cards to instruct the machine, Hollerith's method used cards to
store data, which he fed into a machine that compiled the results mechanically. Instead
of ten years, census takers compiled their results in just six weeks with Hollerith's machine. In
addition to their speed, the punch cards served as a storage method for data and they helped reduce
computational errors. Hollerith brought his punch card reader into the business world, founding
Tabulating Machine Company in 1896, later to become International Business Machines (IBM) in
1924 after a series of mergers.
Figure V. Hollerith Machine
In the ensuing years, several engineers made other significant advances. Vannevar Bush (1890-
1974) developed a calculator for solving differential equations in 1931. The machine could solve
complex differential equations that had long left scientists and mathematicians baffled. Similarly,
John V. Atanasoff, a professor at Iowa State College (now called Iowa State University) and his
graduate student, Clifford Berry, envisioned an all-electronic computer that applied Boolean
algebra to computer circuitry. By extending this concept to electronic circuits in the form of on or
off, Atanasoff and Berry had developed the first all-electronic computer by 1940. Their project,
however, lost its funding, and their work was overshadowed by similar developments by other
scientists.
1.3 Generations of Computers
The history of the computer goes back several decades, and there are five definable
generations of computers. Each generation is defined by a significant technological development
that fundamentally changed how computers operate, leading to more compact, less expensive, but
more powerful, efficient and robust machines. The following are the five generations of computers.
First Generation (1940-1955) – Vacuum Tubes
With the onset of the Second World War, governments sought to develop computers to exploit their
potential strategic importance. This increased funding for computer development projects
hastened technical progress. By 1941, German engineer Konrad Zuse had developed a computer,
the Z3, to design airplanes and missiles. The Allied forces, however, made greater strides in
developing powerful computers. In 1943, the British completed a secret code-breaking computer
called Colossus to decode German messages. The Colossus's impact on the development of the
computer industry was rather limited for two important reasons.
First, Colossus was not a general-purpose computer; it was only designed to decode secret
messages. Second, the existence of the machine was kept secret until decades after the war.
American efforts produced a broader achievement. Howard H. Aiken (1900-1973), a Harvard
engineer working with IBM, succeeded in producing a large-scale electromechanical calculator,
the Harvard Mark I, by 1944. The purpose of the machine was to create ballistic charts for the U.S. Navy.
Figure VI. Colossus Machine
The Mark I was about half as long as a football field and contained about 500 miles of wiring. The machine
was slow (taking 3-5 seconds per calculation) and inflexible (in that sequences of calculations
could not change), but it could perform basic arithmetic as well as more complex equations.
Another computer development spurred by the war was the Electronic Numerical Integrator and
Computer (ENIAC), produced by a partnership between the U.S. government and the University
of Pennsylvania. Consisting of 18,000 vacuum tubes, 70,000 resistors and 5 million soldered
joints, the computer was such a massive piece of machinery that it consumed 160 kilowatts of
electrical power, enough energy to dim the lights in an entire section of Philadelphia. Developed
by John Presper Eckert (1919-1995) and John W. Mauchly (1907-1980), ENIAC, unlike the
Colossus, was a general-purpose computer that computed at high speed.
Figure VII: ENIAC Computer
In the mid-1940's John von Neumann (1903-1957) joined the University of Pennsylvania team and
designed the Electronic Discrete Variable Automatic Computer (EDVAC) in 1945 with a memory
to hold both a stored program as well as data. The key element to the von Neumann architecture
was the central processing unit, which allowed all computer functions to be coordinated through
a single source. In 1951, the UNIVAC I (Universal Automatic Computer), built by Remington
Rand, became one of the first commercially available computers to take advantage of these
advances.
Figure VIII: EDVAC Computer
These early computers used vacuum tubes as circuitry and magnetic drums for memory; as a result
they took up entire rooms and cost a fortune to run. The vacuum tubes were inefficient, consumed
huge amounts of electricity and generated a lot of heat, which caused ongoing breakdowns. These
first generation computers relied on 'machine language' (the most basic programming language,
the only one computers can understand directly) and were limited to solving one problem at a time.
Input was based on punched cards and paper tape, and output came out on printouts. The two
notable machines of this era were the UNIVAC and ENIAC machines – the UNIVAC was the first
ever commercial computer; it was purchased in 1951 by the US Census Bureau.
Second Generation Computers (1956-1963) – Transistors
By 1948, the invention of the transistor greatly changed the computer's development. The
transistor replaced the large, cumbersome vacuum tube in televisions, radios and computers. As
a result, the size of electronic machinery has been shrinking ever since. The transistor was at work
in the computer by 1956. Coupled with early advances in magnetic-core memory, transistors led
to second generation computers that were smaller, faster, more reliable and more energy-efficient
than their predecessors. The first large-scale machines to take advantage of this transistor
technology were early supercomputers, such as the Stretch by IBM and the LARC by Sperry-Rand.
These computers, developed for atomic energy
laboratories, could handle an enormous amount of data, a capability much in demand by atomic
scientists. The machines were costly, however, and tended to be too powerful for the business
sector's computing needs, thereby limiting their attractiveness. Only two LARCs were ever
installed: one in the Lawrence Radiation Labs in Livermore, California and the other at the U.S.
Navy Research and Development Center in Washington, D.C. Second generation computers
replaced machine language with assembly language, allowing abbreviated programming codes to
replace long, difficult binary codes.
Throughout the early 1960's, there were a number of commercially successful second generation
computers used in business, universities, and government from companies such as Burroughs,
Control Data, Honeywell, IBM, Sperry-Rand, and others. These second generation computers
contained all the components associated with the modern day computer: printers, tape storage,
disk storage, memory, operating systems, and stored programs. One important example was the
IBM 1401, which was universally accepted throughout industry, and is considered by many to be
the Model T of the computer industry. By 1965, most large businesses routinely processed financial
information using second generation computers. More sophisticated high-level languages such as
COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into
common use during this time, and have expanded to the current day. These languages replaced
binary machine code with words, sentences, and mathematical formulas, making it much easier to
program a computer. New types of careers (programmer, analyst, and computer systems expert)
and the entire software industry began with second generation computers. The second generation
computers were smaller, faster, cheaper and less heavy on electricity use. They still relied on
punched cards for input and printouts for output. Transistor-driven machines were the first computers
to store instructions in their memories – moving from magnetic drum to magnetic core technology.
Third Generation Computers (1964-1971) – Integrated Circuits
Though transistors were clearly an improvement over the vacuum tube, they still generated a great
deal of heat, which damaged the computer's sensitive internal parts. The quartz rock eliminated
this problem. Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC)
in 1958. The IC combined three electronic components onto a small silicon disc, which was made
from quartz. Scientists later managed to fit even more components on a single chip, called a
semiconductor. As a result, computers became ever smaller as more components were squeezed
onto the chip. Another third-generation development included the use of an operating system that
allowed machines to run many different programs at once with a central program that monitored
and coordinated the computer's memory.
Fourth Generation (1972–2010) – Microprocessors
After the integrated circuits, the only place to go was down – in size. Large Scale Integration (LSI)
could fit hundreds of components onto one chip. By the 1980's, Very Large Scale Integration
(VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-Large Scale Integration
(ULSI) increased that number into the millions, thus helping to diminish the size and price of
computers. It also increased their power, efficiency and reliability. The Intel 4004 chip, developed
in 1971, took the integrated circuit one step further by locating all the components of a computer
(central processing unit, memory, and input and output controls) on a minuscule chip. Whereas
previously the integrated circuit had had to be manufactured to fit a special purpose, now one
microprocessor could be manufactured and then programmed to meet any number of demands.
Soon everyday household items such as microwave ovens, television sets and automobiles with
electronic fuel injection incorporated microprocessors.
Such condensed power allowed everyday people to harness a computer's power. Computers were no
longer developed exclusively for large business or government contracts. By the mid-1970's,
computer manufacturers sought to bring computers to general consumers. These microcomputers
came complete with user-friendly software packages that offered even non-technical users an
array of applications, most popularly word processing and spreadsheet programs. Pioneers in this
field were Commodore, Radio Shack and Apple Computers. In the early 1980's, arcade video
games such as Pac Man and home video game systems such as the Atari 2600 ignited consumer
interest for more sophisticated, programmable home computers.
In 1981, IBM introduced its Personal Computer (PC) for use in the home, office and schools. The
1980's saw an expansion in computer use in all three arenas as clones of the IBM PC made the
personal computer even more affordable. The number of personal computers in use more than
doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were being
used. Computers continued their trend toward a smaller size, working their way down from
desktop to laptop computers (which could fit inside a briefcase) to palmtop (able to fit inside a
breast pocket). In direct competition with IBM's PC was Apple's Macintosh line, introduced in
1984. Notable for its user-friendly design, the Macintosh offered an operating system that allowed
users to move screen icons instead of typing instructions. Users controlled the screen cursor using
a mouse, a device that mimicked the movement of one's hand on the computer screen.
As computers became more widespread in the workplace, new ways to harness their potential
developed. As smaller computers became more powerful, they could be linked together, or
networked, to share memory space, software and information, and to communicate with each other. As
opposed to a mainframe computer, networked computers allowed individual computers to form
electronic co-ops. Using either direct wiring, called a Local Area Network (LAN), or telephone
lines, these networks could reach enormous proportions. A global web of computer circuitry, the
Internet, for example, links computers worldwide into a single network of information. The two
most popular uses of networks today are the Internet and electronic mail, or e-mail,
which allows users to send messages through networked terminals across the office or across the
world.
What filled a room in the 1940s now fits in the palm of the hand. The Intel chip housed thousands
of integrated circuits. The year 1981 saw the first ever computer (the IBM PC) specifically designed for
home use and 1984 saw the Macintosh introduced by Apple. Microprocessors even moved beyond
the realm of computers and into an increasing number of everyday products. The increased power
of these small computers meant they could be linked, creating networks, which ultimately led to
the development, birth and rapid evolution of the Internet. Other major advances during this
period have been the Graphical user interface (GUI), the mouse and more recently the astounding
advances in lap-top capability and hand-held devices.
Fifth Generation (2011 and Beyond) – Artificial Intelligence
Many advances in the science of computer design and technology are coming together to enable
the creation of fifth-generation computers. One such engineering advance is parallel
processing, which replaces the single central processing unit design with a system harnessing the
power of many CPUs working as one. Another advance is superconductor technology, which allows
the flow of electricity with little or no resistance, greatly improving the speed of information flow.
Using recent engineering advances, computers are able to accept spoken word instructions (voice
recognition) and imitate human reasoning. The ability to translate a foreign language is also
moderately possible with fifth generation computers. Computers today have some attributes of fifth
generation computers. For example, expert systems assist doctors in making diagnoses by
applying the problem-solving steps a doctor might use in assessing a patient's needs. It will take
several more years of development before expert systems are in widespread use.
Computer devices with artificial intelligence are still in development, but some of these
technologies are beginning to emerge and be used, such as voice recognition. Looking to the future,
computers will be radically transformed again by quantum computation, molecular computing and
nanotechnology. The essence of the fifth generation will be using these technologies to ultimately
create machines which can process and respond to natural language, and which have the capability
to learn and organise themselves.
MODULE TWO
COMPONENTS OF A COMPUTER SYSTEM
Physical Components of a Computer System
The physical components of a computer system are:
Input unit
The central processing unit (CPU), made up of the control unit, the arithmetic and logic unit, and the registers
Memory unit
Output unit
1. Input Unit:
Computers need to receive data and instructions to solve any problem. The input unit basically
links the external world or environment to the computer system. It consists of one or more input
devices. The keyboard and mouse are the most commonly used input devices.
2. The Central Processing Unit (CPU):
The CPU performs the following functions:
(i) In connection with the operating system, it coordinates all of the computer's activities, for
example, retrieving files from the disk, interpreting data and commands entered from the keyboard
(input unit) and sending data to output devices such as the printer.
(ii) It performs arithmetic calculations such as addition and subtraction using the binary system of
mathematics.
(iii) It performs logical operations such as equal-to, greater-than and less-than comparisons.
Control Unit:
The control unit performs the following functions −
Controls all the activities of the computer
Supervises and directs the flow of data within the CPU
Transfers data to the Arithmetic and Logic Unit
Transfers results to memory
Fetches results from memory for the output devices
Registers:
The CPU consists of several temporary storage units, which are used to store instructions &
intermediate data, which may be generated during processing.
3. Memory Unit:
Data and instructions given to the computer, as well as results produced by the computer, are stored
in this unit. The unit of memory is the byte: 1 byte = 8 bits. The memory unit is the section of the
system where data, instructions and output are stored or held during processing, or for onward
transfer to the control unit, the ALU or an output device.
The data and instructions required for processing have to be stored in the memory unit before
actual processing starts. Similarly, the results generated have to be preserved before they are
displayed. The memory unit thus provides space to store input data, intermediate results and the
final output generated, e.g. hard disks, pen drives, floppy disks.
Data and programs are held in internal storage until required. The memory of a computer can
be classified into three (3), namely: primary memory, which is divided into Random Access
Memory (RAM) and Read Only Memory (ROM); secondary memory; and tertiary memory.
Random Access Memory (RAM):
RAM stores data temporarily while a program is being run. It enables information to be erased or
overwritten while the user is processing data on the computer. RAM is volatile; it loses its
contents in the event of a power failure. It enables the user to both read and write information
whenever required. Its storage capacity is limited; it is the main, primary or user memory.
Read Only Memory (ROM):
ROM is static, non-volatile memory. It is a permanent storage medium on chips that contain
instructions and data which are used permanently by the computer. Users can read it but cannot
change it. Data is permanently recorded on the memory chips, and turning the computer off does
not affect it. ROM is generally used to store permanent information needed by the computer; for
example, it contains the initial program code, the instructions the computer must follow to start up
when you first turn it on. The content of ROM can only be accessed and read by users; it cannot
be modified or erased at user level. Unlike RAM, it is not volatile in nature, and it provides fast
access to information.
Tertiary Memories:
Tertiary memories are located outside the processor and are not connected directly to it. They
support off-line processing of data and have a slower access time than secondary memory.
Examples are punch cards, computer (flow line) paper, magnetic tape, floppy disks, magnetic disk
packs, flash drives, etc.
4. Output Unit:
It is used to print or display the result obtained by the execution of a program. Whenever the user
wants output from the computer, the control unit sends a signal to this unit to be ready to accept
processed data from the memory and to display it. E.g. Monitor, Printer, Speakers, etc.
The Peripheral Units
The Peripheral Units are of three types:
(i) Input devices
(ii) Output devices
(iii) Auxiliary storage devices.
(i) Input devices
These are the computer devices, which are used to enter data and information into the computer
system. Some examples of input devices are:
i. The Keyboard:
This is one of the most widely used input devices. It allows data to be keyed directly into the
computer from the key terminals. The computer keyboard resembles the typewriter keyboard but
differs from it in many ways; some keyboards have keys that can perform the functions of a mouse.
ii. Mouse:
This is the second most commonly used input device. The device is called a "mouse" because its
shape looks so much like that of the small rodent. When the mouse is dragged on the mouse pad,
it moves the cursor, and it can move the cursor in any direction. Icons and characters are quickly
located with the mouse.
iii. Light Pen:
The light pen, as the name sounds and connotes, is a PEN that works by light or electric current.
It works in conjunction with a special monitor called a GRAPHICS VISUAL DISPLAY
UNIT (GVDU). The GVDU has special hardware and software features that are capable of
detecting the light coming from the "ball point" end of the light pen. If you have ever played on
sand, you must have drawn things on it; the GVDU serves as the sand while the light pen serves
as your finger.
iv. Scanner:
The scanner is a camera of sorts, capable of taking a photograph of an object and then passing it
into the computer memory or storage for further processing. When this is done, the computer can
modify the picture in many ways. The scanner is connected to the system unit just as the other
peripherals are. A scanner can capture a section of a photograph by selecting the required area out
of the whole – something a photocopier cannot do.
v. Voice Input:
The quality of voice input data can never be the same as the original voice, owing to the conversion
to binary code done by the digitizing sound sampler. The wavelength and wave height determine
the generated voice.
vi. Card Reader:
This is another input device, which works by recognizing holes in a small card.
(ii) OUTPUT DEVICES
Output devices are the tools used to get out the result of the processed data, called output
(information), from the computer system. Output devices include: the monitor (VDU), printer,
plotter, speaker and microfilm.
i. Monitor
The monitor displays the output or information on its screen; such a copy is called soft copy. Its
shape is very close to that of a television. If a copy of what can be seen on the screen is required
on paper, it can be obtained from a printer, a plotter or a turtle. A copy on paper of programs from
the CPU is called hard copy.
ii. Printer
The printer puts the output from the computer system unit on paper. The paper is generally
called a flow line, and the printed copy is called hard copy. Printers can be classified into two categories:
(i) Impact printers and
(ii) Non–Impact printers.
(a) Impact Printers
These printers print characters by physical impact; they use carbon paper or ribbons to transfer the
impression onto the paper or flow line. A hammer strikes each character head against the carbon
paper to make the character visible, e.g. line printers, daisy wheel printers and dot matrix printers.
(b) Non-Impact Printers
These have no character heads and do not use carbon paper; instead they form characters using
ink or toner applied by electronic means, e.g. ink jet printers, thermal printers and laser printers.
iii. Plotters
Plotters are the right devices for drawing pictures and graphs, and are usually called graph
plotters. Plotters use pens that can move in different directions; however, there are plotters that
use electrostatic printing rather than pen and ink. Colour graph plotters use many pens, one for
each colour, instead of a single pen.
iv. Speaker
The output device that provides sound information is the speaker. The system unit output, which
is in digital form, passes through a “digitizing sound sampler” which produces a waveform. The
waveform is connected to a vibrator, which produces sound. The sound we hear is nothing but
vibration received by the eardrum.
v. Computer Output on Microfilm (COM):
Modern, high resolution photographic film is capable of storing large quantities of data in readable
character form in a relatively small space, and is consequently useful as an output medium for data
that is required to be stored (archived) or sent by post. Photographic output takes two forms:
microfilm and microfiche.
(iii) Auxiliary Storage or Backing Storage
Auxiliary Storage is used to provide mass storage for programs and files which are not currently
being operated on but which will be transferred to the main storage when required. The devices
include microfilm, magnetic tape unit, magnetic disk unit and floppy disk unit.
Characteristics of the computer system
The characteristics of the computer system are as follows −
Speed
A computer works with much higher speed and accuracy compared to humans while performing
mathematical calculations. Computers can process millions (1,000,000) of instructions per second.
The time taken by computers for their operations is microseconds and nanoseconds. Some
calculations that would have taken hours and days to complete otherwise, can be completed in a few
seconds using the computer. For example, calculation and generation of salary slips of thousands of
employees of an organization, weather forecasting that requires analysis of a large amount of data
related to temperature, pressure and humidity of various places, etc.
Accuracy
Computers perform calculations with 100% accuracy. Errors may occur due to data inconsistency or
inaccuracy. For example, the computer can accurately give the result of division of any two numbers
up to 10 decimal places.
Diligence
A computer can perform millions of tasks or calculations with the same consistency and accuracy.
It doesn’t feel any fatigue or lack of concentration. Its memory also makes it superior to
human beings. When used for a long period of time, the computer does not get tired or fatigued. It
can perform long and complex calculations with the same speed and accuracy from the start till the
end.
Versatility
Versatility refers to the capability of a computer to perform different kinds of work with the same
accuracy and efficiency. At one moment you can use the computer to prepare a letter, and in the
next moment you may play music or print a document.
Reliability
A computer is reliable as it gives consistent result for similar set of data i.e., if we give same set of
input any number of times, we will get the same result.
Automation
Computer performs all the tasks automatically i.e. it performs tasks without manual intervention.
Memory
A computer has built-in memory called primary memory where it stores data. Secondary storage
comprises removable devices such as CDs, pen drives, etc., which are also used to store data.
Computers have several limitations too. A computer can only perform tasks that it has been
programmed to do; it cannot do any work without instructions from the user. It executes
instructions as specified by the user and does not take its own decisions.
MODULE THREE
PROGRAMMING METHODOLOGY
What is Computer Programming Methodology?
A methodology is a system of methods: an orderly, integrated collection of methods, tools and
notations. When programs are developed to solve real-life problems like inventory management,
payroll processing, student admissions, examination result processing, etc., they tend to be huge
and complex. The approach to analyzing such complex problems, planning for software
development and controlling the development process is called programming methodology. A
computer program is a series of instructions, written in the language of the computer, which
specifies the processing operations that the computer is to carry out on data. It is a coded list of
instructions that "tells" a computer how to perform a set of calculations or operations. Programming
is the process of producing a computer program, and it involves the following activities: writing
the program, compiling it, running it and debugging it. The whole process is repeated until the
program is finished.
1.2. Problem Solving with Computer:
There are a number of concepts relevant to problem solving using computers. Two particular
concepts are computability and complexity. A problem is said to be computable if a machine can,
in principle, perform it; some mathematical functions are not computable. The complexity of a
problem is measured in terms of the resources required: time and storage. The steps involved in
solving a problem using a computer program are as follows:
Step 1. Define the Problem:
State in the clearest possible terms the problem you wish to solve. It is impossible to write a computer
program to solve a problem that has been ambiguously or imprecisely stated.
Step 2. Devise an Algorithm:
An algorithm is a step-by-step procedure for solving the problem. Each of the steps must be a simple
operation which the computer is capable of doing. A universally-used representation of an algorithm
is a flowchart or flow diagram, in which boxes representing procedural steps are connected by arrows
indicating the proper sequence of the steps. In many problems you will need to define a mathematical
procedure, expressed in strictly numerical terms since the use of computers to do higher level analytic
processes such as solving algebraic equations or doing integrals in a non-numerical fashion is
relatively limited. The algorithm can also be represented using pseudocode.
Step 3. Code the Program: The steps in an algorithm, translated into a series of instructions to the
computer, comprise the computer program. There are many languages in which computer programs
can be coded, each with its own syntax, vocabulary, and special features.
Step 4. Debug the Program: Most programs of any length don't work properly the first time they
are run and must therefore be "debugged". Often, during the debugging phase, errors and ambiguities
in the program are corrected.
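To make these four steps concrete, here is a minimal sketch in C++ (the language this pack later uses for its examples). The problem chosen – averaging n numbers – and all the names in it are illustrative assumptions, not part of the original pack.

#include <iostream>
using namespace std;

// Step 1 (Define): compute the average of n numbers entered by the user.
// Step 2 (Algorithm): read n; read n values while accumulating their sum;
//                     divide the sum by n; print the result.
// Step 3 (Code): the algorithm translated into C++ below.
// Step 4 (Debug): compile and run with known inputs (e.g. 2, 4, 6 -> 4).
int main() {
    int n;
    cout << "How many numbers? ";
    cin >> n;
    if (n <= 0) {                 // guard against meaningless input
        cout << "n must be positive" << endl;
        return 1;
    }
    double sum = 0;
    for (int i = 0; i < n; i++) {
        double x;
        cin >> x;
        sum += x;                 // accumulate the running total
    }
    cout << "Average = " << sum / n << endl;
    return 0;
}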
Types of Programming Methodologies
There are many types of programming methodologies prevalent among software developers:
Procedural Programming
Problem is broken down into procedures, or blocks of code that perform one task each.
All procedures taken together form the whole program. It is suitable only for small programs that
have a low level of complexity.
Example: For a calculator program that does addition, subtraction, multiplication, division, square
root and comparison, each of these operations can be developed as a separate procedure. In the
main program, each procedure would be invoked on the basis of the user's choice, as sketched below.
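A minimal procedural sketch of such a calculator in C++ – the function names and fixed sample values are assumptions for illustration:

#include <iostream>
using namespace std;

// Each operation is its own procedure (a block of code that does one task).
double add(double a, double b)        { return a + b; }
double subtractOp(double a, double b) { return a - b; }
double multiply(double a, double b)   { return a * b; }
double divideOp(double a, double b)   { return (b != 0) ? a / b : 0; } // crude guard

int main() {
    double a = 12, b = 4;
    char choice = '/';                      // in a full program, read from the user
    switch (choice) {                       // the main program only dispatches
        case '+': cout << add(a, b);        break;
        case '-': cout << subtractOp(a, b); break;
        case '*': cout << multiply(a, b);   break;
        case '/': cout << divideOp(a, b);   break;
    }
    cout << endl;
    return 0;
}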
Object-oriented Programming
Here the solution revolves around entities or objects that are part of the problem. The solution deals
with how to store data related to the entities, how the entities behave and how they interact with
each other to give a cohesive solution.
Example: If we have to develop a payroll management system, we will have entities like
employees, salary structure, leave rules, etc. around which the solution must be built.
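A sketch of how two such entities might look as C++ classes; the class and member names are illustrative assumptions:

#include <iostream>
#include <string>
using namespace std;

// Each real-world entity becomes a class that stores its own data
// and defines its own behaviour.
class SalaryStructure {
public:
    double basic;
    double allowance;
    double gross() const { return basic + allowance; }   // behaviour of the entity
};

class Employee {
public:
    string name;
    SalaryStructure salary;     // entities interact by containing/using each other
    void printSlip() const { cout << name << ": " << salary.gross() << endl; }
};

int main() {
    Employee e{"A. Bello", {50000, 8000}};
    e.printSlip();              // prints "A. Bello: 58000"
    return 0;
}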
Functional Programming
Here the problem, or the desired solution, is broken down into functional units. Each unit performs
its own task and is self-sufficient. These units are then stitched together to form the complete
solution.
Example: A payroll processing can have functional units like employee data maintenance, basic
salary calculation, gross salary calculation, leave processing, loan repayment processing, etc.
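A sketch of the same payroll idea as self-sufficient functions stitched together; the unit names and figures are assumptions:

#include <iostream>
using namespace std;

// Each functional unit depends only on its inputs and returns a result.
double basicSalary(double dailyRate, int daysWorked) { return dailyRate * daysWorked; }
double grossSalary(double basic, double bonus)       { return basic + bonus; }
double netSalary(double gross, double loanRepayment) { return gross - loanRepayment; }

int main() {
    // The complete solution is the composition of the units.
    double net = netSalary(grossSalary(basicSalary(5000, 22), 15000), 8000);
    cout << "Net salary: " << net << endl;   // 5000*22 + 15000 - 8000 = 117000
    return 0;
}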
Logical Programming
Here the problem is broken down into logical units rather than functional units.
Example: In a school management system, users have very defined roles like class teacher, subject
teacher, lab assistant, coordinator, academic in-charge, etc. So the software can be divided into
units depending on user roles. Each user can have different interface, permissions, etc.
Software developers may choose one or a combination of these methodologies to develop a
software. Note that in each of the methodologies discussed, the problem has to be broken down
into smaller units. To do this, developers use either of the following two approaches:
Top-down approach
Bottom-up approach
Top-down Approach
In the top-down approach, the problem is broken down into smaller units, which may be further
broken down into still smaller units; each unit is called a module, and each module is self-sufficient,
with everything necessary to perform its task. Following this modular approach, for example,
different modules can be created while developing a payroll processing program.
Bottom-up Approach
In bottom-up approach, system design starts with the lowest level of components, which are then
interconnected to get higher level components. This process continues till a hierarchy of all system
components is generated. However, in real-life scenarios it is very difficult to know all the lowest
level components at the outset, so the bottom-up approach is used only for very simple problems.
Let us look, for example, at the components of a calculator program.
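The original pack illustrated this with a diagram that has not survived; as a stand-in, here is a hedged C++ sketch of building a calculator upward from its lowest-level components (all names assumed):

#include <iostream>
using namespace std;

// Lowest level: primitive input handling.
int readNumber() { int x; cin >> x; return x; }

// Next level: individual operations built on the primitives.
int add(int a, int b)      { return a + b; }
int subtract(int a, int b) { return a - b; }

// Highest level: the calculator assembled from the components below it.
int main() {
    int a = readNumber();
    int b = readNumber();
    cout << "sum = " << add(a, b) << ", difference = " << subtract(a, b) << endl;
    return 0;
}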
Programming Methodologies ─ Understanding the Problem
A typical software development process follows these steps:
Requirement gathering
Problem definition
System design
Implementation
Testing
Documentation
Training and support
Maintenance
The first two steps assist the team in understanding the problem, the most crucial first step towards
getting a solution. The person responsible for gathering requirements, defining the problem and
designing the system is called the system analyst.
Requirement Gathering
Usually, clients or users are not able to clearly define their problems or requirements; they have
only a vague idea of what they want. So system developers need to gather client requirements
to understand the problem that needs to be resolved, or what needs to be delivered. Detailed
understanding of the problem is possible only by first understanding the business area for which
the solution is being developed. Some key questions that help in understanding a business include:
What is being done?
How is it being done?
What is the frequency of a task?
What is the volume of decisions or transactions?
What are the problems being encountered?
Some techniques that help in gathering this information are:
Interviews
Questionnaires
Studying existing system documents
Analyzing business data
System analysts need to create a clear, concise but thorough requirements document in order to
identify SMART – specific, measurable, agreed upon, realistic and time-based – requirements. A
failure to do so results in:
Incomplete problem definition
Incorrect program goals
Re-work to deliver required outcome to client
Increased costs
Delayed delivery
Due to the depth of information required, requirement gathering is also known as detailed
investigation.
Problem Definition
After gathering and analyzing the requirements, the problem statement must be stated clearly. The
problem definition should unambiguously state what problem or problems need to be solved.
Having a clear problem statement is necessary to:
Define project scope
Keep the team focused
Keep the project on track
Validate that desired outcome was achieved at the end of project
Identifying the Solution
Often, coding is thought to be the most essential part of any software development process.
However, coding is just a part of the process and may actually take the minimum amount of time
if the system is designed correctly. Before the system can be designed, a solution must be identified
for the problem at hand.
The first thing to be noted about designing a system is that initially the system analyst may come
up with more than one solution. But the final solution or product can be only one. In-depth
analysis of data gathered during the requirement gathering phase can help in coming to a unique
solution. Correctly defining the problem is also crucial for getting to the solution.
When faced with the problem of multiple solutions, analysts go for visual aids like flowcharts,
data flow diagrams, entity relationship diagrams, etc. to understand each solution in depth.
Flowcharting
Flowcharting is the process of illustrating workflows and data flows in a system through symbols
and diagrams. It is an important tool to assist the system analyst in identifying a solution to the
problem. It depicts the components of the system visually.
These are the advantages of flowcharting −
They act as blueprints for actual program coding
Flowcharts are important for program documentation
Flowcharts are an important aid during program maintenance
These are the disadvantages of flowcharting −
Complex logic cannot be depicted using flowcharts
In case of any change in logic or data/work flow, flowchart has to be redrawn completely
Data Flow Diagram
Data flow diagram or DFD is a graphical representation of data flow through a system or sub-
system. Each process has its own data flow and there are levels of data flow diagrams. Level 0
shows the input and output data for the whole system. Then the system is broken down into
modules, and a level 1 DFD shows the data flow for each module separately. Modules may further be
broken down into sub-modules if required, and a level 2 DFD drawn.
Pseudocode
After the system is designed, it is handed over to the project manager for implementation, i.e.
coding. The actual coding of a program is done in a programming language, which can be
understood only by programmers who are trained in that language. However, before the actual
coding occurs, the basic operating principles, work flows and data flows of the program are written
using a notation similar to the programming language to be used. Such a notation is
called pseudocode.
Here is an example of pseudocode written close to C++. The programmer just needs to translate
each statement into C++ syntax to get the program code.
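The example itself has not survived in this copy; here is a hedged stand-in – pseudocode for reporting the larger of two numbers, written so that each line maps onto one C++ statement:

read a
read b
if a > b
    print a " is larger"
else
    print b " is larger"

A possible statement-by-statement translation into C++:

#include <iostream>
using namespace std;

int main() {
    int a, b;
    cin >> a;                          // read a
    cin >> b;                          // read b
    if (a > b)                         // if a > b
        cout << a << " is larger";     //     print a " is larger"
    else
        cout << b << " is larger";     //     print b " is larger"
    cout << endl;
    return 0;
}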
Identifying Mathematical Operations
All instructions to the computer are finally implemented as arithmetic and logical operations at
machine level. These operations are important because they −
Occupy memory space
Take time in execution
Determine software efficiency
Affect overall software performance
System analysts try to identify all major mathematical operations while identifying the unique
solution to the problem at hand.
Applying Modular Techniques
A real-life problem is complex and big. If a monolithic solution is developed it poses these
problems −
Difficult to write, test and implement one big program
Modifications after the final product is delivered are close to impossible
Maintenance of the program is very difficult
One error can bring the whole system to a halt
To overcome these problems, the solution should be divided into smaller parts called modules.
The technique of breaking down one big solution into smaller modules for ease of development,
implementation, modification and maintenance is called modular technique of programming or
software development.
Advantages of Modular Programming
Modular programming offers these advantages −
Enables faster development as each module can be developed in parallel
Modules can be re-used
As each module is to be tested independently, testing is faster and more robust
Debugging and maintenance of the whole program are easier
Modules are smaller and have a lower level of complexity, so they are easy to understand
Identifying the Modules
Identifying modules in a software is a mind-boggling task, because there cannot be one correct way
of doing so. Here are some pointers to identifying modules −
If data is the most important element of the system, create modules that handle related data.
If service provided by the system is diverse, break down the system into functional
modules.
If all else fails, break down the system into logical modules as per your understanding of
the system during requirement gathering phase.
For coding, each module has to be broken down again into smaller modules for ease of
programming. This can again be done using the three tips shared above, combined with specific
programming rules. For example, for an object-oriented programming language like C++ or Java,
each class with its data and methods could form a single module.
Step-by-Step Solution
To implement the modules, the process flow of each module must be described in a step-by-step
fashion. The step-by-step solution can be developed using algorithms or pseudocode. Providing a
step-by-step solution offers these advantages −
Anyone reading the solution can understand both problem and solution.
It is equally understandable by programmers and non-programmers.
During coding each statement simply needs to be converted to a program statement.
It can be part of documentation and assist in program maintenance.
Micro-level details like identifier names, operations required, etc. get worked out
automatically
Let’s look at an example.
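The worked example from the original pack is missing here; as a hedged stand-in, here is a short step-by-step solution (the task – deciding whether a student passes – is assumed for illustration):

Step 1: Read the student's score
Step 2: If the score is 50 or more, report "Pass" and go to Step 4
Step 3: Otherwise, report "Fail"
Step 4: Stop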
Control Structures
As you can see in the above example, it is not necessary that program logic runs sequentially.
In programming languages, control structures take decisions about program flow based on given
parameters. They are very important elements of any software and must be identified before any
coding begins.
Algorithms and pseudocodes help analysts and programmers in identifying where control
structures are required.
Control structures are of these three types −
Decision Control Structures
Decision control structures are used when the next step to be executed depends upon certain
criteria. The criteria are usually one or more Boolean expressions that must be evaluated; a Boolean
expression always evaluates to "true" or "false". One set of statements is executed if the criteria
evaluate to "true" and another set if they evaluate to "false". An example is the if statement.
Selection Control Structures
Selection control structures are used when the program sequence depends upon the answer to a
specific question. For example, a program may present many options to the user, and the statement
to be executed next depends on the option chosen. Examples are the switch statement and the case
statement.
Repetition / Loop Control Structures
A repetition control structure is used when a set of statements is to be repeated many times. The
number of repetitions might be known before the loop starts or may depend on the value of an
expression. Examples are the for statement, the while statement and the do-while statement.
Both selection and decision structures are implemented similarly in a flowchart; selection control
is nothing but a series of decision statements taken sequentially.
Here are some examples from programs to show how these statements work −
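The original snippets are missing from this copy; the following hedged C++ program shows one control structure of each type (the sample data are assumptions):

#include <iostream>
using namespace std;

int main() {
    int marks = 62;
    char grade = 'B';

    // Decision: if statement – one of two branches runs.
    if (marks >= 50) cout << "Pass" << endl;
    else             cout << "Fail" << endl;

    // Selection: switch statement – the branch depends on the option value.
    switch (grade) {
        case 'A': cout << "Excellent" << endl; break;
        case 'B': cout << "Good" << endl;      break;
        default:  cout << "Try again" << endl; break;
    }

    // Repetition: while loop – prints 1 to 5.
    int i = 1;
    while (i <= 5) {
        cout << i << " ";
        i++;
    }
    cout << endl;
    return 0;
}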
Writing down the algorithm before coding offers these advantages −
Promotes effective communication between team members
Enables analysis of problem at hand
Acts as blueprint for coding
Assists in debugging
Becomes part of software documentation for future reference during maintenance phase
These are the characteristics of a good and correct algorithm −
Has a set of inputs
Steps are uniquely defined
Has finite number of steps
Produces desired output
Example Algorithms
Let us first take an example of a real-life situation for creating an algorithm. Here is an algorithm
for going to the market to purchase a pen.
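The algorithm itself did not survive extraction; a plausible reconstruction (the exact steps are assumed) is:

Step 1: Take some money
Step 2: Go to the market
Step 3: Find a shop that sells pens
Step 4: Purchase a pen
Step 5: Return home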
Step 4 in this algorithm is in itself a complete task, and a separate algorithm can be written for it.
Let us now create an algorithm to check whether a number is positive or negative.
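Again the original listing is missing; a hedged reconstruction:

Step 1: Start
Step 2: Read the number n
Step 3: If n is greater than 0, print "positive" and go to Step 6
Step 4: If n is less than 0, print "negative" and go to Step 6
Step 5: Print "zero"
Step 6: Stop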
Flowchart Elements
A flowchart is a diagrammatic representation of the sequence of logical steps of a program.
Flowcharts use simple geometric shapes to depict processes, and arrows to show relationships and
process/data flow.
Flowchart Symbols
Here is one of the common symbols used in drawing flowcharts:
On-page Connector − Connects two or more parts of a flowchart which are on the same page.
Using Clear Instructions
As you know, a computer does not have intelligence of its own; it simply follows
the instructions given by the user. Instructions are the building blocks of a computer program,
and hence of a software. Giving clear instructions is crucial to building a successful program. As a
programmer or software developer, you should get into the habit of writing clear instructions. Here
are two ways to do that.
Clarity of Expressions
An expression in a program is a sequence of operators and operands used to perform an arithmetic
or logical computation. Here are some examples of valid expressions (a few concrete C++ forms
are sketched after this list) −
Comparing two values
Defining a variable, object or class
Arithmetic calculations using one or more variables
Retrieving data from database
Updating values in database
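A few C++ expressions matching these categories; the variable names are assumptions, and the database operations appear only as comments because bare C++ has no built-in database access:

#include <iostream>
#include <string>
using namespace std;

int main() {
    int a = 7, b = 3;
    bool same = (a == b);             // comparing two values
    string owner = "Ada";             // defining a variable
    int area = a * b + 2 * (a + b);   // arithmetic using one or more variables
    // Retrieving data from or updating a database would be expressed in SQL
    // and sent through a library/API rather than written in plain C++.
    cout << same << " " << owner << " " << area << endl;
    return 0;
}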
Writing unambiguous expressions is a skill that must be developed by every programmer. Here
are some points to be kept in mind while writing such expressions −
Unambiguous Result
Evaluation of the expression must give one clear-cut result. For example, unary operators should
be used with caution, as the sketch below illustrates.
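A hedged C++ illustration of why: mixing increment operators inside one expression can make the result unclear to a reader, and in this case even undefined:

#include <iostream>
using namespace std;

int main() {
    int i = 5;
    // int bad = i++ + ++i;   // ambiguous to read, and undefined behaviour:
    //                        // i is modified twice without sequencing
    int j = i + 1;            // unambiguous: one step at a time
    i = i + 1;
    cout << i << " " << j << endl;   // prints "6 6"
    return 0;
}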
In this module, we will cover how to write a good program. But before we do that, let us see what
the characteristics of a good program are −
Portable − The program or software should run on all computers of the same type. By same
type we mean that software developed for personal computers should run on all PCs, and
software written for tablets should run on all tablets having the right specifications.
Efficient − A software that does the assigned tasks quickly is said to be efficient. Code
optimization and memory optimization are some of the ways of raising program efficiency.
Effective − The software should assist in solving the problem at hand. A software that does
that is said to be effective.
Reliable − The program should give the same output every time the same set of inputs is
given.
User friendly − Program interface, clickable links and icons, etc. should be user friendly.
Self-documenting − A program or software whose identifier names, module names, etc.
describe themselves due to the use of explicit names.
Here are some ways in which good programs can be written.
Proper Identifier Names
A name that identifies any variable, object, function, class or method is called an identifier. Giving
proper identifier names makes a program self-documenting. This means that the name of the object
will tell what it does or what information it stores. Let's take an example of an SQL instruction:
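The numbered listing this passage refers to is missing from this copy; here is a hedged reconstruction in SQL (the table and column names are assumptions), laid out so that line 8 carries the comment and line 10 the column list that the surrounding paragraphs discuss:

1   -- Report card generation script
2   USE school_db;
3
4   -- Class whose report cards are to be produced
5   SET @class_id = 'JSS1A';
6
7
8   -- Retrieve list of students whose report card is to be generated
9   SELECT
10      student_id, student_name, roll_number
11  FROM students
12  WHERE class_id = @class_id;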
Look at line 10. It tells anyone reading the program that a student's ID, name and roll number are
to be selected; the names of the variables make this self-explanatory. Here are some tips for
creating proper identifier names −
Use language guidelines
Don't shy away from giving long names if they maintain clarity
Use uppercase and lowercase letters
Don't give the same name to two identifiers even if the language allows it
Don't give the same name to more than one identifier even if they have mutually exclusive
scope
Comments
In the example above, look at line 8. It tells the reader that the next few lines of code will retrieve
the list of students whose report card is to be generated. This line is not part of the executable code
but is given only to make the program more readable.
Such an expression that is not compiled but written as a note or explanation for the programmer is
called a comment. Look at the comments in the following program segment. Comments start with
//.
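The program segment has not survived; a short hedged C++ stand-in whose comments start with //:

#include <iostream>
using namespace std;

int main() {
    // number of students whose report cards are to be generated
    int studentCount = 3;

    // print one placeholder slip per student
    for (int s = 1; s <= studentCount; s++) {
        cout << "Report card for student " << s << endl;   // one line per student
    }
    return 0;
}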
Indentation
Distance of text from left or right margin is called indent. In programs, indentation is used to
separate logically separated blocks of code. Here’s an example of indented program segment:
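The segment itself is missing from this copy; a hedged stand-in showing a for loop with an if inside it:

#include <iostream>
using namespace std;

int main() {
    for (int i = 1; i <= 10; i++) {      // outer loop
        if (i % 2 == 0) {                // indented: clearly inside the for
            cout << i << " is even" << endl;
        }                                // end of if
    }                                    // end of for
    return 0;
}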
As you can see, an indented program is more understandable: the flow of control from the for loop
into the if and back to the for is very clear. Indentation is especially useful in the case of control
structures.
Inserting blank spaces or lines is also part of indentation. Here are some situations where you can
and should use indentation −
Blank lines between logical or functional blocks of code within the program
Blank spaces around operators
Tabs at the beginning of new control structures
Programming Methodologies - Debugging
Identifying and removing errors from a program or software is called debugging. Debugging is
ideally part of testing process but in reality it is done at every step of programming. Coders should
debug the smallest of their modules before moving on. This decreases the number of errors thrown
up during the testing phase and reduces testing time and effort significantly. Let us look at the
types of errors that can crop up in a program.
Syntax Errors
Syntax errors are the grammatical errors in a program. Every language has its own set of rules,
like creating identifiers, writing expressions, etc. for writing programs. When these rules are
violated, the errors are called syntax errors. Many modern integrated development
environments can identify syntax errors as you type your program; otherwise, they will be shown
when you compile the program. Let us take an example −
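The original snippet is missing; this hedged stand-in reproduces the error described next – prod is used without being declared:

#include <iostream>
using namespace std;

int main() {
    int a = 6, b = 7;
    prod = a * b;               // error: 'prod' was not declared in this scope
    cout << prod << endl;
    return 0;
}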
In this program, the variable prod has not been declared, which is thrown up by the compiler.
Semantic Errors
Semantic errors are also called logical errors. A statement with a semantic error has no syntax
errors, so it compiles and runs; however, it does not give the desired output, because the logic is
not correct. Let us take an example.
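The original listing is missing; here is a hedged reconstruction, numbered so that line 13 contains the faulty check discussed below:

1   #include <iostream>
2   using namespace std;
3
4   int main() {
5       int dividend, divisor;
6
7       cout << "Enter the dividend: ";
8       cin >> dividend;
9
10      cout << "Enter the divisor: ";
11      cin >> divisor;
12
13      if (divisor = 0)        // BUG: assignment, not the comparison divisor == 0
14          cout << "You cannot divide by 0" << endl;
15      else
16          cout << dividend / divisor << endl;
17      return 0;
18  }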
Look at line 13. Here the programmer wants to check whether the divisor is 0, to avoid division by 0.
However, instead of the comparison operator ==, the assignment operator = has been used. The
expression divisor = 0 assigns 0 to divisor and itself evaluates to false, so the guard never fires and
the program goes on to attempt the division by 0 anyway. Definitely not what was intended!!
Logical errors cannot be detected by the compiler; they have to be identified by the programmer
herself when the desired output is not achieved.
Runtime Errors
Runtime errors are errors that occur while executing the program. This implies that the program
has no syntax errors. Some of the most common runtime errors your program may encounter are −
Infinite loop
Division by '0'
Wrong value entered by user (say, string instead of integer)
Code Optimization
Any method by which code is modified to improve its quality and efficiency is called code
optimization. Code quality determines life span of code. If the code can be used and maintained
for a long period of time, carried over from product to product, its quality is deemed to be high
and it has a longer life. On the contrary, if a piece of code can be used and maintained only for
short durations, say till a version is valid, it is deemed to be of low quality and has a short life.
Reliability and speed of a code determines code efficiency. Code efficiency is an important factor
in ensuring high performance of a software.
There are two approaches to code optimization −
Intuition based optimization (IBO) − Here the programmer tries to optimize the program
based on her own skill and experience. This might work for small programs but fails
miserably as complexity of the program grows.
Evidence based optimization (EBO) − Here automated tools are used to find out
performance bottlenecks, and the relevant portions are then optimized accordingly. Every
programming language has its own set of code optimization tools; for example, PMD,
FindBugs and Clover are used on Java code.
Code is optimized for execution time and memory consumption because time is scarce and
memory is expensive. There has to be a balance between the two: if time optimization increases the
load on memory, or memory optimization makes the code slower, the purpose of optimization will
be lost.
Execution Time Optimization
Optimizing code for execution time is necessary to provide fast service to the users. Here are some
tips for execution time optimization (one of them is sketched in code after the list) −
Use commands that have built-in execution time optimization
Use switch instead of if condition
Minimize function calls within loop structures
Optimize the data structures used in the program
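A small hedged C++ illustration of the third tip – hoisting a function call out of a loop (the names and data are assumptions):

#include <iostream>
#include <vector>
using namespace std;

int expensiveLimit() { return 1000; }    // stands in for a costly computation

int main() {
    vector<int> v(1000, 1);
    long sum = 0;

    // Slower: the function would be re-evaluated on every iteration.
    // for (int i = 0; i < expensiveLimit(); i++) sum += v[i];

    // Faster: call it once, outside the loop.
    int limit = expensiveLimit();
    for (int i = 0; i < limit; i++) sum += v[i];

    cout << sum << endl;                 // prints 1000
    return 0;
}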
Memory Optimization
As you know, data and instructions consume memory. When we say data, it also refers to interim
data that is the result of expressions. We also need to keep a track of how many instructions are
making up the program or the module we are trying to optimize. Here are some tips for memory
optimization −
Use commands that have built-in memory optimization
Keep the use of variables that need to be stored in registers to a minimum
Avoid declaring variables inside loops that are executed many times
Avoid using CPU intensive functions like sqrt()
Program Documentation
Any written text, illustrations or video that describes a software or program to its users is
called a program or software document. A user can be anyone from a programmer, system analyst
or administrator to an end user. At various stages of development, multiple documents may be
created for different users. In fact, software documentation is a critical process in the overall
software development process.
In modular programming documentation becomes even more important because different modules
of the software are developed by different teams. If anyone other than the development team wants
to or needs to understand a module, good and detailed documentation will make the task easier.
These are some guidelines for creating the documents −
Documentation should be from the point of view of the reader
Document should be unambiguous
There should be no repetition
Industry standards should be used
Documents should always be updated
Any outdated document should be phased out after due recording of the phase out
Advantages of Documentation
These are some of the advantages of providing program documentation −
Keeps track of all parts of a software or program
Maintenance is easier
Programmers other than the developer can understand all aspects of software
Improves overall quality of the software
Assists in user training
Ensures knowledge de-centralization, cutting costs and effort if people leave the system
abruptly
Example Documents
A software can have many types of documents associated with it. Some of the important ones
include −
User manual − It describes instructions and procedures for end users to use the different
features of the software.
Operational manual − It lists and describes all the operations being carried out and their
inter-dependencies.
Design Document − It gives an overview of the software and describes design elements in
detail. It documents details like data flow diagrams, entity relationship diagrams, etc.
Requirements Document − It has a list of all the requirements of the system as well as an
analysis of the viability of the requirements. It can have use cases, real-life scenarios, etc.
Technical Documentation − It is a documentation of actual programming components
like algorithms, flowcharts, program codes, functional modules, etc.
Testing Document − It records test plan, test cases, validation plan, verification plan, test
results, etc. Testing is one phase of software development that needs intensive
documentation.
List of Known Bugs − Every software has bugs or errors that cannot be removed because
either they were discovered very late or are harmless or will take more effort and time than
necessary to rectify. These bugs are listed with program documentation so that they may
be removed at a later date. Also they help the users, implementers and maintenance people
if the bug is activated.
Program Maintenance
Program maintenance is the process of modifying a software or program after delivery to achieve
any of these outcomes −
Correct errors
Improve performance
Add functionalities
Remove obsolete portions
Despite the common perception that maintenance is required to fix errors that come up after the
software goes live, in reality most of the maintenance work involves adding minor or major
capabilities to existing modules. For example, some new data is added to a report, a new field
added to entry forms, code to be modified to incorporate changed government laws, etc.
Types of Maintenance
Maintenance activities can be categorized under four headings −
Corrective maintenance − Here errors that come up after on-site implementation are
fixed. The errors may be pointed out by the users themselves.
Preventive maintenance − Modifications done to avoid errors in future are called
preventive maintenance.
Adaptive maintenance − Changes in the working environment sometimes require
modifications in the software. This is called adaptive maintenance. For example, if
government education policy changes, corresponding changes have to be made in student
result processing module of school management software.
Perfective maintenance − Changes done in the existing software to incorporate new
requirements from the client are called perfective maintenance. The aim here is to always be
up-to-date with the latest technology.
Maintenance Tools
Software developers and programmers use many tools to assist them in software maintenance.
Here are some of the most widely used −
Program slicer − selects a part of the program that would be affected by the change
Data flow analyzer − tracks all possible flows of data in the software
Dynamic analyzer − traces program execution path
Static analyzer − allows general viewing and summarizing of the program
Dependency analyzer − assists in understanding and analyzing interdependence of
different parts of the program